I’m bemused/amused to see this once-obscure (to non-speakers of Japanese) term now getting verbed in an Apple ad that’s racked up nearly 20 million views since Friday. (“This is the strangest life I’ve ever known…”)
Holy crap! Now my stuff looks positively lethargic ¯\_(ツ)_/¯, but what the heck, strap in & enjoy:
DIY Photography writes,
Johnny Schaer (Johnny FPV) is a pro drone racer. His drones are designed to be light, quick, and nimble, flying upside down and through all kinds of crazy flight paths that DJI’s drones could never achieve. And when somebody with the skill of Johnny turns on the camera, that’s when you get results like the video above.
To shoot the footage, Johnny used a drone built around the AstroX X5 Freestyle frame (JohnnyFPV edition, obviously) with a GoPro Hero 7. It has no GPS, no gimbal, no stabilisation, no collision avoidance, none of those safety features that make more commercial drones predictable and easy to fly.
ZOMG, AE! So surreal, and surreally great.
David Oyelowo presents Scientific and Engineering Awards to David Simons, Daniel Wilk, James Acquavella, Michael Natkin and David M. Cotter for the design and development of the Adobe After Effects software for motion graphics.
And Dave’s mom is right: They do deserve “that Nobel Prize.” 😌
I’m so pleased to say that my team’s face-tracking tech (which you may have seen powering AR effects in YouTube Stories and elsewhere) is now available for developers to build upon:
ARCore’s new Augmented Faces API (available on the front-facing camera) offers a high quality, 468-point 3D mesh that lets users attach fun effects to their faces. From animated masks, glasses, and virtual hats to skin retouching, the mesh provides coordinates and region specific anchors that make it possible to add these delightful effects.
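To make the idea concrete, here’s a toy sketch of what a mesh like that enables. Everything below is hypothetical for illustration: the mesh is just a list of 468 (x, y, z) vertices, and `FOREHEAD_VERTEX` is a made-up index, not a real ARCore constant — the actual Augmented Faces API (Java/Kotlin on Android) exposes named region poses for this instead.

```python
# Toy sketch: anchoring a virtual hat above a face mesh.
# FOREHEAD_VERTEX is a hypothetical index into the 468-point mesh,
# not a real ARCore identifier.

FOREHEAD_VERTEX = 10            # assumed index of a forehead point
HAT_OFFSET = (0.0, 0.04, 0.0)   # lift the hat 4 cm above that point

def hat_position(mesh):
    """mesh: list of 468 (x, y, z) vertices. Returns where to place the hat."""
    x, y, z = mesh[FOREHEAD_VERTEX]
    dx, dy, dz = HAT_OFFSET
    return (x + dx, y + dy, z + dz)
```

Because the mesh updates every frame, re-running a lookup like this per frame is what keeps an attached effect glued to the face as it moves.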
“Why do you keep looking at King Midas’s wife?” my son Finn asked as I was making this GIF the other day. :-p
Check out details & grab the SDKs:
We can’t wait to see what folks build with this tech, and we’ll share more details soon!
We’re working to advance the global adoption of renewable energy by creating kites that efficiently harness energy from the wind. After more than a decade developing our energy kite technology on land, I’m thrilled to share that we’re now partnering with Shell to bring Makani to offshore environments. As we take this next step towards commercialization, we’ll also be moving on from the Moonshot Factory, our home for the last five years, to become an independent business within Alphabet.
Watch it soar:
Happy VD! Per Design Taxi,
The company has teamed up with OpenType font converter Fontself to release one unique color typeface per day through 15 February 2019 for Adobe Illustrator and InDesign.
The five packs are designed by artists from around the world, and celebrate nature, culture, architecture, and even unicorns.
The composite red, green, and blue value of every pixel in a digital photo is created through a process called demosaicing.
Enhance Details uses an extensively trained convolutional neural net (CNN) to optimize for maximum image quality. We trained a neural network to demosaic raw images using problematic examples […] As a result, Enhance Details will deliver stunning results including higher resolution and more accurate rendering of edges and details, with fewer artifacts like false colors and moiré patterns. […]
We calculate that Enhance Details can give you up to 30% higher resolution on both Bayer and X-Trans raw files using Siemens Star resolution charts.
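For context on the step Adobe’s CNN replaces: a camera sensor records only one color per pixel (laid out in a Bayer mosaic), and classical demosaicing fills in each pixel’s two missing channels by interpolating from same-color neighbors. A minimal bilinear sketch over an RGGB pattern (pure Python, illustrative only — real raw converters use far more sophisticated, edge-aware methods):

```python
# Minimal bilinear demosaic of an RGGB Bayer mosaic (illustrative sketch).

def bayer_channel(y, x):
    """Which channel the sensor recorded at (y, x) in an RGGB pattern."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def demosaic(mosaic):
    """mosaic: 2D list of raw sensor values.
    Returns a 2D list of (R, G, B) tuples, where each missing channel
    is the average of same-channel values in the 3x3 neighborhood."""
    h, w = len(mosaic), len(mosaic[0])

    def avg(y, x, ch):
        vals = [mosaic[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
                if bayer_channel(ny, nx) == ch]
        return sum(vals) / len(vals)

    return [[tuple(avg(y, x, ch) for ch in ("R", "G", "B"))
             for x in range(w)] for y in range(h)]
```

Simple averaging like this is exactly what produces the false colors and moiré the quote mentions — which is the artifact class a trained network can do better on.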
Hmm—I’m having a hard time wrapping my head around the resolution claim, at least based on the results shown (which depict an appreciable but not earth-shattering change). Having said that, I haven’t put the tech to the test, but I look forward to doing so.
I’m really pleased to see that augmented reality navigation has gone into testing with Google Maps users:
On the Google AI Blog, the team gives some insights into the cool tech at work:
We’re experimenting with a way to solve this problem using a technique we call global localization, which combines Visual Positioning Service (VPS), Street View, and machine learning to more accurately identify position and orientation. […]
VPS determines the location of a device based on imagery rather than GPS signals. VPS first creates a map by taking a series of images which have a known location and analyzing them for key visual features, such as the outline of buildings or bridges, to create a large-scale, fast-searchable index of those visual features. To localize the device, VPS compares the features in imagery from the phone to those in the VPS index. However, the accuracy of localization through VPS is greatly affected by the quality of both the imagery and the location associated with it. And that poses another question—where does one find an extensive source of high-quality global imagery?
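The lookup described above — match features from the phone’s camera against a prebuilt index of features with known locations — can be sketched as nearest-neighbor matching plus voting. This is a toy stand-in: the real system uses learned descriptors and approximate nearest-neighbor structures to search a vastly larger index quickly.

```python
import math

# Toy VPS-style lookup: match query descriptors against a prebuilt index
# of (descriptor, known_location) pairs, then vote on the location.
# Descriptors here are plain 2-tuples purely for illustration.

def localize(query_descriptors, index):
    """index: list of (descriptor, location) pairs.
    Returns the location that the most query descriptors match to."""
    votes = {}
    for q in query_descriptors:
        # Nearest neighbor in descriptor space.
        _, loc = min(index, key=lambda entry: math.dist(q, entry[0]))
        votes[loc] = votes.get(loc, 0) + 1
    return max(votes, key=votes.get)
```

Voting across many features is what makes the scheme robust: a few bad matches get outvoted by the majority that agree on one location.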
Read on for the full story.
No, for real. The Verge writes,
What does the computer interface of the future look like? One bet from Google is that it will involve invisible interfaces you can tweak and twiddle in mid-air. This is what the company is exploring via Project Soli, an experimental hardware program which uses miniature radar to detect movement, and which recently won approval from the FCC for further study.
But yes… Legos. See what you can make of this:
Almost looks deceptively pleasant & prosperous in these lovely aerials:
Pyongyang is by far the weirdest and strangest place I have ever been to. At the same time it’s also one of the most interesting and intriguing places, and unlike anywhere else I have ever been to. You go there with 100 questions and you return with 1000!