“We’ve been designing and refining FPV drones for five years now. When Kilian spoke about his idea of putting a GoPro Fusion on one of our drones, we were intrigued and thrilled by this new challenge. The design and the flying of this setup are so different from what we’re used to; there were loads of crashes, but the end result is so refreshing and pushes the drone shot to the next level.” Pierre, engineer at Cinematic Flow
Now, how about a look behind the scenes?
We trust the cam, keep the flying light, and clench our buttocks through the export!
Wow—check out this amazing sneak peek from Adobe’s Long Mai (see paper):
It enables any photograph to be turned into a live photo, animating the image in 3D to simulate the realistic effect of flying through the scene.
This is especially dear to my heart.
As a brand-new Photoshop PM (in 2002—gah!), one of my first trips took me back to NYC to visit motion graphics artists. Touring one shop, I was amazed to glimpse a technique I’d never seen: using Photoshop to break 2D photos into layers, fill in the gaps, and then animate the results in After Effects. Later that year the work came to the big screen in The Kid Stays in the Picture, the documentary that now lends its name to this ubiquitous parallax effect.
Here Yorgo Alexopoulos talks about how he developed the technique & how he’s leveraged it in later works:
So, while we wait for Adobe’s new tech to ship, how could one do this by hand? Below, artist Joe Fellows gives a brief, highly watchable demo of how it’s done (although it physically pains me to see him using the Pen tool to make selections & no Content-Aware Fill to at least block in the gaps):
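If you want the gist of the technique in code, it’s just layered compositing: the layer nearer the camera slides farther per frame than the background, and the eye reads the difference as depth. Here’s a minimal numpy sketch (a toy of my own, not Fellows’s actual workflow; note that `np.roll` wraps at the edges, which a real comp would avoid by filling those gaps—Content-Aware Fill’s whole job):

```python
import numpy as np

def parallax_frames(background, foreground, fg_mask, n_frames=8, fg_shift=6, bg_shift=2):
    """Composite a foreground layer over a background, sliding each at a
    different rate to fake camera motion (the '2.5D parallax' effect).
    background, foreground: HxWx3 uint8 arrays; fg_mask: HxW bool array."""
    frames = []
    for t in range(n_frames):
        # Layers closer to the camera move farther per frame.
        bg = np.roll(background, shift=t * bg_shift, axis=1)
        fg = np.roll(foreground, shift=t * fg_shift, axis=1)
        mask = np.roll(fg_mask, shift=t * fg_shift, axis=1)
        # Where the (shifted) mask is set, show the foreground layer.
        frames.append(np.where(mask[..., None], fg, bg))
    return frames

# Toy 64x64 scene: flat gray background, bright square as the 'subject' layer.
h, w = 64, 64
background = np.full((h, w, 3), 80, dtype=np.uint8)
foreground = np.zeros((h, w, 3), dtype=np.uint8)
fg_mask = np.zeros((h, w), dtype=bool)
foreground[24:40, 8:24] = 255
fg_mask[24:40, 8:24] = True

frames = parallax_frames(background, foreground, fg_mask)
print(len(frames), frames[0].shape)  # 8 frames of 64x64 RGB
```

Swap the toy arrays for real photo layers (subject cut out in one, hole filled in the other) and you have the poor man’s Kid Stays in the Picture.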
Man, I used to hate demoing alongside After Effects during internal Adobe events: We had Photoshop, sure—but they were Photoshop on wheels. You could just pencil them in for the Top Gun trophy nearly every time.
Making Content-Aware Fill work at all is hard—but making it effective over multiple frames (“temporally coherent,” in our nerdy parlance)? Well, that requires FM technology—F’ing Magic. Here’s a naive implementation (not from Adobe):
Cool, artsy—but generally not so useful. And here (at 1:50:44) it is as the After Effects team intends to ship it next year (first sneak-peeked last year as Project Cloak):
Special props to Jason Levine for vamping through the calculation phase & then going full “When Harry Met Sally deli scene” at the conclusion. As a friend noted, “I’ll have what he’s having.” 😝
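Why is “temporally coherent” the hard part? A toy numpy sketch can show it (my own illustration, not Adobe’s method, and not a real inpainter): fill the same hole independently in each frame of a perfectly static clip, and the fills disagree frame to frame—that disagreement is the flicker you see in naive implementations:

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_fill(frame, mask):
    """Fill the masked hole by sampling random pixels from outside it --
    a crude stand-in for running a single-image inpainting algorithm
    on each frame in isolation, with no memory of other frames."""
    out = frame.copy()
    donors = frame[~mask]
    out[mask] = rng.choice(donors, size=mask.sum())
    return out

# A static 3-frame 'clip' (identical frames: flat gray plus a gradient).
frames = [np.full((32, 32), 100, dtype=np.uint8)
          + np.tile(np.arange(32, dtype=np.uint8), (32, 1))
          for _ in range(3)]
mask = np.zeros((32, 32), dtype=bool)
mask[12:20, 12:20] = True  # the square hole to remove/fill

filled = [naive_fill(f, mask) for f in frames]

# The scene never changes, yet the independent fills disagree frame to frame:
diff = np.abs(filled[0].astype(int) - filled[1].astype(int))
hole_flicker = diff[mask].mean()   # > 0: the filled hole shimmers
rest_flicker = diff[~mask].mean()  # exactly 0: the untouched pixels are stable
print(hole_flicker > rest_flicker)
```

A production system like Project Cloak has to make the fill decisions agree across frames (e.g. by propagating fill content along the motion of the scene) so the patched region stays rock-steady as the camera moves—hence, FM technology.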
“Even as Amazon Alexa, Google Assistant, Siri and other voice assistants have taken off like wildfire,” writes Khoi Vinh, “designers working in voice have been stymied by the nearly complete lack of voice tools oriented around the design process. All that changes today.”
Check out this 50-second demo, and see Khoi’s post for the backstory on how this tech came to Adobe & its tools.
[Y]ou just take a photo in Portrait mode using your compatible dual-lens smartphone, then share as a 3D photo on Facebook where you can scroll, pan and tilt to see the photo in realistic 3D… Everyone will be able to see 3D photos in News Feed and VR today, while the ability to create and share 3D photos begins to roll out today and will be available to everyone in the coming weeks.
Check out their post for tips on composing a 3D-friendly image (e.g. include lots of foreground/background separation; avoid transparent objects like drinking glasses).
“Been waiting to build this since the beginning of Google Photos :)” tweeted Dave Lieb, product lead for Google Photos. As TechCrunch writes,
[U]sing A.I. technologies and facial recognition is a next step, and one that makes Google Photos an even more compelling app. In practice, it means that you wouldn’t have to manually share photos with certain people ever again – you can just set up a Live Album once, and then allow the automation to take over.
Oh, and with the newly announced Google Home Hub, people (e.g. my folks) can have an auto-updating picture frame showing specific people (e.g. our kids).