My little brother is a trucker, and although I can’t imagine a solution like this working for the rural routes he drives, it’ll be interesting to see how it might work for long-haul highways. Check out the idea (not cheap, but potentially highly impactful):
10 years ago we put a totally gratuitous (but fun!) 3D view of the layers stack into Photoshop Touch. You couldn’t actually edit in that mode, but people loved seeing their 2D layers with 3D parallax.
More recently, apps have been endeavoring to turn 2D photos into 3D canvases via depth analysis (see recent Adobe research), object segmentation, etc. That is, of course, an extension of what we had in mind when adding 3D to Photoshop back in 2007 (!)—but depth capture & extrapolation weren’t widely available, and it proved too difficult to shoehorn everything into the PS editing model.
Now Mental Canvas promises to enable some truly deep expressivity:
I do wonder how many people could put it to good use. (Drawing well is hard; drawing well in 3D…?) I Want To Believe… It’ll be cool to see where this goes.
Semantic segmentation + tracing FTW!
By using machine learning to understand the scene, Project Make it Pop makes it easy to create and customize an illustration by distinguishing between the background and the foreground as well as recognizing connected shapes and structures.
And you’ve gotta stick around for the whole thing, or just jump to around 2:52 where I literally started saying “WTF…?”
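I don’t know how Make it Pop is built internally, but the flattening step it describes—turning a segmented photo into an illustration—is easy to sketch. Here’s a toy version (my assumption of the approach, not Adobe’s code) that takes a per-pixel label map from any semantic-segmentation model and collapses each region to a single flat color:

```python
import numpy as np

def posterize_by_segments(image, labels):
    """Flatten each segmented region to its mean color, illustration-style.

    image:  (H, W, 3) float array
    labels: (H, W) int array from any semantic-segmentation model
    """
    out = np.empty_like(image)
    for seg_id in np.unique(labels):
        mask = labels == seg_id
        out[mask] = image[mask].mean(axis=0)  # one flat color per region
    return out
```

The interesting (hard) part is, of course, producing good labels in the first place—distinguishing foreground from background and recognizing connected structures, which is where the machine learning comes in.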
What if Photoshop’s breakthrough Smart Portrait, which debuted at MAX last year, could work over time?
One may think this is an easy task, as all that’s needed is to apply Smart Portrait to every frame in the video. But not only is this tedious—the results are also visually unappealing due to a lack of temporal consistency.
In Project Morpheus, we are building a powerful video face editing technology that can modify someone’s appearance in an automated manner, with smooth and consistent results.
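The flicker problem is easy to feel in code: run an edit independently per frame and the parameters jitter from frame to frame. One standard remedy (my sketch of the general idea, not necessarily what Morpheus does) is to smooth the per-frame edit vectors over time, e.g. with an exponential moving average:

```python
import numpy as np

def smooth_edit_params(per_frame_params, alpha=0.8):
    """Exponentially smooth per-frame edit vectors to reduce flicker.

    per_frame_params: (num_frames, dim) array, one edit vector per frame
    alpha: how much of the previous (smoothed) frame carries forward
    """
    smoothed = np.empty_like(per_frame_params)
    smoothed[0] = per_frame_params[0]
    for t in range(1, len(per_frame_params)):
        smoothed[t] = alpha * smoothed[t - 1] + (1 - alpha) * per_frame_params[t]
    return smoothed
```

Real systems go much further (optical flow, per-pixel consistency losses), but even this crude filter makes the basic tradeoff visible: more smoothing means less jitter but slower response to genuine changes.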
Check it out:
I plan to highlight several of the individual technologies & try to add whatever interesting context I can. In the meantime, if you want the whole shebang, have at it!
I kinda can’t believe it, but the team has gotten the old gal (plus Illustrator) running right in Web browsers!
VP of design Eric Snowden writes,
Extending Illustrator and Photoshop to the web (beta) will help you share creative work from the Illustrator and Photoshop desktop and iPad apps for commenting. Your collaborators can open and view your work in the browser and provide feedback. You’ll also be able to make basic edits without having to download or launch the apps.
Creative Cloud Spaces (beta) are a shared place that brings content and context together, where everyone on your team can access and organize files, libraries, and links in a centralized location.
Creative Cloud Canvas (beta) is a new surface where you and your team can display and visualize creative work to review with collaborators and explore ideas together, all in real-time and in the browser.
From the FAQ:
Adobe extends Photoshop to the web for sharing, reviewing, and light editing of Photoshop cloud documents (.psdc). Collaborators can open and view your work in the browser, provide feedback, and make basic edits without downloading the app.
Photoshop on the web beta features are now available for testing and feedback. For help, please visit the Adobe Photoshop beta community.
So, what do you think?
“Folded optics” & computational zoom FTW! The ability to apply segmentation and selective blur (e.g. to the background behind a moving cyclist) strikes me as especially smart.
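The segmentation-plus-selective-blur trick is conceptually simple: blur the whole frame, then composite the sharp subject back in through its mask. Here’s a minimal sketch (a crude box blur stands in for whatever production pipeline is actually used):

```python
import numpy as np

def box_blur(image, k=5):
    """Crude box blur via per-axis running means (wraps at the edges)."""
    out = image.astype(float).copy()
    for axis in (0, 1):
        out = sum(np.roll(out, s, axis=axis) for s in range(-k, k + 1)) / (2 * k + 1)
    return out

def portrait_blur(image, subject_mask, k=5):
    """Keep the masked subject sharp; blur the background behind it."""
    blurred = box_blur(image, k)
    m = subject_mask.astype(float)[..., None]  # (H, W) -> (H, W, 1)
    return m * image + (1 - m) * blurred
```

The hard parts in practice are getting a clean, temporally stable mask of a moving subject and feathering the mask edge so the composite doesn’t look cut out.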
On a random personal note, it’s funny to see demo files for features like Magic Eraser and think, “Hey, I know that guy!” much like I did with Content-Aware Fill eleven (!) years ago. And it’s fun that some of the big brains I got to work with at Google have independently come over to collaborate at Adobe. It’s a small, weird world.
I know I posted about it just the other day, but the design of this system is legit interesting. I thought it was especially cool that one can remove the side grips, attach them to the monitor, and control the whole rig from literally miles away (!).
For anyone who’s ever flown a drone but felt insufficiently self-conscious & at risk, let the good times fly!
The Jetson ONE measures 2,845 mm long, 2,400 mm wide, and 1,030 mm high; it weighs 86 kg and is capable of flying a pilot weighing up to 95 kg. It also collapses to 900 mm wide when not in use.
Includes LIDAR & a parachute for a cool $92k.
Built-in gimbal, 8K rez, LIDAR rangefinder for low-light focusing—let’s go!
It commands a pro price tag, too. Per The Verge:
The 6K version costs $7,199, the 8K version is $11,499, and both come with a decent kit: the gimbal, camera, LIDAR range finder, a monitor and hand grips / top handle, a carrying case, and a battery (the 8K camera also comes with a 1TB SSD). In the realm of production cameras and stabilization systems, that’s actually on the lower end (DJI’s cinema-focused Ronin 2 stabilizer costs over $8,000 without any camera attached, and Sony’s FX9 6K camera costs $11,000 for just the body), but if you were hoping to use the LIDAR focus system to absolutely nail focus in your vlogs, you may want to rethink that.
It’s that thing where you wake up, see some exciting research, tab over to Slack to share it with your team—and then notice that the work is from your teammates. 😝
Check out StyleAlign from my teammate Eli Shechtman & collaborators. Among other things, they’ve discovered interesting, useful correspondences in ML models for very different kinds of objects:
We find that the child model’s latent spaces are semantically aligned with those of the parent, inheriting incredibly rich semantics, even for distant data domains such as human faces and churches. Second, equipped with this better understanding, we leverage aligned models to solve a diverse set of tasks. In addition to image translation, we demonstrate fully automatic cross-domain image morphing […]
Here’s a little taste of what it enables:
And to save you the trouble of looking up the afore-referenced Ghostbusters line, here ya go. 👻
The visualizations for StyleNeRF tech are more than a little trippy, but the fundamental idea—that generative adversarial networks (GANs) can enable 3D control over 2D faces and other objects—is exciting. Here’s an oddly soundtracked peek:
And here’s a look at the realtime editing experience:
I was so excited to build an AR stack for Google Lens, aiming to bring realtime magic to billions of phones’ default camera. Sadly, after AR Playground went out the door three years ago & the world shrugged, Google lost interest.
At least they’re letting others like Snap grab the mic.
Dubbed “Quick Tap to Snap,” the new feature will enable users to tap the back of the device twice to open the Snapchat camera directly from the lock screen. Users will have to authenticate before sending photos or videos to a friend or their personal Stories page.
I wish Apple would offer similar access to third-party camera apps like Halide Camera, etc. Its absence has entirely killed my use of those apps, no matter how nice they may be.
Finding my grandmother’s home in Ireland was one of the weirder adventures I’ve experienced. Directions were literally “Go to the post office and ask for directions.” This worked in 1984, but by the time we visited again in 2007 the P.O. was defunct, so we literally had to ask some random neighbor on the road—who of course knew the way!
Much of the world similarly operates without the kind of street names & addresses most of us take for granted, and Google and others are working to enable Plus Code addresses to help people get around. Check out how it works:
Previously, creating addresses for an entire town or village could take years. Address Maker shortens this time to as little as a few weeks — helping under-addressed communities get on the map quickly, while also reducing costs. Address Maker allows organizations to easily assign addresses and add missing roads, all while making sure they work seamlessly in Google Maps and Maps APIs. Governments and NGOs in The Gambia, Kenya, India, South Africa and the U.S. are already using Address Maker, with more partners on the way. If you’re part of a local government or NGO and think Address Maker could help your community, reach out to us at g.co/maps/addressmaker.
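Under the hood, Plus Codes follow the open-source Open Location Code spec: latitude and longitude are repeatedly subdivided into a 20×20 grid, and each cell becomes a pair of characters from a 20-symbol alphabet. Here’s a simplified encoder for the standard 10-digit codes (it skips the spec’s padding and short-code rules, so treat it as an illustration rather than a replacement for the official library):

```python
# Simplified Open Location Code ("Plus Code") encoder: 10 digits only,
# omitting the full spec's padding and code-shortening rules.
ALPHABET = "23456789CFGHJMPQRVWX"  # 20 symbols, chosen to avoid look-alike characters

def encode_plus_code(lat, lng):
    lat_val = min(max(lat + 90.0, 0.0), 180.0 - 1e-9)  # shift latitude into 0..180
    lng_val = (lng + 180.0) % 360.0                    # shift longitude into 0..360
    digits = ""
    resolution = 20.0
    for _ in range(5):  # five pairs; each pair refines the cell 20x
        lat_d, lng_d = int(lat_val / resolution), int(lng_val / resolution)
        digits += ALPHABET[lat_d] + ALPHABET[lng_d]
        lat_val -= lat_d * resolution
        lng_val -= lng_d * resolution
        resolution /= 20.0
    return digits[:8] + "+" + digits[8:]
```

A full 10-digit code pins down a cell roughly 14 m × 14 m—precise enough to serve as a street address where none exists.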
Take a moment, won’t you, to enjoy some ethereal undersea beauty with me?
Many, many years ago, en route home from Legoland, we spied a crazy-looking photography rig atop a car on the freeway, so naturally the boys had to recreate it in Lego when we got home:
I know it’s a little OT for this blog, but as I’m always fascinated with clever little design solutions, I really enjoyed this detailed look at the iconic SR-71 Blackbird. I had no idea about things like it having a little periscope, or that its turn radius is so great that pivoting 180º at speed would necessitate covering the distance between Dayton, Ohio & Chicago (!). Enjoy:
Things the internet loves:
Let’s do this:
Elsewhere, I told my son that I finally agree with his strong view that the live-action Lion King (which I haven’t seen) does look pretty effed up. 🙃
Nine years ago, Google spent a tremendous amount of money buying Nik Software, in part to get a mobile raw converter—which, as they were repeatedly told, didn’t actually exist. (“Still, a man hears what he wants to hear and disregards the rest…”)
If all that hadn’t happened, I likely never would have gone there, and had the acquisition not been so ill-advised & ill-fitting, I probably wouldn’t have come back to Adobe. Ah, life’s rich pageant… ¯\_(ツ)_/¯
Anyway, back in 2021, take ‘er away, Ryan Dumlao:
Let’s say you dig AR but want to, y’know, actually create instead of just painting by numbers (wielding whatever some filter maker deigns to provide). In that case, my friend, you’ll want to check out this guidance from animator/designer/musician/Renaissance man Dave Werner.
I had a ball schlepping all around Death Valley & freezing my butt off while working with Russell back in January, and this seminar sounds fun:
Oct 12, 2021; 7:00 – 8:30pm Eastern
Russell Preston Brown is the senior creative director at Adobe, as well as an Emmy Award-winning instructor. His ability to bring together the world of design and software development is a perfect match for Adobe products. In Russell’s 32 years of creative experience at Adobe, he has contributed to the evolution of Adobe Photoshop with feature enhancements, advanced scripts and development. He has helped the world’s leading photographers, publishers, art directors and artists to master the software tools that have made Adobe’s applications the standard by which all others are measured.
Tauntauns & wampas & Sno-Cats, oh my!
I’d never seen any of this footage & I really enjoyed it:
My colleagues Jingwan, Jimei, Zhixin, and Eli have devised new tech for re-posing bodies & applying virtual clothing:
Our work enables applications of posed-guided synthesis and virtual try-on. Thanks to spatial modulation, our result preserves the texture details of the source image better than prior work.
Check out some results (below), see the details of how it works, and stay tuned for more.
Hard to believe that it’s been almost seven years since my team shipped Halloweenify face painting at Google, and hard to believe how far things have come since then. For this Halloween you can use GANs to apply & animate all kinds of fun style transfers, like this:
I dunno, but it’s got me feeling kinda Zucked up…
They’re using deepfakes for scripted micro-storytelling:
The new 10-episode Snap original series “The Me and You Show” taps into Snapchat’s Cameos — a feature that uses a kind of deepfake technology to insert someone’s face into a scene. Using Cameos, the show makes you the lead actor in comedy skits alongside one of your best friends by uploading a couple of selfies. […]
The Cameos feature is based on tech developed by AI Factory, a startup developing image and video recognition, analysis and processing technology that Snap acquired in 2019. […]
According to Snap, more than 44 million Snapchat users engage with Cameos on a weekly basis, and more than 16 million share Cameos with their friends.
I dunno—to my eye the results look like a less charming version of the old JibJab templates that were hot 20 years ago, but I’m 30 years older than the Snapchat core demographic, so what do I know?