Well, they do call themselves a camera company… ¯\_(ツ)_/¯ This little contraption looks incredibly lightweight (pocketable, even) and easy to use. Visual quality (particularly stabilization) seems a little borderline, but I dig its person-centric nature, including tracking & AR effects (segmentation, cloning, etc.). Check out a great review—including a man-machine “romantic montage” (!):
This kind of loving madness reminds me fondly of the papercraft animations that Adobe’s Russell Brown used to do for things like the Star Wars intro crawl.
Great to see Adobe AI getting some love:
Adobe Super Resolution technology is the best solution I’ve yet found for increasing the resolution of digital images. It doubles the linear resolution of your file, quadrupling the total pixel count while preserving fine detail. Super Resolution is available in both Adobe Camera Raw (ACR) and Lightroom and is accessed via the Enhance command. And because it’s built-in, it’s free for subscribers to the Creative Cloud Photography Plan.
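The arithmetic is worth spelling out: doubling each linear dimension quadruples the total pixel count. Here's a quick sketch (the function name is mine, not Adobe's):

```python
# Illustrating Super Resolution's scaling math: doubling linear
# resolution (width and height each x2) quadruples the pixel count.
def enhanced_dimensions(width, height, linear_scale=2):
    """Return the enhanced (width, height) and the pixel-count multiplier."""
    new_w, new_h = width * linear_scale, height * linear_scale
    multiplier = (new_w * new_h) / (width * height)
    return new_w, new_h, multiplier

# A 6000x4000 (24 MP) file becomes 12000x8000 (96 MP): 4x the pixels.
print(enhanced_dimensions(6000, 4000))  # (12000, 8000, 4.0)
```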
Despite Pokémon Go’s continuing (and to me, slightly baffling) success, I’ve long been much more bullish on Snap than Niantic for location-based AR. That’s in part because of their very cool world lens tech, which they’ve been rolling out more widely. Now they’re opening up the creation flow:
“In 2019, we started with templates of 30 beloved sites around the world which creators could build upon called Landmarkers… Today, we’re launching Custom Landmarkers in Lens Studio, letting creators anchor Lenses to local places they care about to tell richer stories about their communities through AR.”
At its Lens Fest event, the company announced that 250,000 lens creators from more than 200 countries have made 2.5 million lenses that have been viewed more than 3.5 trillion times. Meanwhile, on Spotlight, Snapchat’s TikTok clone, the company awarded 12,000 creators a total of $250 million for their posts. The company says that more than 65% of Spotlight submissions use one of Snapchat’s creative tools or lenses.
On a related note, Disney is now using the same core tech to enable group AR annotation of the Cinderella Castle. Seems a touch elaborate:
- Park photographer takes your pic
- That pic ends up in your Disney app
- You point that app at the castle
- You see your pic on the castle
- You then take a pic of your pic on the castle… #YoDawg
As someone remarked on Twitter, “This looks better than any Marvel movie since the first Iron Man.” 🙃
They’d love to get your vote for “Best Use of AI & Machine Learning,” should the spirit move ya!
Honestly, from DALL•E innovations to classic mind-blowers like this, I feel like my brain is cooking in my head. 🙃 Take ‘er away, science:
Bonus madness (see thread for details):
I’ve admired Theo Jansen’s weird & wonderful Strandbeest creations since discovering them some 15 years ago, even getting to meet some in person in SF with the kids in 2016:
Heh—I love this kind of silly mashup. (And now I want to see what kind of things DALL•E would dream up for prompts like “medieval grotesque Burger King logo.”)
This tech can synthesize both photorealistic imagery & depth information. Here’s a characteristically charming overview from Two-Minute Papers:
My old boss on Photoshop, Kevin Connor, used to talk about the inexorable progression of imaging tools from the very general (e.g. the Clone Stamp) to the more specific (e.g. the Healing Brush). In the process, high-complexity, high-skill operations were rendered far more accessible—arguably to a fault. (I used to joke that believe it or not, drop shadows were cool before Photoshop made them easy. ¯\_(ツ)_/¯)
I think of that observation when seeing things like the Face Swap tool from Icons8. What once took considerable time & talent in an app like Photoshop is now rendered trivially fast (and free!) to do. “Days of Miracles & Wonder,” though we hardly even wonder now. (How long will it take DALL•E to go from blown minds to shrugged shoulders? But that’s a subject for another day.)
I’m no 3D artist (had I but world enough and time…), but I sure love their work & anything that makes it faster and easier. Perhaps my most obscure point of pride from my Photoshop years is that we added per-layer timestamps into PSD files, so that Pixar could more efficiently render content by noticing which layers had actually been modified.
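That per-layer-timestamp trick is just a dirty check, and it's easy to sketch. This is my own toy illustration (the names are mine, not actual PSD format fields), not how Pixar's pipeline actually reads the data:

```python
# A renderer can skip any layer whose modification timestamp predates
# the last render pass, re-rendering only what actually changed.
def layers_to_rerender(layer_timestamps, last_render_time):
    """Return the names of layers modified since the last render pass."""
    return [name for name, modified_at in layer_timestamps.items()
            if modified_at > last_render_time]

layers = {"background": 100, "character": 250, "lighting": 180}
print(layers_to_rerender(layers, last_render_time=200))  # ['character']
```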
Anyway, now that Adobe has made a much bigger bet on 3D tooling, it’s great to see new support for Substance Painter coming to Unreal Engine:
The Substance 3D plugin (beta) enables the use of Substance materials directly in Unreal Engine 5 and Unreal Engine 4. Whether you are working on games or visualization, or deploying across mobile, desktop, or XR, Substance delivers a unique experience with optimized features for enhanced productivity.
Work faster, be more productive: Substance parameters allow for real-time material changes and texture updates.
Substance 3D for Unreal Engine 5 contains the plugin for Substance Engine.
The Substance Assets platform is a vast library containing high-quality PBR-ready Substance materials and is accessible directly in Unreal through the Substance plugin. These customizable Substance files can easily be adapted to a wide range of projects.
To quote this really cool Adobe video PM who also lives in my house 😌, and who just happens to have helped bring Frame.io into Adobe,
Super excited to announce that Frame.io is now included with your Creative Cloud subscription. Frame panels are now included in After Effects and Premiere Pro. Check it out!
Take advantage of the industry's most powerful video review and collaboration tools all in one place. Introducing https://t.co/JdJeu2YuK6 for Creative Cloud – now included in #PremierePro and #AfterEffects. https://t.co/5xPF0xLYjN pic.twitter.com/aqolPm90MZ
— Adobe Video & Motion (@AdobeVideo) April 12, 2022
From the integration FAQ:
Frame.io for Creative Cloud includes:
- Real-time review and approval tools with commenting and frame-accurate annotations
- Accelerated file transfers for fast uploading and downloading of media
- 100GB of dedicated Frame.io cloud storage
- The ability to work on up to 5 different projects with another user
- Free sharing with an unlimited number of reviewers
- Camera to Cloud
I generally really enjoyed HBO’s Peacemaker series—though, as I told the kids, even I found the profanity excessive; as the saying goes, “too much salt spoils the soup.” I especially enjoyed the whacked-out intro music & choreography:
Here the creators give a peek into how it was made:
And here a dance troupe in Bangladesh puts their spin on it:
The San Francisco Soapbox Derby—going on tomorrow in McLaren Park—looks like a blast:
Here’s hoping that this year’s participants can capture some of past years’ old-school hippie charm. 😌
Driving through the Southwest in 2020, we came across this dark & haunting mural showing the nearby Navajo Generating Station:
Now I see that the station has been largely demolished, as shown in this striking drone clip:
There’s no way this is real, is there?! I think it must use NFW technology (No F’ing Way), augmented with a side of LOL WTAF. 😛
Here’s an NYT video showing the system in action:
The NYT article offers a concise, approachable description of how the approach works:
A neural network learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of avocado photos, for example, it can learn to recognize an avocado. DALL-E looks for patterns as it analyzes millions of digital images as well as text captions that describe what each image depicts. In this way, it learns to recognize the links between the images and the words.
When someone describes an image for DALL-E, it generates a set of key features that this image might include. One feature might be the line at the edge of a trumpet. Another might be the curve at the top of a teddy bear’s ear.
Then, a second neural network, called a diffusion model, creates the image and generates the pixels needed to realize these features. The latest version of DALL-E, unveiled on Wednesday with a new research paper describing the system, generates high-resolution images that in many cases look like photos.
Though DALL-E often fails to understand what someone has described and sometimes mangles the image it produces, OpenAI continues to improve the technology. Researchers can often refine the skills of a neural network by feeding it even larger amounts of data.
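For intuition, the diffusion idea the article describes can be caricatured in a few lines: start from pure noise and repeatedly move toward what a denoiser predicts. In this toy stand-in, the "prediction" is just the target image itself rather than the output of a trained network, so treat it as a cartoon of the sampling loop, nothing more:

```python
import numpy as np

# Toy caricature of diffusion sampling: begin with random noise and
# iteratively denoise. A real model predicts the clean image at each
# step; here we cheat and use the target itself as that prediction.
def toy_diffusion_sample(target, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(target.shape)   # start from pure noise
    for t in range(steps, 0, -1):
        predicted = target                  # stand-in for the denoiser's output
        x = x + (predicted - x) / t         # move a fraction of the way there
    return x

target = np.full((8, 8), 0.5)               # a flat gray "image"
result = toy_diffusion_sample(target)
print(np.allclose(result, target))          # True: the noise became the image
```

The real system is vastly more sophisticated (the denoiser is conditioned on the text features the first network produces), but the shape of the loop—noise in, image out, one small step at a time—is the same.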
I can’t wait to try it out.
A big part of my rationale in going to Google eight (!) years ago was that a lot of creativity & expressivity hinge on having broad, even mind-of-God knowledge of one’s world (everywhere you’ve been, who’s most important to you, etc.). Given access to one’s whole photo corpus, a robot assistant could thus do amazing things on one’s behalf.
In that vein, MyStyle proposes to do smarter face editing (adjusting expressions, filling in gaps, upscaling) by being trained on 100+ images of an individual face. Check it out:
As a fan of extremely dumb, simple gags, I hope that you too will enjoy these six seconds:
More elaborate, also fun:
By popular demand, here’s a recording of my colleagues’ Wednesday panel discussion: