Niantic has been unable to replicate that success [of Pokemon Go]. In 2019 it launched Harry Potter: Wizards Unite, which failed to find an audience and shut down earlier this year. Games based on the board game Catan and the Nintendo series Pikmin were also unsuccessful.
Ugh. Do people want experiences like this? Somehow they’ve continued to pay a billion+ dollars per year for Pokemon Go (!!), which seemingly hasn’t changed in its nearly six years of life—but so far it’s the exception that proves the rule.
But who knows: maybe AR wearables will change the game—and in the meantime Niantic & the NBA have just announced NBA All-World, which will “place NBA fans into the real-world metaverse.”
Speaking of Hany Farid, his team has devised a way to spot telltale signs of image synthesis:
This ability to synthesize highly realistic images is likely to pose new challenges to the photo-forensic community. This initial exploration of the geometric consistency of DALL•E-2 synthesized images reveals that while DALL•E-2 exhibits some basic understanding of perspective geometry, synthesized images contain consistent geometric inconsistencies which, while not always visually obvious, should prove forensically useful.
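To make “geometric inconsistencies” concrete: under perspective projection, images of parallel 3D lines must converge to a single vanishing point, so one simple forensic check is whether supposedly parallel structures in an image actually do. Here’s a toy Python sketch of that idea (my own illustration of the general principle, not the paper’s actual method; the segment coordinates are made up):

```python
def cross(a, b):
    """3-vector cross product (homogeneous point/line algebra)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def intersection(l1, l2):
    """Intersection of two homogeneous lines, as (x, y).

    Assumes the lines aren't parallel in the image plane (w != 0).
    """
    x, y, w = cross(l1, l2)
    return (x / w, y / w)

def vanishing_point_spread(segments):
    """Max pairwise distance between intersections of the segments' lines.

    segments: list of ((x1, y1), (x2, y2)) image segments believed to be
    parallel in the 3D scene. Near-zero spread is consistent with a single
    vanishing point; a large spread is a geometric red flag.
    """
    lines = [line_through(p, q) for p, q in segments]
    pts = [intersection(lines[i], lines[j])
           for i in range(len(lines)) for j in range(i + 1, len(lines))]
    return max(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
               for a in pts for b in pts)

# Three segments whose extensions all meet at (100, 50): consistent.
consistent = [((0, 0), (50, 25)), ((0, 100), (50, 75)), ((0, 50), (50, 50))]
print(vanishing_point_spread(consistent))  # → 0.0
```

Perturb any one segment so it no longer passes through the shared vanishing point (say, tilt the third one to run through (0, 50) and (50, 60)) and the spread jumps from zero to tens of pixels, which is the kind of signal a forensic tool can flag even when the image looks fine to the eye.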
Longtime Adobe collaborator Prof. Hany Farid writes,
Led by the incredibly talented Matyáš Boháček, we have built a new behavioral model that combines both facial and gestural characteristics to distinguish a real person from a deep-fake impersonator. We show the efficacy of this model in protecting Ukrainian President Zelenskyy from deep-fake imposters.
We describe a facial and gestural behavioral model that captures distinctive characteristics of Zelenskyy’s speaking style. Trained on over eight hours of authentic video from four different settings, we show that this behavioral model can distinguish Zelenskyy from deep-fake imposters. This model can play an important role—particularly during the fog of war—in distinguishing the real from the fake.
As I’ve noted previously, this essay from Slack founder Stewart Butterfield is a banger. You should read the whole thing if you haven’t—or re-read it if you have—and care about building great products. In my new role exploring the crazy, sometimes scary world of AI-first creativity tools, I find myself meditating on this line:
Who Do We Want Our Customers to Become?… We want them to become relaxed, productive workers… masters of their own information and not slaves… who communicate purposively.
I want customers to be fearless explorers—to F Around & Find Out, in the spirit of Walt Whitman:
Yes, this is way outside Adobe’s comfort zone—but I didn’t come back here to be comfortable. Game on.
Although it’s just one piece of a large puzzle, the Content Authenticity Initiative is working to help toolmakers add content credentials that help establish the origin of digital media & disclose what edits have been made to it.
If you make imaging-related tools, check out this in-depth workshop exploring Adobe’s three open-source products for adding CAI support:
I love seeing how Anthony Schmidt, a 13yo photographer with autism, treats his neuroatypicality & resulting hyperfocus as a blessing. It’s a point I try to gently impress upon my own obsessive son about our unusual brains. Check out Anthony’s story & his pretty damn impressive model-car photography!
I’ve gathered links to some of the topics we discussed:
Don’t Give Your Users Shit Work. Seriously. But knowing just where to draw the line between objectively wasteful crap (e.g. tedious file format conversion) and possibly welcome labor (e.g. laborious but meditative etching) isn’t always easy. What happens when you skip the proverbial 10,000 hours of practice required to master a craft? What happens when everyone in the gym is now using a mech suit that lifts 10,000 lbs.?
“Vemödalen: The Fear That Everything Has Already Been Done,” is demonstrated with painful hilarity via accounts like Insta Repeat. (And to make it meta, there’s my repetition of the term.) “So we beat on, boats against the current, borne back ceaselessly into the past…” Or as Marshawn Lynch might describe running through one’s face, “Over & over, and over & over & over…”
The disruption always makes me think of The Onion’s classic “Dolphins Evolve Opposable Thumbs“: “Holy f*ck, that’s it for us monkeys.” My new friend August replied with the armed dolphin below. 💪👀
A group of thoughtful creators recently mused on “What AI art means for human artists.” Like me, many of them likened this revolution to the arrival of photography in the 19th century. It immediately devalued much of what artists had labored for years to master—yet in doing so it freed them up to interpret the world more freely (think Impressionism, Cubism, etc.).
Content-Aware Fill was born from the amazing PatchMatch technology (see video). We got it into Photoshop by stripping it down to just one piece (inpainting), and I foresee similar streamlined applications of the many things DALL•E-type tech can do (layout creation, style transfer, and more).
Longtime generative artist Mario Klingemann used GPT-3 to coin a name for the practice: “Promptomancy.” I wonder how long these incantations & koans will remain central, and how quickly we’ll supplement or even supplant them with visual affordances (presets, sliders, grids, etc.).
O.C.-actor-turned-author Ben McKenzie wrote a book on crypto that promises to be sharp & entertaining, based on interviews with him that I’ve heard.
The same edit controls that you already use to make your photography shine can now be used with your videos as well! Not only can you use Lightroom’s editing capabilities to make your video clips look their best, you can also copy and paste edit settings between photos and videos, allowing you to achieve a consistent aesthetic across both your photos and videos. Presets, including Premium Presets and Lightroom’s AI-powered Recommended Presets, can also be used with videos. Lightroom also allows you to trim off the beginning or end of a video clip to highlight the part of the video that is most important.
And here’s a fun detail:
Video: Creative — to go along with Lightroom’s fantastic new video features, these stylish and creative presets, created by Stu Maschwitz, are specially optimized to work well with videos.
I’ll share more details as tutorials, etc. arrive.
Obviously I’m almost criminally obsessed with DALL•E et al. (sorry if you wanted to see my normal filler here 😌). Here’s an accessible overview of how we got here & how it all works:
The vid below gathers a lot of emerging thoughts from sharp folks like my teammate Ryan Murdock & my friend Mario Klingemann. “Maybe the currency is ideas [vs. execution]. This is a future where everyone is an art director,” says Rob Sheridan. Check it out:
The technology’s ability not only to synthesize new content, but to match it to context, blows my mind. Check out this thread showing the results of filling in the gap in a simple cat drawing via various prompts. Some of my favorites are below: