Monthly Archives: June 2022

Niantic cancels Transformers AR game, lays off scores of people

I’ve long been bewildered & bearish regarding Niantic, and about location-based AR games in general, even when they’re paired with AAA franchises (RIP Minecraft Earth). Now pile Transformers onto the dead-wagon:

Niantic has been unable to replicate that success [of Pokemon Go]. In 2019 it launched Harry Potter: Wizards Unite, which failed to find an audience and shut down earlier this year. Games based on the board game Catan and the Nintendo series Pikmin were also unsuccessful.

Ugh. Do people want experiences like this? Somehow they’ve continued to pay a billion+ dollars per year for Pokemon Go (!!), which seemingly hasn’t changed in its nearly six years of life—but so far it’s the exception that proves the rule.

But who knows: maybe AR wearables will change the game—and in the meantime Niantic & the NBA have just announced NBA All-World, which will “place NBA fans into the real-world metaverse.”

¯\_(ツ)_/¯

Training machines to spot DALL•E-made work

Speaking of Hany Farid, his team has devised a way to spot telltale signs of image synthesis:

This ability to synthesize highly realistic images is likely to pose new challenges to the photo-forensic community. This initial exploration of the geometric consistency of DALL•E-2 synthesized images reveals that while DALL•E-2 exhibits some basic understanding of perspective geometry, synthesized images contain consistent geometric inconsistencies which, while not always visually obvious, should prove forensically useful.

Sniffing out deepfakes by knowing your moves

Longtime Adobe collaborator Prof. Hany Farid writes,

Led by the incredibly talented Matyáš Boháček, we have built a new behavioral model that combines both facial and gestural characteristics to distinguish a real person from a deep-fake impersonator. We show the efficacy of this model in protecting Ukrainian President Zelenskyy from deep-fake imposters.

The paper says,

We describe a facial and gestural behavioral model that captures distinctive characteristics of Zelenskyy’s speaking style. Trained on over eight hours of authentic video from four different settings, we show that this behavioral model can distinguish Zelenskyy from deep-fake imposters. This model can play an important role–particularly during the fog of war–in distinguishing the real from the fake.

“Who Do We Want Our Customers to Become?”

As I’ve noted previously, this essay from Slack founder Stewart Butterfield is a banger. If you care about building great products, you should read the whole thing if you haven’t, or re-read it if you have. In my new role exploring the crazy, sometimes scary world of AI-first creativity tools, I find myself meditating on this line:

Who Do We Want Our Customers to Become?… We want them to become relaxed, productive workers… masters of their own information… who communicate purposively.

I want customers to be fearless explorers—to F Around & Find Out, in the spirit of Walt Whitman:

Yes, this is way outside Adobe’s comfort zone—but I didn’t come back here to be comfortable. Game on.

How-to for app developers combating misinformation

Although it’s just one piece of a large puzzle, the Content Authenticity Initiative is working to help toolmakers add content credentials that help establish the origin of digital media & disclose what edits have been made to it.

If you make imaging-related tools, check out this in-depth workshop exploring Adobe’s three open-source products for adding CAI support:

Content credentials, currently integrated into Adobe Photoshop, will now be available for other products and services through three simple open-source tools. Dave Kozma, Eric Scouten, and Gavin Peacock from the CAI team will show off the new JavaScript SDK, Rust Toolkit, and Command Line Utility. CAI Lead Product Designer and C2PA UX Task Force Co-Chair Pia Blumenthal will walk through the UX guidelines, use cases, and trust signals these new tools enable.
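
For a quick taste before diving into the workshop, here’s a minimal sketch of inspecting a file’s content credentials from Python by shelling out to c2patool (the command-line utility mentioned above). The invocation and JSON keys reflect my reading of the tool’s docs rather than anything shown in the workshop, so treat them as assumptions and double-check the README:

```python
import json
import subprocess

# Assumes c2patool (the CAI command-line utility) is installed and on PATH.
# Running `c2patool <file>` prints the file's C2PA manifest store as JSON;
# this usage is my reading of the tool's docs, so verify against its README.
result = subprocess.run(
    ["c2patool", "image.jpg"],
    capture_output=True,
    text=True,
    check=True,
)

manifest_store = json.loads(result.stdout)

# List each manifest and which tool generated its claim (assumed key names).
for label, manifest in manifest_store.get("manifests", {}).items():
    print(label, "->", manifest.get("claim_generator"))
```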

Fun, these clone wars are

Cristóbal Valenzuela from Runway ML shared a fun example of what’s possible via video segmentation & overlaying multiple takes of a trick:

As a commenter noted, the process (shown in this project file) goes like this:

  1. Separate yourself from the background in each clip
  2. Throw away all backgrounds but one, and stack up all the clips of just you (with the success on top).
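
In compositing terms, that recipe is just a matte-and-stack. Here’s a rough numpy sketch of the per-frame math, assuming your segmentation tool (Runway or otherwise) can export a person matte for each take:

```python
import numpy as np

def stack_takes(frames, mattes):
    """Composite several takes of the same locked-off shot into one frame.

    frames: list of HxWx3 float arrays, one aligned frame per take
    mattes: list of HxW float arrays in [0, 1], the person matte per take
    Order the takes so the success comes last -- it lands on top.
    """
    out = frames[0].astype(float).copy()   # keep exactly one background
    for frame, matte in zip(frames, mattes):
        alpha = matte[..., None]           # broadcast the matte across RGB
        out = alpha * frame + (1.0 - alpha) * out
    return out
```

Run that over each frame of the aligned clips and you get the clone effect above.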

Coincidentally, I just saw Russell Brown posting a fun bonus-limbed image:

New podcast: DALL•E & You & Me

On Friday I had a ball chatting with Brian McCullough and Chris Messina on the Techmeme Ride Home podcast about the arrival of DALL•E & other generative-imaging tech. The section intro begins at 31:30, with me chiming in at 35:45 & riffing for about 45 minutes. I hope you enjoy listening as much as I enjoy talking (i.e. one heck of a lot 😅), and I’d love to know what you think.

Image credit: August Kamp + DALL•E

I’ve gathered links to some of the topics we discussed:

  • Don’t Give Your Users Shit Work. Seriously. But knowing just where to draw the line between objectively wasteful crap (e.g. tedious file format conversion) and possibly welcome labor (e.g. laborious but meditative etching) isn’t always easy. What happens when you skip the proverbial 10,000 hours of practice required to master a craft? What happens when everyone in the gym is now using a mech suit that lifts 10,000 lbs.?
  • “Vemödalen: The Fear That Everything Has Already Been Done” is demonstrated with painful hilarity via accounts like Insta Repeat. (And to make it meta, there’s my repetition of the term.) “So we beat on, boats against the current, borne back ceaselessly into the past…” Or as Marshawn Lynch might describe running through one’s face, “Over & over, and over & over & over…”
  • As Louis CK deftly noted, “Everything is amazing & nobody’s happy.”
  • The disruption always makes me think of The Onion’s classic “Dolphins Evolve Opposable Thumbs”: “Holy f*ck, that’s it for us monkeys.” My new friend August replied with the armed dolphin below. 💪👀
  • A group of thoughtful creators recently mused on “What AI art means for human artists.” Like me, many of them likened this revolution to the arrival of photography in the 19th century. It immediately devalued much of what artists had labored for years to master—yet in doing so it freed them up to interpret the world more freely (think Impressionism, Cubism, etc.).
  • Content-Aware Fill was born from the amazing PatchMatch technology (see video). We got it into Photoshop by stripping it down to just one piece (inpainting), and I foresee similar streamlined applications of the many things DALL•E-type tech can do (layout creation, style transfer, and more).
  • StyleCLIP is my team’s effort to edit faces via text by combining OpenAI’s CLIP (part of DALL•E’s magic sauce) with NVIDIA’s StyleGAN. You can try it out here.
  • Longtime generative artist Mario Klingemann used GPT-3 to coin a name for this emerging practice: Promptomancy. I wonder how long these incantations & koans will remain central, and how quickly we’ll supplement or even supplant them with visual affordances (presets, sliders, grids, etc.).
  • O.C.-actor-turned-author Ben McKenzie wrote a book on crypto that promises to be sharp & entertaining, based on the interviews with him I’ve heard.
  • Check out the DALL•E-made 3D Lego Teslas that, at a glance, fooled longtime Pixar vet Guido Quaroni. I also love these gibberish-filled ZZ Top concert posters.
  • My grand-mentee (!) Joanne is the PM for DALL•E.
  • Bill Atkinson created MacPaint, blowing my 1984 mind with breakthroughs like flood fill. The arrival of DALL•E feels so similar.

Lightroom now supports video editing

At last…!

Per the team blog (which lists myriad other improvements):

The same edit controls that you already use to make your photography shine can now be used with your videos as well! Not only can you use Lightroom’s editing capabilities to make your video clips look their best, you can also copy and paste edit settings between photos and videos, allowing you to achieve a consistent aesthetic across both your photos and videos. Presets, including Premium Presets and Lightroom’s AI-powered Recommended Presets, can also be used with videos. Lightroom also allows you to trim off the beginning or end of a video clip to highlight the part of the video that is most important.

And here’s a fun detail:

Video: Creative — to go along with Lightroom’s fantastic new video features, these stylish and creative presets, created by Stu Maschwitz, are specially optimized to work well with videos.

I’ll share more details as I see tutorials, etc. arrive.

DALL•E text as Italian comedic gibberish

Amidst my current obsession with AI-generated images, I’ve been particularly charmed by DALL•E’s penchant for rendering some delightfully whacked-out text, as in these concert posters:

https://twitter.com/jnack/status/1531011391687041025?s=20&t=eE-VbkQXqVMP-8nLsY5zmQ

This reminded me of an old Italian novelty song meant to show non-native English speakers what the language sounds like to non-speakers. Enjoy. 😛

Just scratching the surface on generative inpainting

I’m having a ball asking the system to create illustrations, after which I can select regions and generate new variations. Click/tap if needed to play the animation below:

It’s a lot of fun for photorealistic work, too. Here I erased everything but Russell Brown’s face, then let DALL•E synthesize the rest:

And check out what it did with a pic of my wife & our friend (“two women surrounded by numerous sugar skulls and imagery from día de los muertos, in the style of salvador dalí, digital art”). 💃🏻💀👀
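
If you’d rather script that erase-and-regenerate loop than click through a UI, OpenAI’s image-edit endpoint exposes the same operation: you upload the image plus a mask whose transparent pixels mark the region to resynthesize. A minimal sketch using the openai Python library (pre-1.0 API; the filenames and key are placeholders):

```python
import openai  # openai-python < 1.0

openai.api_key = "sk-..."  # placeholder; use your own key

# mask.png matches scene.png in size; its transparent pixels mark what
# DALL-E should regenerate (e.g. everything except the faces you kept).
response = openai.Image.create_edit(
    image=open("scene.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt=(
        "two women surrounded by numerous sugar skulls and imagery from "
        "día de los muertos, in the style of salvador dalí, digital art"
    ),
    n=3,                 # number of variations to generate
    size="1024x1024",
)

for item in response["data"]:
    print(item["url"])   # each URL is one synthesized variation
```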

“The AI that creates any picture you want, explained”

Obviously I’m almost criminally obsessed with DALL•E et al. (sorry if you wanted to see my normal filler here 😌). Here’s an accessible overview of how we got here & how it all works:

The vid below gathers a lot of emerging thoughts from sharp folks like my teammate Ryan Murdock & my friend Mario Klingemann. “Maybe the currency is ideas [vs. execution]. This is a future where everyone is an art director,” says Rob Sheridan. Check it out:

[Via Dave Dobish]

“Content-Aware Fill… cubed”: DALL•E inpainting is nuts

The technology’s ability not only to synthesize new content, but to match it to context, blows my mind. Check out this thread showing the results of filling in the gap in a simple cat drawing via various prompts. Some of my favorites are below:

Also, look at what it can build out around just a small sample image plus a text prompt (a chef in a sushi restaurant); just look at it!