Fool me thrice? Insta360 GO 3 arrives

Having really enjoyed my Insta360 One X, X2, and X3 cams over the years, I’ve bought—and been burned by—the tiny GO & GO2:

And yet… I still believe that having an unobtrusive, AI-powered “wearable photographer” (as Google Clips sought to be) is a worthy and potentially game-changing north star. (See the second link above for some interesting history & perspective). So, damn if I’m not looking at the new GO 3 and thinking, “Maybe this time Lucy won’t pull away the football…”

Here’s Casey Neistat’s perspective:

Guiding Photoshop’s Generative Fill through simple brushing

Check out this great little demo from Rob de Winter:


The steps, he writes, are as follows (I've added a rough code analogy after the list):

  1. Draw a rough outline with the brush tool and use different colors for all parts.
  2. Go to Quick Mask Mode (Q).
  3. Go to Edit > Fill and choose a 70% grey fill. The lower this percentage, the more the end result will resemble your original sketch (i.e., increasingly cartoon-like).
  4. Exit Quick Mask Mode (Q). You now have a 70% opaque selection.
  5. Click Generative Fill and type your prompt. Something like: summer grassland landscape with tree (first example) or river landscape with mountains (second example). You can also keep it really simple, just play with it!
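
That grey-fill percentage maps neatly onto how open-source image-to-image diffusion behaves. As a rough analogy (my sketch of the general technique, not Photoshop's actual implementation), the "strength" parameter in Hugging Face diffusers' img2img pipeline plays the same role: lower values keep the output closer to your original brushwork. File names here are hypothetical.

    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from diffusers.utils import load_image

    # Hypothetical file name; substitute your own rough brush sketch.
    sketch = load_image("rough_sketch.png")

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe(
        prompt="summer grassland landscape with tree",
        image=sketch,
        strength=0.7,  # ~70%: heavily reinterpreted; try 0.3 for a more sketch-like, cartoonish result
    ).images[0]
    image.save("result.png")

Photoshop's Generative Fill presumably works differently under the hood; the point is just the inverse relationship between the fill percentage and fidelity to the sketch.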

Google uses generative imaging for virtual try-on

In my time at Google, we tried and failed a lot to make virtual try-on happen using AR. It’s extremely hard to…

  • measure bodies (to make buying decisions based on fit)
  • render virtual clothing accurately (placing virtual clothing over real clothing, or getting users to disrobe, which is even harder; simulating materials in real time)
  • get a sizable corpus of 3D assets (in a high-volume, low-margin industry)

Outside of a few limited pockets (trying on makeup, glasses, and shoes, all for style rather than fit), I haven't seen anyone (Amazon, Snap, etc.) crack the code here. Researcher Ira Kemelmacher-Shlizerman (who, last I heard, was working on virtual mirrors, possibly leveraging Google's Starline tech) acknowledges this:

Current techniques like geometric warping can cut-and-paste and then deform a clothing image to fit a silhouette. Even so, the final images never quite hit the mark: Clothes don’t realistically adapt to the body, and they have visual defects like misplaced folds that make garments look misshapen and unnatural.

So, it’s interesting to see Google trying again (“Try on clothes with generative AI”):

This week we introduced an AI-powered virtual try-on feature that uses the Google Shopping Graph to show you how clothing will look on a diverse set of real models.

Our new guided refinements can help U.S. shoppers fine-tune products until you find the perfect piece. Thanks to machine learning and new visual matching algorithms, you can refine using inputs like color, style and pattern.

They’ve posted a technical overview and a link to their project site:

Inspired by Imagen, we decided to tackle VTO using diffusion — but with a twist. Instead of using text as input during diffusion, we use a pair of images: one of a garment and another of a person. Each image is sent to its own neural network (a U-net) and shares information with each other in a process called “cross-attention” to generate the output: a photorealistic image of the person wearing the garment. This combination of image-based diffusion and cross-attention make up our new AI model.
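
To make the "two U-Nets sharing information via cross-attention" idea concrete, here's a minimal PyTorch sketch of the mechanism. This is my own illustration of generic cross-attention, not Google's TryOnDiffusion code: features from the person stream act as queries, while the garment stream supplies keys and values.

    import torch
    import torch.nn as nn

    class GarmentCrossAttention(nn.Module):
        """Person-stream tokens attend to garment-stream tokens."""
        def __init__(self, dim, heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, person_tokens, garment_tokens):
            # person_tokens:  (batch, person_patches, dim)
            # garment_tokens: (batch, garment_patches, dim)
            attended, _ = self.attn(
                query=person_tokens, key=garment_tokens, value=garment_tokens
            )
            # Residual connection keeps the person's structure intact
            return self.norm(person_tokens + attended)

    # Smoke test with random activations standing in for U-Net features
    block = GarmentCrossAttention(dim=64)
    person, garment = torch.randn(1, 256, 64), torch.randn(1, 196, 64)
    print(block(person, garment).shape)  # torch.Size([1, 256, 64])

The real model is of course far more elaborate (diffusion steps, interactions at multiple resolutions), but this is the core information-sharing step the quote describes.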

They note that “We don’t promise fit and for now focus only on visualization of the try on. Finally, this work focused on upper body clothing.”

It’s a bit hard to find exactly where one can try out the experience. They write:

Starting today, U.S. shoppers can virtually try on women’s tops from brands across Google, including Anthropologie, Everlane, H&M and LOFT. Just tap products with the “Try On” badge on Search and select the model that resonates most with you.

“Don’t give up on the real world”: A great new campaign from Nikon

I’m really enjoying this new campaign from Nikon Peru:

“This obsession with the artificial is making us forget that our world is full of amazing natural places that are often stranger than fiction.

“We created a campaign with real unbelievable natural images taken with our cameras, with keywords like those used with Artificial Intelligence.”

Check out the resulting 2-minute piece:

And here are some of the stills, courtesy of PetaPixel:

New experimental filmmaking from Paul Trillo

Paul is continuing to explore what’s possible by generating short clips using Runway’s Gen-2 text-to-video model. Check out the melancholy existential musings of “Thank You For Not Answering”:

And then, to entirely clear your mental palate, there’s the just deeply insane TRUCX!

Adobe will offer Firefly indemnification

Per Reuters:

Adobe Inc. said on Thursday it will offer Firefly, its artificial intelligence tool for generating images, to its large business customers, with financial indemnity for copyright challenges involving content made with the tools.

In an effort to give those customers confidence, Adobe said it will offer indemnification for images created with the service, though the company did not give financial or legal details of how the program will work.

“We financially are standing behind all of the content that is produced by Firefly for use either internally or externally by our customers,” Ashley Still, senior vice president of digital media at Adobe, told Reuters.

Demo: Using Firefly for poster creation

My teammates Danielle Morimoto & Tomasz Opasinski are accomplished artists who recently offered a deep dive on creating serious, ambitious work (not just one-and-done prompt generations) using Adobe Firefly. Check it out:

Explore the practical benefits of using Firefly in real-world projects with Danielle & Tomasz. Today, they’ll walk through the poster design process in Photoshop using prompts generated in Firefly. Tune into the live stream and join them as they discuss how presenting more substantial visuals to clients goes beyond simple sketches, and how this creative process could evolve in the future. Get ready to unlock new possibilities of personalization in your work, reinvent yourself as an artist or designer, and achieve what was once unimaginable. Don’t miss this opportunity to level up your creative journey and participate in this inspiring session!

Russell + GenFill, Part II

When you see only one set of footprints on the sand… that’s when Russell GenFilled you out. 😅

On a chilly morning two years ago, I trekked out to the sand dunes in Death Valley to help (or at least observe) Russell on a dawn photoshoot with some amazing performers and costumes. Here he takes the imagery further using Generative Fill in Photoshop:

On an adjacent morning, we made our way to Zabriskie Point for another shoot. Here he shows how to remove wrinkles and enhance fabric using the new tech:

And lastly—no anecdote here—he shows some cool non-photographic applications of artwork extension:

AI: Russell Brown talks Generative Fill

I owe a lot of my career to Adobe’s O.G. creative director—one of the four names on the Photoshop 1.0 splash screen—and seeing his starry-eyed exuberance around generative imaging has been one of my absolute favorite things over the past year. Now that Generative Fill has landed in Photoshop, Russell’s doing Russell things, sharing a bunch of great new tutorials. I’ll start by sharing two:

Check out his foundational Introduction to Generative Fill:

And then peep some tips specifically on getting desired shapes using selections:

Stay tuned for more soon!

LinkedIn Learning tackles Firefly

Check out this new course from longtime Adobe expert Jan Kabili:

Adobe Firefly is an exciting new generative AI imaging tool from Adobe. With Firefly, you can create unique images and text effects by typing text prompts and choosing from a variety of style inputs. In this course, imaging instructor and Adobe trainer Jan Kabili introduces Firefly. She explains what Firefly can offer to your creative workflow, and what makes it unique in the generative AI field. She demonstrates how to generate images from prompts, built-in styles, and reference images, and shares tips for generating one of a kind text effects. Finally, Jan shows you how to use images generated by Firefly to create a unique composite in Photoshop.

A fun conversation: Firefly & the future

Here’s me, talking fast about anything & everything related to Firefly and possibilities around creative tools. Give ‘er a listen if you’re interested (or, perhaps, are just suffering from insomnia 😌):

Come try Generative Fill on the Web, no wait required!

There’s a roughly zero percent chance that you both 1) still find this blog & 2) haven’t already seen all the Generative Fill coverage from our launch yesterday 🎉. I’ll have a lot more to say about that in the future, but for now, you can check out the module right now and get a quick tour here:

https://twitter.com/jnack/status/1660971909327224832?s=20

And here’s a rad little workflow optimization I’m proud we were able to sneak in:

Wes Anderson’s Avatar, + “The World’s First Bootcamp for AI Filmmaking”

The folks at Curious Refuge are back on their bullshit, in the best sense, dropping a new AI-powered Wes Anderson homage:

They’re also launching a new AI filmmaking course:

Welcome to AI Filmmaking from Curious Refuge. This is the world’s first online course for showing you how to use AI to create films. Our training will cover various aspects of the production process from prompt engineering to animation and movement. We’d love for you to join our course and unlock your inner artist. $399 (regularly $499) per artist, pre-sale special.

Skybox scribble: Create 360° immersive views just by drawing

Pretty slick stuff! This very short vid is well worth watching:

With Sketch mode, we’re introducing a new palette of tools and guides that let you start taking control of your skybox generations. Want a castle in the distance? Sketch it out, specify a castle in your prompt and hit generate to watch as your scribbles influence your skybox. If you don’t get what you want the first time, your sketch sticks around to try a new style or prompt from – or switch to Remix mode to give that castle a new look!
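
Blockade hasn't published how Sketch mode works internally, but scribble-conditioned generation exists in the open-source world too. Here's a minimal sketch using a ControlNet scribble model via Hugging Face diffusers (my analogy with hypothetical file names, not Blockade's implementation):

    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    # A white-on-black doodle of your scene (hypothetical file)
    scribble = load_image("castle_scribble.png")

    image = pipe("a castle in the distance, dramatic sky", image=scribble).images[0]
    image.save("skybox_panel.png")

(Note that this produces a flat image; generating a seamless 360° panorama, as Skybox does, requires additional machinery.)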

AI: A fun conversation with Bilawal & me

I had a ball catching up with my TikTok-rockin’ Google 3D veteran friend Bilawal Sidhu on Twitter yesterday. We (okay, mostly I) talked for the better part of 2.5 hours (!), which you can check out here if you’d like. I’ll investigate whether there’s a way to download, transcribe, and summarize our chat. 🤖 In the meantime, I’d love to hear any comments it brings to mind.

https://twitter.com/bilawalsidhu/status/1657417284598607875?s=20
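
If you want to try the download-and-transcribe part yourself, here's a plausible recipe I haven't battle-tested: yt-dlp (which can often grab Twitter Spaces audio) plus OpenAI's open-source Whisper model. The URL below is a placeholder, not the real Space link.

    import subprocess
    import whisper

    # 1. Download and extract the audio (placeholder URL)
    subprocess.run(
        ["yt-dlp", "-x", "--audio-format", "mp3",
         "-o", "chat.%(ext)s", "https://twitter.com/i/spaces/EXAMPLE"],
        check=True,
    )

    # 2. Transcribe locally; larger Whisper models trade speed for accuracy
    model = whisper.load_model("base")
    result = model.transcribe("chat.mp3")
    print(result["text"])  # paste this into your summarizer of choice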

Firefly Faves FTW!

🎉

Note that this is just a first step: favorites are stored locally in your browser, not (yet) synced with the cloud. We want to build from here to enable really easy sharing & discovery of great presets. Stay tuned, and please let us know what you think!

My interview & demos with Deke

Hah, OMG:

  • No one should be this excited to talk to me.
  • Deke kindly & wildly overstates the scope of my role at Adobe.

But hey, what the hell, I’ll take it!

I had a lot of fun chatting with my old friend Deke McClelland, getting to show off a new possible module (stylizing vectors), demoing 3D-to-image, and more. Here, have at it if you must. 😅

  • 00:00 Introducing John Nack
  • 00:33 Adobe Firefly Preview #1: Discrete Asset Generation
  • 02:39 One Day, Masked Assets?
  • 04:01 A.I. Assets from the Libraries Panels?
  • 05:32 ControlNet Is Wild
  • 07:41 An A.I. Beer Commercial, from an Alien’s POV
  • 08:19 patreon.com/dekenow Plug
  • 08:39 Adobe Firefly Preview #2: 3D
  • 10:51 Wrapping Your 3D Objects in A.I. Textures
  • 11:56 Firefly Does Not Limit You to a Command-Line Prompt
  • 13:06 Keeping It Safe for Work
  • 15:17 Adobe Firefly Preview #3: Inpainting
  • 16:52 Filling a Background with A.I.
  • 17:38 Remove Is So Much Better Than Content-Aware Fill
  • 19:41 When Can We Upload Custom SVG Logos to Type Effects?
  • 20:40 Check Out Posts by Bilawal Sidhu
  • 21:41 OTF Character Replacement with Wonder Studio
  • 22:08 NeRF Turns Video into a 3D Object
  • 23:29 Select Anything Meets Video
  • 23:52 John’s Blog: jnack.com
  • 24:16 Adobe Firefly Preview #4: Video
  • 25:10 Could I Synthesize Sound FX? Story Boards?
  • 25:38 The Unexpected Power of Cute 3D Characters
  • 28:07 Wrapping Up with Deke and John

Driving Photoshop via AI

Longtime Adobe vet Christian Cantrell continues to build out his Concept.art startup while extending Photoshop via GPT and generative imaging. I can’t keep up with his daily progress on Twitter (recommendation: just go follow him there!), but check out some quick recent demos:

Telling Photoshop to lens flare all the things:

Adding blurs:

Removing backgrounds:

And finally, tying it all together:

AI brilliance: Wes Anderson meets Star Wars

Now this is exactly the kind of thing I want to help bring into the world—not just because it’s delightful unto itself, but because it shows how AI-enabled tools can make the impossible possible, rather than displacing or diminishing artists’ work. It’s not like in some earlier world a talented team would’ve made this all by hand: 99% likely, it simply wouldn’t exist at all.

The 1% exception is exemplified by SNL’s brilliant Anderson parody from a few years back—all written, scouted, shot, and edited in ~3 days, but all requiring the intensive efforts of an incredibly skilled crew. (Oh, and it too features a terrific Owen Wilson spoof.)

“A brief & beautiful bubble in design history”

I always love a good dive into not just what was designed and how (in this case, materials from the US federal government), but why things were done that way.

This video’s all about the briefly groovy period in which Federal designers let it all hang out. From the NASA Worm, to the EPA’s funkadelic graphics, to, heck, the Department of Labor acting like it just took mushrooms, this was an unquestionably adventurous period. And then it stopped. What went wrong?

The Federal Graphics Improvement Program was an NEA initiative started under Richard Nixon, and its brief reign inspired design conventions, logo revamps, and graphics standards manuals. But it was also just a cash infusion rather than a bureaucratic overhaul. And as a result, we only remember toasty Federal Graphic Design, rather than enjoy its enduring legacy.

Some great demos of Recolor Vectors

Veteran author Deke McClelland has posted a fun 1-minute tour of the new Recolor Vectors module:

And for a deeper dive, check out his 20-minute version:

Meanwhile my color-loving colleague Hep (who also manages the venerable color.adobe.com) joined me for a live stream on Discord last Friday. It’s fun to see her spin on how best to apply various color harmonies and other techniques, including to her own beautiful illustrations:

Sneak peek: Adobe Firefly 3D

I had a ball presenting Firefly during this past week’s Adobe Live session. I showed off the new Recolor Vectors feature, and my teammate Samantha showed how to put it to practical use (along with image generation) as part of a moodboarding exercise. I think you’d dig the whole session, if you’ve got time.

The highlight for me was the chance to give an early preview of the 3D-to-image creation module we have in development:

My demo/narrative starts around the 58:10 mark:

Check out Firefly’s new Recolor Vectors module

Our first new module has just arrived 🎉, so grab your SVGs & make a path (oh my God) to the site.

From the team post:

Vector recoloring in the Firefly beta now enables you to:

  • Enter detailed text descriptions to generate colors and color palette variations in seconds
  • Use a drop-down menu to generate different vector styles that fit your creative needs
  • Gain creative assistance and inspiration by quickly generating color options that bring your visions to life in an instant

As always, we’d love to hear what you think of the tools & what you’d like to see next!

New Lightroom updates enhance noise removal & more

Check ’em out:

From the team blog:

Today, Adobe is unveiling new AI innovations in the Lightroom ecosystem — Lightroom, Lightroom Classic, Lightroom Mobile and Web — that make it easy to edit photos like a pro, so everyone can bring their creative visions to life wherever inspiration strikes. New Adobe Sensei AI-powered features empower intuitive editing and seamless workflows. Expanded adaptive presets and Masking categories for Select People make it easy to adjust fine details from the color of the sky to the texture of a person’s beard with a single click. Additionally, new features including Denoise and Curves in masking help you do more with less to save time and focus on getting the perfect shot.

Adobe announces new Firefly plans for video

Our friends in Digital Video & Audio have lots of interesting irons in the fire!

From the team blog post:

To start, we’re exploring a range of concepts, including:

  • Text to color enhancements: Change color schemes, time of day, or even the seasons in already-recorded videos, instantly altering the mood and setting to evoke a specific tone and feel. With a simple prompt like “Make this scene feel warm and inviting,” the time between imagination and final product can all but disappear.
  • Advanced music and sound effects: Creators can easily generate royalty-free custom sounds and music to reflect a certain feeling or scene for both temporary and final tracks.
  • Stunning fonts, text effects, graphics, and logos: With a few simple words and in a matter of minutes, creators can generate subtitles, logos and title cards and custom contextual animations.
  • Powerful script and B-roll capabilities: Creators can dramatically accelerate pre-production, production and post-production workflows using AI analysis of script to text to automatically create storyboards and previsualizations, as well as recommending b-roll clips for rough or final cuts.
  • Creative assistants and co-pilots: With personalized generative AI-powered “how-tos,” users can master new skills and accelerate processes from initial vision to creation and editing.