Monthly Archives: June 2023

Fool me thrice? Insta360 GO 3 arrives

Having really enjoyed my Insta360 One X, X2, and X3 cams over the years, I’ve bought—and been burned by—the tiny GO & GO2:

And yet… I still believe that having an unobtrusive, AI-powered “wearable photographer” (as Google Clips sought to be) is a worthy and potentially game-changing north star. (See the second link above for some interesting history & perspective.) So, damn if I’m not looking at the new GO 3 and thinking, “Maybe this time Lucy won’t pull away the football…”

Here’s Casey Neistat’s perspective:

Guiding Photoshop’s Generative Fill through simple brushing

Check out this great little demo from Rob de Winter:


The steps are, he writes (see the rough blending sketch after the list):

  1. Draw a rough outline with the brush tool and use different colors for all parts.
  2. Go to Quick Mask Mode (Q).
  3. Go to Edit > Fill and choose a 70% grey fill. The lower this percentage, the more the end result will resemble your original sketch (i.e., increasingly cartoon-like).
  4. Exit Quick Mask Mode (Q). You now have a 70% opaque selection.
  5. Click Generative Fill and type your prompt. Something like: summer grassland landscape with tree (first example) or river landscape with mountains (second example). You can also keep it really simple, just play with it!
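
If you’re wondering why the grey percentage matters, here’s the mental model I find useful (a rough sketch of my own, not a description of Photoshop’s actual compositing): the partially opaque selection behaves roughly like an alpha value that blends Generative Fill’s output with your original brushwork, so a lower percentage leaves more of your sketch showing through.

```python
# Conceptual illustration only: a partially opaque selection acts roughly like
# an alpha mask blending the generated fill with the original sketch.
# (A mental model, not Photoshop's actual pipeline.)
import numpy as np

def blend_with_selection(sketch: np.ndarray,
                         generated: np.ndarray,
                         selection_opacity: float) -> np.ndarray:
    """Blend a generated image over a sketch using selection opacity (0..1).

    selection_opacity ~ the grey percentage from the Quick Mask fill:
    0.70 -> result is dominated by the generated fill,
    0.30 -> result stays much closer to the original sketch.
    """
    alpha = np.clip(selection_opacity, 0.0, 1.0)
    return alpha * generated + (1.0 - alpha) * sketch

# Toy example with random "images" (H x W x 3 float arrays in [0, 1]):
rng = np.random.default_rng(0)
sketch = rng.random((64, 64, 3))
generated = rng.random((64, 64, 3))
mostly_generated = blend_with_selection(sketch, generated, 0.70)
mostly_sketch = blend_with_selection(sketch, generated, 0.30)
```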

Google uses generative imaging for virtual try-on

In my time at Google, we tried and failed a lot to make virtual try-on happen using AR. It’s extremely hard to…

  • measure bodies (to make buying decisions based on fit)
  • render virtual clothing accurately (placing virtual clothing over real clothing, or getting people to disrobe first, which is even harder; simulating materials in real time)
  • get a sizable corpus of 3D assets (in a high-volume, low-margin industry)

Outside of a few limited pockets (trying on makeup, glasses, and shoes—all for style, not for fit), I haven’t seen anyone (Amazon, Snap, etc.) crack the code here. Researcher Ira Kemelmacher-Shlizerman (who last I heard was working on virtual mirrors, possibly leveraging Google’s Stargate tech) acknowledges this:

Current techniques like geometric warping can cut-and-paste and then deform a clothing image to fit a silhouette. Even so, the final images never quite hit the mark: Clothes don’t realistically adapt to the body, and they have visual defects like misplaced folds that make garments look misshapen and unnatural.

So, it’s interesting to see Google trying again (“Try on clothes with generative AI”):

This week we introduced an AI-powered virtual try-on feature that uses the Google Shopping Graph to show you how clothing will look on a diverse set of real models.

Our new guided refinements can help U.S. shoppers fine-tune products until you find the perfect piece. Thanks to machine learning and new visual matching algorithms, you can refine using inputs like color, style and pattern.

They’ve posted a technical overview and a link to their project site:

Inspired by Imagen, we decided to tackle VTO using diffusion — but with a twist. Instead of using text as input during diffusion, we use a pair of images: one of a garment and another of a person. Each image is sent to its own neural network (a U-net) and shares information with each other in a process called “cross-attention” to generate the output: a photorealistic image of the person wearing the garment. This combination of image-based diffusion and cross-attention make up our new AI model.

They note that “We don’t promise fit and for now focus only on visualization of the try on. Finally, this work focused on upper body clothing.”
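
To make that a bit more concrete, here’s a minimal sketch of the two-branch, cross-attention idea in PyTorch. The tiny convolutional encoders, layer sizes, and names below are placeholders of my own invention (the real system uses full U-Nets inside a diffusion/denoising loop, which I’ve omitted); the point is just to show queries coming from the person branch while keys and values come from the garment branch, so garment detail flows into the person representation.

```python
# Minimal sketch of "two networks sharing information via cross-attention."
# Everything here (layer sizes, names, the tiny conv encoders) is a placeholder
# for illustration, not Google's actual try-on model.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for a U-Net branch: downsamples an image into feature tokens."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.net(x)                      # (B, C, H/4, W/4)
        return feats.flatten(2).transpose(1, 2)  # (B, H*W/16, C) token sequence

class CrossAttentionTryOn(nn.Module):
    """Person tokens attend to garment tokens (queries from one branch,
    keys/values from the other), then a small head predicts an RGB image."""
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.person_enc = TinyEncoder(channels)
        self.garment_enc = TinyEncoder(channels)
        self.cross_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.to_rgb = nn.Linear(channels, 3)

    def forward(self, person: torch.Tensor, garment: torch.Tensor) -> torch.Tensor:
        p = self.person_enc(person)    # (B, N, C)
        g = self.garment_enc(garment)  # (B, M, C)
        fused, _ = self.cross_attn(query=p, key=g, value=g)  # person attends to garment
        out = self.to_rgb(fused)       # (B, N, 3) per-token colors
        b, n, _ = out.shape
        side = int(n ** 0.5)
        return out.transpose(1, 2).reshape(b, 3, side, side)  # low-res output image

# Usage: two 3x128x128 images in, one 3x32x32 image out (128 / 4 = 32).
model = CrossAttentionTryOn()
person = torch.randn(1, 3, 128, 128)
garment = torch.randn(1, 3, 128, 128)
print(model(person, garment).shape)  # torch.Size([1, 3, 32, 32])
```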

It’s a bit hard to find exactly where one can try out the experience. They write:

Starting today, U.S. shoppers can virtually try on women’s tops from brands across Google, including Anthropologie, Everlane, H&M and LOFT. Just tap products with the “Try On” badge on Search and select the model that resonates most with you.

“Don’t give up on the real world”: A great new campaign from Nikon

I’m really enjoying this new campaign from Nikon Peru:

“This obsession with the artificial is making us forget that our world is full of amazing natural places that are often stranger than fiction.

“We created a campaign with real unbelievable natural images taken with our cameras, with keywords like those used with Artificial Intelligence.”

Check out the resulting 2-minute piece:

And here are some of the stills, courtesy of PetaPixel:

New experimental filmmaking from Paul Trillo

Paul is continuing to explore what’s possible by generating short clips using Runway’s Gen-2 text-to-video model. Check out the melancholy existential musings of “Thank You For Not Answering”:

And then, to entirely clear your mental palate, there’s the just deeply insane TRUCX!

Adobe will offer Firefly indemnification

Per Reuters:

Adobe Inc. said on Thursday it will offer Firefly, its artificial intelligence tool for generating images, to its large business customers, with financial indemnity for copyright challenges involving content made with the tools.

In an effort to give those customers confidence, Adobe said it will offer indemnification for images created with the service, though the company did not give financial or legal details of how the program will work.

“We financially are standing behind all of the content that is produced by Firefly for use either internally or externally by our customers,” Ashley Still, senior vice president of digital media at Adobe, told Reuters.

Demo: Using Firefly for poster creation

My teammates Danielle Morimoto & Tomasz Opasinski are accomplished artists who recently offered a deep dive on creating serious, ambitious work (not just one-and-done prompt generations) using Adobe Firefly. Check it out:

Explore the practical benefits of using Firefly in real-world projects with Danielle & Tomasz. Today, they’ll walk through the poster design process in Photoshop using prompts generated in Firefly. Tune into the live stream and join them as they discuss how presenting more substantial visuals to clients goes beyond simple sketches, and how this creative process could evolve in the future. Get ready to unlock new possibilities of personalization in your work, reinvent yourself as an artist or designer, and achieve what was once unimaginable. Don’t miss this opportunity to level up your creative journey and participate in this inspiring session!