All posts by jnack

Heinz AI Ketchup

Life’s like a mayonnaise soda…
What good is seeing eye chocolate…

Lou Reed

The marketers at Heinz had a little fun noticing that an AI image-making app (DALL•E, I’m guessing) tended to interpret requests for “ketchup” in the style of Heinz’s iconic bottle. Check it out:

ArtStation, Kickstarter, and others share their AI art policies

The whole community of creators, including toolmakers, continues to feel its way forward in the fast-moving world of AI-enabled image generation. For reference, here are some of the statements I’ve been seeing:

  • ArtStation has posted guidance on “Use of AI Software on ArtStation.”
    • Projects tagged using “NoAI” will automatically be assigned an HTML “NoAI” meta tag.
    • Projects won’t be tagged “NoAI” by default, as the site wants creators to choose whether or not their work is eligible for use in training. (See the sketch after this list for what checking for such a tag might look like.)
  • Kickstarter has shared “Our Current Thinking on the Use of AI-Generated Image Software and AI Art.”
    • “Kickstarter must, and will always be, on the side of creative work and the humans behind that work. We’re here to help creative work thrive.”
    • Key questions they’ll ask include “Is a project copying or mimicking an artist’s work?” and “Does a project exploit a particular community or put anyone at risk of harm?”
  • From 3dtotal Publishing:
    • “3dtotal has four fundamental goals. One of them is to support and help the artistic community, so we cannot support AI art tools as we feel they hurt this community.”
  • Clip Studio Paint will no longer implement an image generator function:
    • “We received a lot of feedback from the community and will no longer implement the image generator palette.”
    • They “fear that this will make Clip Studio Paint artwork synonymous with AI-generated work” and are choosing to prioritize other features.
  • The Society of Illustrators has shared their thoughts:
    • “We oppose the commercial use of Artificially manufactured images and will not allow AI into our annual competitions at all levels.”
    • “AI was trained using copyrighted images. We will oppose any attempts to weaken copyright protections, as that is the cornerstone of the illustration community.”
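
For those curious about the mechanics, ArtStation’s approach amounts to a machine-readable opt-out signal embedded in each project page. Here’s a minimal sketch of checking a page for such a tag; the exact tag name and attributes below are my assumption for illustration (consult ArtStation’s own documentation for the real markup):

```python
# Minimal sketch (not ArtStation's actual markup): check whether a page
# declares a hypothetical "noai" directive in a robots-style meta tag.
import requests
from bs4 import BeautifulSoup

def page_opts_out_of_ai_training(url: str) -> bool:
    """Return True if the page carries a 'noai'-style meta tag.
    The tag name and content checked here are assumptions for illustration."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all("meta", attrs={"name": "robots"}):
        content = (tag.get("content") or "").lower()
        if "noai" in content:
            return True
    return False

# Hypothetical usage:
# print(page_opts_out_of_ai_training("https://www.artstation.com/artwork/example"))
```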

More NeRF magic: From Michelangelo to NYC

This stuff—creating 3D neural models from simple video captures—continues to blow my mind. First up is Paul Trillo visiting the David:

Then here’s AJ from the NYT doing a neat day-to-night transition:

And lastly, Hugues Bruyère used a 360º camera to capture this scene, then animate it in post (see thread for interesting details):

https://twitter.com/smallfly/status/1604609303255605251?s=20&t=jdSW1NC_n54YTxsnkkFPJQ

“The Book of Leaves”

Obsessive (in a good way) photographer & animator Brett Foxwell has gathered thousands of individual leaves & sequenced them into a mesmerizing animation:

This is the complete leaf sequence used in the accompanying short film LeafPresser. While collecting leaves, I conceived that the leaf shape of every single plant type I could find would fit somewhere into a continuous animated sequence of leaves if that sequence were expansive enough. If I didn’t have the perfect shape, it meant I just had to collect more leaves.

[Via]

A cool, quick demo of Midjourney->3D

Numerous apps are promising pure text-to-geometry synthesis, as Luma AI shows here:

On a more immediately applicable front, though, artists are finding ways to create 3D (or at least “two-and-a-half-D”) imagery right from the output of apps like Midjourney. Here’s a quick demo using Blender:
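
That demo lives inside Blender, but the underlying 2.5D idea is straightforward: estimate a depth map from the single generated image, then use it to displace or parallax-shift the pixels. Here’s a minimal Python sketch using the publicly available MiDaS depth model via torch.hub. The file names are placeholders, and this illustrates the general technique rather than the specific workflow shown in the demo:

```python
# Sketch: estimate a depth map for a single Midjourney-style image with MiDaS.
# The resulting map can drive displacement/parallax in a tool like Blender.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")   # lightweight depth model
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

img = cv2.cvtColor(cv2.imread("midjourney_frame.png"), cv2.COLOR_BGR2RGB)  # placeholder path
with torch.no_grad():
    prediction = midas(transform(img))
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()

# Normalize to 0-255 and save; use as a displacement map on a subdivided plane.
depth_map = cv2.normalize(depth.cpu().numpy(), None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("depth_map.png", depth_map.astype("uint8"))
```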

In a semi-related vein, I used CapCut to animate a tongue-in-cheek self portrait from my friend Bilawal:

https://twitter.com/jnack/status/1599476677918478337?s=20&t=vu_Q7Wme3Q3Ueqp1WaGUpA

[Via Shi Yan]

Helping artists control whether AI trains on their work

I believe strongly that creative tools must honor the wishes & rights of creative people. Hopefully that sounds thuddingly obvious, but it’s been less obvious how to get to a better state than the one we now inhabit, where a lot of folks are (quite reasonably, IMHO) up in arms about AI models having been trained on their work, without their consent. People broadly agree that we need solutions, but getting to them—especially via big companies—hasn’t been quick.

Thus it’s great to see folks like Mat Dryhurst & Holly Herndon driving things forward, working with Stability.ai and others to define opt-out/-in tools & get buy-in from model trainers. Check out the news:

https://twitter.com/spawning_/status/1603126330261897217

Here’s a concise explainer vid from Mat:

Adobe celebrates its 40th anniversary

It’s wild to look back & realize that I’ve spent roughly a third of my life at this special place, making amazing friends & even meeting my future wife (and future coworker!) on a customer visit. I feel like I should have more profundity to offer, and maybe I will soon, but at the moment I just feel grateful—including for the banger of a party the company threw last week in SF.

Here’s a fun little homage to history, made just now using Photoshop 1.0. (I still kinda wish I hadn’t been talked into donating my boxed copy of 1.0 to the Smithsonian! The ‘Dobe giveth…)

PDF to cloud to homegrown tech titan: Adobe celebrates 40th anniversary

DALL•E/Stable Diffusion Photoshop panel gains new features

Our friend Christian Cantrell (20-year Adobe vet, now VP of Product at Stability.ai) continues his invaluable work to plug the world of generative imaging directly into Photoshop. Check out the latest, available for free here:

More NeRF magic: Dolly zoom & beyond

It’s insane to me how much these emerging tools democratize storytelling idioms—and then take them far beyond previous limits. Recently Karen X. Cheng & co. created some wild “drone” footage simply by capturing handheld footage with a smartphone:

Now they’re creating an amazing dolly zoom effect, again using just a phone. (Click through to the thread if you’d like details on how the footage was (very simply) captured.)

Meanwhile, here’s a deeper dive on NeRF and how it’s different from “traditional” photogrammetry (e.g. in capturing reflective surfaces):

Disney demos new aging/de-aging tech

Check out the latest magic, as described by Gizmodo:

To make an age-altering AI tool that was ready for the demands of Hollywood and flexible enough to work on moving footage or shots where an actor isn’t always looking directly at the camera, Disney’s researchers, as detailed in a recently published paper, first created a database of thousands of randomly generated synthetic faces. Existing machine learning aging tools were then used to age and de-age these thousands of non-existent test subjects, and those results were then used to train a new neural network called FRAN (face re-aging network).

When FRAN is fed an input headshot, instead of generating an altered headshot, it predicts what parts of the face would be altered by age, such as the addition or removal of wrinkles, and those results are then layered over the original face as an extra channel of added visual information. This approach accurately preserves the performer’s appearance and identity, even when their head is moving, when their face is looking around, or when the lighting conditions in a shot change over time. It also allows the AI generated changes to be adjusted and tweaked by an artist, which is an important part of VFX work: making the alterations perfectly blend back into a shot so the changes are invisible to an audience.
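
The “predict a delta, then layer it over the original” idea is the interesting part, and it’s easy to sketch in code. The toy network and tensor shapes below are stand-ins of my own; Disney’s FRAN model isn’t public, so this is purely an illustration of the compositing approach the paper describes:

```python
# Illustrative sketch of FRAN-style re-aging: a network predicts per-pixel
# deltas (plus a blending mask) rather than a whole new face, and those deltas
# are composited over the original frame. The network and shapes are stand-ins.
import torch
import torch.nn as nn

class ToyReAgingNet(nn.Module):
    """Stand-in network: input RGB frame + source/target ages,
    output a 3-channel delta image and a 1-channel blending mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4, 3, padding=1),   # 3 delta channels + 1 mask channel
        )

    def forward(self, frame, src_age, dst_age):
        b, _, h, w = frame.shape
        # Broadcast the normalized ages as two extra input channels.
        ages = torch.stack([src_age, dst_age], dim=1).view(b, 2, 1, 1).expand(b, 2, h, w)
        out = self.net(torch.cat([frame, ages], dim=1))
        return out[:, :3], torch.sigmoid(out[:, 3:4])   # delta, mask

frame = torch.rand(1, 3, 256, 256)                      # placeholder video frame
delta, mask = ToyReAgingNet()(frame, torch.tensor([0.3]), torch.tensor([0.7]))

# Only the predicted changes are applied; untouched pixels keep the original
# performance and identity, and an artist can still tweak delta/mask per shot.
aged_frame = (frame + mask * delta).clamp(0, 1)
```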

AI-made avatars for LinkedIn, Tinder, and more

As I say, another day, another specialized application of algorithmic fine-tuning. Per Vice:

For $19, a service called PhotoAI will use 12-20 of your mediocre, poorly-lit selfies to generate a batch of fake photos specially tailored to the style or platform of your choosing. The results speak to an AI trend that seems to regularly jump the shark: A “LinkedIn” package will generate photos of you wearing a suit or business attire…

…while the “Tinder” setting promises to make you “the best you’ve ever looked”—which apparently means making you into an algorithmically beefed-up dudebro with sunglasses. 

Meanwhile, the quality of generated faces continues to improve at a blistering pace:

Crowdsourced AI Snoop Doggs (is a real headline you can now read)

The Doggfather recently shared a picture of himself (rendered presumably via some Stable Diffusion/DreamBooth personalization instance)…

…thus inducing fans to reply with their own variations (click tweet above to see the thread). Among the many fun Snoop Doggs (or is it Snoops Dogg?), I’m partial to Cyberpunk…

…and Yodogg:

Some amazing AI->parallax animations

Great work from Guy Parsons, combining Midjourney with CapCut:

And from the replies, here’s another fun set:

Check out frame interpolation from Runway

I meant to share this one last month, but there’s just no keeping up with the pace of progress!

My initial results are on the uncanny side, but more skillful practitioners like Paul Trillo have been putting the tech to impressive use:

Happy Thanksgiving! Pass the tasty inpainting.

Among the many, many things for which I can give thanks this year, I want to express my still-gobsmacked appreciation of the academic & developer communities that have brought us this year’s revolution in generative imaging. One of those developers is our friend & Adobe veteran Christian Cantrell, and he continues to integrate new tech from his new company (Stability AI) into Photoshop at a breakneck pace. Here’s the latest:

Here he provides a quick comparison between results from the previous Stable Diffusion inpainting model (top) & the latest one:

In any event, wherever you are & however you celebrate (or don’t), I hope you’re well. Thanks for reading, and I wish all the best for the coming year!

Dalí meets DALL•E! 👨🏻‍🎨🤖

Among the great pleasures of this year’s revolutions in AI imaging has been the chance to discover & connect with myriad amazing artists & technologists. I’ve admired the work of Nathan Shipley, so I was delighted to connect him with my self-described “grand-mentee” Joanne Jang, PM for DALL•E. Nathan & his team collaborated with the Dalí Museum & OpenAI to launch Dream Tapestry, a collaborative realtime art-making experience.

The Dream Tapestry allows visitors to create original, realistic Dream Paintings from a text description. Then, it stitches a visitor’s Dream Painting together with five other visitors’ paintings, filling in the spaces between them to generate one collective Dream Tapestry. The result is an ever-growing series of entirely original Dream Tapestries, exhibited on the walls of the museum.
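
A rough way to picture the “stitching” step: lay the six individual paintings out on a larger canvas with gaps between them, and build a mask of those gaps for a generative inpainting pass to fill. The layout, sizes, and file names below are assumptions for illustration, not the exhibit’s actual pipeline:

```python
# Sketch: place six paintings on a larger canvas with gaps, and build a mask
# marking the gaps for an inpainting model to fill. Sizes/paths are made up.
from PIL import Image

TILE, GAP = 512, 128
paintings = [Image.open(f"dream_{i}.png").resize((TILE, TILE)) for i in range(6)]

cols, rows = 3, 2
canvas_w = cols * TILE + (cols + 1) * GAP
canvas_h = rows * TILE + (rows + 1) * GAP
canvas = Image.new("RGB", (canvas_w, canvas_h), "black")
mask = Image.new("L", (canvas_w, canvas_h), 255)        # white = "to be inpainted"

for i, painting in enumerate(paintings):
    x = GAP + (i % cols) * (TILE + GAP)
    y = GAP + (i // cols) * (TILE + GAP)
    canvas.paste(painting, (x, y))
    mask.paste(0, (x, y, x + TILE, y + TILE))            # protect the originals

canvas.save("tapestry_base.png")   # composite handed to the inpainting step
mask.save("tapestry_mask.png")     # regions the model should fill
```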

Check it out:

MyHeritage introduces “AI Time Machine”

Another day, another special-purpose variant of AI image generation.

A couple of years ago, MyHeritage struck a chord with the world via Deep Nostalgia, an online app that could animate the faces of one’s long-lost ancestors. In reality it could animate just about any face in a photo, but I give them tons of credit for framing the tech in a really emotionally resonant way. It offered not a random capability, but rather a magical window into one’s roots.

Now the company is licensing tech from Astria, which itself builds on Stable Diffusion & Google Research’s DreamBooth paper. Check it out:

Interestingly (perhaps only to me), it’s been hard for MyHeritage to sustain the kind of buzz generated by Deep Nostalgia. They later introduced the much more ambitious DeepStory, which lets you literally put words in your ancestors’ mouths. That seems not to have moved the needle on overall awareness, at least in the way that the earlier offering did. Let’s see how portrait generation fares.

Neural JNack has entered the chat… 🤖

Last year my friend Bilawal Singh Sidhu, a PM driving 3D experiences for Google Maps/Earth, created an amazing 3D render (also available in galactic core form) of me sitting atop the Trona Pinnacles. At that time he used “traditional” photogrammetry techniques (kind of a funny thing to say about an emerging field that remains new to the world), and this year he tried processing the same footage (composed of a couple of simple orbits from my drone) using new Neural Radiance Field (“NeRF”) tech:

For comparison, here’s the 3D model generated via the photogrammetry approach:

The file is big enough that I’ve had some trouble loading it on my iPhone. If that affects you as well, check out this quick screen recording:

Feedback, please: AI-powered ideation & collaboration?

A new (to me, at least) group called Kive has just introduced AI Canvas.

Here’s a quick demo:

To my eye it’s similar to Prompt.ist, introduced a couple of weeks ago by Facet:

https://twitter.com/josephreisinger/status/1586042022401409024

I’m curious: Have you checked out these tools, and do you intend to put them to use in your creative processes? I have some thoughts that I can share soon, but in the meantime it’d be great to hear yours.

PetPortrait.ai promises bespoke images of animals

We’re at just the start of what I expect to be an explosion of hyper-specific offerings powered by AI.

For $24, PetPortrait.ai offers “40 high resolution, beautiful, one-of-a-kind portraits of your pets in a variety of styles.” They say it takes 4-6 hours and requires the following input:

  • ~10 portrait photos of their face
  • ~5 photos from different angles of their head and chest
  • ~5 full-body photos

It’ll be interesting to see what kind of traction this gets. The service Turn Me Royal takes a more human-made approach in a similar vein, and we delighted our son by commissioning this doge-as-Venetian-doge portrait (via an artist on Etsy) a couple of years ago:

Podcast: “Why Figma is selling to Adobe for $20 billion, with CEO Dylan Field”

I had the chance to grab breakfast with Figma founder & CEO Dylan Field a couple of weeks ago, and I found him to be incredibly modest and down to earth. He reminded me of certain fellow Brown CS majors—the brilliant & gracious founding team of Adobe After Effects. I can’t wait for them all to meet someday soon.

In any case, I really enjoyed the hour-long interview Dylan did with Nilay Patel of The Verge. Here’s hoping that the Adobe deal goes through as planned & that we get to do great things together!

Midjourney can produce stunning type

At Adobe MAX a couple of weeks ago, the company offered a sneak peek of editable type in Adobe Express being rendered via a generative model:

https://twitter.com/jnack/status/1582818166698217472?s=20&t=yI2t5EpbhqVNWb7Ws9DWxQ

That sort of approach could pair amazingly with this sort of Midjourney output:

I’m not working on such efforts & am not making an explicit link between the two—but broadly speaking, I find the intersection of such primitives/techniques to be really promising.

Adobe 3D Design is looking for 2023 interns

These sound like great gigs!

The 3D and Immersive Design Team at Adobe is looking for a design intern who will help envision and build the future of Adobe’s 3D and MR creative tools.

With the Adobe Substance 3D Collection and Adobe Aero, we’re making big moves in 3D, but it is still early days! This is a huge opportunity space to shape the future of 3D and AR at Adobe. We believe that tools shape our world, and by building the tools that power 3D creativity we can have an outsized impact on our world.

Runway “Infinite Canvas” enables outpainting

I’ve tried it & it’s pretty slick. These guys are cooking with gas! (Also, how utterly insane would this have been to see even six months ago?! What a year, what a world.)

“Mundane Halloween” win: “Person whose skeleton is being estimated by machine learning” 

Happy day to all who celebrate. 😌

The whole thread is hilarious & well worth a look:

A fistful of generative imaging news

Man, I can’t keep up with this stuff—and that’s a great problem to have. Here are some interesting finds from just the last few days:

Adobe “Made In The Shade” sneak is 😎

OMG—interactive 3D shadow casting in 2D photos FTW! 🔥

In this sneak, we re-imagine what image editing would look like if we used Adobe Sensei-powered technologies to understand the 3D space of a scene – the geometry of a road and the car on the road, and the trees surrounding, the lighting coming from the sun and the sky, the interactions between all these objects leading to occlusions and shadows – from a single 2D photograph.

Check out AI backdrop generation, right in the Photoshop beta today

One of the sleeper features that debuted at Adobe MAX is the new Create Background, found under Neural Filters. (Note that you need to be running the current public beta release of Photoshop, available via the Creative Cloud app—y’know, that little “Cc” icon dealio you ignore in your menu bar. 🙃)

As this quick vid demonstrates, the filter not only generates backgrounds based on text, but also links to a Behance gallery containing images and popular prompts. You can use these visuals as inspiration, then use the prompts to produce artwork within the plugin:

https://youtu.be/oMVfxyQbO5c?t=74

Here’s the Behance browser:

New Lightroom features: A 1-minute tour, plus a glimpse of the future

The Lightroom team has rolled out a ton of new functionality, from smarter selections to adaptive presets to performance improvements. You should read up on the whole shebang—but for a top-level look, spend a minute with Ben Warde:

And looking a bit more to the future, here’s a glimpse at how generative imaging (in the style of DALL•E, Stable Diffusion, et al) might come into LR. Feedback & ideas welcome!