Welcome to the rabbit hole, my friends. 🙃
What if instead of pushing pixels, you could simply tell your tools what changes you’d like to see? (Cue Kramer voice: “Why don’t you just tell me the movie…??”) This new StyleCLIP technology (code) builds on NVIDIA’s StyleGAN foundation to enable image editing simply by typing in descriptive text. Check out some examples (“before” images in the top row; “after” below, along with the editing terms).
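The core idea, as I understand it, is to nudge a StyleGAN latent code so that CLIP judges the generated image to better match a text prompt, while a penalty keeps the edit close to the original picture. Here’s a toy sketch of that optimization loop — note that `generator` and `clip_similarity` below are hypothetical stand-ins I wrote so the loop runs without any pretrained models; the real system uses an actual StyleGAN generator and CLIP encoder:

```python
# Toy sketch of StyleCLIP-style latent optimization (illustration only).
# The real method steers a pretrained StyleGAN latent code w so that CLIP's
# similarity between the generated image and a text prompt increases.
import numpy as np

rng = np.random.default_rng(0)

def generator(w):
    # Stand-in for StyleGAN: maps a latent code to an "image embedding".
    return np.tanh(w)

def clip_similarity(img_emb, text_emb):
    # Stand-in for CLIP: cosine similarity between image and text embeddings.
    return img_emb @ text_emb / (np.linalg.norm(img_emb) * np.linalg.norm(text_emb))

def edit_latent(w0, text_emb, steps=200, lr=0.05, lam=0.01):
    # Gradient ascent on similarity, with an L2 penalty keeping w near w0
    # so the edit stays close to the original image.
    w = w0.copy()
    eps = 1e-4
    for _ in range(steps):
        base = clip_similarity(generator(w), text_emb) - lam * np.sum((w - w0) ** 2)
        grad = np.zeros_like(w)
        for i in range(len(w)):  # finite-difference gradient (fine at toy scale)
            wp = w.copy()
            wp[i] += eps
            obj = clip_similarity(generator(wp), text_emb) - lam * np.sum((wp - w0) ** 2)
            grad[i] = (obj - base) / eps
        w += lr * grad
    return w

w0 = rng.standard_normal(8)           # latent code of the "original" image
target = rng.standard_normal(8)       # pretend CLIP embedding of, say, "surprised"
w_edit = edit_latent(w0, target)
before = clip_similarity(generator(w0), target)
after = clip_similarity(generator(w_edit), target)
print(before, after)  # the similarity should rise after optimization
```

The interesting design choice is that regularization term: without it, the optimizer would happily wander off to any face CLIP likes, rather than editing *your* face.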

Here’s a demo of editing human & animal faces, and even of transforming cars:
By no means have I been around here long enough (five whole days!) to grok everything that’s going on here, but as I come up to speed, I’ll do my best to share what I’m learning. Meanwhile I’d love to hear your thoughts on how we might thoughtfully bring techniques like this to life.
It looks like you are working on Adobe Max sneaks full time, which is awesome 🙂
As a person who lives inside Creative Cloud’s video and image apps on a daily basis, I am so pleased you are back at Adobe. Have followed you for as long as you have been blogging.
that’s bananas
This of course makes me want to type in “bananas” just to see what happens.
I’m old enough to remember “pictures don’t lie.”
I’m very conflicted. On one hand, this is incredible. I can see all kinds of uses for it. On the other…it’s flat out scary.
The use of this stuff is already creating fake politicians.
Good technology, bad idea.
My mouth is agape. I can’t tell whether I want to play with it or ban it.
Wow. Speechless.
Please don’t use politicians’ and celebrities’ pictures for this stuff.
There’s more than enough of them already, and billions of more interesting faces in the world.