For now the functionality is limited to upscaling, but I have to think that they’ll soon turn on the super cool relighting & restyling tech that enables fun like transforming my dog using just different prompts (click to see larger):
Being able to declare what you want, instead of painstakingly setting up parameters for materials, lighting, etc., may prove to be an incredible unlock for visual expressivity, particularly in the generally intimidating realm of 3D. Check out what tyFlow is bringing to the table:
You can see a bit more about how it works in this vid…
Years ago Adobe experimented with a real-time prototype of Photoshop’s Landscape Mixer Neural Filter, and the resulting responsiveness made one feel like a deity: fluidly changing summer to winter & back again. I was reminded of using Google Earth VR, where grabbing & dragging the sun lets you sweep through times of day in seconds.
Nothing came of it, but in the time since then, realtime diffusion rendering (see impressive examples from Krea & others) and image-to-image restyling have opened some amazing new doors. I wish I could attach filters to any layer in Photoshop (text, 3D, shape, image) and have it reinterpreted like this:
New way to navigate latent space. It preserves the underlying image structure and feels a bit like a powerful style-transfer that can be applied to anything. The trick is to… pic.twitter.com/orFBysBpkT
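(The tweet doesn’t spell out the trick, but structure-preserving restyling like this is typically done with image-to-image diffusion, where a low denoising strength keeps the original composition intact. Here’s a minimal sketch using the diffusers library; the model choice, file names, prompt, and strength value are just illustrative assumptions, not anything from the tweet.)

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a Stable Diffusion img2img pipeline (model choice is illustrative).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Any rasterized layer (text, 3D render, shape, photo) could feed in here.
init_image = Image.open("layer_export.png").convert("RGB").resize((768, 512))

# Low strength preserves the underlying structure; the prompt supplies the new style.
result = pipe(
    prompt="ornate stained glass, warm backlight",
    image=init_image,
    strength=0.45,      # near 0 = barely touch the input; 1 = ignore it entirely
    guidance_scale=7.5,
).images[0]
result.save("restyled.png")
```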
Man, who knew that posting the tweet below would get me absolutely dragged by AI haters (“Worst. Dad. Ever.”) who briefly turned me into the Bean Dad of AI art? I should say more about that eye-opening experience, but for now, enjoy (unlike apparently thousands of others!) this innocuous mixing of AI & kid art:
Check out the latest work (downloadable for free here) from longtime Adobe veteran (and former VP of product at Stability AI) Christian Cantrell:
The new version of the Concept Art #photoshop plugin is here! Create your own AI-powered workflows by combining hundreds of different imaging models from @replicate — as well as DALL•E 2 and 3 — without leaving @Photoshop. This is a complete rewrite with tons of new features coming (including local inference).
Deke kindly & wildly overstates the scope of my role at Adobe.
But hey, what the hell, I’ll take it!
I had a lot of fun chatting with my old friend Deke McClelland, getting to show off a new possible module (stylizing vectors), demoing 3D-to-image, and more. Here, have at it if you must. 😅
Check out this integration of sketch-to-image tech—and if you have ideas/requests on how you’d like to see capabilities like these get more deeply integrated into Adobe tools, lay ’em on me!
Also, it’s not in Photoshop, but it reminded me of the Photo Restoration Neural Filter in PS, so check out this use of ControlNet to revive an old family photo:
I’m really excited to see what kinds of images, not to mention videos & textured 3D assets, people will now be able to generate via emerging techniques (depth2img, ControlNet, etc.):
This new capability in Stable Diffusion (think image-to-image, but far more powerful) produces some real magic. Check out what I got with some simple line art:
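(If you want to kick the tires yourself, here’s a minimal sketch of line-art-conditioned generation using ControlNet via the diffusers library: the drawing pins down composition while the prompt supplies content & style. The model IDs, file names, and prompt are illustrative assumptions, not necessarily the exact setup from the tweet.)

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from PIL import Image

# A ControlNet conditioned on scribbles/line art: the drawing constrains
# composition while the text prompt determines content & style.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The scribble model expects white strokes on a black background.
sketch = Image.open("line_art.png").convert("RGB").resize((512, 512))

image = pipe(
    prompt="cozy cottage at dusk, detailed illustration",
    image=sketch,
    num_inference_steps=30,
).images[0]
image.save("from_sketch.png")
```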
My friend Bilawal Sidhu made a 3D scan of his parents’ home (y’know, as one does), and he recently used the new ControlNet functionality in Stable Diffusion to restyle it on the fly. Check out details in this post & in the vid below: