Happy day to all who celebrate. 😌
The whole thread is hilarious & well worth a look:
Man, I can’t keep up with this stuff—and that’s a great problem to have. Here are some interesting finds from just the last few days:
You can take Maverick out of the Tomcat, but you can’t take the tom cat out of Maverick. 😸
[Via CAPT Chris Peppel, USN]
98% unrelated, but possibly amusing:
I always enjoy this kind of quick peek behind the scenes:
OMG—interactive 3D shadow casting in 2D photos FTW! 🔥
In this sneak, we re-imagine what image editing would look like if we used Adobe Sensei-powered technologies to understand the 3D space of a scene – the geometry of a road and the car on the road, and the trees surrounding, the lighting coming from the sun and the sky, the interactions between all these objects leading to occlusions and shadows – from a single 2D photograph.
One of the sleeper features that debuted at Adobe MAX is the new Create Background, found under Neural Filters. (Note that you need to be running the current public beta release of Photoshop, available via the Creative Cloud app—y’know, that little “Cc” icon dealio you ignore in your menu bar. 🙃)
As this quick vid demonstrates, the filter not only generates backgrounds based on text; it also links to a Behance gallery of images and popular prompts. You can use these visuals as inspiration, then reuse the prompts to produce artwork right in the filter:
Here’s the Behance browser:
I’m really excited to learn more about this development, which I’ve been eagerly awaiting. More control + more speed will make generative imaging truly, broadly useful. I’d like to understand how it compares to techniques like prompt editing.
Here’s a nice three-minute overview:
Motion Library allows you to easily add premade animated motions like fighting, dancing, and running to your characters. Choose from a collection of over 350 motions and watch your puppets come to life in new and exciting ways!
The Lightroom team has rolled out a ton of new functionality, from smarter selections to adaptive presets to performance improvements. You should read up on the whole shebang—but for a top-level look, spend a minute with Ben Warde:
And looking a bit more to the future, here’s a glimpse at how generative imaging (in the style of DALL•E, Stable Diffusion, et al) might come into LR. Feedback & ideas welcome!
Check out my teammates’ new explorations, demoed here on Adobe Express:
Can’t wait for generative AI + editable text in Adobe tools! 🤖🔥 pic.twitter.com/2kZi4rYM21
— John Nack (@jnack) October 19, 2022
Per the blog post:
Generative AI incorporated into Adobe Express will help less experienced creators achieve their unique goals. Rather than having to find a pre-made template to start a project with, Express users could generate a template through a prompt, and use Generative AI to add an object to the scene, or create a unique font based on their description. But they still will have full control — they can use all of the Adobe Express tools for editing images, changing colors, and adding fonts to create the flyer, poster, or social media post they imagine.
LatentSpace.dev promises to turn your images into text prompts that can be used in Stable Diffusion to create new artwork. Watch it work:
It interpreted a pic of my old whip as being, among other things, a “5. 1975 pontiac firebird shooting brake wagon estate.” Not entirely bad! 😌
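For the curious, here's a rough sketch of how that image → prompt → image round trip can work using open-source pieces (the clip-interrogator and diffusers libraries). LatentSpace.dev's actual pipeline isn't public, and the filenames and model choices below are just placeholders:

```python
# Hypothetical sketch of an image -> prompt -> new artwork loop.
# LatentSpace.dev's internals aren't public; this approximates the idea
# with the open-source clip-interrogator and diffusers libraries.
import torch
from PIL import Image
from clip_interrogator import Config, Interrogator
from diffusers import StableDiffusionPipeline

# 1) Reverse-engineer a text prompt from an existing photo.
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
prompt = ci.interrogate(Image.open("old_whip.jpg").convert("RGB"))  # placeholder filename
print(prompt)

# 2) Feed that prompt back into Stable Diffusion to generate new artwork.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe(prompt).images[0].save("reimagined_whip.png")
```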
It seems almost too good to be true, but Google researchers & their university collaborators have unveiled a way to edit images using just text:
In this paper we demonstrate, for the very first time, the ability to apply complex (e.g., non-rigid) text-guided semantic edits to a single real image. For example, we can change the posture and composition of one or multiple objects inside an image, while preserving its original characteristics. Our method can make a standing dog sit down or jump, cause a bird to spread its wings, etc. — each within its single high-resolution natural image provided by the user.
Contrary to previous work, our proposed method requires only a single input image and a target text (the desired edit). It operates on real images, and does not require any additional inputs (such as image masks or additional views of the object).
I can’t wait to see it in action!
Back at the start of my DALL•E journey, I wished aloud for a diffusion-powered mobile app:
Now, thanks to the openness of Stable Diffusion & WebAR, creators are bringing that vision closer to reality:
I can’t wait to see what’s next!
Easy placement/movement of 3D primitives -> realistic/illustrative rendering has long struck me as extremely promising. Using tech like StyleGAN to render from 3D can produce interesting results, but it’s been difficult to bring the level of quality & consistency up to what Adobe users demand.
Now with Stable Diffusion (and, one hopes, other diffusion models in the future) attached to Blender (and, one hopes, other object manipulation tools), the vision is getting closer to reality:
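If you want to poke at the underlying idea yourself, here's a minimal sketch (not the Blender add-on's actual code) of the loop: block in a scene with primitives, save a rough viewport render, then let Stable Diffusion's img2img mode, via the diffusers library, carry it the rest of the way. The filenames, prompt, and strength value are placeholders:

```python
# Sketch of the "3D primitives in, stylized render out" loop described above.
# Assumes you've saved a rough Blender viewport render to disk; model name,
# prompt, and strength are illustrative, not any add-on's actual settings.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

rough = Image.open("blender_viewport.png").convert("RGB").resize((768, 512))
result = pipe(
    prompt="moody product photo of a ceramic vase on a wooden table, studio lighting",
    image=rough,
    strength=0.6,        # how far the model may drift from the blocked-in geometry
    guidance_scale=7.5,
).images[0]
result.save("stylized_render.png")
```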
Check out a fun historical find from Adobe evangelist Paul Trani:
The video below shipped on VHS with the very first version of Adobe Illustrator. Adobe CEO & Illustrator developer John Warnock demonstrated the new product in a single one-hour take. He was certainly qualified, being one of the four developers whose names were listed on the splash screen!
How lucky it was for the world that a brilliant graphics engineer (John) married a graphic designer (Marva Warnock) who could provide constant input as this groundbreaking app took shape.
If you’re interested in more of the app’s rich history, check out The Adobe Illustrator Story:
The power & immersiveness of rendering 3D from images is growing at an extraordinary rate. NeRF Studio promises to make creation much more approachable:
The kind of results one can generate from just a series of photos or video frames is truly bonkers:
Here’s a tutorial on how to use it:
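And for a taste of the workflow the tutorial walks through, here's a rough sketch that drives the nerfstudio command-line tools from Python. The command names mirror the project's docs as I recall them (ns-process-data, ns-train), but flags change between versions, so treat the specifics as assumptions and check `ns-train --help`:

```python
# Rough sketch of a typical nerfstudio capture-to-training workflow,
# driven via subprocess. Paths are placeholders; flags may vary by version.
import subprocess

# 1) Turn a phone video into posed images (runs COLMAP under the hood).
subprocess.run(
    ["ns-process-data", "video",
     "--data", "walkaround.mp4",
     "--output-dir", "data/walkaround"],
    check=True,
)

# 2) Train the default nerfacto model on the processed capture,
#    then watch progress in the browser viewer it launches.
subprocess.run(
    ["ns-train", "nerfacto", "--data", "data/walkaround"],
    check=True,
)
```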
Check out Christian Cantrell’s latest work (still free!):
Check out Palette:
Here’s another beautiful, DALL•E-infused collaboration between VFX whiz Paul Trillo & Shyama Golden:
“My whole life has been one long ultraviolent hyperkinetic nightmare,” wrote Mark Leyner in “Et Tu, Babe?” That thought comes to mind when glimpsing this short film by Adam Chitayat, stitched together from thousands of Street View images (see Vimeo page for a list of locations).
I love the idea—indeed, back in 2014 I tried to get Google Photos to stitch together visual segues that could interconnect one's photos—but the pacing here has my old-man brain pulling the e-brake after just a short exposure. YMMV, so here ya go:
[Via]
Easily my favorite thing at Google was getting to work with stone-cold geniuses like Noah Snavely (one of the minds behind Microsoft’s PhotoSynth) and Richard Tucker. Now they & their teammates have produced some jaw-dropping image synthesis tech:
And “hold onto your papers”: here’s a look into how it all works:
Interior AI enables you to upload an image of your room, then restyle it in various idioms (Modern, Cyberpunk, Art Nouveau, and more).
Amazingly, it was whipped up in very short order:
Impressive stuff, though know that your results—like mine—may vary. 😅
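Interior AI's actual stack isn't public, but here's a minimal sketch of the same idea using the depth-conditioned Stable Diffusion pipeline in diffusers, which preserves a room's layout while swapping in the style named in the prompt. Filenames and settings are placeholders:

```python
# Minimal sketch of an Interior AI-style restyle: keep the room's geometry
# (via depth conditioning), change its look via the prompt. Not Interior AI's
# actual implementation; input filename and strength are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

room = Image.open("my_living_room.jpg").convert("RGB")  # hypothetical input photo
styled = pipe(
    prompt="art nouveau living room, ornate woodwork, stained glass, warm light",
    image=room,
    strength=0.7,   # higher = more restyling, lower = closer to the original room
).images[0]
styled.save("living_room_art_nouveau.png")
```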
Nice work from my old crew:
With the update that starts rolling out today, you’ll see more videos — including the best snippets from your longer videos that Photos will automatically select and trim so you can relive the most meaningful moments. Even your still photos will feel more dynamic thanks to a subtle zoom that brings movement to your memories. And to bring it all together, next month we’ll start adding instrumental music to some Memories.
Happily, they’ve finally built a subset of the collage editor I spec’d out eight years ago (🧂🤷🏼).
Also,
Soon, you’ll begin to see full Cinematic Memories that transform multiple still photos into an end-to-end cinematic experience, taking you back to that moment in time. Cinematic Memories will also have music, making your photos feel a little more like a movie.
A quarter billion people engage with AR content every day, the company says.
And interestingly, one need not create a complex lens in order to have it pay off:
“The research found that simple AR can be just as performant as a sophisticated, custom Lens in driving both upper and lower-funnel metrics like brand awareness and purchase intent. Brands with the resources to execute a more sophisticated Lens will see additional benefits in mid-funnel brand metrics, including favorability and consideration.”
Photographer Greg Benz has posted a detailed tutorial showing how to use Christian Cantrell’s Stable Diffusion-Photoshop plugin (now available for free via the Adobe Marketplace). Check it out: