Category Archives: Adobe MAX

Hands up for Res Up ⬆️

Speaking of increasing resolution, check out this sneak peek from Adobe MAX:

It’s a video upscaling tool that uses diffusion-based AI to convert low-resolution videos to high resolution. Users can upscale a low-res video directly, or zoom in, crop, and upscale the result to full resolution with high-fidelity visual detail and temporal consistency. This is great for bringing new life to older videos, or for preventing blurriness when scaled-up versions play on HD screens.
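For contrast with what diffusion-based upscaling promises, here's the classical baseline it aims to beat: simple interpolation, which can only repeat or blend existing pixels rather than synthesize plausible detail. A minimal nearest-neighbor sketch (not Adobe's method, just the naive approach):

```python
import numpy as np

def upscale_nn(frame: np.ndarray, factor: int) -> np.ndarray:
    """Upscale an H x W (x C) frame by repeating each pixel `factor`
    times along both spatial axes (nearest-neighbor interpolation)."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

# A 2x2 checkerboard "frame" becomes a blocky 4x4 version:
# no new detail is invented, each pixel just gets bigger.
lowres = np.array([[0, 255],
                   [255, 0]], dtype=np.uint8)
print(upscale_nn(lowres, 2).shape)  # (4, 4)
```

A diffusion upscaler instead hallucinates sharp, plausible high-frequency detail conditioned on the low-res input, which is why it can make blown-up footage look genuinely crisp.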

Sneak peek: Project Glyph Ease

Easy as ABC, 123?

Project Glyph Ease uses generative AI to create stylized, customized letters in vector format, which can later be used and edited. All a designer needs to do is create three reference letters in a chosen style, either from existing vector shapes or ones they hand-draw on paper, and the technology automatically creates the remaining letters in a consistent style. Once they're created, designers have the flexibility to edit the new font, since the letters appear as live text that can be scaled, rotated, or moved in the project.

Project Primrose: Animated fabric (!) from Adobe

The week before MAX, my teammate Christine had a bit of a cough, and folks were suddenly concerned about the Project Primrose sneak: it’d be awfully hard to swap out presenters when the demo surface is a bespoke dress made by & for exactly one person. Thankfully good health prevailed, and she was able to showcase Project Primrose:

Here’s a bit more info about the tech:

We propose reflective light-diffuser modules for non-emissive flexible display systems. Our system leverages reflective-backed polymer-dispersed liquid crystal (PDLC), an electroactive material commonly used in smart window applications. This low-power non-emissive material can be cut to any shape, and dynamically diffuses light. We present the design & fabrication of two exemplar artifacts, a canvas and a handbag, that use the reflective light-diffuser modules. 

Reflect on this: Project See Through burns through glare

Marc Levoy (professor emeritus at Stanford) was instrumental in delivering the revolutionary Night Sight mode on Pixel 3 phones—and by extension on all the phones that quickly copied their published techniques. After leaving Google for Adobe, he’s been leading a research team that’s just shown off the reflection-zapping Project See Through:

Today, it’s difficult or impossible to manually remove reflections. Project See Through simplifies the process of cleaning up reflections by using artificial intelligence. Reflections are automatically removed, and optionally saved as separate images for editing purposes. This gives users more control over when and how reflections appear in their photos.

Check out my MAX talk on the potential of Generative AI in education

I got to spend 30 minutes chatting with educator & author Matt Miller last week, riffing on some tough but important questions around weighty, fascinating stuff like what makes us human, what we value around creativity, and how we can all navigate the creative disruptions that surround us.

Hear how Adobe generative AI solutions are designed to continually evolve, develop, and empower educators and students from kindergarten to university level. Generative AI is expected to have a significant impact on the creativity of students. It has the potential to act as a powerful tool that can inspire and enhance the creative process by generating new and unique ideas. Join Matt Miller, author and educator, and John Nack, principal product manager at Adobe, for this exciting discussion.

In this session, you’ll:

  • Learn how Adobe approaches generative AI
  • Hear experts discuss how AI affects teaching and learning
  • Discover how AI can make learning more personalized and accessible

What if 3D were actually approachable?

That’s the promise of Adobe’s Project Neo—which you can sign up to test & use now! Check out the awesome sneak peek they presented at MAX:

Incorporating 3D elements into 2D designs (infographics, posters, logos or even websites) can be difficult to master, and often requires designers to learn new workflows or technical skills.

Project Neo enables designers to create 2D content using 3D shapes, without having to learn traditional 3D creation tools and methods. It leverages core 3D principles so designers can quickly and easily draw 2D shapes with one-, two-, or three-point perspective. Designers can also collaborate with stakeholders and edit mockups at the vector level, making changes to projects fast.
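The one-, two-, and three-point perspective that Neo exposes boils down to ordinary perspective projection: points farther from the camera land closer together on the image plane, which is what creates a vanishing point. A toy pinhole-projection sketch (standard graphics math, not anything from Neo itself):

```python
def project(point3d, focal=1.0):
    """Project a 3D point onto the z=focal image plane
    (pinhole camera / one-point perspective)."""
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

# Two edges of equal 3D size, one twice as deep as the other:
# the farther one projects to half the size, so parallel lines
# receding in depth converge toward a vanishing point.
print(project((1.0, 1.0, 2.0)))  # (0.5, 0.5)
print(project((1.0, 1.0, 4.0)))  # (0.25, 0.25)
```

Tools like Neo hide this math behind direct manipulation, so designers get correct perspective without ever computing a projection.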

What’s even better than Generative Fill? GenFill that moves.

Back in the day, I dreaded demoing Photoshop ahead of the After Effects team: we’d do something cool, and they’d make that cool thing move. I hear echoes of that in Project Fast Fill—generative fill for video.

Project Fast Fill harnesses Generative Fill, powered by Adobe Firefly, to bring generative AI into video editing applications. Simple text prompts drive texture replacement in video, even on complex surfaces and under varying light conditions. Edit an object on a single frame, and that edit automatically propagates to the rest of the video's frames, saving editors a significant amount of texture-editing time.

Check it out:

Adobe Project Poseable: 3D humans guiding image generation

Roughly 1,000 years ago (i.e. this past April!), I gave an early sneak peek at the 3D-to-image work we’ve been doing around Firefly. Now at MAX, my teammate Yi Zhou has demonstrated some additional ways we could put the core tech to work—by adding posable humans to the scene.

Project Poseable makes it easy for anyone to quickly design 3D prototypes and storyboards in minutes with generative AI.

Instead of having to spend time editing the details of a scene — the background, different angles and poses of individual characters, or the way the character interacts with surrounding objects in the scene — users can tap into AI-based character posing models and use image generation models to easily render 3D character scenes.

Check it out:

New Adobe tech promises 3D & materials scanning

Probably needless to say, 3D model creation remains hard AF for most people, and as such it’s a huge chokepoint in the adoption of 3D & AR viewing experiences.

Fortunately we may be on the cusp of some breakthroughs. Apple is about to popularize LiDAR on phones, and with it we’ll see interesting photogrammetry apps like Polycam:

Meanwhile Adobe is working to enable 3D scanning using devices without fancy sensors. Check out Project Scantastic:

They’re also working to improve the digitization of materials—something that could facilitate the (presently slow, expensive) digitization of apparel:

New typographical brushes from Adobe turn paint into editable characters

I’ve long, long been a fan of using brush strokes on paths to create interesting glyphs & lettering. I used to contort all kinds of vectors into Illustrator brushes, and as it happens, 11 years ago today I was sharing an interesting tutorial on creating smoky text:

Now Adobe engineers are looking to raise the game—a lot.

Combining a user's drawn stroke inputs, the choice of brush, and the typographic properties of the text object, Project Typographic Brushes brings paint-style brushes and new type families to life in seconds.

Check out some solid witchcraft in action:

Photoshop’s new Smart Portrait is pretty amazing

My longstanding dream (dating back to the Bush Administration!) to have face relighting in Photoshop has finally come true—and then some. In case you missed it last week, check out Conan O’Brien meeting machine learning via Photoshop:

On PetaPixel, Allen Murabayashi from PhotoShelter shows what it can do on a portrait of Joe Biden—presenting this power as a potential cautionary tale:

Here’s a more in-depth look (starting around 1:46) at controlling the feature, courtesy of NVIDIA, whose StyleGAN tech powers the feature:

I love the fact that the Neural Filters plug-in provides a playground within Photoshop for integrating experimental new tech. Who knows what else might spring from Adobe-NVIDIA collaboration—maybe scribbling to create a realistic landscape, or even swapping expressions among pets (!?):
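Since NVIDIA's StyleGAN powers Smart Portrait, it's worth noting the public idea behind such controls: images correspond to points in a latent space, and sliders like "age" or "gaze" move that point along learned semantic directions before the generator renders the result. A toy sketch of just the latent arithmetic (the direction here is random for illustration; real ones are learned from labeled examples, and the generator itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=512)                # a latent code, as in StyleGAN's 512-d space
direction = rng.normal(size=512)        # stand-in for a learned "age"/"smile" direction
direction /= np.linalg.norm(direction)  # unit length, so strength is interpretable

def edit(z: np.ndarray, direction: np.ndarray, strength: float) -> np.ndarray:
    """Slide a latent code along a semantic direction; a generator
    (not shown) would map the edited code back to an image."""
    return z + strength * direction

older = edit(z, direction, 3.0)
print(older.shape)  # (512,)
```

The UI slider in a feature like Smart Portrait is essentially choosing `strength`; small values give subtle edits, large ones push the face further along the attribute.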

Photoshop’s Sky Replacement feature was well worth the wait

Although I haven’t yet gotten to use it extensively, I’m really enjoying the newly arrived Sky Replacement feature in Photoshop. Check out a quick before/after on a tiny planet image:

Adobe MAX starts tomorrow, and you can attend for free

From Conan O’Brien to Tyler, the Creator* to (of course) tons of deep dives into creative tech, Adobe has organized quite the line-up:

Make plans to join us for a uniquely immersive and engaging digital experience, guaranteed to inspire. Three full days of luminary speakers, celebrity appearances, musical performances, global collaborative art projects, and 350+ sessions — and all at no cost.

You can build your session list here. Looking forward to learning a lot this week!

*I’m reminded of Alec Baldwin as Tony Bennett talking about “Wiz Khalifa and Imagine Dragons—what a great, great, random pairing.” I can’t find that episode online, so what the heck, enjoy this one.

ADIM & MAX

Next month’s Adobe MAX conference is shaping up to be a great show. These sessions seemed worth a mention:


  • Russell Brown has adjusted his popular and long-running ADIM (Art Directors Invitational Master Class) to coincide with MAX. It’s "the essential two-day, hands-on instructional course that brings top art directors, designers, illustrators, and photographers together to learn advanced tips and techniques using Adobe products." ADIM takes place Sunday & Monday, Nov. 16-17, and plenty of details are on Russell’s site.
  • Dr. Woohoo will be presenting three sessions on using Flex+AIR to automate CS3/4. (Here’s some background on that subject if you’re interested.)
  • I’ll be covering Photoshop CS4 on Wednesday the 19th, 2-3pm, and Bryan Hughes will be giving his PS session 3:30-4:30 that day. You can find other Photoshop-related sessions by clicking the "By Session" tab, then choosing Photoshop from the product drop-down.