Fun stuff from Red Giant:
Check out my teammate CJ’s exploration around using ChatGPT to produce expression code for use in After Effects:
Our friends in Digital Video & Audio have lots of interesting irons in the fire!
From the team blog post:
To start, we’re exploring a range of concepts, including:
- Text to color enhancements: Change color schemes, time of day, or even the seasons in already-recorded videos, instantly altering the mood and setting to evoke a specific tone and feel. With a simple prompt like “Make this scene feel warm and inviting,” the time between imagination and final product can all but disappear.
- Advanced music and sound effects: Creators can easily generate royalty-free custom sounds and music to reflect a certain feeling or scene for both temporary and final tracks.
- Stunning fonts, text effects, graphics, and logos: With a few simple words and in a matter of minutes, creators can generate subtitles, logos, title cards, and custom contextual animations.
- Powerful script and B-roll capabilities: Creators can dramatically accelerate pre-production, production, and post-production workflows by using AI analysis of scripts to automatically create storyboards and previsualizations, as well as to recommend B-roll clips for rough or final cuts.
- Creative assistants and co-pilots: With personalized generative AI-powered “how-tos,” users can master new skills and accelerate processes from initial vision to creation and editing.
O.G. animator Chris Georgenes has been making great stuff since the ’90s (anybody else remember Home Movies?), and now he’s embracing Adobe Firefly. He’s using it with both Adobe Animate…
…and After Effects:
Paul Trillo is back at it, extending a Chinese restaurant via Stable Diffusion, After Effects, and Runway:
Elsewhere, check out this mutating structure. (Next up: Falling Water made of actual falling water?)
Hah—this kidding-not-kidding piece from Red Giant is pretty great:
The @ButWithRaptors account is full of wonderful silliness:
More wildly impressive inpainting & animation from Paul Trillo:
Creator Paul Trillo (see previous) is back at it. Here’s new work + a peek into how it’s made:
To quote this really cool Adobe video PM who also lives in my house 😌, and who just happens to have helped bring Frame.io into Adobe,
Super excited to announce that Frame.io is now included with your Creative Cloud subscription. Frame panels are now included in After Effects and Premiere Pro. Check it out!
Take advantage of the industry's most powerful video review and collaboration tools all in one place. Introducing https://t.co/JdJeu2YuK6 for Creative Cloud – now included in #PremierePro and #AfterEffects. https://t.co/5xPF0xLYjN pic.twitter.com/aqolPm90MZ
— Adobe Video & Motion (@AdobeVideo) April 12, 2022
From the integration FAQ:
Frame.io for Creative Cloud includes real-time review and approval tools with commenting and frame-accurate annotations, accelerated file transfers for fast uploading and downloading of media, 100GB of dedicated Frame.io cloud storage, the ability to work on up to 5 different projects with another user, free sharing with an unlimited number of reviewers, and Camera to Cloud.
Back in 2013 I found myself on a bus full of USC film students, and I slowly realized that the guy seated next to me had created the Take On Me vid. Not long after I was at Google & my friend recreated the effect in realtime AR. Perhaps needless to say, they didn’t do anything with it. ¯\_(ツ)_/¯
In any event, now Action Movie Dad Daniel Hashimoto has created a loving homage as a tutorial video (!).
Here’s the full-length version:
Special hat tip on the old CoSA vibes:
I find myself recalling something that Twitter founder Evan Williams wrote about “value moving up the stack”:
As industries evolve, core infrastructure gets built and commoditized, and differentiation moves up the hierarchy of needs from basic functionality to non-basic functionality, to design, and even to fashion.
For example, there was a time when chief buying concerns included how well a watch might tell time and how durable a pair of jeans was.
Now apps like FaceTune deliver what used to be Photoshop-only levels of power to millions of people, and Runway ML promises to let you just type words to select & track objects in video—using just a Web browser. 👀
Come join my wife & her badass team!
- Senior Software Engineer — Video Acceleration Platform (San Jose, USA)
- Senior Software Engineer — Cloud Video (San Jose or San Francisco, USA)
- Computer Scientist — Video Color (Noida or Bangalore, India)
- Senior Software Engineer — UI Platform / Drover (San Jose, USA)
- Software Engineer — Video Formats (San Jose or Seattle, USA)
- Software Automation Engineer — Video Formats (San Jose or Seattle, USA)
- Senior Software Engineer — Premiere Pro (San Jose or Seattle, USA)
- Software Engineer — Video Render Technology (Noida or Bangalore)
- Senior Product Marketing Manager — Digital Video and Audio (San Jose or Seattle, USA)
The newly upgraded After Effects Roto Brush (see previous), now available in beta form, helps enable wondrous things:
And here it’s used in crafting a little salty political fun:
Back in 2018 I wrote,
Wanna feel like walking directly into the ocean? Try painstakingly isolating an object in frame after frame of video. Learning how to do this in the ’90s (using stone knives & bear skins, naturally), I just as quickly learned that I never wanted to do it again.
Happily the AE crew has kept improving automated tools, and they’ve just rolled out Roto Brush 2 in beta form. Ian Sansevera shows (below) how it compares & how to use it, and John Columbo provides a nice written overview.
In this After Effects tutorial I will explore and show you how to use Rotobrush 2 (which is insane by the way). Powered by Sensei, Roto Brush 2 will select and track the object, frame by frame, isolating the subject automatically.