It’s just a super quick tease, but this vid shows Windows 11 calling Express in order to make an Instagram reel from the user’s photos. Check it out:
My old teammates Richard Tucker, Noah Snavely, and co. have been busy. Check out this quick video & interactive demo:
Excited to share our work on Generative Image Dynamics!
We learn a generative image-space prior for scene dynamics, which can turn a still photo into a seamless looping video or let you interact with objects in the picture. Check out the interactive demo: https://t.co/GLPBVpouJY pic.twitter.com/6h1Qq0kL2G
— Zhengqi Li (@zhengqi_li) September 15, 2023
According to the team, they trained the prior on a dataset of motion trajectories extracted from real-life video sequences featuring natural, oscillating motions like those seen in trees, flowers, candles, and wind-blown clothing. These trajectories can then be applied to turn static images into smooth-looping dynamic videos, slow-motion clips, or interactive experiences in which users can nudge and play with elements of the image.
Man, if you feel like you can’t keep up with technology while doing your day job, please know that the same is true even at one’s own company, with one’s old app. At least we have smart folks like Deke McClelland to show us what’s been happening:
Here are four minutes that I promise you won’t regret spending as Nathan Shipley demonstrates DALL•E 3 working inside ChatGPT to build up an entire visual world:
I mean, seriously, the demo runs through creating:
- Initial visuals
- Apparel featuring the logos
- Game art
- Box copy
- Games visualized in multiple styles
- 3D action figures
- and more.
Insane. Also charming: its extremely human inability to reliably spell “Dachshund”!
In case you missed any or all of last week’s news, here’s a quick recap:
Firefly-powered workflows that have so far been limited to the beta versions of Adobe’s apps — like Illustrator’s vector recoloring, Express text-to-image effects, and Photoshop’s Generative Fill tools — are now generally available to most users (though there are some regional restrictions in countries with strict AI laws like China).
Adobe is also launching a standalone Firefly web app that will allow users to explore some of its generative capabilities without subscribing to specific Adobe Creative Cloud applications. Adobe Express Premium and the Firefly web app will be included as part of a paid Creative Cloud subscription plan.
Specifically around credits:
To help manage the compute demand (and the costs associated with generative AI), Adobe is also introducing a new credit-based system that users can “cash in” to access the fastest Firefly-powered workflows. The Firefly web app, Express Premium, and Creative Cloud paid plans will include a monthly allocation of Generative Credits starting today, with all-app Creative Cloud subscribers receiving 1,000 credits per month.
Users can still generate Firefly content if they exceed their credit limit, though the experience will be slower. Free plans for supported apps will also include a credit allocation (subject to the app), but this is a hard limit and will require customers to purchase additional credits if they’re used up before the monthly reset. Customers can buy additional Firefly Generative Credit subscription packs starting at $4.99.
All eligible Adobe Stock contributors with photos, vectors, or illustrations in the Standard and Premium collections, whose content was used to train the first commercial Firefly model, will receive a Firefly bonus. This initial bonus, which will be different for each contributor, is based on the all-time total number of approved images submitted to Adobe Stock that were used for Firefly training, and the number of licenses those images generated in the 12-month period from June 3, 2022, to June 2, 2023. The bonus is planned to pay out once a year and is currently weighted toward the number of licenses issued for an image, which we consider a useful proxy for the demand and usefulness of those images. The next Firefly bonus is planned for 2024, for new content used to train Firefly.
They’ve also provided info on what’s permissible around submitting AI-generated content:
With Adobe Firefly now commercially available, Firefly-generated works that meet our generative AI submission guidelines will now be eligible for submission to Adobe Stock. Given the proliferation of generative AI in tools like Photoshop, and many more tools and cameras to come, we anticipate that assets in the future will contain some number of generated pixels and we want to set up Adobe Stock for the future while protecting artists. We are increasing our moderation capabilities and systems to be more effective at preventing the use of creators’ names as prompts with a focus on protecting creators’ IP. Contributors who submit content that infringes or violates the IP rights of other creators will be removed from Adobe Stock.
I had fun catching up with folks at the AI Salon (see background) a couple of weeks ago, talking about the past, present, and future of Adobe Firefly. If that’s up your alley, here’s my talk (cued up to my starting point). Note that the content about watermarks & stock contributors predates last week’s “ready for commercial use” announcements.
From Dana Rao, Adobe’s General Counsel & Chief Trust Officer:
Adobe has proposed that Congress establish a new Federal Anti-Impersonation Right (the “FAIR” Act) to address this type of economic harm. Such a law would provide a right of action to an artist against those that are intentionally and commercially impersonating their work or likeness through AI tools. This protection would provide a new mechanism for artists to protect their livelihood from people misusing this new technology, without having to rely solely on laws around copyright and fair use. In this law, it’s simple: intentional impersonation using AI tools for commercial gain isn’t fair.
This is really tricky territory, as we seek to find a balance between enabling creative use of tools & protection of artists. I encourage you to read the whole post, and I’d love to hear your thoughts.
During our Ireland trip a few weeks back, I captured some aerial views of the town from which my great-grandfather emigrated.
As it often does, Luma generated a really nice 3D model from my orbiting footage:
Happy Monday, gang.
“Get cinematic and professional-looking drone Flythroughs in minutes from shaky amateur recorded videos.” The results are slick:
Tangentially, here’s another impressive application of Luma tech—turning drone footage into a dramatically manipulable 3D scene:
Check out this intriguing collaboration with Lupe Fiasco (more interesting than you might think, I promise!):
Just me, my dad, our Irish cousins, and 900 of their closest sheep. ☘️😌☘️
Beautifully put. We’ll be forever in his debt.
“Believe in creativity. Believe in imagination. Believe in innovation. Believe in the future.” As we say farewell to our co-founder Dr. John Warnock, we remember how his incredible ideas touched and transformed countless lives. Thank you for the gift of your creativity. ❤️️ pic.twitter.com/SrxOYohkqu
— Adobe (@Adobe) September 1, 2023
I’m so pleased & even proud (having at least offered my encouragement to him over the years) to see my buddy Bilawal spreading his wings and spreading the good word about AI-powered creativity.
Check out his quick thoughts on “Channel-surfing realities layered on top of the real world,” “3D screenshots for the real world,” and more:
Favorite quote 😉:
“All they need to do is have a creative vision, and a Nack for working in concert with these AI models”—beautifully said, my friend! 🙏😜. pic.twitter.com/f6oUNSQXul
— John Nack (@jnack) September 1, 2023