Director Matan Cohen-Grumi shows off the radical acceleration in VFX-heavy storytelling that’s possible through emerging tools—including Pika’s new Scene Ingredients:
For 10 years, I directed TV commercials, where storytelling was intuitive—casting characters, choosing locations, and directing scenes effortlessly. When I shifted to AI over a year ago, the process felt clunky—hacking together solutions, spending hours generating images, and… pic.twitter.com/pJUamLFgWI
Instead of generating images with long, detailed text prompts, Whisk lets you prompt with images. Simply drag in images, and start creating.
Whisk lets you input one image for the subject, another for the scene, and a third for the style. Then, you can remix them to create something uniquely your own, from a digital plushie to an enamel pin or sticker.
The blog post gives a bit more of a peek behind the scenes & sets some expectations:
Since Whisk extracts only a few key characteristics from your image, it might generate images that differ from your expectations. For example, the generated subject might have a different height, weight, hairstyle or skin tone. We understand these features may be crucial for your project and Whisk may miss the mark, so we let you view and edit the underlying prompts at any time.
In our early testing with artists and creatives, people have been describing Whisk as a new type of creative tool — not a traditional image editor. We built it for rapid visual exploration, not pixel-perfect edits. It’s about exploring ideas in new and creative ways, allowing you to work through dozens of options and download the ones you love.
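For the nerds in the house, here's a rough sketch of the flow that description implies: each input image gets boiled down to a short caption ("a few key characteristics"), the captions get stitched into one editable prompt, and that prompt drives the generator. The helper functions below are made-up placeholders, not Whisk's actual API:

```python
# Conceptual sketch only: caption_image() and generate_image() are hypothetical
# stand-ins for a captioning model and a text-to-image model, not Whisk's API.

def caption_image(image_path: str, role: str) -> str:
    """Return a short description of the image, focused on its role
    (subject, scene, or style). Placeholder for a captioning model."""
    raise NotImplementedError("plug in your captioning model here")

def generate_image(prompt: str) -> bytes:
    """Placeholder for a text-to-image model call."""
    raise NotImplementedError("plug in your image model here")

def whisk_style_remix(subject_img: str, scene_img: str, style_img: str,
                      prompt_override: str | None = None) -> bytes:
    # Extract only a few key characteristics from each image.
    parts = {
        "subject": caption_image(subject_img, "subject"),
        "scene": caption_image(scene_img, "scene"),
        "style": caption_image(style_img, "style"),
    }
    # The underlying prompt stays visible & editable, as the post notes.
    prompt = prompt_override or (
        f"{parts['subject']}, set in {parts['scene']}, rendered in {parts['style']}"
    )
    return generate_image(prompt)
```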
And yes, uploading a 19th-century dog illustration to generate a plushie dancing an Irish jig is definitely the most JNack way to squander precious work time… er, do vital market research. 🙂
I’m a near-daily user of Ideogram to create all manner of images—mainly goofy dad jokes to (ostensibly) entertain my family. Now they’re enabling batch creation to facilitate making lots of variations (e.g. versions of a logo):
Check out this wild video-to-video demo from Nathan Shipley:
Sora Remix test: Scissors to crane
Prompt was “Close up of a curious crane bird looking around a beautiful nature scene by a pond. The bird’s head pops into the shot and then out.” pic.twitter.com/CvAkdkmFBQ
Just a taste of the torrent that blows past daily on The Former Bird App:
Rodin 3D: “Rodin 3D AI can create stunning, high-quality 3D models from just text or image inputs.”
Trellis 3D: “Iterative prompting/mesh editing. You can now prompt ‘remove X, add Y, Move Z, etc.’… Allows decoding to different output formats: Radiance Fields, 3D Gaussians, and meshes.”
Blender GPT: “Generating 3D assets has never been easier. Here’s me putting together an entire 3D scene in just over a minute.”
This might be the world’s lowest-key demo of what promises to be truly game-changing technology!
I’ve tried a number of other attempts at unlocking this capability (e.g. Meta.ai (see previous), Playground.com, and what Adobe sneak-peeked at the Firefly launch in early 2023), but so far I’ve found them all more unpredictable & frustrating than useful. Could Gemini now have turned the corner? Only hands-on testing (not yet broadly available) will tell!
If you or folks you know might be a good fit for one or more of these roles, please check ’em out & pass along info. Here’s some context from design director Mike Davidson.
————
These positions are United States only, Redmond-preferred, but we’ll also consider the Bay Area and other locations:
Diffusion models are ushering in what feels like a golden(-hour) age in relighting (see previous). Among the latest offerings is LumiNet:
[6/7] Here are a few more random relighting results!
How accurate are these results? That’s very hard to answer at the moment. But our tests on the MIT dataset, our user study, plus qualitative results all point to us being on the right track.
What if your design tool could understand the meaning & importance of words, then help you style them accordingly?
I’m delighted to say that for what I believe is the first time ever, that’s now possible. For the last 40 years of design software, apps have of course provided all kinds of fonts, styles, and tools for manual typesetting. What they’ve lacked is an understanding of what words actually mean, and consequently of how they should be styled in order to map visual emphasis to semantic importance.
In Microsoft Designer, you can now create a new text object, then apply hierarchical styling (primary, secondary, tertiary) based on AI analysis of word importance:
I’d love to hear what you think. You can go to designer.microsoft.com, create a new document, and add some text. Note: The feature hasn’t yet been rolled out to 100% of users, so it may not yet be available to you—but even in that case it’d be great to hear your thoughts on Designer in general.
This feature came about in response to noticing that text-to-image models are not only learning to spell well (check out some examples I’ve gathered on Pinterest), but can also set text with varied size, position, and styling that’s appropriate to the importance of each word. Check out some of my Ideogram creations (which you can click on & remix using the included prompts):
These results are of course incredible (imagine seeing any of this even three years ago!), but they’re just flat images, not editable text. Our new feature, by contrast, leverages semantic understanding and applies it to normal text objects.
What we’ve shipped now is just the absolute tip of the iceberg: to start we’re simply applying preset values based on word hierarchy, but you can readily imagine richer layouts, smart adaptive styling, and much more. Stay tuned—and let us know what you’d like to see!
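If you’re curious what “applying preset values based on word hierarchy” might look like under the hood, here’s a toy sketch—my own illustration, with invented tier names, numbers, and classifier, not Designer’s actual implementation:

```python
# Toy sketch: an AI step assigns each word a hierarchy tier, and each tier maps
# to fixed styling presets. Tiers, preset values, and classify_importance() are
# illustrative assumptions, not Microsoft Designer's actual behavior.

from dataclasses import dataclass

@dataclass
class TextStyle:
    font_size: int
    font_weight: int

# Preset values per hierarchy tier (hypothetical numbers).
PRESETS = {
    "primary":   TextStyle(font_size=96, font_weight=800),
    "secondary": TextStyle(font_size=48, font_weight=600),
    "tertiary":  TextStyle(font_size=24, font_weight=400),
}

def classify_importance(words: list[str]) -> list[str]:
    """Placeholder for the AI step that ranks each word's semantic importance."""
    raise NotImplementedError("plug in a language model or ranking heuristic")

def style_text(text: str) -> list[tuple[str, TextStyle]]:
    words = text.split()
    tiers = classify_importance(words)  # e.g. ["primary", "tertiary", ...]
    return [(word, PRESETS[tier]) for word, tier in zip(words, tiers)]
```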
Speaking of Kling, the new Motion Brush feature enables smart selection, generative fill, and animation all in one go. Check out this example, and click into the thread for more:
Kling AI 1.5 Motion Brush is incredible.
You can give different motions to multiple subjects in the same scene.
Accurately rendering clothing on humans, and especially estimating their dimensions to enable proper fit (and thus reduce costly returns), has remained a seductive yet stubbornly difficult problem. I’ve written previously about challenges I observed at Google, plus possible steps forward.
Now Kling is promising to use generative video to pair real people & real outfits for convincing visualization (but not fit estimation). Check it out:
It’s a real joy to see my 15yo son Henry’s interest in design & photography blossom, and last night he fell asleep perusing the giant book of vintage logos we scored at the Chicago Art Institute. I’m looking forward to acquainting him with the groundbreaking work of Saul Bass & figured we’d start here:
We present FlipSketch, a system that brings back the magic of flip-book animation — just draw your idea and describe how you want it to move! …
Unlike constrained vector animations, our raster frames support dynamic sketch transformations, capturing the expressive freedom of traditional animation. The result is an intuitive system that makes sketch animation as simple as doodling and describing, while maintaining the artistic essence of hand-drawn animation.
Oh, I love this one!
FlipSketch can generate sketch animations from static drawings using text prompts!
I’m finding the app (which is free to try for a couple of moves, but which quickly runs out of credits) to be pretty wacky, as it continuously regenerates elements & thus struggles with identity preservation. The hero vid looks cool, though:
BlendBox AI: Seamlessly Blend Multiple Images with Ease
It makes blending images effortless and precise.
The real-time previews let us fine-tune edits instantly, and we can generate images with AI or import our own images.
Hmm—”fix” is a strong word for reinterpreting the creative choices & outcomes of an earlier generation of artists, but it’s certainly interesting to see the divisive Christmas movie re-rendered via emerging AI tech (Midjourney Retexturing + Hailuo Minimax). Do you think the results escape the original’s deep uncanny valley? See more discussion here.
Bonus: Speaking of French fashion & technology, check out punch-card tech from 200+ years ago! (Side note: the machine lent its name to Google & Levis’ Project Jacquard smart clothing.)
We present ReCapture, a method for generating new videos with novel camera trajectories from a single user-provided video. Our method allows us to re-generate the source video, with all its existing scene motion, from vastly different angles and with cinematic camera motion.
They note that ReCapture is substantially different from other work: existing methods can control the camera only on images or on generated videos, not on arbitrary user-provided videos. Check it out:
I meant to post this incredibly weird old-ish Chemical Brothers video for Halloween. Seems somehow just as appropriate this morning, imagery+mood-wise.
Everybody needs a good wingman, and when it comes to celebrating the beauty of aviation, I’ve got a great one in my son Henry. Much as we’ve done the last couple of years, this month we first took in the air show in Salinas, featuring the USAF Thunderbirds…
…followed by the Blue Angels buzzing Alcatraz & the Golden Gate at Fleet Week in San Francisco.
In both cases we were treated to some jaw-dropping performances—from a hovering F-35 to choreographed walls of fire—from some of the best aviators in the world. Check ’em out:
Here’s a bit more on how the new editing features work:
We’re testing two new features today: our image editor for uploaded images and image re-texturing for exploring materials, surfacing, and lighting. Everything works with all our advanced features, such as style references, character references, and personalized models pic.twitter.com/jl3a1ZDKNg
I’ve become an Ideogram superfan, using it to create imagery daily, so I’m excited to kick the tires on this new interactive tool—especially around its ability to synthesize new text in the style of a visual reference.
Today, we’re introducing Ideogram Canvas, an infinite creative board for organizing, generating, editing, and combining images.
Bring your face or brand visuals to Ideogram Canvas and use industry-leading Magic Fill and Extend to blend them with creative, AI-generated content. pic.twitter.com/m2yjulvmE2
You can upload your own images or generate new ones within Canvas, then seamlessly edit, extend, or combine them using industry-leading Magic Fill (inpainting) and Extend (outpainting) tools. Use Magic Fill and Extend to bring your face or brand visuals to Ideogram Canvas and blend them with creative, AI-generated elements. Perfect for graphic design, Ideogram Canvas offers advanced text rendering and precise prompt adherence, allowing you to bring your vision to life through a flexible, iterative process.
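If terms like inpainting and outpainting are new to you, here’s a rough illustration of the underlying concepts using the open-source diffusers library—this is just an analogy with placeholder file names, not how Ideogram implements Magic Fill or Extend:

```python
# Conceptual analogy only: inpainting regenerates a masked region inside an
# image; outpainting extends the canvas and regenerates the new border area.
# File names are placeholders.
import torch
from PIL import Image, ImageOps
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("brand_visual.png").convert("RGB").resize((512, 512))

# Inpainting ("Magic Fill"-style): white pixels in the mask mark the region to regenerate.
fill_mask = Image.open("region_to_replace.png").convert("L").resize((512, 512))
filled = pipe(prompt="a neon sign reading OPEN",
              image=image, mask_image=fill_mask).images[0]

# Outpainting ("Extend"-style): pad the canvas, then mask only the new border.
padded = ImageOps.expand(image, border=128, fill="black").resize((512, 512))
extend_mask = ImageOps.expand(Image.new("L", image.size, 0),
                              border=128, fill=255).resize((512, 512))
extended = pipe(prompt="continue the scene naturally",
                image=padded, mask_image=extend_mask).images[0]
```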
Filmmaker & Pika Labs creative director Matan Cohen Grumi makes this town look way more dynamic than usual (than ever?) through the power of his team’s tech:
Adobe’s new generative 3D/vector tech is a real head-turner. I’m impressed that the results look like clean, handmade paths, with colors that match the original—and not like automatic tracing of crummy text-to-3D output. I can’t wait to take it for a… oh man, don’t say it don’t say it… spin.
Oh man, for years we wanted to build this feature into Photoshop—years! We tried many times (e.g. I wanted this + scribble selection to be the marquee features in Photoshop Touch back in 2011), but the tech just wasn’t ready. But now, maybe, the magic is real—or at least tantalizingly close!
Being a huge nerd, I wonder about how the tech works, and whether it’s substantially the same as what Magnific has been offering (including via a Photoshop panel) for the last several months. Here’s how I used that on my pooch:
But even if it’s all the same, who cares?
Being useful to people right where they live & work, with zero friction, is tremendous. Generative Fill is a perfect example: similar (if lower quality) inpainting was available from DALL•E for a year+ before we shipped GenFill in Photoshop, but the latter has quietly become an indispensable, game-changing piece of the imaging puzzle for millions of people. I’d love to see compositing improvements go the same way.
As I drove the Micronaxx to preschool back in 2013, Macklemore’s “Can’t Hold Us” hit the radio & the boys flipped out, making their stuffed buddies Leo & Ollie go nuts dancing to the tune. I remember musing with Dave Werner (a fellow dad to young kids) about being able to animate said buddies.
Fast forward a decade+, and now Dave is using Adobe’s recently unveiled Firefly Video model to do what we could only dimly imagine back then: