It’s funny to think of anyone or anything as being an “O.G.” in the generative space—but having been around for the last several years, Runway has as solid a claim as anyone. They’ve just dropped their Gen-4 model. Check out some amazing examples of character consistency & camera control:
Today we’re introducing Gen-4, our new series of state-of-the-art AI models for media generation and world consistency. Gen-4 is a significant step forward for fidelity, dynamic motion and controllability in generative media.
Gen-4 Image-to-Video is rolling out today to all paid… pic.twitter.com/VKnY5pWC8X
— Runway (@runwayml) March 31, 2025
Here’s just one of what I imagine will be a million impressive uses of the tech:
First test with @runwayml’s Gen-4 early access!
First impressions: I am very impressed! 10 second generations, and this is the only model that could do falling backwards off a cliff. Love it! pic.twitter.com/GZS1B7Wpq0
— Christopher Fryant (@cfryant) March 31, 2025
Meanwhile Higgsfield (of which I hadn’t heard before now) promises “AI video with swagger.” (Note: reel contains occasionally gory edgelord imagery.)
Now, AI video doesn’t have to feel lifeless.
This is Higgsfield AI: cinematic shots with bullet time, super dollies and robo arms — all from a single image.
It’s AI video with swagger.
Built for creators who move culture, not just pixels. pic.twitter.com/dJdQ978Jqd
— Higgsfield AI (@higgsfield_ai) March 31, 2025