Monthly Archives: February 2025

NeRFtastic BAFTAs

The British Academy Film Awards have jumped into a whole new dimension to commemorate the winners of this year’s awards:

The capture work was led by Harry Nelder and Amity Studio. Nelder used his 16-camera rig to capture the recent winners. Reconstruction combined Postshot with a cloud-based platform created by Nelder, which is expected to be released later this year; the Radiance Field method used was Gaussian Splatting. A compilation video of all the captures, recently posted by BAFTA, was edited by Amity Studio.

[Via Dan Goldman]

Lego together creative AI blocks in Flora

Looks promising:

Their pitch:

  • Create workflows, not just outputs. Connect Blocks to shape, refine, and scale your creative process.
  • Collaborate in real time. Work like you would in Figma, but for AI-powered media creation.
  • Discover & clone workflows. Learn from top creatives, build on proven systems and share generative workflows inside FLORA’s Community.

Sigma BF: Clean AF

Refreshingly simple design!

Is it for me? Dunno: lately the only thing that justifies shooting with something other than my phone is a big, fast zoom lens, and I don’t know whether pairing such a thing with this slim beauty would kinda defeat the purpose. Still, I must know more…

Here’s a nice early look at the cam plus a couple of newly announced lenses:

Perhaps image-to-3D was a mistake…

Behold the majesty (? :-)) of CapCut’s new “Microwave” filter (whose name makes more sense if you listen with sound on):

https://youtube.com/shorts/bshQXczbZdw?si=aFwvtgs-fKf2wl8x

As I asked Bilawal, who posted the compilation, “What is this, and how can I know less about it?”

EditIQ edits a single long shot into multiple virtual shots

Check it out (probably easier to grok by watching vs. reading a description):

From the static camera feed, EditIQ initially generates multiple virtual feeds, emulating a team of cameramen. These virtual camera shots, termed rushes, are subsequently assembled using an automated editing algorithm, whose objective is to present the viewer with the most vivid scene content.
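The two-stage idea above (crop virtual "rushes" out of the static wide shot, then cut to whichever rush is most vivid) can be sketched roughly like this. Everything here is my own toy illustration, not EditIQ's actual algorithm: the function names, the 3x close-up framing, and the saliency-minus-area scoring are all assumptions.

```python
# Hypothetical sketch of the EditIQ pipeline described above (my invention,
# not the paper's method): derive virtual camera crops around subjects,
# then select the rush whose content scores highest.

def make_rushes(frame_w, frame_h, subjects, aspect=16 / 9):
    """Return one virtual crop (x, y, w, h) per subject, plus the wide shot.
    subjects: list of (center_x, center_y, approx_size) tuples."""
    rushes = [(0, 0, frame_w, frame_h)]  # rush 0: the original static feed
    for cx, cy, size in subjects:
        h = min(frame_h, size * 3)               # loose close-up: 3x subject size
        w = min(frame_w, int(h * aspect))
        x = max(0, min(frame_w - w, int(cx - w / 2)))  # clamp crop inside frame
        y = max(0, min(frame_h - h, int(cy - h / 2)))
        rushes.append((x, y, w, h))
    return rushes

def pick_rush(rushes, saliency):
    """Choose the rush index with the highest saliency score, with a mild
    bias toward tighter shots (smaller crop area)."""
    def score(i):
        x, y, w, h = rushes[i]
        return saliency[i] - 0.000001 * (w * h)
    return max(range(len(rushes)), key=score)
```

A real system would score rushes per time window and add cut-smoothness constraints so the "editor" doesn't thrash between cameras; this sketch only captures the rushes-then-select shape of the idea.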

Controlling video generation with simple props

Tired: Random “slot machine”-style video generation
Inspired: Placing & moving simple guidance objects to control results:
Check out VideoNoiseWarp:

Analog meets AI in the papercraft world of Karen X Cheng

Check out this fun mixed-media romp, commissioned by Adobe:

And here’s a look behind the scenes:

A cool Firefly image->video flow

For the longest time, Firefly users’ #1 request was to use images to guide composition of new images. Now that Firefly Video has arrived, you can use a reference image to guide the creation of video. Here’s a slick little demo from Paul Trani:

Titles: Severance Season 2

Building on the strong work from the previous season,

Berlin’s Extraweg have created… a full-blown motion design masterpiece that takes you on a wild ride through Mark’s fractured psyche. Think trippy CGI, hypnotic 3D animations, and a surreal vibe that’ll leave you questioning reality. It’s like Inception met a kaleidoscope, and they decided to throw a rave in your brain. [more]

Google Photos will flag AI-manipulated images

These changes, reported by Forbes, sound like reasonable steps in the right direction:

Starting now, Google will be adding invisible watermarks to images that have been edited on a Pixel using Magic Editor’s Reimagine feature that lets users change any element in an image by issuing text prompts.

The new information will show up in the AI Info section that appears when swiping up on an image in Google Photos.

The feature should make it easier for users to distinguish real photos from AI-powered manipulations, which will be especially useful as Reimagined photos continue to become more realistic.

DeepSeek meets Flux in Krea Chat

Conversational creation & iteration is such a promising pattern, as shown through people making ChatGPT take images to greater & greater extremes:


But how do we go from ironic laughs to actual usefulness? Krea is taking a swing by integrating (I think) the Flux imaging model with the DeepSeek LLM:

It doesn’t yet offer the kind of localized refinements people want (e.g. “show me a dog on the beach,” then “put a hat on the dog” and don’t change anything outside the hat area). Even so, it’s great to be able to create an image, add a photo reference to refine it, and then create a video. Here’s my cute, if not exactly accurate, first attempt. 🙂

A mind-blowing Gemini + Illustrator demo

Wow—check out this genuinely amazing demo from my old friend (and former Illustrator PM) Mordy:

In this video, I show how you can use Gemini in the free Google AI Studio as your own personal tutor to help you get your work done. After you watch me using it to learn how to take a sketch I made on paper to recreating a logo in Illustrator, I promise you’ll be running to do the same.

MatAnyone promises incredible video segmentation

What the what?

Per the paper,

We propose MatAnyone, a robust framework tailored for target-assigned video matting. Specifically, building on a memory-based paradigm, we introduce a consistent memory propagation module via region-adaptive memory fusion, which adaptively integrates memory from the previous frame. This ensures semantic stability in core regions while preserving fine-grained details along object boundaries. 
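To make that abstract a bit more concrete, here's a toy reading of "region-adaptive memory fusion": blend the previous frame's alpha matte with the current per-frame estimate, trusting memory in confident core regions and the fresh estimate near uncertain boundaries. The confidence heuristic and blend rule below are my own stand-ins, not MatAnyone's learned module.

```python
# Toy illustration of region-adaptive memory fusion (my simplification, not
# MatAnyone's actual network): per-pixel blend of the previous matte ("memory")
# and the current estimate, weighted by how confident the memory is.

def fuse_memory(prev_alpha, curr_alpha):
    """prev_alpha, curr_alpha: 2D lists of alpha values in [0, 1].
    Returns the fused matte as a new 2D list."""
    fused = []
    for prev_row, curr_row in zip(prev_alpha, curr_alpha):
        row = []
        for p, c in zip(prev_row, curr_row):
            # Alpha near 0 or 1 = core region (high confidence);
            # alpha near 0.5 = object boundary (low confidence).
            core = abs(p - 0.5) * 2           # 1.0 in core, 0.0 at boundary
            row.append(core * p + (1 - core) * c)  # lean on memory where confident
        fused.append(row)
    return fused
```

The payoff of this shape is exactly what the quote claims: stable semantics in the interior (memory dominates) while boundary detail tracks each new frame.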

Premiere Pro now lets you find video clips by describing them

I love it: nothing too fancy, nothing controversial, just a solid productivity boost:

Users can enter search terms like “a person skating with a lens flare” to find corresponding clips within their media library. Adobe says the media intelligence AI can automatically recognize “objects, locations, camera angles, and more,” alongside spoken words — providing there’s a transcript attached to the video. The feature doesn’t detect audio or identify specific people, but it can scrub through any metadata attached to video files, which allows it to fetch clips based on shoot dates, locations, and camera types. The media analysis runs on-device, so doesn’t require an internet connection, and Adobe reiterates that users’ video content isn’t used to train any AI models.
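For intuition, the basic shape of "describe a clip, get ranked matches" can be mocked up with simple word overlap against each clip's tags and transcript. To be clear, Adobe's media intelligence uses on-device ML recognition, not this toy scoring; the data layout and function below are my own assumptions.

```python
# Minimal sketch of text-based clip search (not Adobe's implementation):
# rank clips by how many query words appear in their tags or transcript.

def search_clips(query, clips):
    """clips: list of dicts with 'name', 'tags' (list of str), 'transcript'.
    Returns clip names ranked by word overlap with the query; non-matches dropped."""
    query_words = set(query.lower().split())

    def score(clip):
        haystack = " ".join(clip["tags"]) + " " + clip["transcript"]
        return len(query_words & set(haystack.lower().split()))

    ranked = sorted(clips, key=score, reverse=True)
    return [c["name"] for c in ranked if score(c) > 0]
```

Even this crude version shows why transcripts matter so much: a clip with no visual tags can still surface if someone said the search term on camera.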