On classic cars & the feeling of craft

John Gruber recently linked back to this clip in which designer Neven Mrgan highlights what feels like an important consideration in the age of mass-generated AI “designs”:

I think what mattered was that they looked rich, that they looked like a lot of work had been put into them. That’s what people latch onto. It’s something that, yes, they should have spent money on, and they should be spending time on right now.

Regardless of what tools were used in the making of a piece, does it feel rich, crafted, thoughtfully made? Does it have a point, and a point of view? As production gets faster, those qualities will become all the more critical for anything—and anyone—wishing to stand out.

An incredible PM role opens up on Photoshop

This could be an awesome opportunity for the right person, who’d get to work on things I’ve wanted the team to do for 15+ years!

We’re looking for an expert technical product manager to lead Photoshop’s foundational architecture and performance strategy. This is a pivotal role responsible for evolving the core technologies that power Photoshop’s speed, stability, and future scalability across platforms.

You’ll drive major efforts to modernize our rendering and compute architecture, migrate legacy systems to more scalable platforms, and accelerate performance through GPU and hardware optimization. This work touches nearly every part of Photoshop, from canvas rendering to feature responsiveness to long-term cross-platform consistency.

This is a principal-level individual contributor role with the potential to grow a team in the future.

“Tell me about a product you hate…”

I interviewed many hundreds of PM candidates at Google, and if things were going well, I’d ask, “Tell me about a product you hate that you use regularly. Why do you hate it?”

This proved to be a great bozo detector. Does this person have curiosity, conviction, passion, unreasonableness? Were they forced into coding & now just want to escape life in the damn debugger, or do they have a semi-pathological need to build stuff they’re proud of? Would I want them in the proverbial foxhole with me? Are they willing to sweep the floor?

Unsurprisingly, most candidates offer shallow, banal answers (“Uh, wow… I mean, I guess the ESPN app is kinda slow…?”), whereas great ones explain not just what sucks, but why it sucks. Like, why—systemically—is every car infotainment system such crap? Those are the PMs I want asking the questions, then questioning the answers.

———

Specifically on the car front, as Tolstoy might say, “Each one is unhappy in its own way.” The most interesting thing, I think, isn’t just to talk about the crappy mismatched & competing experiences, but rather about why every system I’ve ever used sucks. The answer can’t be “Every person at every company is a moron”—so what is it?

So much comes down to the structure of the industry, with hardware & software being made by a mishmash of corporate frenemies, all contending with a soup of regulations, risk aversion (one recall can destroy the profitability of a whole product line), and surprisingly bargain-bin electronics.

Check out this short vid for some great insights from Ford CEO Jim Farley:

“A surrealist design engine no one asked for”

A while back, Sam Harris & Ricky Gervais discussed the impossibility of translating a joke discovered during a dream (“What noise does a monster make?”) back into our consensus waking reality. Like… what?

I get the same vibes watching ChatGPT try to dredge up some model of me and of… humor?… in creating a comic strip based on our interactions. I find it uncanny, inscrutable, and yet consequently charming all at once.

The new Flux rocks for image restoration

Please tell me Adobe is hiding off screen, secretly cooking up magic. Please.

Meanwhile, you can try it yourself here.

Stay frosty, UI

Spline (2D/3D design in your browser) has added support for progressive blur & gradients, and the results look awesome.

I haven’t seen anything advance like this in Adobe’s core apps in maybe 20 years—maybe 25, since Illustrator & Acrobat added support for transparency.

On an aesthetically similar note, check out the launch video for the new version of Sketch (still very much alive & kicking in an age of Figma, it seems):

New Google virtual try-on tech

Take it away, Marques:

To try it yourself:

  • Opt in to get started: Head over to Search Labs and opt into the “try on” experiment.
  • Browse your style: When you’re shopping for shirts, pants or dresses on Google, simply tap the “try it on” icon on product listings.
  • Strike a pose: Upload a full-length photo of yourself. For best results, ensure it’s a full-body shot with good lighting and fitted clothing. Within moments, you can see how the garment will look on you.

“Kafkaesque Workplace Theater”

Sounds like kind of an awful band, doesn’t it? How about “Prompt Washing & the Insight Decay Spiral?” (Take that, Billy Corgan.)

This list from Brad Koch puts a finger directly on some of the maladaptive behaviors we’re seeing in our new cognitive golden age…

“Dynamic Text” is coming to Photoshop

Several years ago, my old teammates shared some promising research on how to facilitate more interesting typesetting. Check out this 1-minute overview:

Ever since the work landed in Adobe Express a while back, I’ve wondered why it hadn’t yet made its way to Photoshop or Illustrator. Now, at least, it looks like it’s on its way to PS:

@howardpinsky: “Dynamic text is finally coming to #Photoshop! You can try it right now in the Beta.”

The feature looks cool, and I’m eager to try it out, but I hope that Adobe will keep trying to offer something more semantically grounded (i.e. where word size is tied to actual semantic importance, not just rectangular shape bounds)—like what we shipped last year:

Higgsfield debuts Ads

Sigh… having quickly exhausted my paid credits, Imma have to up my subscription level, aren’t I? But these are good problems to have. 🙂

Krea introduces “GPT Paint”

Continuing their excellent work to offer more artistic control over image creation, the fast-moving crew at Krea has introduced GPT Paint—essentially a simple canvas for composing image references to guide the generative process. You can directly sketch, and/or position reference images, then combine the input with prompts & style references to fine-tune compositions:

Historically, approaches like this have sounded great but—at least in my experience—have fallen short.

Think about what you’d get from just saying “draw a photorealistic beautiful red Ferrari” vs. feeding in a crude sketch + the same prompt.

In my quick tests here, however, providing a simple reference sketch seems helpful—maybe because GPT-4o is smart enough to say, “Okay, make a duck with this rough pose/position—but don’t worry about exactly matching the finger-painted brushstrokes.” The increased sense of intentionality & creative ownership feels very cool. Here’s a quick test:

I’m not quite sure where the spooky skull and, um, lightning-infused martini came from. 🙂

The explosive titles of “Your Friends & Neighbors”

As Motionographer aptly puts it,

Director John Likens and FX Supervisor Tomas Slancik dissect existential collapse in Your Friends & Neighbors’ haunting opener, blending Jon Hamm’s live-action gravitas with a symphony of digital decay. […]

Shot across two days and polished by world-class VFX artists, the title sequence mirrors Hamm’s crumbling protagonist, juxtaposing his stoic performance against hyper-detailed destruction.

GPT-4o image creation is coming to Designer!

Having created 200+ images in just the last month via this still-new image model (see the new blog category that gathers some of them), I’m delighted to say that my team is working to bring it to Microsoft Designer, Copilot, and beyond. From the boss himself:

Fun recent GPT-4o explorations

Just sharing a few things I’ve been trying.
For Easter, my cousin’s sweet pup as sweet treats:

Bespoke felt ornaments FTW:


Creating cozy slippers from an A-10 Warthog:

StarVector: Text/Image->SVG Code

Back at Adobe we introduced Firefly text-to-vector creation, but behind the scenes it was really text-to-image-to-tracing. That could be fine, actually, provided that the conversion process did some smart things around segmenting the image, moving objects onto their own layers, filling holes, and then harmoniously vectorizing the results. I’m not sure whether Adobe actually got around to shipping that support.

In any event, StarVector promises actual, direct creation of SVG. The results look simple enough that it hasn’t yet piqued my interest enough to spend my time with it, but I’m glad that folks are trying.

Rive introduces Vector feathering

I really hope that the makers of traditional vector-editing apps are paying attention to rich, modern, GPU-friendly techniques like this one. (If not—and I somewhat cynically suspect they’re not—it won’t be for my lack of trying to put it onto their radar. ¯\_(ツ)_/¯)

Vibe-animating with Magic Animator?

I know only what you see below, but Magic Animator (how was that domain name available?) promises to “Animate your designs in seconds with AI,” which sounds right up my alley, and I’ve signed up for their waitlist.

That Happy Meal feel

Sure, the environmental impact of this silliness isn’t great, but it’s probably still healthier than actually eating McDonald’s. :-p

Tangentially, I continue to have way too much fun applying different genres to amigos:

Google, Dolphins, and Ai-i-i-i-i!

Three years ago (seems like an eternity), I remarked regarding generative imaging,

The disruption always makes me think of The Onion’s classic “Dolphins Evolve Opposable Thumbs“: “Holy f*ck, that’s it for us monkeys.” My new friend August replied with the armed dolphin below.

I’m reminded of this seeing Google’s latest AI-powered translation (?!) work. Just don’t tell them about abacuses!

[Via Rick McCawley]

Friday’s Microsoft Copilot event in 9 minutes

The team showed off good new stuff, including—OMG—showing how to use Photoshop! (On an extremely personal level, “This is what it’s like when worlds colliiiide!!”)

As it marks its 50th anniversary, Microsoft is updating Copilot with a host of new features that bring it in line with other AI systems like ChatGPT or Claude. We got a look at them during the tech giant’s 50th anniversary event today, including new search capabilities and Copilot Vision, which will be able to analyze real-time video from a mobile camera. Copilot will also now be able to use the web on your behalf. Here’s everything you missed.

  • 0:00 Intro and Copilot Agents
  • 2:07 Copilot for planning
  • 2:30 Copilot AI podcast generating
  • 3:18 Copilot Shopping
  • 3:39 Copilot Vision
  • 4:07 Copilot feature use cases demo
  • 6:16 Researcher, Copilot Studio, custom agents
  • 6:53 Copilot Memory
  • 7:23 Custom Copilot appearances
  • 8:48 Outro

Lego Interstellar is stunning

Wow—I marvel at the insane, loving attention to detail in this shot-for-shot re-creation of a scene from Interstellar:

The creator writes,

I started working on this project in mid-2023, and it has been an incredible challenge, from modeling and animation to rendering and post-production. Every detail, from the explosive detachment of the Ranger to the rotation of the Endurance, the space suits of the minifigures, the insides of the Lander, and even the planet in the background, was carefully recreated in 3D.

Interstellar is one of the films that has moved me the most. Its story, visuals, and soundtrack left a lasting impact on me, and this video is my personal love letter to the movie—and to cinema in general. I wanted to capture the intensity and emotion of this scene in LEGO form, as a tribute to the power of storytelling through film.

Side by side:

Rustlin’ up some Russells

2025 marks an unheard-of 40th year in Adobe creative director Russell Brown’s remarkable tenure at the company. I remember first encountering him via the Out Of Office message marking his 15-year (!) sabbatical (off to Burning Man with Rick Smolan, if I recall correctly). If it weren’t for Russell’s last-minute intervention back in 2002, when I was living out my last hours before being laid off from Adobe (interviewing at Microsoft, lol), I’d never have had the career I did, and you wouldn’t be reading this now.

In any event, early in the pandemic Russell kept himself busy & entertained by taking a wild series of self portraits. Having done some 3D printing with him (the output of which still forms my Twitter avatar!), I thought, “Hmm, what would those personas look like as plastic action figures? Let’s see what ChatGPT thinks.” And voila, here they are.

Click through the tweet below if you’re curious about the making-of process (e.g. the app starting to render him very faithfully, then freaking out midway through & insisting on delivering a more stylized, less specific rendition). But forget that—how insane is it that any of this is possible??

“The Worlds of Riley Harper”

It’s pretty stunning what a single creator can now create in a matter of days! Check out this sequence & accompanying explanation (click on the post) from Martin Gent:

Tools used:

Severance, through the animated lens of ChatGPT

People can talk all the smack they want about “AI slop”—and to be sure, there’s tons of soulless slop going around—but good luck convincing me that there’s no creativity in remixing visual idioms, and in reskinning the world in never-before-possible ways. We’re just now dipping a toe into this new ocean.

See the whole thread for a range of fun examples:

OMG AI KFC

It’s insane what a single creator—in this case David Blagojević—can do with AI tools; insane.

It’s worth noting that creative synthesis like this doesn’t “just happen,” much less in some way that replaces or devalues the human perspective & taste at the heart of the process: everything still hinges on having an artistic eye, a wealth of long-cultivated taste, and the willpower to make one’s vision real. It’s just that the distance between that vision & reality is now radically shorter than it’s ever been.

New generative video hotness: Runway + Higgsfield

It’s funny to think of anyone & anything as being an “O.G.” in the generative space—but having been around for the last several years, Runway has as solid a claim as anyone. They’ve just dropped their Gen-4 model. Check out some amazing examples of character consistency & camera control:


Here’s just one of what I imagine will be a million impressive uses of the tech:

Meanwhile Higgsfield (of which I hadn’t heard before now) promises “AI video with swagger.” (Note: reel contains occasionally gory edgelord imagery.)

Virtual product photography in ChatGPT

Seeing this, I truly hope that Adobe isn’t as missing in action as they seem to be; fingers crossed.

In the meantime, simply uploading a pair of images & a simple prompt is more than enough to get some compelling results. See subsequent posts in the thread for details, including notes on some shortcomings I observed.

See also (one of a million tests being done in parallel, I’m sure):