Monthly Archives: May 2025

Stay frosty, UI

Spline (2D/3D design in your browser) has added support for progressive blur & gradients, and the results look awesome.

I haven’t seen anything advance like this in Adobe’s core apps in maybe 20 years, maybe 25, since Illustrator & Acrobat added support for transparency.

On an aesthetically similar note, check out the launch video for the new version of Sketch (still very much alive & kicking in an age of Figma, it seems):

New Google virtual try-on tech

Take it away, Marques:

To try it yourself:

  • Opt in to get started: Head over to Search Labs and opt into the “try on” experiment.
  • Browse your style: When you’re shopping for shirts, pants or dresses on Google, simply tap the “try it on” icon on product listings.
  • Strike a pose: Upload a full-length photo of yourself. For best results, ensure it’s a full-body shot with good lighting and fitted clothing. Within moments, you can see how the garment will look on you.

“Kafkaesque Workplace Theater”

Sounds like kind of an awful band, doesn’t it? How about “Prompt Washing & the Insight Decay Spiral”? (Take that, Billy Corgan.)

This list from Brad Koch puts a finger directly on some of the maladaptive behaviors we’re seeing in our new cognitive golden age…

“Dynamic Text” is coming to Photoshop

Several years ago, my old teammates shared some promising research on how to facilitate more interesting typesetting. Check out this 1-minute overview:

Ever since the work landed in Adobe Express a while back, I’ve wondered why it hasn’t yet made its way to Photoshop or Illustrator. Now, at least, it looks like it’s on its way to PS:

@howardpinsky: “Dynamic text is finally coming to #Photoshop! You can try it right now in the Beta.”

The feature looks cool, and I’m eager to try it out, but I hope that Adobe will keep trying to offer something more semantically grounded (i.e. where word size is tied to actual semantic importance, not just rectangular shape bounds)—like what we shipped last year:
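To make the distinction concrete, here’s a toy Python sketch (purely illustrative; it is not Adobe’s implementation, and the importance scores are invented) of sizing words by semantic weight rather than by how much of a rectangular bound they happen to fill:

```python
# Toy illustration: size words by a hypothetical semantic-importance score
# instead of stretching each word to fill its rectangular shape bounds.

MIN_PT, MAX_PT = 18, 96  # hypothetical font-size range, in points

def sizes_by_importance(words: dict[str, float]) -> dict[str, float]:
    """Map each word's 0-1 importance score to a font size in points."""
    lo, hi = min(words.values()), max(words.values())
    span = (hi - lo) or 1.0  # avoid divide-by-zero when all scores match
    return {
        word: MIN_PT + (score - lo) / span * (MAX_PT - MIN_PT)
        for word, score in words.items()
    }

if __name__ == "__main__":
    # Made-up scores for a poster headline; a real system would derive
    # them from a language model rather than hard-coding them.
    headline = {"HUGE": 0.95, "summer": 0.4, "sale": 0.9, "this": 0.1, "weekend": 0.55}
    for word, pt in sizes_by_importance(headline).items():
        print(f"{word:>8}: {pt:5.1f} pt")
```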

Higgsfield debuts Ads

Sigh… having quickly exhausted my paid credits, Imma have to up my subscription level, aren’t I? But these are good problems to have. 🙂

Krea introduces “GPT Paint”

Continuing their excellent work to offer more artistic control over image creation, the fast-moving crew at Krea has introduced GPT Paint—essentially a simple canvas for composing image references to guide the generative process. You can sketch directly and/or position reference images, then combine that input with prompts & style references to fine-tune compositions:

Historically, approaches like this have sounded great but—at least in my experience—have fallen short.

Think about what you’d get from just saying “draw a photorealistic beautiful red Ferrari” vs. feeding in a crude sketch + the same prompt.

In my quick tests here, however, providing a simple reference sketch seems helpful—maybe because GPT-4o is smart enough to say, “Okay, make a duck with this rough pose/position—but don’t worry about exactly matching the finger-painted brushstrokes.” The increased sense of intentionality & creative ownership feels very cool. Here’s a quick test:

I’m not quite sure where the spooky skull and, um, lightning-infused martini came from. 🙂
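For the curious, here’s roughly what that prompt-only vs. sketch-plus-prompt comparison looks like if you drive it from code instead of Krea’s UI. (Krea hasn’t published how GPT Paint works under the hood, so this is just a sketch against OpenAI’s image endpoints; the filenames and prompt are made up.)

```python
# Rough sketch of the "prompt only" vs. "sketch + prompt" comparison,
# using OpenAI's image API directly (not Krea's pipeline; filenames and
# prompts are invented for illustration).
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
PROMPT = "a photorealistic beautiful red Ferrari parked at sunset"

# 1) Prompt only: the model decides composition entirely on its own.
prompt_only = client.images.generate(model="gpt-image-1", prompt=PROMPT)

# 2) Sketch + prompt: a crude finger-painted sketch suggests pose/position,
#    while the prompt supplies the subject and style.
with open("crude_ferrari_sketch.png", "rb") as sketch:
    guided = client.images.edit(
        model="gpt-image-1",
        image=sketch,
        prompt=PROMPT + ", loosely following the composition of the sketch",
    )

# gpt-image-1 returns base64-encoded images; save both for comparison.
for name, result in [("prompt_only.png", prompt_only), ("sketch_guided.png", guided)]:
    with open(name, "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))
```

The interesting lever is simply that the second call gets a rough composition alongside the prompt, which is the same sense of intentionality GPT Paint exposes through its canvas.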