Spline (2D/3D design in your browser) has added support for progressive blur & gradients, and the results look awesome.
I haven’t seen an advance like this in Adobe’s core apps in maybe 20 years, or even 25, since Illustrator & Acrobat added support for transparency.
We are adding Progressive Blur + Gradients to Hana! All interactive, all real-time.
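If you haven’t played with it, “progressive blur” just means the blur strength ramps across the image along a gradient instead of staying uniform. Here’s a toy Python sketch of the idea using Pillow and NumPy (file names are placeholders; the real in-app effects are GPU-accelerated and interactive):

```python
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("photo.jpg").convert("RGB")
w, h = img.size

# Pre-blur the image at a few increasing radii; level 0 is the sharp original.
levels = [img] + [img.filter(ImageFilter.GaussianBlur(r)) for r in (4, 8, 16)]
stack = np.stack([np.asarray(l, dtype=np.float32) for l in levels])  # (L, h, w, 3)

# Vertical ramp: sharp at the top of the frame, max blur at the bottom.
t = np.linspace(0.0, len(levels) - 1, h)[:, None]   # continuous blur level per row
lo = np.floor(t).astype(int)
hi = np.minimum(lo + 1, len(levels) - 1)
frac = (t - lo)[..., None]                          # blend factor between levels

rows = np.arange(h)[:, None]
cols = np.arange(w)[None, :]
out = (1 - frac) * stack[lo, rows, cols] + frac * stack[hi, rows, cols]

Image.fromarray(out.astype(np.uint8)).save("progressive_blur.jpg")
```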
On an aesthetically similar note, check out the launch video for the new version of Sketch (still very much alive & kicking in an age of Figma, it seems):
Remember when we said auto layout was coming to Sketch? It’s here. It’s called Stacks, and it’s part of our biggest release ever — out now.
There’s a lot to cover, so buckle up and we’ll give you a tour.
Also, stick around for a surprise at the end of the thread.
Opt in to get started: Head over to Search Labs and opt into the “try on” experiment.
Browse your style: When you’re shopping for shirts, pants or dresses on Google, simply tap the “try it on” icon on product listings.
Strike a pose: Upload a full-length photo of yourself. For best results, ensure it’s a full-body shot with good lighting and fitted clothing. Within moments, you can see how the garment will look on you.
Several years ago, my old teammates shared some promising research on how to facilitate more interesting typesetting. Check out this 1-minute overview:
Ever since the work landed in Adobe Express a while back, I’ve wondered why it hasn’t yet made its way to Photoshop or Illustrator. Now, at least, it looks like it’s on its way to PS:
The feature looks cool, and I’m eager to try it out, but I hope that Adobe will keep trying to offer something more semantically grounded (i.e. where word size is tied to actual semantic importance, not just rectangular shape bounds)—like what we shipped last year:
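To make that distinction concrete, here’s a toy Python sketch of my own (not Adobe’s actual method, nor my old team’s) that sizes each word by a crude rarity-based importance score rather than by how neatly its bounding box packs:

```python
import math
from collections import Counter

def importance(words, corpus_counts, total_docs):
    # Rare words (low corpus count) score high; common words score low.
    return {w: math.log(total_docs / (1 + corpus_counts.get(w.lower(), 0)))
            for w in words}

def sizes(words, min_pt=12, max_pt=48):
    # Stand-in corpus statistics; a real system would use true frequencies
    # (or an actual semantic model) instead.
    corpus_counts = Counter({"the": 900, "of": 800, "eclipse": 3, "sun": 40})
    scores = importance(words, corpus_counts, total_docs=1000)
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {w: round(min_pt + (s - lo) / span * (max_pt - min_pt), 1)
            for w, s in scores.items()}

print(sizes(["The", "Eclipse", "of", "the", "Sun"]))
# "Eclipse" lands near 48pt while "the"/"of" hug 12pt: size tracks meaning,
# not just how conveniently a word's rectangle fills space.
```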
Man, for 18 years (yes, I keep the receipts) I’ve been wanting to ship an interactive relighting experience—and now my team has done it! Check out the quick demo below plus details on DP Review.
Good news! You too can capture footage exactly like this. You just need a $100,000 Phantom Flex 4K with a Canon 50-1000mm lens—oh, and you need to be hanging out the side of a Black Hawk helicopter:
We’ve released the code for LegoGPT. This autoregressive model generates physically stable and buildable designs from text prompts, by integrating physics laws and assembly constraints into LLM training and inference.
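Conceptually it’s a constrained decoding loop: the model proposes the next brick, and any placement that violates assembly or stability checks gets rejected. Here’s a toy, runnable Python illustration of that pattern; the random proposer and the support check are simplistic stand-ins for the real LLM and physics reasoning (see the released code for the genuine article):

```python
import random

# A brick is (x, y, z, width, depth) on an integer grid, one unit tall.

def cells(brick):
    x, y, z, w, d = brick
    return {(x + i, y + j, z) for i in range(w) for j in range(d)}

def collides(state, brick):
    occupied = {c for b in state for c in cells(b)}
    return bool(cells(brick) & occupied)

def supported(state, brick):
    # Stability proxy: the brick sits on the ground or overlaps a brick one
    # layer below. (LegoGPT uses an actual physics/stability analysis.)
    if brick[2] == 0:
        return True
    tops = {(x, y, z + 1) for b in state for (x, y, z) in cells(b)}
    return bool(cells(brick) & tops)

def propose(state, grid=8):
    # Stand-in for the LLM: random candidate placements, "most likely" first.
    return [(random.randrange(grid), random.randrange(grid),
             random.randrange(4), random.choice([1, 2, 4]),
             random.choice([1, 2]))
            for _ in range(32)]

def generate_build(max_bricks=20):
    state = []
    for _ in range(max_bricks):
        for brick in propose(state):
            # Reject any proposal that breaks assembly or stability rules.
            if not collides(state, brick) and supported(state, brick):
                state.append(brick)
                break
        else:
            break  # no physically valid continuation; stop generating
    return state

print(generate_build())
```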
I have to admit, I don’t know Erwitt’s photography nearly as well as I know his name, but this largely humorous new collection makes me want to change that:
Continuing their excellent work to offer more artistic control over image creation, the fast-moving crew at Krea has introduced GPT Paint—essentially a simple canvas for composing image references to guide the generative process. You can sketch directly and/or position reference images, then combine that input with prompts & style references to fine-tune compositions:
introducing GPT Paint.
now you can prompt ChatGPT visually through edit marks, basic shapes, notes, and reference images.
Historically, approaches like this have sounded great but—at least in my experience—have fallen short.
Think about what you’d get from just saying “draw a photorealistic beautiful red Ferrari” vs. feeding in a crude sketch + the same prompt.
In my quick tests here, however, providing a simple reference sketch seems helpful—maybe because GPT-4o is smart enough to say, “Okay, make a duck with this rough pose/position—but don’t worry about exactly matching the finger-painted brushstrokes.” The increased sense of intentionality & creative ownership feels very cool. Here’s a quick test:
I’m not quite sure where the spooky skull and, um, lightning-infused martini came from. 🙂
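Krea hasn’t published GPT Paint’s internals, but the general pattern (feeding a rough sketch image plus a text prompt to an image-editing model) is easy to approximate. Here’s a minimal Python sketch assuming OpenAI’s gpt-image-1 edit endpoint; the file names and prompt are placeholders:

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a crude reference sketch plus a prompt to the image-edit endpoint.
result = client.images.edit(
    model="gpt-image-1",
    image=open("duck_sketch.png", "rb"),  # placeholder: a finger-painted duck
    prompt=("A photorealistic duck matching this rough pose and position; "
            "don't try to reproduce the sketch's brushstroke texture."),
)

# gpt-image-1 returns the result as base64-encoded image data.
with open("duck_render.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```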