All posts by jnack

Safari: Shake a Tailfeather

To be honest I’ve never taken more than a passing interest in most birds, let alone in photographing them, but the insane diversity of those in southern Africa was too much to resist. Here are some of my favorites we spied on our journey through Zimbabwe & Botswana:

Meanwhile we enjoyed visiting Painted Dog Conservation and learning about their tireless efforts to preserve & rehabilitate some of the 6,000 or so of these unique animals that remain in the wild—and that often fall prey to poachers’ snares. Tap/click to see a rather charming little vid:

Here’s a bit more about their work:

Childhood drawings brought to life through Midjourney video

Even though I got absolutely wrecked for having the temerity to use one of my son’s cute old drawings in an AI project last year (no point in now digging up the hundreds of flames it drew), I still enjoy seeing this kind of creative interpretation:

Gemini enables image-to-video

Man, am I now gonna splash out for another monthly subscription? I haven’t done so yet, but these results are pretty darn impressive:

To turn your photos into videos, select ‘Videos’ from the tool menu in the prompt box and upload a photo. … The photo-to-video capability is starting to roll out today to Google AI Pro and Ultra subscribers in select countries around the world. Try it out at gemini.google.com. These same capabilities are also available in Flow, Google’s AI filmmaking tool.

Safari: Big Mouths & Small Spots

Hey friends—we’ve made it home to Cali after a whirlwind trip to Zimbabwe & Botswana. I’ll try to post some observations about the state of photo editing these days, and I’d love to hear yours. Meanwhile, while my body still tries to clue into where & when the heck I am, here are a few small galleries I’ve shared so far:


Gone (lion) fishin’

D’oh—before heading to Zimbabwe & Botswana with my wife to celebrate our 20th anniversary, I neglected to mention that things would be a bit quieter around here than normal. We plan to return to the States next week, and I might share a few posts between now & then. Meanwhile, check out some new friends we made this morning!

My birthday gift: Ditching AI

My family, having seen so many of my AI-powered image generations over the last 3 years, is just utterly inured to them. So, for my MiniMe’s 16th, I sketched up the patriotic little HO-scale engine we’re getting him, along with a cute large ground squirrel (to quote the Dude, “Nice marmot”).

I feel like this is my micro version of when the world revolted against too-perfect Instagram culture, swinging towards Snapchat & stories, where “rough is real,” and flaws are a feature. In any case, my dude was happy as a clam—and that’s all that matters to me.

Higgsfield Soul: Generate -> Inpaint -> Animate

Okay, so this isn’t precisely what I thought it was at first (video inpainting), but rather a creation -> inpainting -> animation flow. Still, the results look impressive:

Forgotten design: The Africar

“If you’re into weird cars, forgotten history, and stories that don’t end well, hit that subscribe button.”

I found this piece really interesting, not least because my wife & I are headed to Africa for the first time next week, and I’m eager to learn what kinds of vehicles & roads we’ll experience. Seems like something like the Africar would make a ton of sense in many places:

John Oliver vs. AI slop

“What a fun way to celebrate the destruction of our shared objective reality!” :->

But honestly this is a really insightful, hilarious, and eye-opening tour through the charms & many, many discontents of our new world:

Google steps up virtual try-on with Doppl

As I’ve noted previously, Google has been trying to crack the try-on game for a long time. Back in the day (c. 2017), we really wanted to create AR-enabled mirrors that could do this kind of thing. The tech wasn’t quite ready, and for the realtime mirror use case it likely still isn’t, but check out the new free iOS & Android app Doppl:

In May, Google Shopping announced the ability to virtually try billions of clothing items on yourself, just by uploading a photo. Doppl builds on these capabilities, bringing additional experimental features, including the ability to use photos or screenshots to “try on” outfits whenever inspiration strikes.

Doppl also brings your looks to life with AI-generated videos — converting static images into dynamic visuals that give you an even better sense for how an outfit might feel. Just upload a picture of an outfit, and Doppl does the rest.

AI brings people to tears—of joy

Several years ago, MyHeritage saw a huge (albeit short-lived) spike in interest from their Deep Nostalgia feature that animated one’s old photos. Everything old is new again, in many senses. Check out Reddit founder Alexis Ohanian talk about how touching he found the tech—as well as tons of blowback from people who find it dystopian.

Greg the Stormtrooper

I’ve heard people referring to the recent release of Google’s Veo 3 as the ChatGPT moment for video generation—that is, a true inflection point at which a mere curiosity becomes something of real value. The spatial & character coherence of its output, and especially its ability to generate speech & other audio, turn it into a genuine storytelling tool.

You’ve probably seen some of the myriad vlogger-genre creations making the rounds. Here’s one of my faves:

Jawas gone wild

On classic cars & the feeling of craft

John Gruber recently linked back to this clip in which designer Neven Mrgan highlights what feels like an important consideration in the age of mass-generated AI “designs”:

I think what mattered was that they looked rich, they looked like a lot of work had been put into them. That’s what people latch onto. It seems it’s something that, yes, they should have spent money on, and they should be spending time on right now.

Regardless of what tools were used in the making of a piece, does it feel rich, crafted, thoughtfully made? Does it have a point, and a point of view? As production gets faster, those qualities will become all the more critical for anything—and anyone—wishing to stand out.

An incredible PM role opens up on Photoshop

This could be an awesome opportunity for the right person, who’d get to work on things I’ve wanted the team to do for 15+ years!

We’re looking for an expert technical product manager to lead Photoshop’s foundational architecture and performance strategy. This is a pivotal role responsible for evolving the core technologies that power Photoshop’s speed, stability, and future scalability across platforms.

You’ll drive major efforts to modernize our rendering and compute architecture, migrate legacy systems to more scalable platforms, and accelerate performance through GPU and hardware optimization. This work touches nearly every part of Photoshop, from canvas rendering to feature responsiveness to long-term cross-platform consistency.

This is a principal-level individual contributor role with the potential to grow a team in the future.

“Tell me about a product you hate…”

I interviewed many hundreds of PM candidates at Google, and if things were going well, I’d ask, “Tell me about a product you hate that you use regularly. Why do you hate it?”

This proved to be a great bozo detector. Does this person have curiosity, conviction, passion, unreasonableness? Were they forced into coding & now just want to escape life in the damn debugger, or do they have a semi-pathological need to build stuff they’re proud of? Would I want them in the proverbial foxhole with me? Are they willing to sweep the floor?

Unsurprisingly, most candidates offer shallow, banal answers (“Uh, wow… I mean, I guess the ESPN app is kinda slow…?”), whereas great ones explain not just what sucks, but why it sucks. Like, why—systemically—is every car infotainment system such crap? Those are the PMs I want asking the questions, then questioning the answers.

——-

Specifically on the car front, as Tolstoy might say, “Each one is unhappy in its own way.” The most interesting thing, I think, isn’t just to talk about the crappy mismatched & competing experiences, but rather about why every system I’ve ever used sucks. The answer can’t be “Every person at every company is a moron”—so what is it?

So much comes down to the structure of the industry, with hardware & software being made by a mishmash of corporate frenemies, all contending with a soup of regulations, risk aversion (one recall can destroy the profitability of a whole product line), and surprisingly bargain-bin electronics.

Check out this short vid for some great insights from Ford CEO Jim Farley:

“A surrealist design engine no one asked for”

A while back, Sam Harris & Ricky Gervais discussed the impossibility of translating a joke discovered during a dream (“What noise does a monster make?”) back into our consensus waking reality. Like… what?

I get the same vibes watching ChatGPT try to dredge up some model of me and of… humor?… in creating a comic strip based on our interactions. I find it uncanny, inscrutable, and yet consequently charming all at once.

The new Flux rocks for image restoration

Please tell me Adobe is hiding off screen, secretly cooking up magic. Please.

Meanwhile, you can try it yourself here.

Stay frosty, UI

Splice (2D/3D design in your browser) has added support for progressive blur & gradients, and the results look awesome.

I haven’t seen anything advance like this in Adobe’s core apps in maybe 20 years, maybe 25, since Illustrator & Acrobat added support for transparency.

On an aesthetically similar note, check out the launch video for the new version of Sketch (still very much alive & kicking in an age of Figma, it seems):

New Google virtual try-on tech

Take it away, Marques:

To try it yourself:

  • Opt in to get started: Head over to Search Labs and opt into the “try on” experiment.
  • Browse your style: When you’re shopping for shirts, pants or dresses on Google, simply tap the “try it on” icon on product listings.
  • Strike a pose: Upload a full-length photo of yourself. For best results, ensure it’s a full-body shot with good lighting and fitted clothing. Within moments, you can see how the garment will look on you.

“Kafkaesque Workplace Theater”

Sounds like kind of an awful band, doesn’t it? How about “Prompt Washing & the Insight Decay Spiral?” (Take that, Billy Corgan.)

This list from Brad Koch puts a finger directly on some of the maladaptive behaviors we’re seeing in our new cognitive golden age…

“Dynamic Text” is coming to Photoshop

Several years ago, my old teammates shared some promising research on how to facilitate more interesting typesetting. Check out this 1-minute overview:

Ever since the work landed in Adobe Express a while back, I’ve wondered why it hadn’t yet made its way to Photoshop or Illustrator. Now, at least, it looks like it’s on its way to PS:

@howardpinsky: “Dynamic text is finally coming to #Photoshop! You can try it right now in the Beta.”

The feature looks cool, and I’m eager to try it out, but I hope that Adobe will keep trying to offer something more semantically grounded (i.e. where word size is tied to actual semantic importance, not just rectangular shape bounds)—like what we shipped last year:

Higgsfield debuts Ads

Sigh… having quickly exhausted my paid credits, Imma have to up my subscription level, aren’t I? But these are good problems to have. 🙂

Krea introduces “GPT Paint”

Continuing their excellent work to offer more artistic control over image creation, the fast-moving crew at Krea has introduced GPT Paint—essentially a simple canvas for composing image references to guide the generative process. You can directly sketch, and/or position reference images, then combine the input with prompts & style references to fine-tune compositions:

Historically, approaches like this have sounded great but—at least in my experience—have fallen short.

Think about what you’d get from just saying “draw a photorealistic beautiful red Ferrari” vs. feeding in a crude sketch + the same prompt.

In my quick tests here, however, providing a simple reference sketch seems helpful—maybe because GPT-4o is smart enough to say, “Okay, make a duck with this rough pose/position—but don’t worry about exactly matching the finger-painted brushstrokes.” The increased sense of intentionality & creative ownership feels very cool. Here’s a quick test:

I’m not quite sure where the spooky skull and, um, lightning-infused martini came from. 🙂

The explosive titles of “Your Friends & Neighbors”

As Motionographer aptly puts it,

Director John Likens and FX Supervisor Tomas Slancik dissect existential collapse in Your Friends & Neighbors’ haunting opener, blending Jon Hamm’s live-action gravitas with a symphony of digital decay. […]

Shot across two days and polished by world-class VFX artists, the title sequence mirrors Hamm’s crumbling protagonist, juxtaposing his stoic performance against hyper-detailed destruction.

GPT-4o image creation is coming to Designer!

Having created 200+ images in just the last month via this still-new image model (see new blog category that gathers some of them), I’m delighted to say that my team is working to bring it to Microsoft Designer, Copilot, and beyond. From the boss himself:

Fun recent GPT-4o explorations

Just sharing a few things I’ve been trying.
For Easter, my cousin’s sweet pup as sweet treats:

Bespoke felt ornaments FTW:


Creating cozy slippers from an A-10 Warthog:

StarVector: Text/Image->SVG Code

Back at Adobe we introduced Firefly text-to-vector creation, but behind the scenes it was really text-to-image-to-tracing. That could be fine, actually, provided that the conversion process did some smart things around segmenting the image, moving objects onto their own layers, filling holes, and then harmoniously vectorizing the results. I’m not sure whether Adobe actually got around to shipping that support.

In any event, StarVector promises actual, direct creation of SVG. The results look simple enough that it hasn’t yet piqued my interest enough to spend my time with it, but I’m glad that folks are trying.