Monthly Archives: September 2025

HBoooookay

“A few weeks ago,” writes John Gruber, “designer James Barnard made this TikTok video about what seemed to be a few mistakes in HBO’s logo. He got a bunch of crap from commenters arguing that they weren’t mistakes at all. Then he heard from the designer of the original version of the logo, from the 1970s.”

Check out these surprisingly interesting three minutes of logo design history:

@barnardco “Who. Cares? Unfollowed” This is how a *lot* of people responded to my post about the mistake in the HBO logo. For those that didn’t see it, the H and the B of the logo don’t line up at the top of the official vector version from the website. Not only that, but the original designer @Gerard Huerta700 got in touch! Long story short, we’re all good, and Designerrrs™ community members can watch my interview with Gerard Huerta where we talk about this and his illustrious career! #hbo #typography #logodesign #logo #designtok

How to change your eval ways (baby)

As much as one can be said to enjoy thinking through the details of how to evaluate AI (and it actually can be kinda fun!), I enjoyed this in-depth guide from Hamel Husain & Shreya Shankar.

All year I’ve been focusing pretty intently on how to tease out the details of what makes image creation & editing models “good” (e.g. spelling, human realism, prompt alignment, detail preservation, and more). This talk pops up a level, focusing more on holistic analysis of end-to-end experiences. If you’re doing that kind of work, or even if you just want to better understand the kind of thing that’s super interesting to hiring managers now, I think you’ll find watching this to be time well spent.

Photoshop integrates Flux, Nano Banana

I’m so happy to see Adobe greatly accelerating the pace of third-party (3p) API integrations!

Show your work, AI edition

Microsoft VP Aparna Chennapragada, who recruited me to Microsoft after I reported to her at Google, recently wrote a thoughtful piece about building trust through transparency. Specifically around AI agents, we want less of this…

…and more of this:

I agree completely. Having some thoughtful back-and-forth makes me feel better understood & therefore more confident in my assistant’s work.

And feel here is a big deal. As Maya Angelou said, “People won’t remember what you said, or even what you did, but they’ll remember how you made them *feel*.” Microsoft AI leader (and previously DeepMind cofounder) Mustafa Suleyman totally gets this.

Conversely, I just saw a founder advertising his product as “visual storytelling on autopilot.” I get the intent, but I find the phrasing oxymoronic: would any worthwhile “story” be generated by autopilot? Yuck.

When apps try to do too much with my sparse input, seeing the results makes me feel like Neven Mrgan did upon receiving AI-generated slop from a friend: “I was repelled, as if digital anthrax had poured out of the app.” I don’t even want to read such content, much less share it, much less be judged on it.

So yeah, apps: ditch autopilot & instead take the time to show interest & ask good questions. “Slow is smooth, and smooth is fast”—and a little thoughtfulness up front will save me time while increasing my pride of ownership.

Google Flow adds Nano Banana

In addition to adding support for vertical video & greater character consistency, the new Veo-powered storytelling tool now includes direct image creation & manipulation via tiny, tiny fruit:

Google introduces “Learn Your Way”

This paper seems really promising. From textbooks, it can generate:

— Mind maps if you think visually
— Audio lessons with simulated teacher conversations
— Interactive timelines
— Quizzes that change based on where you’re struggling

More details:

Vibe Coding at Google: Prototyping the all-new AI Studio

Check out these interesting insights from the former head of design at ElevenLabs, who recently joined Google to help build their AI Studio:

In today’s episode Ammaar Reshi shows exactly how he uses AI to prototype ideas for the new Google AI Studio. He shares his Figma files and two example prototypes (including how he vibe-coded his own version of AI Studio in a couple of days). We also go deep into:

— 4 lessons for vibe-coding like a pro
— When to rely on mockups vs. AI prototypes
— Ammaar’s step-by-step process for prompting
— How Ammaar thinks about the fidelity of his prototypes
— A lot more

BFL’s Flux hackathon kicks off

Prizes include $5,000, an NVIDIA RTX 5090 GPU, and $3,000 in FAL credits. Check out the site for more info.

“Ruining” art with Nano Banana

But, y’know, in a fun & cheeky way. 🙂 Check out this little iterative experiment from Ethan Mollick:

As a longtime Bosch enthusiast, I’m partial to this one:

Reminds me of the time in 2023 (i.e. 10,000 AI years ago) that I forced DALL•E to keep making images look more & more “cheugy”:

The Phantom Superbad

I never want to get used to just how transformative the latest crop of AI-powered tools has become! Check out just one of the latest examples:

Nano Banana is coming to Photoshop—officially!

“Yes, And”: It’s the golden rule of improv comedy, and it’s the title of the paper I wrote & circulated throughout Adobe as soon as DALL•E dropped 3+ years ago: yes, we should make our own great models, and of course we should integrate the best of what the rest of the world is making! I mean, duh, why wouldn’t we??

This stuff can take time, of course (oh, so much time), but here we are: Adobe has announced that Google’s Nano Banana editing model will be coming to a Photoshop beta build near you in the immediate future.

Side note: it’s funny that in order to really upgrade Photoshop, one of the key minds behind Firefly simply needed to quit the company, move to Google, build Nano Banana, and then license it back to Adobe. Funny ol’ world…

Beautiful new AI mograph explorations

Check out this new work from Alex Patrascu. As generative video tools continue to improve in power & precision, what’ll be the role of traditional apps like After Effects? ¯\_(ツ)_/¯

AI Lego Redux

Back when DALL•E 3 launched (not even two years ago, though in AI time it feels like a million), I used it to delight friends by rendering them & their signature vehicles in Lego form.

Now that Google’s Nano Banana model has dropped, I felt like revisiting the challenge, comparing its results to the DALL•E originals plus ones from ChatGPT 4o.

As you can see in the results, 4o increases realism relative to DALL•E, but it loses a lot of expressiveness & soul. Nano Banana manages to deliver the best of both worlds.

Nano Banana comes to Photoshop

Rob de Winter is back at it, mixing in Google’s new model alongside Flux Kontext.

Rob notes,

From my experiments so far:
• Gemini shines at easy conversational prompting, character consistency, color accuracy, understanding reference images
• Flux Kontext wins at relighting, blending, and atmosphere consistency

Barber, gimme the “Kling-Nano Banana…”

And yes, I do feel like I’m having a stroke when I type out actual phrases like that. 🙂 But putting that aside, check out the hairstyling magic that can come from pairing Google’s latest image-editing model with an image-to-video system: