Monthly Archives: March 2021

Adobe introduces the Design Mobile Bundle

It’s cool to see these mobile creativity apps Voltron-ing together via the new Adobe Design Mobile Bundle, which includes the company’s best design apps for the iPad at 50% off when purchased together. Per the site:

  • Photoshop: Edit, composite, and create beautiful images, graphics, and art.
  • Illustrator: Create beautiful vector art and illustrations.
  • Fresco: Draw and paint with thousands of natural brushes.
  • Spark Post: Make stunning social graphics — in seconds.
  • Creative Cloud: Mobile access to your Creative Cloud assets, livestreams, and learn content.

More good stuff is coming to Fresco soon, too:

Then, there are live oil brushes in Fresco that you just don’t get in any other app. In Fresco, today, you can replicate the look of natural media like oils, watercolors and charcoal — soon you’ll be able to add motion as well! We showed a sneak peek at the workshop, and it blew people’s minds.

TinyElvis has (re)entered the building

…at least virtually.

Well gang, it’s official: I’m back at Adobe! Through the magic of technology, I found myself going through orientation yesterday in a desert motel room on Route 66 while my son/co-pilot/astromech droid attended 6th grade next to me. I was reminded of a dog walking on its hind legs: it doesn’t work well, but one is impressed that it works at all. 😌 Afterwards we powered through the last six hours of our epic drive down 66 & its successors from Illinois to CA.

The blog may remain somewhat quiet for a bit as I find my sea legs, catch up with old friends, meet new folks, and realize how much I have to learn. It should be a great* journey, however, and I’m grateful to have you along for the ride!

*Mostly 😉:

Adobe’s looking for a Neural Filters PM

My excitement about what’s been going on here at the Big Red A is what drew me to reach out & eventually return (scheduled for Monday!). If you are (or know) a seasoned product manager who loves machine learning, check out this kickass listing:

Neural Filters is a new ML/GAN-based set of creative features that recently launched in Photoshop and will eventually expand to the entire suite of Creative Cloud apps, helping to establish the foundations of AI-powered creative tools. The applications of these ML-backed technologies range from imaginative portrait edits, like adjusting the age of a subject, to colorizing B/W images to restoring old photos. As the technology evolves, so too will its applicability to other media like illustrations, video, 3D, and more.

The Principal PM will help define the strategy for investments in new editing paradigms and training models, and will broaden the applicability of Neural Filters in apps like Photoshop, Fresco, After Effects, and Aero!

For some context, here’s an overview courtesy of NVIDIA, whose StyleGAN tech powers the feature:

And check out Neural Filters working on Conan O’Brien back at Adobe MAX:

Font Me, Amadeus

“The world’s first typeface you can hear and play” sounds (heh) interesting. Per DesignTaxi,

Visualizing the brilliance of Amadeus Mozart, branding agency Happy People Project has created a typeface to front communications for Peter Shaffer’s play, Amadeus, in Turkey. […]

14 numbers and letters were created in line with notes and octaves on the staff, so you could listen to them. In total, though, a massive font family of 574 characters was designed for the project.

Check it out:

2-minute tour: ProRAW + Lightroom

Over the last 20 years or so, photographers have faced a slightly Faustian bargain: shoot JPEG & get the benefits of a camera manufacturer’s ability to tune output with a camera’s on-board smarts; or shoot raw and get more dynamic range and white balance flexibility—at the cost of losing that tuning and having to do more manual work.

Fortunately Adobe & Apple have been collaborating for many months to get Apple’s ProRAW variant of DNG supported in Camera Raw and Lightroom, and here Russell Brown provides a quick tour of how capture and editing work:


A post shared by Russell Preston Brown (@dr_brown)

“You Look Like A Thing, And I Love You”

I really enjoyed listening to the podcast version of this funny, accessible talk from AI Weirdness writer Janelle Shane, and think you’d get a kick out of it, too.

On her blog, Janelle writes about AI and the weird, funny results it can produce. She has trained AIs to produce things like cat names, paint colors, and candy heart messages. In this talk she explains how AIs learn, fail, adapt, and reflect the best and worst of humanity.

Happy St. Paddy’s from one disgruntled leprechaun

We can’t celebrate in person with pals this year, but here’s a bit of good cheer from our wee man (victim of the old “raisin cookie fake-out”):

Saturday, March 16, 2019

Meanwhile, I just stumbled across this hearty “Sláinte” from Bill Burr. 😌

And on a less goofball note,

May the road rise to meet you,
May the wind be always at your back.
May the sun shine warm upon your face,
The rains fall soft upon your fields.
And until we meet again,
May God hold you in the palm of His hand.

☘️ J.

Nack to the Future: I’m returning to Adobe!

Well everything dies, baby, that’s a fact.
But maybe everything that dies, someday comes back…

I’m thrilled to say that seven years after heading out to “Teach Google Photoshop,” I’m returning to where my PM journey started in the year 2000. As the old saying goes, “You can take the boy out of Adobe, but…” As I said in 2019:

…and as I’d type my brain would autocomplete “[Google] Photos” to “Photoshop.” It’s funny, too: my Google orientation took place in Adobe’s former HQ in Mountain View, where (Photoshop vets at Google tell me) Photoshop for Unix had once been written. I didn’t need a DeLorean to feel a time warp—but Google brought one to the party anyway:

It was quite the interesting journey (partially summarized here), and as you’d expect, I have a lot of thoughts on the subject. I hope to share those soon—but today is all about looking forward.

Sooo… what will I be doing? I can’t say much yet as 1) I don’t want to speak out of turn, and 2) I won’t officially start for two weeks (“Twooo weeeeks…!!”)

I can say, though, that I’ve been really excited by what I’ve seen from Adobe in the last couple of years (see innumerable previous posts), especially around Neural Filters, and I love that they’re exploring what one might call “AI-first creation tools.”

Now, if you ask me precisely what that means, I’ll yell “No one knows what it means—but it’s provocative! Gets the people GOIN’!!” 🙃 But seriously, if any of us had it all figured out, it’d be boring and they wouldn’t need me. I’m what Larry Page might call “uncomfortably excited” to commit to the journey—to messing around & finding out. I can’t wait to team up with friends new & old—including a couple of amazing researchers who’ve also returned from tours at Google.

So, please watch this space for more details. In the meantime I’m due to fly to Chicago to start an epic road trip down Route 66 with my son, driving my dad’s ancient Miata all the way to California. Stay tuned for what’ll be an almost punitive number of pics & vids. 😌

And with that—let the new adventures begin!

Insta360 GO 2: Finally a wearable cam that doesn’t suck?

Photo-taking often presents a Faustian bargain: be able to relive memories later, but at the cost of being less present in the experience as it happens. When my team researched why people do & don’t take photos, wanting to be present & not intrusive/obnoxious were key reasons not to bring out a camera.

So what if you could wear not just a lightweight, unobtrusive capture device, but actually wear a photographer—an intelligence that could capture the best moments, leaving your hands & mind free in the moment? Even naive, interval-based capture could produce a really interesting journey through space, as Blaise Agüera y Arcas demonstrated at Microsoft back in 2013:

It’s a long-held dream that products like Google’s Clips camera (which Blaise led at Google) have tried to achieve, thus far without any notable success. Clips proved to be too large & heavy for many people to wear comfortably, and training an AI model to find “good” moments ends up being much harder than one might imagine. Google discontinued Clips, though as a consolation prize I ended up delighting my young son by bringing home reams of unused printed circuit boards (which for some reason resembled the Millennium Falcon). Meanwhile, Microsoft discontinued Photosynth.

The need remains & the dream won’t die, however, so I was excited ~18 months ago when Insta360 introduced the GO, a $199 “20-gram steadicam.” It promised ultra-lightweight wearability, photo capture, and a slick AirPods-style case for both recharging & data transfer. The wide-FOV capture promised post-capture reframing driven by (you guessed it) mythical AI that could select the best moments.

Others (including many on the Insta forum) were skeptical, but I was enamored enough that my wife bought me one for Christmas. Sadly, buying Insta products is a little like Russian Roulette (e.g. I have loved the One X & subsequent X2, while the One R has been a worthless paperweight), and the GO ended up on the bummer side of the ledger. I found it way too hard to reliably start/stop & to transfer data. It’s been another paperweight.

To their possible credit (TBD), though, Insta has persisted with the product and has released the GO 2—now more expensive ($299) but promising a host of improvements (wireless preview & transfer, better storage & battery, etc.). Check it out:

“Looks perfect for a proctologist, which is where Insta can shove it,” said one salty user on the Insta forum. Will it finally work well? I don’t know—but I’m just hungry/sucker enough to pull the trigger, F around & find out. Hopefully it’ll arrive in advance of the road trip I’m planning with my son, so stay tuned for real-world findings.

Meanwhile, here’s a review I found thorough & informative—not least in its innovative use of gummi bears as a unit of measure 🙃:

Oh, and I did not order the forthcoming Minion mod (a real thing, they swear):

Sketchfab enables easy 3D content migration from Google Poly

I was sorry to see the announcement that Google’s Poly 3D repository is going away, but I’m happy to see the great folks at Sketchfab stepping up to help creators easily migrate their content:

Poly-to-Sketchfab will help members of the Poly community easily transfer their models to Sketchfab before Poly closes its doors this summer. We’re happy to welcome the Poly community to Sketchfab and look forward to exploring their 3D creations.

Our Poly-to-Sketchfab app connects to both your Poly and Sketchfab accounts, presents you with a list of models that can be transferred, and then copies the models that you select from Poly to Sketchfab.