How totally & completely here for this am I? It’s… beyond my powers to describe. 😌
Back in February I got to try my hand at some long-exposure phone photography in Death Valley with Russell Brown, interspersing chilly morning & evening shoots with low-key Adobe interviewing. 😌
Stellar times chilling (literally!) with Russell Preston Brown. 💫
Welcome to the rabbit hole, my friends. 🙃
What if instead of pushing pixels, you could simply tell your tools what changes you’d like to see? (Cue Kramer voice: “Why don’t you just tell me the movie…??”) This new StyleCLIP technology (code) builds on NVIDIA’s StyleGAN foundation to enable image editing simply by typing descriptive text. Check out some examples (“before” images in the top row; “after” below, along with the editing terms used).
Here’s a demo of editing human & animal faces, and even of transforming cars:
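For the curious, here’s the core idea in miniature: treat text-guided editing as an optimization problem, nudging the generator’s latent vector until the generated image’s CLIP embedding better matches the text prompt’s. Everything in this sketch — the “generator,” the “CLIP” embedding, the tiny dimensions — is a made-up stand-in so the loop runs anywhere; real StyleCLIP uses the actual StyleGAN and CLIP networks and backpropagation instead of finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generator": a fixed random linear map + tanh in place of StyleGAN.
G = rng.standard_normal((16, 8))
def generate(w):
    return np.tanh(G @ w)

# Stand-in "CLIP" text embedding: a fixed random target vector standing in for
# the embedding of a prompt like "surprised face".
text_embedding = rng.standard_normal(16)

def clip_loss(w):
    """Negative cosine similarity between the 'image' and the 'text prompt'."""
    img = generate(w)
    return -img @ text_embedding / (np.linalg.norm(img) * np.linalg.norm(text_embedding))

# Latent optimization: plain gradient descent on the latent vector w,
# with finite-difference gradients (a real pipeline would backprop).
w0 = rng.standard_normal(8)
w = w0.copy()
lr, eps = 0.1, 1e-5
for _ in range(200):
    grad = np.array([
        (clip_loss(w + eps * np.eye(8)[i]) - clip_loss(w - eps * np.eye(8)[i])) / (2 * eps)
        for i in range(8)
    ])
    w -= lr * grad

print(f"loss before: {clip_loss(w0):.3f}, after: {clip_loss(w):.3f}")
```

The loss (how poorly the image matches the “prompt”) drops steadily as the latent vector moves — which is exactly the trick StyleCLIP plays, just with real networks and real images.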
By no means have I been around here long enough (five whole days!) to grok everything that’s going on here, but as I come up to speed, I’ll do my best to share what I’m learning. Meanwhile I’d love to hear your thoughts on how we might thoughtfully bring techniques like this to life.
After driving 2,000+ miles down Route 66 and beyond in six days—the last of which also included getting onboarded at Adobe!—I’ve only just begun to breathe & go through the titanic number of photos and videos my son & I captured. I’ll try to share more good stuff soon, but in the meantime you might get a kick (heh) out of this little vid, captured via my Insta360 One X2:
Now one of these days I just need to dust off my After Effects skills enough to nuke the telltale pole shadows. Someday…!
<Old Man Nack voice> In my day, it cost $2,500 to buy the Adobe Font Folio—but Kids These Days™ (and the rest of us) get fonts on demand, right through the air. I enjoyed the type & illustrations in this little promo piece:
I spent my last couple of years at Google working on a 3D & AR engine that could power experiences across Maps, YouTube, Search, and other surfaces. Meanwhile my colleagues have been working on data-gathering that’ll use this system to help people navigate via augmented reality. As TechCrunch writes:
Indoor Live View is the flashiest of these. Google’s existing AR Live View walking directions currently only work outdoors, but thanks to some advances in its technology to recognize where exactly you are (even without a good GPS signal), the company is now able to bring this indoors.
This feature is already live in some malls in the U.S. in Chicago, Long Island, Los Angeles, Newark, San Francisco, San Jose and Seattle, but in the coming months, it’ll come to select airports, malls and transit stations in Tokyo and Zurich as well (just in time for vaccines to arrive and travel to — maybe — rebound). Because Google is able to locate you by comparing the images around you to its database, it can also tell which floor you are on and hence guide you to your gate at the Zurich airport, for example.
It’s cool to see these mobile creativity apps Voltron-ing together via the new Adobe Design Mobile Bundle, which includes the company’s best design apps for the iPad at 50% off when purchased together. Per the site:
- Photoshop: Edit, composite, and create beautiful images, graphics, and art.
- Illustrator: Create beautiful vector art and illustrations.
- Fresco: Draw and paint with thousands of natural brushes.
- Spark Post: Make stunning social graphics — in seconds.
- Creative Cloud: Mobile access to your Creative Cloud assets, livestreams, and learn content.
More good stuff is coming to Fresco soon, too:
Then, there are live oil brushes in Fresco that you just don’t get in any other app. In Fresco, today, you can replicate the look of natural media like oils, watercolors and charcoal — soon you’ll be able to add motion as well! We showed a sneak peek at the workshop, and it blew people’s minds.
…at least virtually.
Well gang, it’s official: I’m back at Adobe! Through the magic of technology, I found myself going through orientation yesterday in a desert motel room on Route 66 while my son/co-pilot/astromech droid attended 6th grade next to me. I was reminded of a dog walking on its hind legs: it doesn’t work well, but one is impressed that it works at all. 😌 Afterwards we powered through the last six hours of our epic drive down 66 & its successors from Illinois to CA.
The blog may remain somewhat quiet for a bit as I find my sea legs, catch up with old friends, meet new folks, and realize how much I have to learn. It should be a great* journey, however, and I’m grateful to have you along for the ride!
My excitement about what’s been going on here at the Big Red A is what drew me to reach out & eventually return (scheduled for Monday!). If you are (or know) a seasoned product manager who loves machine learning, check out this kickass listing:
Neural Filters is a new ML/GAN-based set of creative features that recently launched in Photoshop and will eventually expand to the entire suite of Creative Cloud apps, helping to establish the foundations of AI-powered creative tools. The applications of these ML-backed technologies range from imaginative portrait edits, like adjusting the age of a subject, to colorizing B/W images to restoring old photos. As the technology evolves, so too will its applicability to other media like illustrations, video, 3D, and more.
The Principal PM will contribute to defining the strategy around investments in new editing paradigms, training models, and broadening the applicability of Neural Filters in apps like Photoshop, Fresco, After Effects, and Aero!
For some context, here’s an overview courtesy of NVIDIA, whose StyleGAN tech powers the feature:
And check out Neural Filters working on Conan O’Brien back at Adobe MAX:
“The world’s first typeface you can hear and play” sounds (heh) interesting. Per DesignTaxi,
14 numbers and letters were created in line with notes and octaves on the staff, so you could listen to them. In total, though, a massive font family of 574 characters was designed for the project.
Check it out: