At nearly twice the price while lacking features like Bullet Time, the Insta360 One RS 1″ had better produce way better photos and videos than what comes out of my trusty One X2. I therefore really appreciate this detailed side-by-side comparison. Having used both together, I don’t see a dramatic difference, but this vid certainly makes a good case that the gains are appreciable.
Designers: Come design Photoshop!
Two roles (listed as being based in NYC & Denver) are now open. Check out the descriptions from the team:
—————————
As a key member and thought-leader on this team, you’ll be an integral part of exploring and influencing the next generation of Adobe’s creative tools. Together, we are forging a new class of experience standards for desktop, mobile, and web products for years to come.
You will (among other things):
- Seek/design the “simple” experiences and interactions that influence our growing portfolio of creative tools. Empower users to delight themselves.
- Partner closely with fellow senior designers, product managers, senior engineers, and other leaders across different teams to bring new products to life.
What you’ll bring to the team
Must-Haves
- A minimum of 5 years of industry experience in product design with a proven track record of success
- Experience (and a love of!) solving complex design and technology problems using systems thinking
- Excellent communication skills, with the ability to clearly articulate a multi-level problem space and strategy behind design decisions
- Creative and analytical skills to advocate for and support research; synthesize and communicate insights that inform design opportunities and product strategy
- Passion for understanding how creative people do what they do and how technology plays a role in the creative process
- Experience establishing user experience patterns across mobile, web, and desktop products within connected platforms
Google Maps rolls out photorealistic aerial views
Awesome work from my friend Bilawal Sidhu & team:
[W]e’re bringing photorealistic aerial views of nearly 100 of the world’s most popular landmarks in cities like Barcelona, London, New York, San Francisco and Tokyo right to Google Maps. This is the first step toward launching immersive view — an experience that pairs AI with billions of high-definition Street View, satellite and aerial images.

Say you’re planning a trip to New York. With this update, you can get a sense for what the Empire State Building is like up close so you can decide whether or not you want to add it to your trip itinerary. To see aerial views wherever they’re available, search for a landmark in Google Maps and head to the Photos section.
“This New AI is Photoshop For Your Hair!”
GANs (generative adversarial networks), like what underpins Smart Portrait in Photoshop, promise all kinds of fine-grained image synthesis and editing. Check out new advances around one’s ’do:
[Via Davis Brown]
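For the curious: the core editing move in most of these GAN-based tools is surprisingly small. You find a latent code for a face, nudge it along a learned attribute direction (longer hair, bigger smile), and re-render. Here's a toy Python sketch of just that idea; the tiny generator below is a stand-in for a pretrained StyleGAN, and none of this is Adobe's actual Smart Portrait code:

```python
# Toy sketch of GAN latent-space editing. The generator here is an
# untrained stand-in for a pretrained StyleGAN; real tools load
# trained weights and learned attribute directions.
import torch
import torch.nn as nn

LATENT_DIM = 512

# Stand-in generator: maps a latent vector w to a small RGB image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 3 * 64 * 64),
    nn.Tanh(),
)

def edit(w: torch.Tensor, direction: torch.Tensor, strength: float) -> torch.Tensor:
    """Nudge the latent code along an attribute direction (e.g. 'hair
    length') and re-render. The hard part in practice is finding good
    directions, not applying them."""
    w_edited = w + strength * direction
    return generator(w_edited).view(3, 64, 64)

w = torch.randn(LATENT_DIM)               # latent code for one face
hair_direction = torch.randn(LATENT_DIM)  # placeholder; real directions are learned
img = edit(w, hair_direction, strength=1.5)
print(img.shape)  # torch.Size([3, 64, 64])
```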
MegaPortraits: One-shot Megapixel Neural Head Avatars
AI animation tech, which in this case leverages the motion of a face in a video to animate a different face in a still image, keeps getting better & better. Check out these results from Samsung Labs:

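If you're wondering what "one-shot" means mechanically: roughly, one network encodes who you are from the single photo, another encodes how the driver's face moves in each video frame, and a decoder fuses the two. A heavily simplified Python sketch follows (untrained stand-in layers and invented variable names; the real MegaPortraits architecture and losses are far more involved):

```python
# Rough sketch of the one-shot reenactment recipe: identity from a
# still image, motion from driver-video frames. All layers here are
# untrained stand-ins, purely to show the data flow.
import torch
import torch.nn as nn

appearance_enc = nn.Conv2d(3, 64, 3, padding=1)  # "who" from the still image
motion_enc = nn.Conv2d(3, 64, 3, padding=1)      # "how it moves" per driver frame
decoder = nn.Conv2d(128, 3, 3, padding=1)        # fuse identity + motion -> frame

source = torch.rand(1, 3, 256, 256)              # one still photo (identity)
driver_video = torch.rand(8, 3, 256, 256)        # frames providing the motion

identity = appearance_enc(source)                # extracted once
frames = []
for frame in driver_video:                       # motion extracted per frame
    motion = motion_enc(frame.unsqueeze(0))
    frames.append(decoder(torch.cat([identity, motion], dim=1)))

animation = torch.cat(frames)                    # (8, 3, 256, 256): source face, driver motion
print(animation.shape)
```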
Adobe UI role: Sr. Staff Experience Designer, Premiere Pro
My video teammates (including the one I married) are attempting some groundbreaking, audacious stuff, and this newly open gig is a great chance to dive in with them:
This role will lead projects across various workflows, from editing and audio to color, graphics, and motion. You will bring your future-looking ideas to life with your designs, prototypes, and storytelling toolkit. You are a strong advocate for the customer because you can relate to their needs and understand the power of design and story to transform.
You will use your experience in post-production and experience design to paint the future vision of the products, but you’ll also love getting down into the details of shipping new builds and setting the example for shipping high-quality work.

— Viva Em Dashes —
I am, evidently, the kind of guy at parties who’ll whip out a typographical comedy clip—but hey, this one is worth it! Enjoy, fellow nerds:
The replies led me to discover Emdash.fan, an entire site devoted to providing you, the gentle visitor, with exactly one (1) delicious em dash for your clipboard. Insane & thus amazing. 😌
On a related note:

“Some total rando named John Nack blogged about it”
“Oh my God… Who is this Dave Werner guy and what kind of government lab built him?” So I raved back in 2006 (!). Dave was only a student back then, but his potential was obvious, and I’m so happy he reached out in 2012 and became the designer on my then-team. Now 10 years later, he reflects on his journey in this characteristically charming, inventive little video:
NVIDIA Text2LIVE modifies semantic regions in photos & vids
When it rains, it pours: Text2LIVE promises the ability to use text descriptions to modify parts of photos:

[O]ur goal is to edit the appearance of existing objects (e.g., object’s texture) or augment the scene with new visual effects (e.g., smoke, fire) in a semantically meaningful manner.

It also works on video:
“Make-A-Scene” promises generative imaging cued via sketching
This new tech from Meta (née Facebook) one-ups DALL•E et al by offering more localized control over where elements are placed:
The team writes,
We found that the image generated from both text and sketch was almost always (99.54 percent of the time) rated as better aligned with the original sketch. It was often (66.3 percent of the time) more aligned with the text prompt too. This demonstrates that Make-A-Scene generations are indeed faithful to a person’s vision communicated via the sketch.

Kids swoon as DALL•E brings their ideas into view
Nicely done; can’t wait to see more experiences like this.
Animated magic made via DALL•E + After Effects
😮
DALL•E now depicts greater diversity
It’s cool & commendable to see OpenAI making improvements in the tricky area of increasing representation & diversity among the humans it depicts. From an email they sent today:
DALL·E now generates images of people that more accurately reflect the diversity of the world’s population. Thank you to everyone who has marked results as biased in our product; your feedback helped inform and evaluate this new mitigation, which we plan on refining as we gather more data and feedback.
People have been noticing & sharing examples, e.g. via this Reddit thread.
[Update: See their blog post for more details & examples.]

AI’s emerging impact on architecture
Neil Leach, author of Architecture in the Age of Artificial Intelligence: An Introduction to AI for Architects, here shares his enthusiastic thoughts about emerging tools becoming “a prosthesis for the human imagination” (recalling Steve Jobs describing the computer as “a bicycle for the mind”).
Diffusion models, such as MidJourney, are going to be game changers that will change the way in which we operate. Consulting these models for inspiration in the design studio will become as common as using Google or Wikipedia when writing an essay. Importantly, however, we must recognise that there are other forms of AI that will be deployed in architectural design, that will look at other aspects of design, such as performance. For the moment they operate as a form of ‘extended intelligence’ – or as an extension to the human imagination – where the designer remains in charge. Eventually, however, we can expect these all to be incorporated on to a single ‘data to fabrication’ platform that will allow building designs to be generated completely autonomously.
Metal Machine Music 🤖
Just weird & silly enough to share here; enjoy!
Design: “An Ozymandian Nightmare”
Adobe Substance 3D is Hiring
Check out the site to see details & beautiful art—but at a glance here are the roles:
- Multi Surface Graphics Software Engineer – macOS & iOS
- Sr. Software Engineer UI Oriented, Substance 3D Designer
- Sr. Software Development Engineer, Substance 3D Painter
- Senior Software Engineer, 3D Graphics
- Sr. Software Development Engineer, Test Automation
- Creative Cloud Desktop Frontend Developer (CDD, 12-month fixed-term contract)
- Sr. 3D Artist
- Sr. Manager, Strategic Initiatives and Partnerships
- Data Engineer – Contract Role
- Sr. DevOps Engineer – Contract Role

Fun with Insta360’s new Sky Swap mode
This is what happens when I lean out the side of a moving train, then have altitude-induced insomnia and get to playing with my phone. 🙃
Here’s a quick look into how it works & how to use it:
Adobe wins a Webby for AR Reef
Congrats to the team on this recognition of the project!
Discover the experience for yourself with these QR Codes by downloading the Aero app. We recommend running the experience on iOS (8S and above) or on Android (Private Beta, US only; a list of supported Android devices can be found on HelpX). (FYI, the experience may take a few seconds to load, as it is a more sophisticated AR project.)
Realtime beatmaking
Apropos of nothing (but who cares, I’m still on vacation 😛), I really dig the way these guys make beats with a theremin (?) and drum kit:
Adobe plans to make Photoshop on the web free to everyone
It’s been great to connect my former Google & Adobe teammates, helping them deepen the companies’ ongoing efforts to build up Web tech & enable deployment of demanding apps like Photoshop. Meanwhile the PS team has been working to make the app accessible everywhere:
The company is now testing the free version in Canada, where users are able to access Photoshop on the web through a free Adobe account. Adobe describes the service as “freemium” and eventually plans to gate off some features that will be exclusive to paying subscribers. Enough tools will be freely available to perform what Adobe considers to be Photoshop’s core functions. […]
“I want to see Photoshop meet users where they’re at now,” [Maria] Yap says. “You don’t need a high-end machine to come into Photoshop.”

“Thank God ‘E.T.’ Sucked,” revisited
Recently Atari creator Nolan Bushnell reflected on the 50th anniversary of the company, giving me occasion to reflect on how Atari’s decline very indirectly paved the way to my joining Photoshop. Here, 10 (!) years after I first shared it, is the brief story:
The stars aligned Monday, and two of my favorite creative people, Russell Brown & Panic founder Cabel Sasser, got to meet. Cabel (who commissioned Panic’s awesome homage to 1982-style video game art) was in town for a classic games show, and as we passed Russell’s office, I pointed out the cutout display for Atari’s notorious 1982 video game “E.T.” Russell had worked at Atari back then, and I rather gingerly asked, “Uh, didn’t that game kinda suck?”
“Oh yes!” said Russell–and thank goodness it did: if it hadn’t, Russell (and hundreds of others) wouldn’t have gotten laid off, and he wouldn’t have gone to Apple (where he met his future wife) and from there gone to “this little startup called ‘Adobe.'”
If that hadn’t happened, he wouldn’t have snatched my neck off the chopping block in ’02: I was days from being laid off post-LiveMotion, and it’s because Russell saw my “farewell” demo at his ADIM conference that he called the execs to say, “Really–we’re canning this guy…?” And, of course, had that not happened, I likely wouldn’t have met Cabel, wouldn’t have been introducing him & Russell, wouldn’t be talking to you now.
Of course, we joked, if it weren’t for the three of us talking just then, we’d be off experiencing some wonderful life-changing strokes of serendipity right now–but so it goes. 🙂
Capturing Reality With Machine Learning: A NeRF 3D Scan Compilation
Check out this high-speed overview of recent magic courtesy of my friend Bilawal:
Photogrammetry is an art form that has been around for decades, but it’s never looked better thanks to ML techniques like Neural Radiance Fields (NeRF). This video shows a wide range of 3D captures made using this technique. And I gotta say, NeRF really breathes new life into my old photo scans! All these datasets were posed in COLMAP and trained + rendered with NVIDIA’s free Instant NGP tools.
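If you want the one idea that makes NeRF tick, it's classic volume rendering: march samples along each camera ray, ask a network for density & color at each point, and alpha-composite. Here's a minimal numpy sketch of just that compositing step (random values stand in for the trained network; this isn't Instant NGP's code):

```python
# Minimal numpy sketch of NeRF's volume-rendering step: composite
# color samples along one camera ray using predicted densities.
# A real NeRF (or Instant NGP) gets sigma/rgb from a trained network;
# random values stand in here.
import numpy as np

n_samples = 64
deltas = np.full(n_samples, 0.05)          # spacing between samples on the ray
sigma = np.random.rand(n_samples) * 5.0    # density the network would predict
rgb = np.random.rand(n_samples, 3)         # color the network would predict

alpha = 1.0 - np.exp(-sigma * deltas)                          # opacity per sample
trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]  # light surviving to each sample
weights = trans * alpha
pixel = (weights[:, None] * rgb).sum(axis=0)                   # final pixel color

print(pixel)  # one RGB value; repeat per ray to render an image
```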
Using DALL•E to sharpen macro photography 👀
Synthesizing wholly new images is incredible, but as I noted in my recent podcast conversation, it may well be that surgical slices of tech like DALL•E will prove to be just as impactful—a la Content-Aware Fill emerging from a thin slice of the PatchMatch paper. In this case,
To fix the image, [Nicholas Sherlock] erased the blurry area of the ladybug’s body and then gave a text prompt that reads “Ladybug on a leaf, focus stacked high-resolution macro photograph.”
A keen eye will note that the bug’s spot pattern has changed, but it’s still the same bug. Pretty amazing.
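Mechanically, this kind of fix is just masked inpainting: hand the model the image, a mask marking the erased region, and a prompt. Sherlock worked in DALL•E's web editor, but OpenAI exposes the same shape via its API. A rough sketch using the `openai` Python client (v0-era calls with hypothetical file names; treat the exact method names as an assumption, since the client has been reworked since):

```python
# Sketch of a DALL-E masked edit like Sherlock's fix, using the
# v0-era `openai` Python client. File names are hypothetical; the
# client API may have changed since this was written.
import openai

openai.api_key = "sk-..."  # your key

# `mask.png` is the macro shot with the blurry region erased
# (transparent pixels mark what DALL-E should repaint).
response = openai.Image.create_edit(
    image=open("ladybug.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="Ladybug on a leaf, focus stacked high-resolution macro photograph",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])
```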

“Taste is the new skill” in the age of DALL•E
I was thinking back yesterday to Ira Glass’s classic observations on the (productive) tension that comes from having developed a sense of taste but not yet the skills to create accordingly:
Independently I came across this encouraging tweet from digital artist Claire Silver:
As it happens, Claire’s Twitter bio includes the phrase “Taste is the new skill.” I’ve been thinking along these lines as tools like DALL•E & Imagen suddenly grant mass access to what previously required hard-won skill. When mechanical execution is taken largely off the table, what’s left? Maybe the sum total of your curiosity & life’s experiences—your developed perspective, your taste—is what sets you apart, making you you, letting you pair that uniqueness with better execution tools & thereby stand out. At least, y’know, until the next big language model drops. 🙃