Monthly Archives: July 2022

Head to head: Insta360 One RS 1″ vs. X2

At nearly twice the price while lacking features like Bullet Time, the Insta360 One RS 1″ had better produce way better photos and videos than what comes out of my trusty One X2. I therefore really appreciate this detailed side-by-side comparison. Having used both together, I don’t see a dramatic difference, but this vid certainly makes a good case that the gains are appreciable.

Designers: Come design Photoshop!

Two roles (listed as being based in NYC & Denver) are now open. Check out the descriptions from the team:

—————————

As a key member and thought-leader on this team, you’ll be an integral part of exploring and influencing the next generation of Adobe’s creative tools. Together, we are forging a new class of experience standards for desktop, mobile, and web products for years to come.

You will (among other things):

  • Seek/design the “simple” experiences and interactions that influence our growing portfolio of creative tools. Empower users to delight themselves.
  • Partner closely with fellow senior designers, product managers, senior engineers, and other leaders across different teams to bring new products to life.

What you’ll bring to the team

Must-Haves

  • A minimum of 5 years of industry experience in product design with a proven track record of success
  • Experience (and a love of!) solving complex design and technology problems using systems thinking
  • Excellent communication skills, with the ability to clearly articulate a multi-level problem space and strategy behind design decisions
  • Creative and analytical skills to advocate for and support research, synthesize, and communicate insights that encourage design opportunities and product strategy
  • Passion for understanding how creative people do what they do and how technology plays a role in the creative process
  • Experience establishing user experience patterns across mobile, web, and desktop products within connected platforms

Google Maps rolls out photorealistic aerial views

Awesome work from my friend Bilawal Sidhu & team:

[W]e’re bringing photorealistic aerial views of nearly 100 of the world’s most popular landmarks in cities like Barcelona, London, New York, San Francisco and Tokyo right to Google Maps. This is the first step toward launching immersive view — an experience that pairs AI with billions of high-definition Street View, satellite and aerial images.

Say you’re planning a trip to New York. With this update, you can get a sense for what the Empire State Building is like up close so you can decide whether or not you want to add it to your trip itinerary. To see aerial views wherever they’re available, search for a landmark in Google Maps and head to the Photos section.

Adobe UI role: Sr. Staff Experience Designer, Premiere Pro

My video teammates (including the one I married) are attempting some groundbreaking, audacious stuff, and this newly open gig is a great chance to dive in with them:

This role will lead projects across various workflows from editing, audio, color, graphics and motion. You will bring your future-looking ideas to life with your designs, prototypes, and storytelling toolkit. You are a strong advocate for the customer because you can relate to their needs and understand the power of design and story to transform.

You will use your experience in post-production and experience design to paint the future vision of the products, but also love getting down into the details of shipping new builds and setting the example for shipping work with high quality.

— Viva Em Dashes —

I am, evidently, the kind of guy at parties who’ll whip out a typographical comedy clip—but hey, this one is worth it! Enjoy, fellow nerds:

The replies led me to discover Emdash.fan, an entire site devoted to providing you, the gentle visitor, with exactly one (1) delicious em dash for your clipboard. Insane & thus amazing. 😌
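For the record, the delicious character in question is U+2014, and you don’t strictly need a whole website to get one — a couple of lines of Python (shown here just for fun) will confirm its official Unicode name:

```python
import unicodedata

EM_DASH = "\u2014"  # the one true em dash, U+2014

# Ask the Unicode character database for the character's official name
print(unicodedata.name(EM_DASH))  # EM DASH

# And confirm it's the same glyph Emdash.fan hands you
print(EM_DASH == "—")  # True
```

(Not to be confused with its stubbier siblings, the en dash U+2013 and the humble hyphen-minus U+002D.)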

On a related note:

“Make-A-Scene” promises generative imaging cued via sketching

This new tech from Meta (née Facebook) one-ups DALL·E et al. by offering more localized control over where elements are placed:

The team writes,

We found that the image generated from both text and sketch was almost always (99.54 percent of the time) rated as better aligned with the original sketch. It was often (66.3 percent of the time) more aligned with the text prompt too. This demonstrates that Make-A-Scene generations are indeed faithful to a person’s vision communicated via the sketch.

DALL·E now depicts greater diversity

It’s cool & commendable to see OpenAI making improvements in the tricky area of increasing representation & diversity among the humans it depicts. From an email they sent today:

DALL·E now generates images of people that more accurately reflect the diversity of the world’s population. Thank you to everyone who has marked results as biased in our product; your feedback helped inform and evaluate this new mitigation, which we plan on refining as we gather more data and feedback.

People have been noticing & sharing examples, e.g. via this Reddit thread.

[Update: See their blog post for more details & examples.]

AI’s emerging impact on architecture

Neil Leach, author of Architecture in the Age of Artificial Intelligence: An Introduction to AI for Architects, here shares his enthusiastic thoughts about emerging tools becoming “a prosthesis for the human imagination” (recalling Steve Jobs describing the computer as “a bicycle for the mind”).

Diffusion models, such as MidJourney, are going to be game changers that will change the way in which we operate. Consulting these models for inspiration in the design studio will become as common as using Google or Wikipedia when writing an essay. Importantly, however, we must recognise that there are other forms of AI that will be deployed in architectural design, that will look at other aspects of design, such as performance. For the moment they operate as a form of ‘extended intelligence’ – or as an extension to the human imagination – where the designer remains in charge. Eventually, however, we can expect these all to be incorporated on to a single ‘data to fabrication’ platform that will allow building designs to be generated completely autonomously.

Adobe Substance 3D is Hiring

Check out the site to see details & beautiful art—but at a glance here are the roles:

Adobe plans to make Photoshop on the web free to everyone

It’s been great to connect my former Google & Adobe teammates, helping them deepen the companies’ ongoing efforts to build up Web tech & enable deployment of demanding apps like Photoshop. Meanwhile the PS team has been working to make the app accessible everywhere:

The company is now testing the free version in Canada, where users are able to access Photoshop on the web through a free Adobe account. Adobe describes the service as “freemium” and eventually plans to gate off some features that will be exclusive to paying subscribers. Enough tools will be freely available to perform what Adobe considers to be Photoshop’s core functions. […]

“I want to see Photoshop meet users where they’re at now,” [Maria] Yap says. “You don’t need a high-end machine to come into Photoshop.”

“Thank God ‘E.T.’ Sucked,” revisited

Recently Atari creator Nolan Bushnell reflected on the 50th anniversary of the company, giving me occasion to reflect on how Atari’s decline very indirectly paved the way to my joining Photoshop. Here, 10 (!) years after I first shared it, is the brief story:


The stars aligned Monday, and two of my favorite creative people, Russell Brown & Panic founder Cabel Sasser, got to meet. Cabel (who commissioned Panic’s awesome homage to 1982-style video game art) was in town for a classic games show, and as we passed Russell’s office, I pointed out the cutout display for Atari’s notorious 1982 video game “E.T.” Russell had worked at Atari back then, and I rather gingerly asked, “Uh, didn’t that game kinda suck?”

“Oh yes!” said Russell–and thank goodness it did: if it hadn’t, Russell (and hundreds of others) wouldn’t have gotten laid off, and he wouldn’t have gone to Apple (where he met his future wife) and from there gone to “this little startup called ‘Adobe.'”

If that hadn’t happened, he wouldn’t have snatched my neck off the chopping block in ’02: I was days from being laid off post-LiveMotion, and it’s because Russell saw my “farewell” demo at his ADIM conference that he called the execs to say, “Really–we’re canning this guy…?” And, of course, had that not happened, I likely wouldn’t have met Cabel, wouldn’t have been introducing him & Russell, wouldn’t be talking to you now.

Of course, we joked, if it weren’t for the three of us talking just then, we’d be off experiencing some wonderful life-changing strokes of serendipity right now–but so it goes. 🙂

Capturing Reality With Machine Learning: A NeRF 3D Scan Compilation

Check out this high-speed overview of recent magic courtesy of my friend Bilawal:

Photogrammetry is an art form that has been around for decades, but it’s never looked better thanks to ML techniques like Neural Radiance Fields (NeRF). This video shows a wide range of 3D captures made using this technique. And I gotta say, NeRF really breathes new life into my old photo scans! All these datasets were posed in COLMAP and trained + rendered with NVIDIA’s free Instant NGP tools.

Using DALL·E to sharpen macro photography 👀

Synthesizing wholly new images is incredible, but as I noted in my recent podcast conversation, it may well be that surgical slices of tech like DALL·E will prove to be just as impactful—à la Content-Aware Fill emerging from a thin slice of the PatchMatch paper. In this case,

To fix the image, [Nicholas Sherlock] erased the blurry area of the ladybug’s body and then gave a text prompt that reads “Ladybug on a leaf, focus stacked high-resolution macro photograph.”

A keen eye will note that the bug’s spot pattern has changed, but it’s still the same bug. Pretty amazing.

“Taste is the new skill” in the age of DALL·E

I was thinking back yesterday to Ira Glass’s classic observations on the (productive) tension that comes from having developed a sense of taste but not yet the skills to create accordingly:

Independently I came across this encouraging tweet from digital artist Claire Silver:

https://twitter.com/ClaireSilver12/status/1542516607653515271?s=20&t=fNgnFtxUEmRItvcNk9C6rg

As it happens, Claire’s Twitter bio includes the phrase “Taste is the new skill.” I’ve been thinking along these lines as tools like DALL·E & Imagen suddenly grant mass access to what previously required hard-won skill. When mechanical execution is taken largely off the table, what’s left? Maybe the sum total of your curiosity & life’s experiences—your developed perspective, your taste—is what sets you apart, making you you, letting you pair that uniqueness with better execution tools & thereby stand out. At least, y’know, until the next big language model drops. 🙃