Monthly Archives: October 2020

New Adobe tech promises 3D & materials scanning

Probably needless to say, 3D model creation remains hard AF for most people, and as such it’s a huge chokepoint in the adoption of 3D & AR viewing experiences.

Fortunately we may be on the cusp of some breakthroughs. Apple is about to popularize LIDAR on phones, and with it we’ll see interesting photogrammetry apps like Polycam:

Meanwhile Adobe is working to enable 3D scanning using devices without fancy sensors. Check out Project Scantastic:

They’re also working to improve the digitization of materials—something that could facilitate the (presently slow, expensive) digitization of apparel:

Inside iPhone’s “Dark Universe”

As a kid, I spent hours fantasizing about the epic films I could make, if only I could borrow my friend’s giant camcorder & some dry ice. Apple 💯 has their finger on the pulse of such aspirational souls in this new ad:

It’s pretty insane to see what talented filmmakers can do with just a phone (or rather, a high-end camera/computer/monitor that happens to make phone calls) and practical effects:

Apple has posted an illuminating behind-the-scenes video for this piece. PetaPixel writes,

In one clip they show how they dropped the phone directly into rocks that they had fired upwards using a piston, and in another, they use magnets and iron filings with the camera very close to the surface. One step further, they use ferrofluid to create rapidly flowing ripples that flow wildly on camera.

Check it out:

New typographical brushes from Adobe turn paint into editable characters

I’ve long, long been a fan of using brush strokes on paths to create interesting glyphs & lettering. I used to contort all kinds of vectors into Illustrator brushes, and as it happens, 11 years ago today I was sharing an interesting tutorial on creating smokey text:

Now Adobe engineers are looking to raise the game—a lot.

Combining user-drawn stroke inputs, the choice of brush, and the typographic properties of the text object, Project Typographic Brushes brings paint-style brushes and new type families to life in seconds.
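If you've never poked at the underlying idea, the gist of brush-on-path lettering is simple: take a glyph's outline, sample points along it, and stamp a brush at each one. Here's a tiny, purely illustrative Python sketch of that idea (this is not Adobe's technology; the "brush" here is just soft dots of varying size):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.textpath import TextPath
from matplotlib.font_manager import FontProperties

# Toy sketch of brush-on-path lettering (illustrative only, not Adobe's tech):
# grab a glyph's outline, then stamp a crude "brush" (soft dots of varying size)
# at points along that outline.

path = TextPath((0, 0), "A", size=100, prop=FontProperties(family="DejaVu Sans"))

rng = np.random.default_rng(1)
fig, ax = plt.subplots(figsize=(4, 4))
for poly in path.to_polygons():                      # each outline as an (N, 2) array
    xs, ys = poly[:, 0], poly[:, 1]
    sizes = rng.uniform(30, 120, size=len(poly))     # vary stamp size for a painterly feel
    ax.scatter(xs, ys, s=sizes, c="purple", alpha=0.3, edgecolors="none")
ax.set_aspect("equal")
ax.axis("off")
fig.savefig("brush_glyph.png", dpi=150)
```

The real project obviously goes far beyond this (it keeps the text live and editable while restyling it), but the path-plus-brush pairing is the core trick.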

Check out some solid witchcraft in action:

Photoshop’s new Smart Portrait is pretty amazing

My longstanding dream (dating back to the Bush Administration!) to have face relighting in Photoshop has finally come true—and then some. In case you missed it last week, check out Conan O’Brien meeting machine learning via Photoshop:

On PetaPixel, Allen Murabayashi from PhotoShelter shows what it can do on a portrait of Joe Biden—presenting this power as a potential cautionary tale:

Here’s a more in-depth look (starting around 1:46) at controlling the feature, courtesy of NVIDIA, whose StyleGAN tech powers the feature:

I love the fact that the Neural Filters plug-in provides a playground within Photoshop for integrating experimental new tech. Who knows what else might spring from Adobe-NVIDIA collaboration—maybe scribbling to create a realistic landscape, or even swapping expressions among pets (!?):
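For the curious, sliders like these typically work by nudging a portrait's latent code along learned semantic directions in the GAN's latent space. Here's a toy Python sketch of that idea (nothing here is Adobe's or NVIDIA's actual code, and the "smile direction" below is just a random stand-in for a learned axis):

```python
import numpy as np

# Toy sketch of latent-space editing, the general idea behind GAN-powered
# portrait sliders. Illustrative only; not Adobe's or NVIDIA's code, and the
# "smile direction" is a random stand-in for a learned semantic axis.

rng = np.random.default_rng(0)
LATENT_DIM = 512  # StyleGAN-style generators typically use a 512-dim latent space

w = rng.standard_normal(LATENT_DIM)                # latent code for the source portrait
smile_direction = rng.standard_normal(LATENT_DIM)  # stand-in for a learned "smile" axis
smile_direction /= np.linalg.norm(smile_direction)

def edit_latent(w, direction, strength):
    """Slide a latent code along one semantic direction."""
    return w + strength * direction

w_smiling = edit_latent(w, smile_direction, strength=2.0)
# In a real pipeline, w_smiling would be fed back through the generator to
# synthesize the edited face, which is then blended into the original photo.
```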

New Google Photos widget puts memories onto your iPhone homescreen

YouTube, the Google app, and Photos now offer widgets via the new iOS 14:

To install a Google Widget, first make sure you have the Google Photos app, YouTube Music app, or Google app downloaded from the App Store. Then follow these steps:

  1. Press and hold on the home screen of your iPhone or iPad
  2. Tap the plus icon in the upper left corner to open the widget gallery
  3. Search for & tap on the Google app, YouTube Music, or the Google Photos app
  4. Swipe right/left to select the widget size
  5. Tap “Add Widget”
  6. Place the widget and tap “Done” in the upper right corner

Photoshop’s Sky Replacement feature was well worth the wait

Although I haven’t yet gotten to use it extensively, I’m really enjoying the newly arrived Sky Replacement feature in Photoshop. Check out a quick before/after on a tiny planet image:

Eye-popping portraits emerge as paint cascades down the human face

Man, these are stunning—and they’re all done in camera:

First coated in black, the anonymous subjects in Tim Tadder’s portraits are cloaked with hypnotic swirls and thick drips of bright paint. To create the mesmerizing images, the Encinitas, California-based photographer and artist pours a mix of colors over his sitters and snaps a precisely-timed shot to capture each drop as it runs down their necks or splashes from their chins.

You can find more of the artist’s work on Behance and Instagram.

Adobe MAX starts tomorrow, and you can attend for free

From Conan O’Brien to Tyler, the Creator* to (of course) tons of deep dives into creative tech, Adobe has organized quite the line-up:

Make plans to join us for a uniquely immersive and engaging digital experience, guaranteed to inspire. Three full days of luminary speakers, celebrity appearances, musical performances, global collaborative art projects, and 350+ sessions — and all at no cost.

You can build your session list here. Looking forward to learning a lot this week!

*I’m reminded of Alec Baldwin as Tony Bennett talking about “Wiz Khalifa and Imagine Dragons—what a great, great, random pairing.” I can’t find that episode online, so what the heck, enjoy this one.

Photographic downfall: “Tsunami from Heaven”

This is lovely—especially from a safe, dry distance:

PetaPixel writes,

A couple of years ago, adventure photographer and Visit Austria creator Peter Maier captured a stunning rainstorm timelapse titled ‘Tsunami from Heaven’… It was captured from the Alpengasthof Bergfried hotel in Carinthia, Austria, and shows a sudden cloudburst (AKA microburst or downburst) soaking an area around Lake Millstatt

Snap puts the AR in graffiti art

The notion of a metaverse, “a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space,” has long beguiled those of us captivated by augmented reality. Now Snap has been doing the hard work of making it more real, scanning & recognizing one’s surroundings in order to impose a “persistent, shared AR world built right on top of your neighborhood.” Check it out:

This experience (presently available on just one street in London, but presumably destined to reach many others) builds on the AR Landmarkers work the company did previously. (As it happens, I think David Salesin—who led Adobe Research for many years—contributed to this effort during his stopover at Snap before joining Google Research.)

Come see the AR feature I’ve been working on all year!

I’m delighted to share that my team’s work to add 3D & AR automotive results to Google Search—streaming in cinematic quality via cloud rendering—has now been announced! Check out the demo starting around 36:30:

Here’s how we put it on the Google Keyword blog:

Bring the showroom to you with AR

You can easily check out what the car looks like in different colors, zoom in to see intricate details like buttons on the dashboard, view it against beautiful backdrops and even see it in your driveway. We’re experimenting with this feature in the U.S. and working with top auto brands, such as Volvo and Porsche, to bring these experiences to you soon.

Cloud streaming enables us to take file size out of the equation, so we can serve up super detailed visuals from models that are hundreds of megabytes in size:
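To make the general idea concrete (this is emphatically not Google's actual pipeline, just a toy illustration), the win of cloud streaming is that the heavy model never leaves the server; the phone only exchanges camera poses and compressed rendered frames, which stay small no matter how detailed the model is:

```python
import io
import numpy as np
from PIL import Image

# Toy illustration of the cloud-streaming idea (not Google's pipeline): the
# multi-hundred-megabyte car model lives only on the server. The phone sends a
# camera pose and gets back a compressed rendered frame.

def render_on_server(camera_pose: dict) -> bytes:
    """Stand-in for a server-side renderer that holds the full-detail 3D model."""
    frame = (np.random.default_rng(0).random((480, 640, 3)) * 255).astype(np.uint8)
    buf = io.BytesIO()
    Image.fromarray(frame).save(buf, format="JPEG", quality=80)
    return buf.getvalue()

# Client side: stream poses up, frames down.
pose = {"yaw": 30.0, "pitch": 5.0, "distance_m": 3.5}   # hypothetical pose parameters
jpeg_bytes = render_on_server(pose)
print(f"Frame payload: {len(jpeg_bytes) / 1024:.1f} KB")
```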

The feature is currently in testing in the US, so there’s a chance you can experience it via Android right now (with iOS support planned). We hope to make it widely available soon, and I can’t wait to hear what you think!

AR in Google Maps can point you to your friends

This is one of the far-flung projects I’ve been glad to help support. Here’s one of the new features (available now on Pixel, and coming soon to more iOS & Android devices):

When a friend has chosen to share their location with you, you can easily tap on their icon and then on Live View to see where and how far away they are–with overlaid arrows and directions that help you know where to go.

It’s also getting smarter about recognizing landmarks:

Soon, you’ll also be able to see nearby landmarks so you can quickly and easily orient yourself and understand your surroundings. Live View will show you how far away certain landmarks are from you and what direction you need to go to get there.

Chrome for iOS improves password autofill, Face ID integration

Teamwork makes the dream work, baby:

Improvements to password filling on iOS

We recently launched Touch-to-fill for passwords on Android to prevent phishing attacks. To improve security on iOS too, we’re introducing a biometric authentication step before autofilling passwords. On iOS, you’ll now be able to authenticate using Face ID, Touch ID, or your phone passcode. Additionally, Chrome Password Manager allows you to autofill saved passwords into iOS apps or browsers if you enable Chrome autofill in Settings.

Warrior dogs to get AR goggles

“Are they gonna use the Snapchat dancing hot dog to steer them or what?” — Henry Nack, age 11, bringing the 🔥 feature requests 😌

Funded by the US military and developed by a Seattle-based company called Command Sight, the new goggles will allow handlers to see through a dog’s eyes and give directions while staying out of sight and at a safe distance.

While looking through the dog’s eyes thanks to the goggles’ built-in camera, the handler can direct the dog by controlling an augmented reality visual indicator seen by the dog wearing the goggles.

Check out “Light Fields, Light Stages, and the Future of Virtual Production”

“Holy shit, you’re actually Paul Debevec!”

That’s what I said—or at least what I thought—upon seeing Paul next to me in line for coffee at Google. I’d known his name & work for decades, especially via my time PM’ing features related to HDR imaging—a field in which Paul is a pioneer.

Anyway, Paul & his team have been at Google for the last couple of years, and he’ll be giving a keynote talk at VIEW 2020 on Oct 18th. “You can now register for free access to the VIEW Conference Online Edition,” he notes, “to livestream its excellent slate of animation and visual effects presentations.”

In this talk I’ll describe the latest work we’ve done at Google and the USC Institute for Creative Technologies to bridge the real and virtual worlds through photography, lighting, and machine learning.  I’ll begin by describing our new DeepView solution for Light Field Video: Immersive Motion Pictures that you can move around in after they have been recorded.  Our latest light field video techniques record six-degrees-of-freedom virtual reality where subjects can come close enough to be within arm’s reach.  I’ll also present how Google’s new Light Stage system paired with Machine Learning techniques is enabling new techniques for lighting estimation from faces for AR and interactive portrait relighting on mobile phone hardware.  I will finally talk about how both of these techniques may enable the next advances in virtual production filmmaking, infusing both light fields and relighting into the real-time image-based lighting techniques now revolutionizing how movies and television are made.

Put the “AR” in “art” via Google Arts & Culture

I’m excited to see the tech my team has built into YouTube, Duo, and other apps land in Arts & Culture, powering five new fun experiences:

Snap a video or image of yourself to become Van Gogh or Frida Kahlo’s self-portraits, or the famous Girl with a Pearl Earring. You can also step deep into history with a traditional Samurai helmet or a remarkable Ancient Egyptian necklace.

To get started, open the free Google Arts & Culture app for Android or iOS and tap the rainbow camera icon at the bottom of the homepage.

NASA brings the in sound from way out

Nothing can stop us now
We are all playing stars…

A new project using sonification turns astronomical images from NASA’s Chandra X-Ray Observatory and other telescopes into sound. This allows users to “listen” to the center of the Milky Way as observed in X-ray, optical, and infrared light. As the cursor moves across the image, sounds represent the position and brightness of the sources.
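To make that mapping concrete, here's a toy Python sketch of the basic sonification idea (not NASA's actual pipeline): sweep across a stand-in image left to right, turn each bright pixel's vertical position into pitch and its brightness into loudness, and write the result out as audio.

```python
import numpy as np
from scipy.io import wavfile

# Toy sketch of image sonification (not NASA's actual pipeline): sweep across an
# image column by column, mapping each bright pixel's vertical position to pitch
# and its brightness to loudness, then write the result out as a WAV file.

SAMPLE_RATE = 22050
COLUMN_SECONDS = 0.05                       # how long each image column "plays"

rng = np.random.default_rng(2)
image = rng.random((64, 200)) ** 4          # stand-in for a telescope image (rows x cols)

t = np.arange(int(SAMPLE_RATE * COLUMN_SECONDS)) / SAMPLE_RATE
chunks = []
for column in image.T:                      # left-to-right sweep, like the moving cursor
    tone = np.zeros_like(t)
    for row, brightness in enumerate(column):
        if brightness < 0.5:                # only sonify the brighter "sources"
            continue
        freq = 200 + 1800 * (1 - row / image.shape[0])   # higher in frame = higher pitch
        tone += brightness * np.sin(2 * np.pi * freq * t)
    chunks.append(tone)

audio = np.concatenate(chunks)
audio /= np.max(np.abs(audio)) + 1e-9       # normalize to [-1, 1]
wavfile.write("sonified.wav", SAMPLE_RATE, (audio * 32767).astype(np.int16))
```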

Google & researchers demo AI-powered shadow removal

Speaking of Google photography research (see previous post about portrait relighting), I’ve been meaning to point to the team’s collaboration with MIT & Berkeley. As PetaPixel writes,

The tech itself relies on not one, but two neural networks: one to remove “foreign” shadows that are cast by unwanted objects like a hat or a hand held up to block the sun in your eyes, and the other to soften natural facial shadows and add “a synthetic fill light” to improve the lighting ratio once the unwanted shadows have been removed.
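Just to make the shape of that pipeline concrete, here's a skeletal Python sketch (the two stand-in functions below are placeholders for the paper's trained networks, not the researchers' actual code):

```python
import numpy as np

# Skeletal sketch of the two-stage pipeline described above (not the researchers'
# code). Stage 1 removes "foreign" shadows cast by unwanted objects; stage 2 softens
# the remaining facial shadows and adds a synthetic fill light. Both functions here
# are placeholders for trained neural networks.

def remove_foreign_shadows(image: np.ndarray) -> np.ndarray:
    """Placeholder for network #1: erase shadows from hats, hands, etc."""
    return image  # a trained model would inpaint/relight the shadowed regions

def soften_facial_shadows(image: np.ndarray, fill_strength: float = 0.3) -> np.ndarray:
    """Placeholder for network #2: reduce harsh facial shadows with a synthetic fill."""
    fill = np.full_like(image, image.mean())            # crude global "fill light"
    return (1 - fill_strength) * image + fill_strength * fill

def enhance_portrait(image: np.ndarray) -> np.ndarray:
    # The two networks run in sequence: foreign-shadow removal first,
    # then fill-light softening on the cleaned-up result.
    return soften_facial_shadows(remove_foreign_shadows(image))

portrait = np.random.default_rng(3).random((256, 256, 3))  # placeholder photo
result = enhance_portrait(portrait)
```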

Here’s a nice summary from Two-Minute Papers:

https://youtu.be/qeZMKgKJLX4

Interactive Portrait Light comes to Google Photos on Pixel; editor gets upgraded

I have been waiting, I kid you not, since the Bush Administration to have an easy way to adjust lighting on faces. I just didn’t expect it to appear on my telephone before it showed up in Photoshop, but ¯\_(ツ)_/¯. Anyway, check out what you can now do on Pixel 4 & 5 devices:

This feature arrives, as PetaPixel notes, as one of several new Suggestions:

Nestled into a new ‘Suggestions’ tab that shows up first in the Photos editor, the options displayed there “[use] machine learning to give you suggestions that are tailored to the specific photo you’re editing.” For now, this only includes three options—Color Pop, Black & White, and Enhance—but more suggestions will be added “in the coming months” to deal specifically with portraits, landscapes, sunsets, and beyond.

Lastly, the photo editor overall has gotten its first major reorganization since we launched it in 2015: