Monthly Archives: August 2019

Luminar invents Google+ 2013

This “AI Structure” feature looks neat, but I take slight exception to the claim to being “the first-ever content-aware tool to improve details only where needed.” The Auto Enhance feature built into Google+ Photos circa 2013 used to do this kind of thing (treating skin one way, skies another, etc.) to half a billion photos per day.

Of course, as I learned right after joining the team, Google excels at doing incredibly hard stuff & making no one notice. I observed back then that we could switch the whole thing off & no one would notice or care—and that’s exactly what happened. Why? Because A) G+ didn’t show you before/after (so people would never know what difference it had made) and B) most people are to photography as I am to wine (“Is it total shite? No? Then good enough for me”). Here at least the tech is going to that tiny fraction of us who actually care—so good on ‘em.

[YouTube]

Google AR puts Notable Women onto currency

All about the Tubmans:

CNET writes:

The app, called Notable Women, was developed by Google and former US Treasurer Rosie Rios. It uses augmented reality to let people see what it would look like if women were on US currency. Here’s how it works: Place any US bill in front of your phone’s camera, and the app uses digital filters — like one you’d see on Instagram or Snapchat — to overlay a new portrait on the bill. Users can choose from a database of 100 women, including the civil rights icon Rosa Parks and astronaut Sally Ride.

[YouTube]

AR: Adobe & MIT team up on body tracking to power presentations

Fun, funky idea:

Researchers from MIT Media Lab and Adobe Research recently introduced a real-time interactive augmented video system that enables presenters to use their bodies as storytelling tools by linking gestures to illustrative virtual graphic elements. […]

The speaker, positioned in front of an augmented reality mirror monitor, uses gestures to produce and manipulate the pre-programmed graphical elements.

Will presenters go for it? Will students find it valuable? I have no idea—but props to anyone willing to push some boundaries.

Photography: Spin me right ’round, Milky Way edition

Whoa:

Creator Aryeh Nirenberg writes,

A timelapse of the Milky Way that was recorded using an equatorial tracking mount over a period of around 3 hours to show Earth’s rotation relative to the Milky Way.

I used a Sony a7SII with the Canon 24-70mm f/2.8 lens and recorded 1100 10″ exposures at a 12-second interval. All the frames were captured at f/2.8 and ISO 16,000.
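
If you’re curious how those numbers shake out, here’s a quick back-of-the-envelope sketch (the 30 fps playback rate is my assumption; the creator doesn’t specify it):

```python
# Back-of-the-envelope math for the timelapse above.
frames = 1100        # number of exposures
interval_s = 12      # one 10-second exposure every 12 seconds
playback_fps = 30    # assumed playback rate (not specified by the creator)

capture_hours = frames * interval_s / 3600          # ~3.7 hours of shooting
playback_seconds = frames / playback_fps            # ~37 seconds of video
speedup = (frames * interval_s) / playback_seconds  # ~360x real time

print(f"{capture_hours:.1f} h captured -> {playback_seconds:.0f} s "
      f"of video ({speedup:.0f}x faster than real time)")
```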

Kinda reminds me of “Turn Down For Spock”:

[YouTube 1 & 2] [Via]

Give yourself a hand! Realtime 3D hand-tracking for your projects

“Why doesn’t it recognize The Finger?!” asks my indignant, mischievous 10-year-old Henry, who with his brother has offered to donate a rich set of training data. 🙃

Juvenile amusement notwithstanding, I’m delighted that my teammates have released a badass hand-tracking model, especially handy (oh boy) for use with MediaPipe (see previous), our open-source pipeline for building ML projects.

Today we are announcing the release of a new approach to hand perception, which we previewed at CVPR 2019 in June, implemented in MediaPipe—an open-source, cross-platform framework for building pipelines to process perceptual data of different modalities, such as video and audio. This approach provides high-fidelity hand and finger tracking by employing machine learning (ML) to infer 21 3D keypoints of a hand from just a single frame. Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands. We hope that providing this hand perception functionality to the wider research and development community will result in an emergence of creative use cases, stimulating new applications and new research avenues.

🙌
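
For the tinkerers, here’s a minimal sketch of driving the model from a webcam in Python. One caveat: the pip-installable mediapipe package and its Python “solutions” API shipped after this announcement, so treat this as an illustration of the later API rather than of what’s in today’s release.

```python
# Minimal sketch: real-time hand tracking on a webcam feed.
# Assumes the pip-installable `mediapipe` and `opencv-python` packages,
# whose Python "solutions" API postdates this post.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # The model infers 21 3D landmarks per detected hand from a single RGB frame.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        for landmarks in results.multi_hand_landmarks or []:
            mp_drawing.draw_landmarks(frame, landmarks, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("MediaPipe Hands", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```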

Don’t nag your family. Make Google do it.

I’ve gotta give this new capability a shot:

To assign a reminder, ask your Assistant, “Hey Google, remind Greg to take out the trash at 8pm.” Greg will get a notification on his Assistant-enabled Smart Display, speaker, and phone when the reminder is created, so that it’s on his radar. Greg will get notified again at the exact time you asked your Assistant to remind him. You can even quickly see which reminders you’ve assigned to Greg, simply by saying, “Hey Google, what are my reminders for Greg?”

New Snap Spectacles put 3D capture onto your face (!)

Wow:

Per The Verge:

The glasses’ marquee feature is a second camera, which enables Spectacles to capture depth for the first time. Snap has built a suite of new 3D effects that take advantage of the device’s new depth perception ability. They will be exclusive to Spectacles, and the company plans to let third-party developers design depth effects starting later this year.

This time around, Snap is offering a new way to view snaps taken through Spectacles: an included 3D viewer resembling Google Cardboard. (The Spectacles 3D viewer is made of cardboard as well.)


[YouTube]

Camera Raw now seamlessly adjusts 360º panos

(May as well keep this Adobe-week content train rolling, amirite?)

If you’d asked me the odds of getting a tweak this deeply nerdy into Camera Raw, I’d probably have put it around 1 in 100—but dang, here we are! This is a godsend for those of us who like to apply area-based adjustments like Clarity & Dehaze to panoramas. Russell Brown shows the benefit below.

A note of caution, though: to my partial disappointment, this doesn’t (yet) work when applying Camera Raw as a filter, so if you want to use it on JPEGs, you’ll need to open them into ACR via Bridge (Cmd-R). And yes, my little Obi-Wan brain just said, “Now that’s a workflow I haven’t heard of in a long time…” Or, if you’re coming from Lightroom Classic, you’ll need to open the image as a Smart Object in Photoshop—clunky (though temporary, I’m told), but it beats the heck out of trying to fix seams manually.

[Vimeo]

Get a job! Lots of good listings in Adobe Video

At risk of making this an all-Adobe week of posts (no subtext there, honest!), you should think about coming to work with my wife & her crew in the rockin’ Digital Video & Audio group.

Premiere Rush adds speed ramping

Ah—just in time for me to play with speed in the dronie I took last week: Premiere Rush has added the ability to selectively speed up & slow down chunks of footage via its iOS, Android, and desktop versions.

Our #1 requested feature is available today in version 1.2 — Speed!

Slow down or speed up footage, add adjustable ramps, and maintain audio pitch — speed in Rush is intuitive for the first-time video creator, yet powerful enough to satisfy video pros who are editing on the go.

Check out the quick sample below & dig into details here.

[Instagram]

🎬 Rush (the video editing app I work on) just got a HUGE update: speed controls! You can now slow down and speed up footage + add adjustable ramps on both video & audio clips. I’ve been playing with it for the past few months in beta and it’s been a blast. 📱 The craziest part is that this is all possible on your computer, iPad and iOS/Android phone. Rush stores video projects in the cloud so you can work across all your devices, nuts. 💭 We’ve all seen our friends and family become photographers thanks to our smartphones. Right now, the same thing is happening with video. The era of serious mobile creativity is here, I’m pretty excited to be a part of it 💪📱 🎵Thanks @ianbennettdesigns for the awesome music track! #madewithtush #premiererush #adobe #gopro #adoberush

A post shared by Michael Henry (@meetmichaelhenry)

[Via Margot Nack]

AR walking nav comes to iPhone & more Android devices in Google Maps

I’ve been collaborating with these folks for a few months & am incredibly excited about this feature:

With a beta feature called Live View, you can use augmented reality (AR) to better see which way to walk. Arrows and directions are placed in the real world to guide your way. We’ve tested Live View with the Local Guides and Pixel community over the past few months, and are now expanding the beta to Android and iOS devices that support ARCore and ARKit starting this week.

Like the Dos Equis guy, “I don’t always use augmented reality—but when I do, I navigate in Google Maps.” We’ll look back at these first little steps (no pun intended) as foundational to a pretty amazing new world.

[Via]

Gone fishing… and feeling grateful

Hey gang—I know I greatly flatter myself in thinking that my voice here will be much missed if I go quiet for a bit, especially without notice, but for what it’s worth I’m enjoying some very welcome digital downtime with family and friends in Minnesota.

Being minutes away from wrapping up the celebration of my 44th (!) solar orbit, I wanted to say thanks for being one of those still crazy enough to traipse over here periodically & browse my random finds. Fourteen (!!) years after I started this racket, it remains largely fun & rewarding. I hope you agree, and I’m grateful for your readership.

Now please excuse me for just a few more days while I get back to swamping my hard drive with a crushing backlog of drone, GoPro, Insta360, iPhone, and Osmo shots. 🙃


Oh, and for some dumb reason Google Maps insists on starting this pano (showing where we’re staying) pointed straight down into the pitch-black lake. You can drag it upwards and/or zoom out while I go file a bug/feature request. The work is never done—another possible source of gratitude.