Monthly Archives: November 2019

Google Earth adds new storytelling tools

I’m delighted to see new ways to pair one’s own images with views of our planet:

With creation tools in Google Earth, you can draw your own placemarks, lines and shapes, then attach your own custom text, images, and videos to these locations. You can organize your story into a narrative and collaborate with others. And when you’ve finished your story, you can share it with others. By clicking the new “Present” button, your audience will be able to fly from place to place in your custom-made Google Earth narrative.

Take a look at how students & others are using it:

Here’s a 60-second-ish tour of the actual creation process:

[YouTube]

Google releases open-source, browser-based BodyPix 2.0

Over the last couple of years I’ve pointed out a number of cool projects (e.g. driving image search via your body movements) powered by my teammates’ efforts to 1) deliver great machine learning models, and 2) enable Web browsers to run them efficiently. Now they’ve released BodyPix 2.0 (see live demo):

We are excited to announce the release of BodyPix, an open-source machine learning model which allows for person and body-part segmentation in the browser with TensorFlow.js. With default settings, it estimates and renders person and body-part segmentation at 25 fps on a 2018 15-inch MacBook Pro, and 21 fps on an iPhone X.
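
For the curious, here’s roughly what driving the model looks like in code: a minimal sketch based on the published @tensorflow-models/body-pix 2.0 API (the specific tuning values are illustrative, not recommendations):

```ts
// Minimal BodyPix 2.0 sketch: segment the person in a video frame and
// composite the resulting mask onto a canvas.
import '@tensorflow/tfjs'; // registers the WebGL backend
import * as bodyPix from '@tensorflow-models/body-pix';

async function maskPerson(video: HTMLVideoElement, canvas: HTMLCanvasElement) {
  // Smaller multiplier / larger outputStride = faster but coarser results.
  const net = await bodyPix.load({
    architecture: 'MobileNetV1',
    outputStride: 16,
    multiplier: 0.75,
    quantBytes: 2,
  });

  // Person segmentation for the current frame.
  const segmentation = await net.segmentPerson(video, {
    internalResolution: 'medium',
    segmentationThreshold: 0.7,
  });

  // Turn the segmentation into an ImageData mask and draw it over the frame.
  const mask = bodyPix.toMask(segmentation);
  bodyPix.drawMask(canvas, video, mask, 0.7 /* opacity */, 3 /* blur px */);
}
```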

Enjoy!

Bittersweet Symphony: Lightroom improves iPad import

“Hey, y’all got a water desalination plant? ’Cause I’m salty as hell.” 🙃

First, some good news: Lightroom is planning to improve the workflow of importing images from an SD card:

I know that this is something photographers have deeply wanted since 2010. I just wonder whether—nearly 10 years after the launch of the iPad—it matters anymore.

My failure, year in & year out, to solve the problem at Adobe is part of what drove me to join Google in 2014. But even back then I wrote,

I remain in sad amazement that 4.5 years after the iPad made tablets mainstream, no one—not Apple, not Adobe, not Google—has, to the best of my knowledge, implemented a way to let photographers do what they spent years beating me over the head requesting:

  • Let me leave my computer at home & carry just my tablet & camera.
  • Let me import my raw files (ideally converted to vastly smaller DNGs), swipe through them to mark good/bad/meh, and non-destructively edit them, singly or in batches, with full raw quality.
  • When I get home, automatically sync all images + edits to/via the cloud and let me keep editing there or on my Mac/PC.

This remains a bizarre failure of our industry.

Of course this wasn’t lost on the Lightroom team, but for a whole bunch of reasons, it’s taken this long to smooth out the flow, and during that time capture & editing have moved heavily to phones. Tablets represent a single-digit percentage of Snapseed session time, and I’ve heard the same from the makers of other popular editing apps. As phones improve & dedicated-cam sales keep dropping, I wonder how many people will now care.

On we go.

[YouTube]

A surprisingly terrific podcast: “The Great Bitter Lake Association”

Two things I should know by now:

  • Any given 99% Invisible episode is going to seem worthy but so arcane that I just won’t make time to listen to it.
  • If I eventually do, I’ll be richly rewarded.

The recent episode about The Great Bitter Lake Association—covering the camaraderie that emerged among sailors in the “Yellow Fleet” that became trapped by the Six-Day War & then stuck in the Suez for years—is totally fascinating. I hope you listen & enjoy. If nothing else, enjoy the homemade postage they created amongst themselves (which eventually became recognized by Egypt & thus usable for sending mail), the discovery of which sets the whole recounting in motion.

“Sea-thru”: AI-driven underwater color correction

Dad-joke of a name notwithstanding 😌, this tech looks pretty slick:

PetaPixel writes,

To be clear, this method is not the same as Photoshopping an image to add in contrast and artificially enhance the colors that are absorbed most quickly by the water. It’s a “physically accurate correction,” and the results truly speak for themselves.

And as some wiseass in the comments remarks, “I can’t believe we’ve polluted our waters so much there are color charts now lying on the ocean floor.”
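
Since the whole pitch hinges on that “physically accurate correction,” here’s a toy sketch of the underwater image-formation model the Sea-thru paper (Akkaynak & Treibitz, CVPR 2019) inverts. Everything here is illustrative: the function name is mine, and the real method estimates the backscatter and range-dependent attenuation coefficients from the image and its depth map rather than taking them as givens:

```ts
// Toy model: observed = direct signal + backscatter
//   I = J * exp(-betaD * z) + Binf * (1 - exp(-betaB * z))
// so an approximate recovery, given the coefficients, is
//   J = (I - backscatter) * exp(betaD * z)
type RGB = [number, number, number];

function recoverColor(
  observed: RGB,      // observed pixel color, 0..1 per channel
  z: number,          // camera-to-scene distance in meters
  betaD: RGB,         // per-channel attenuation of the direct signal (assumed known here)
  betaB: RGB,         // per-channel backscatter coefficient (assumed known here)
  veilingLight: RGB,  // Binf: the water's veiling color at infinity
): RGB {
  return observed.map((I, c) => {
    const backscatter = veilingLight[c] * (1 - Math.exp(-betaB[c] * z));
    const J = (I - backscatter) * Math.exp(betaD[c] * z);
    return Math.min(1, Math.max(0, J)); // clamp to displayable range
  }) as RGB;
}
```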

[YouTube]

Google donates $1M to help first responders

Happy Veterans’ Day, everyone. I’m proud of my first-responder brother (who volunteers his time to drive an ambulance in rural Illinois), and of my employer for helping vets & others better serve their communities:

A challenging, but often unrecognized, aspect of this work is the preparation required ahead of potential disasters. Therefore, Google.org is giving a $1 million grant to Team Rubicon to build out teams of volunteers, most of them military veterans, who will work alongside first responders to build out disaster preparedness operations.

[YouTube]

“Project Glowstick” brings light sources to Illustrator

Anything that finally lets regular people tap into the vast (and vastly untapped) power of Illustrator’s venerable gradient mesh is a win, and this tech promises to let vector shapes function as light emitters that help cast shadows:

Requisite (?) Old Man Nack moment: though I have no idea if/how the underlying tech relates, I’m reminded of the Realtime Gradient-Domain Painting work that onetime Adobe researcher Jim McCann published back in 2008.

[YouTube 1 & 2]

New Adobe tech can relight structures & synthesize shadows

Photogrammetry (building 3D from 2D inputs—in this case several source images) is what my friend learned in the Navy to refer to as “FM technology”: “F’ing Magic.”

Side note: I know that saying “Time is a flat circle” is totally worn out… but, like, time is a flat circle, and what’s up with Adobe style-transfer demos showing the same (?) fishing village year after year? Seriously, compare 2013 to 2019. And what a super useless superpower I have in remembering such things. ¯\_(ツ)_/¯ 

[YouTube] [Via]

Adobe previews tools for detecting object manipulation

Back in 2011, my longtime Photoshop boss Kevin Connor left Adobe & launched a startup (see NYT article) with Prof. Hany Farid to help news organizations, law enforcement, and others detect image manipulation. They were ahead of their time, and since then the problem of “fake news” has only gotten worse.

Now Adobe has teamed up with Twitter & others on the Content Authenticity Initiative, and last night they previewed Project About Face, meant to help spot manipulated pixels—and maybe even reverse the effects. Check it out:

[YouTube]

Adobe announces Photoshop Camera

This new iOS & Android app (not yet available, though you can sign up for prerelease access) promises to analyze images, suggest effects, and keep the edits adjustable (though it’s not yet clear whether they’ll be editable as layers in “big” Photoshop).

I’m reminded of really promising Photoshop Elements mobile concepts from 2011 that went nowhere; of the Fabby app some of my teammates created before being acquired by Google; and of all I failed to enable in Google Photos. “Poo-tee-weet?” ¯\_(ツ)_/¯ Anyway, I’m eager to take it for a spin.

[YouTube]