Monthly Archives: November 2019

“Sea-thru”: AI-driven underwater color correction

Dad-joke of a name notwithstanding 😌, this tech looks pretty slick:

PetaPixel writes,

To be clear, this method is not the same as Photoshopping an image to add in contrast and artificially enhance the colors that are absorbed most quickly by the water. It’s a “physically accurate correction,” and the results truly speak for themselves.

And as some wiseass in the comments remarks, “I can’t believe we’ve polluted our waters so much there are color charts now lying on the ocean floor.”
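For the curious, the "physically accurate" part refers to modeling how water attenuates each color channel with depth (rather than just boosting saturation). Here's a toy sketch of that idea — a simple per-channel Beer–Lambert-style inversion. To be clear, this is my own illustrative simplification, not the actual Sea-thru algorithm (which also estimates backscatter and depth-varying coefficients); the function name and the attenuation coefficients are made up for the example:

```python
import numpy as np

def correct_underwater(img, depth, beta=(0.6, 0.2, 0.1)):
    """Toy underwater color correction.

    img:   HxWx3 float array in [0, 1], the attenuated image.
    depth: HxW float array, distance (in meters) light traveled through water.
    beta:  per-channel attenuation coefficients (R, G, B); red falls off
           fastest, which is why deep photos look blue-green.

    Inverts I = J * exp(-beta * z) to recover J = I * exp(beta * z).
    """
    beta = np.asarray(beta, dtype=float).reshape(1, 1, 3)
    restored = img * np.exp(beta * depth[..., None])
    return np.clip(restored, 0.0, 1.0)
```

The key difference from a Photoshop-style curves tweak is that the correction varies per pixel with the (estimated) depth, so near and far objects get different amounts of red restored.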



Google donates $1M to help first responders

Happy Veterans’ Day, everyone. I’m proud of my first-responder brother (who volunteers his time to drive an ambulance in rural Illinois), and of my employer for helping vets & others better serve their communities:

A challenging, but often unrecognized, aspect of this work is the preparation required ahead of potential disasters. Therefore, Google is giving a $1 million grant to Team Rubicon to build out teams of volunteers, most of them military veterans, who will work alongside first responders on disaster preparedness operations.


“Project Glowstick” brings light sources to Illustrator

Anything that finally lets regular people tap into the vast (and vastly untapped) power of Illustrator’s venerable gradient mesh is a win, and this tech promises to let vector shapes function as light emitters that help cast shadows:


Requisite (?) Old Man Nack moment: though I have no idea if/how the underlying tech relates, I’m reminded of the Realtime Gradient-Domain Painting work that onetime Adobe researcher Jim McCann published back in 2008.

[YouTube 1 & 2]

New Adobe tech can relight structures & synthesize shadows

Photogrammetry (building 3D from 2D inputs—in this case, several source images) is what my friend, during his time in the Navy, learned to call “FM technology”: “F’ing Magic.”

Side note: I know that saying “Time is a flat circle” is totally worn out… but, like, time is a flat circle, and what’s up with Adobe style-transfer demos showing the same (?) fishing village year after year? Seriously, compare 2013 to 2019. And what a super useless superpower I have in remembering such things. ¯\_(ツ)_/¯ 


[YouTube] [Via]

Adobe previews tools for detecting object manipulation

Back in 2011, my longtime Photoshop boss Kevin Connor left Adobe & launched a startup (see NYT article) with Prof. Hany Farid to help news organizations, law enforcement, and others detect image manipulation. They were ahead of their time, and since then the problem of “fake news” has only gotten worse.

Now Adobe has teamed up with Twitter & others on the Content Authenticity Initiative, and last night they previewed Project About Face, meant to help spot manipulated pixels—and maybe even reverse the effects. Check it out:



Adobe announces Photoshop Camera

This new iOS & Android app (not yet available, though you can sign up for prerelease access) promises to analyze images, suggest effects, and keep the edits adjustable (though it’s not yet clear whether they’ll be editable as layers in “big” Photoshop).

I’m reminded of really promising Photoshop Elements mobile concepts from 2011 that went nowhere; of the Fabby app some of my teammates created before being acquired by Google; and of all I failed to enable in Google Photos. “Poo-tee-weet?” ¯\_(ツ)_/¯ Anyway, I’m eager to take it for a spin.