To be clear, this method is not the same as Photoshopping an image to add contrast and artificially enhance the colors that are absorbed most quickly by the water. It’s a “physically accurate correction,” and the results truly speak for themselves.
And as some wiseass in the comments remarks, “I can’t believe we’ve polluted our waters so much there are color charts now lying on the ocean floor.”
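To get a feel for what “physically accurate” means here, consider the textbook underwater image-formation model: the camera sees the true color attenuated exponentially with range, plus bluish backscatter that grows with range. A correction inverts that model per color channel rather than just cranking saturation. Here’s a minimal sketch; the coefficients, function name, and the simplified model itself are my illustrative assumptions, not the researchers’ actual estimation pipeline:

```python
import numpy as np

def correct_underwater(pixel_rgb, depth_m, beta_d, beta_b, backscatter_inf):
    """Invert a simplified underwater image-formation model (illustrative only):

        observed = J * exp(-beta_d * z) + B_inf * (1 - exp(-beta_b * z))

    where J is the true color, z is the range to the subject in meters,
    beta_d/beta_b are per-channel attenuation coefficients, and B_inf is
    the veiling-light color at infinite range.
    """
    I = np.asarray(pixel_rgb, dtype=float)
    backscatter = backscatter_inf * (1.0 - np.exp(-beta_b * depth_m))
    direct = I - backscatter                 # strip the bluish haze
    J = direct * np.exp(beta_d * depth_m)    # undo wavelength-dependent absorption
    return np.clip(J, 0.0, 1.0)

# Red is absorbed fastest in water, so its coefficient is largest.
beta_d = np.array([0.60, 0.25, 0.10])   # per-channel attenuation (1/m), made up
beta_b = np.array([0.50, 0.30, 0.15])   # backscatter coefficients, made up
b_inf  = np.array([0.05, 0.25, 0.30])   # bluish veiling light, made up
observed = np.array([0.10, 0.40, 0.45])  # a washed-out, blue-green pixel

corrected = correct_underwater(observed, depth_m=3.0, beta_d=beta_d,
                               beta_b=beta_b, backscatter_inf=b_inf)
print(corrected)  # red channel comes back strongly relative to the input
```

The key point the blurb makes survives in the math: because each channel is corrected by its own physically motivated attenuation term, reds return at the rate they were actually lost, instead of being painted back in by eye.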
“Imagine your Labrador’s smile on a lion or your feline’s finicky smirk on a tiger,” NVIDIA writes. “A team of NVIDIA researchers has defined new AI techniques that give computers enough smarts to see a picture of one animal and recreate its expression and pose on the face of any other creature.”
Happy Veterans Day, everyone. I’m proud of my first-responder brother (who volunteers his time to drive an ambulance in rural Illinois), and of my employer for helping vets & others better serve their communities:
A challenging, but often unrecognized, aspect of this work is the preparation required ahead of potential disasters. Therefore, Google.org is giving a $1 million grant to Team Rubicon to build out teams of volunteers, most of them military veterans, who will work alongside first responders on disaster-preparedness operations.
Anything that finally lets regular people tap into the vast (and vastly untapped) power of Illustrator’s venerable gradient mesh is a win, and this tech promises to let vector shapes function as light emitters that help cast shadows:
Requisite (?) Old Man Nack moment: though I have no idea if/how the underlying tech relates, I’m reminded of the Realtime Gradient-Domain Painting work that onetime Adobe researcher Jim McCann published back in 2008.
Photogrammetry (building 3D from 2D inputs—in this case several source images) is what my friend learned in the Navy to refer to as “FM technology”: “F’ing Magic.”
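The non-magic core of photogrammetry is triangulation: if you know where two cameras are and where a point lands in each photo, the point’s 3D position falls out of linear algebra. Here’s a small sketch using the standard direct linear transform (DLT); the toy cameras and point are my own setup, not any particular product’s pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its 2D projections in two calibrated views
    via the direct linear transform: stack the constraints x × (P X) = 0
    from both views and take the SVD null-space as the homogeneous point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # smallest singular vector ≈ null space of A
    return X[:3] / X[3]        # de-homogenize

# Two toy pinhole cameras with identity intrinsics, one meter apart.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])]) # 1 m baseline

# Project a known 3D point into both views, then recover it.
X_true = np.array([1.0, 2.0, 10.0])
x1 = (P1 @ np.append(X_true, 1.0)); x1 = x1[:2] / x1[2]
x2 = (P2 @ np.append(X_true, 1.0)); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # recovers approximately [1, 2, 10]
```

Real photogrammetry repeats this for thousands of matched features across many photos, then jointly refines cameras and points (bundle adjustment), which is where the “F’ing Magic” feel comes from.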
Side note: I know that saying “Time is a flat circle” is totally worn out… but, like, time is a flat circle, and what’s up with Adobe style-transfer demos showing the same (?) fishing village year after year? Seriously, compare 2013 to 2019. And what a super useless superpower I have in remembering such things. ¯\_(ツ)_/¯
Back in 2011, my longtime Photoshop boss Kevin Connor left Adobe & launched a startup (see NYT article) with Prof. Hany Farid to help news organizations, law enforcement, and others detect image manipulation. They were ahead of their time, and since then the problem of “fake news” has only gotten worse.
This new iOS & Android app (not yet available, though you can sign up for prerelease access) promises to analyze images, suggest effects, and keep the edits adjustable (though it’s not yet clear whether they’ll be editable as layers in “big” Photoshop).
I’m reminded of really promising Photoshop Elements mobile concepts from 2011 that went nowhere; of the Fabby app some of my teammates created before being acquired by Google; and of all I failed to enable in Google Photos. “Poo-tee-weet?” ¯\_(ツ)_/¯ Anyway, I’m eager to take it for a spin.