The tech itself relies on not one, but two neural networks: one to remove “foreign” shadows that are cast by unwanted objects like a hat or a hand held up to block the sun in your eyes, and the other to soften natural facial shadows and add “a synthetic fill light” to improve the lighting ratio once the unwanted shadows have been removed.
There’s often a lot of work to go from tech demo to robust, shipping feature (especially when targeting Photoshop’s rigorous level of quality & flexibility), and I’m sure the team has been working hard on that. In any event, I’m looking forward to trying it myself.
The Big Picture features some gorgeous images of the Large Hadron Collider, nearly ready to create a black hole and swallow the world as we know it. (The dorkiness of their little fire-fighting vehicle could rupture spacetime, too.)
The work builds upon research by Adobe’s Aseem Agarwala (who was instrumental in bringing Auto-Blend to Photoshop CS3). Adobe Senior Principal Scientist (and UW prof.) David Salesin is helping facilitate more collaboration between Adobe teams & academia, recruiting full-time hires like Aseem & sponsoring visiting researchers like Hany Farid.
(Note: As always, please don’t take my mentioning of various tech demos as a hint about any specific feature showing up in a particular Adobe product. I just post things that I find interesting & inspiring.)
Photoshop engineer Jerry Harris is responsible for the application’s painting tools, and he’s always got an eye open for interesting developments in the field of computerized painting. This morning he passed along a cool demo video of James McCann and Nancy Pollard’s Real-time Gradient-domain Painting technology.
In a nutshell, according to the video, "A gradient brush allows me to paint with intensity differences. When I draw a stroke, I am specifying that one side is lighter than the other." Uh, okay… And the video is a little ho-hum until the middle. That’s when things get rather cool. Check out cloning/duplicating pixels along a path, plus the interesting approach to painting a band of color.
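The core idea is that the user edits gradients (intensity differences between neighboring pixels) and the system then integrates those gradients back into an image, in the full 2D case by solving a Poisson equation. Here's a toy 1D sketch of that round trip, purely my own illustration rather than McCann & Pollard's code; in one dimension the least-squares integration collapses to a cumulative sum:

```python
import numpy as np

def reconstruct_1d(gradients, anchor=0.0):
    """Integrate a 1D gradient field back into pixel intensities.

    The real system solves a 2D Poisson equation; in 1D the
    least-squares reconstruction is just a running sum of the
    gradients, pinned down by one anchor intensity.
    """
    return anchor + np.concatenate(([0.0], np.cumsum(gradients)))

# A flat row of seven pixels: every neighboring difference is zero.
row = np.zeros(7)
grads = np.diff(row)

# "Gradient brush" stroke: declare that pixel 3 should be
# 0.5 brighter than pixel 2, and say nothing else.
grads[2] = 0.5

new_row = reconstruct_1d(grads, anchor=row[0])
# Everything left of the stroke stays dark; everything right
# of it lifts by 0.5 -- the edit propagates, which is why a
# single stroke can re-light a whole region.
```

That propagation is what makes the demo's band-of-color trick work: you specify an edge, and the solver fills in everything between the edges for you.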
Adobe Senior Principal Scientist David Salesin, who manages this crew, notes that "If you count their SIGGRAPH papers as well, you’ll see that current Adobe employees had 11 of the 108 papers in the conference."
Now, let me inject a disclaimer: Just because a particular researcher has worked on a particular technology in his or her past life, it’s not possible to conclude that a specific feature will show up in a particular Adobe product. How’s that for non-committal? ;-) In any case, it’s just exciting that so many smart folks are joining the team (more brains to hijack!).
Wow–now this I haven’t seen before: Israeli brainiacs Shai Avidan and Ariel Shamir have created a pretty darn interesting video that demonstrates their technique of "Seam Carving for Content-Aware Image Resizing." When scaling an image horizontally or vertically (e.g. making a panorama narrower), the technology looks for paths of pixels that can be removed while causing the least visual disruption. Just as interesting, if not more so, I think, is the way the technology can add pixels when increasing image dimensions. Seriously, just check out the video; I think you’ll be blown away. (More info is in a 20MB PDF, in which they cite work by Adobe’s Aseem Agarwala–the creator of Photoshop CS3’s Auto-Blend Layer code.) [Via Geoff Stearns]
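For the curious, the "least visual disruption" part is a classic dynamic-programming problem: assign each pixel an energy (e.g. gradient magnitude), then find the connected top-to-bottom path whose total energy is smallest and delete it. Here's a minimal sketch of that idea, my own illustration rather than Avidan & Shamir's implementation:

```python
import numpy as np

def min_vertical_seam(energy):
    """Find the connected top-to-bottom path of pixels with the
    lowest total energy (each step moves at most one column)."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)
            cost[y, x] += cost[y - 1, lo:hi].min()
    # Backtrack from the cheapest bottom-row pixel.
    seam = [int(cost[-1].argmin())]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam.append(lo + int(cost[y, lo:hi].argmin()))
    return seam[::-1]

def carve_column(image, energy):
    """Remove one low-energy seam, narrowing the image by a pixel."""
    seam = min_vertical_seam(energy)
    return np.array([np.delete(row, x) for row, x in zip(image, seam)])

# Tiny example: the 1s trace a cheap diagonal seam through the 9s.
energy = np.array([[1, 9, 9],
                   [9, 1, 9],
                   [9, 9, 1]])
image = np.arange(9.0).reshape(3, 3)
carved = carve_column(image, energy)  # one pixel narrower
```

Repeating this removes one seam per pass; expanding an image runs the same search but duplicates the cheap seams instead of deleting them.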
I hope to share more good stuff from SIGGRAPH soon. While I was being stuffed with ham sandwiches by kindly Irish folks, a number of Adobe engineers were speaking at & exploring the show. Todor Georgiev, one of the key minds behind the Healing Brush, has been busily gluing together his own cutting edge optical systems. More on that soon.
CNET reported recently on a court case that involved image authentication software as well as human experts, both seeking to distinguish unretouched photographs from those created or altered using digital tools. After disallowing the software, written by Hany Farid & his team at Dartmouth, the judge ultimately disallowed a human witness, ruling that neither one could adequately distinguish between real & synthetic images. The story includes some short excerpts from the judge’s rulings, offering some insight into the legal issues at play (e.g. "Protected speech"–manmade imagery–"does not become unprotected merely because it resembles the latter"–illegal pornography, etc.).
As I’ve mentioned previously, Adobe has been collaborating with Dr. Farid & his team for a few years, so we wanted to know his take on the ruling. He replied,
The news story didn’t quite get it right. Our program correctly classifies about 70% of photographic images while correctly classifying 99.5% of computer-generated images. That is, an error rate of 0.5%. We configured the classifier in this way so as to give the benefit of the doubt to the defendant. The prosecutor decided not to use our testimony because of other reasons, not because of a high error rate.
The defense argues that the lay person cannot tell the difference between photographic and CG images. Following this ruling by Gertner, we performed a study to see just how good human subjects are at distinguishing between the two. They turn out to be surprisingly good. Here is a short abstract describing our results. [Observers correctly classified 83% of the photographic images and 82% of the CG images.]
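The asymmetry Farid describes (70% on photographs vs. 99.5% on CG) is a standard threshold choice: you slide the decision cutoff until false "photographic" calls on CG images become rare, accepting more misses on real photos in exchange. Here's a toy illustration with synthetic scores of my own invention, not Farid's actual classifier or data:

```python
import numpy as np

# Hypothetical classifier scores: higher = "more likely photographic".
rng = np.random.default_rng(0)
photo_scores = rng.normal(1.0, 1.0, 10_000)   # real photographs
cg_scores = rng.normal(-1.0, 1.0, 10_000)     # computer-generated

# Set the cutoff so only ~0.5% of CG images are (wrongly) labeled
# photographic -- giving the benefit of the doubt to the defendant.
threshold = np.quantile(cg_scores, 0.995)

photo_recall = (photo_scores > threshold).mean()   # drops as a result
cg_recall = (cg_scores <= threshold).mean()        # ~99.5% by design
```

The trade-off is exactly the one in the quote: driving the CG error rate down to 0.5% necessarily sacrifices accuracy on genuine photographs.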
Elsewhere in the world of "Fauxtography" and image authenticity:
Sort of the Ur-Photoshop: This page depicts disappearing commissars and the like from Russia, documenting the Soviet government’s notorious practice of doctoring photos to remove those who’d fallen from favor. [Via]