Category Archives: Image Science

Google & researchers demo AI-powered shadow removal

Speaking of Google photography research (see previous post about portrait relighting), I’ve been meaning to point to the team’s collaboration with MIT & Berkeley. As PetaPixel writes,

The tech itself relies on not one, but two neural networks: one to remove “foreign” shadows that are cast by unwanted objects like a hat or a hand held up to block the sun in your eyes, and the other to soften natural facial shadows and add “a synthetic fill light” to improve the lighting ratio once the unwanted shadows have been removed.
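If you're curious how the two stages fit together, here's a rough sketch of that kind of pipeline. To be clear, this is my own stand-in code, not Google's: the tiny networks are placeholders, and feeding the fill-light strength in as an extra channel is my assumption, not the paper's architecture.

```python
# Rough sketch of a two-stage portrait pipeline: one network removes
# "foreign" shadows, a second softens facial shadows given a fill-light
# strength. Stub networks only -- not the paper's actual models.
import torch
import torch.nn as nn

class TinyNetStub(nn.Module):
    """Stand-in for the real encoder-decoder networks."""
    def __init__(self, in_ch, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

foreign_shadow_net = TinyNetStub(in_ch=3)   # portrait -> cleaned portrait
facial_shadow_net  = TinyNetStub(in_ch=4)   # cleaned + fill strength -> relit

def enhance(portrait, fill_strength=0.5):
    cleaned = foreign_shadow_net(portrait)
    # Assumption: expose the "synthetic fill light" control as an extra channel.
    strength_map = torch.full_like(portrait[:, :1], fill_strength)
    return facial_shadow_net(torch.cat([cleaned, strength_map], dim=1))

out = enhance(torch.rand(1, 3, 256, 256))
```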

Here’s a nice summary from Two-Minute Papers:

https://youtu.be/qeZMKgKJLX4

Sky-swapping tech is coming to Photoshop at last

I feel vaguely ungrateful marking the arrival of a welcome Photoshop feature by noting that Adobe first demoed it nearly four years ago—but hey, you come here, you know you’re gonna get a little salt. 😌

There’s often a lot of work to go from tech demo to robust, shipping feature (especially when targeting Photoshop’s rigorous level of quality & flexibility), and I’m sure the team has been working hard on that. In any event, I’m looking forward to trying it myself.

Check out PetaPixel’s coverage for other details & screenshots.

Colliding hadrons, sinking subways, & more

Hot image science o' the day

Pravin Bhat & friends at the University of Washington have put together a rather eye-popping video that demonstrates Using Photographs to Enhance Videos of a Static Scene.  I think you’ll dig it.  (The removal of the No Parking sign is especially impressive.) [Via Jeff Tranberry]

 

The work builds upon research by Adobe’s Aseem Agarwala (who was instrumental in bringing Auto-Blend to Photoshop CS3).  Adobe Senior Principal Scientist (and UW prof.) David Salesin is helping facilitate more collaboration between Adobe teams & academia, recruiting full-time hires like Aseem & sponsoring visiting researchers like Hany Farid.

(Note: As always, please don’t take my mentioning of various tech demos as a hint about any specific feature showing up in a particular Adobe product. I just post things that I find interesting & inspiring.)


Cool painting tech demo o' the day

Photoshop engineer Jerry Harris is responsible for the application’s painting tools, and he’s always got an eye open for interesting developments in the field of computerized painting.  This morning he passed along a cool demo video of James McCann and Nancy Pollard’s Real-time Gradient-domain Painting technology.

 

In a nutshell, according to the video, "A gradient brush allows me to paint with intensity differences.  When I draw a stroke, I am specifying that one side is lighter than the other."  Uh, okay… And the video is a little ho-hum until the middle.  That’s when things get rather cool.  Check out cloning/duplicating pixels along a path, plus the interesting approach to painting a band of color.
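If the "painting with intensity differences" bit sounds abstract, here's a toy sketch of the underlying idea (mine, not McCann & Pollard's real-time GPU solver): a stroke writes values into a gradient field, and the image you see is recovered by least-squares integration of that field.

```python
# Minimal sketch of gradient-domain reconstruction: the user edits a
# gradient field, and the image is recovered by solving a sparse
# least-squares (Poisson-style) problem.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def integrate_gradients(gx, gy, anchor_value=0.5):
    """Reconstruct an H x W image whose x/y differences best match gx, gy."""
    H, W = gx.shape
    idx = np.arange(H * W).reshape(H, W)
    rows, cols, vals, rhs = [], [], [], []

    def add_eq(i_from, i_to, g):
        r = len(rhs)                 # next equation's row index
        rows.extend([r, r])
        cols.extend([i_to, i_from])
        vals.extend([1.0, -1.0])     # I[i_to] - I[i_from] = g
        rhs.append(g)

    # One equation per horizontal and vertical neighbor pair.
    for y in range(H):
        for x in range(W - 1):
            add_eq(idx[y, x], idx[y, x + 1], gx[y, x])
    for y in range(H - 1):
        for x in range(W):
            add_eq(idx[y, x], idx[y + 1, x], gy[y, x])

    # Pin one pixel so the overall brightness is determined.
    rows.append(len(rhs)); cols.append(int(idx[0, 0])); vals.append(1.0)
    rhs.append(anchor_value)

    A = sp.csr_matrix((vals, (rows, cols)), shape=(len(rhs), H * W))
    sol = lsqr(A, np.asarray(rhs))[0]
    return sol.reshape(H, W)

# Painting "one side lighter than the other" amounts to writing a nonzero
# value into the gradient field along the stroke and re-integrating.
gx = np.zeros((64, 64)); gy = np.zeros((64, 64))
gx[:, 32] = 0.3          # a vertical stroke: right side brighter than left
img = integrate_gradients(gx, gy)
```

The authors solve this interactively on the GPU; the sparse solver above is just the slow-but-clear version of the same math.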

Imaging heavy hitters join Adobe

A number of rock stars from the world of image science have recently joined Adobe.

Adobe Senior Principal Scientist David Salesin, who manages this crew, notes that "If you count their SIGGRAPH papers as well, you’ll see that current Adobe employees had 11 of the 108 papers in the conference."

Now, let me inject a disclaimer:  Just because a particular researcher worked on a particular technology in his or her past life, you can't conclude that a specific feature will show up in a particular Adobe product.  How's that for noncommittal? ;-)  In any case, it's just exciting that so many smart folks are joining the team (more brains to hijack!).

[Update: Cambridge, MA-based Xconomy provides additional context for this news.]

"Holy crap"-worthy imaging technology

Wow–now this I haven’t seen before: Israeli brainiacs Shai Avidan and Ariel Shamir have created a pretty darn interesting video that demonstrates their technique of "Seam Carving for Content-Aware Image Resizing."  When scaling an image horizontally or vertically (e.g. making a panorama narrower), the technology looks for paths of pixels that can be removed while causing the least visual disruption.  Just as interesting, if not more so, I think, is the way the technology can add pixels when increasing image dimensions.  Seriously, just check out the video; I think you’ll be blown away.  (More info is in a 20MB PDF, in which they cite work by Adobe’s Aseem Agarwala–the creator of Photoshop CS3’s Auto-Blend Layer code.) [Via Geoff Stearns]
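For the algorithmically inclined, here's a toy sketch of the core idea (my own code, not Avidan & Shamir's): compute a simple energy map, use dynamic programming to find the connected vertical path of pixels with the least total energy, then delete that path to make the image one pixel narrower. The gradient-magnitude energy here is just one of the options the paper discusses, and seam insertion (adding pixels) isn't shown.

```python
# Minimal seam-carving sketch: remove one low-energy vertical seam.
import numpy as np

def remove_one_vertical_seam(img):
    """img: H x W grayscale array; returns an H x (W-1) array."""
    H, W = img.shape
    gy, gx = np.gradient(img.astype(float))
    energy = np.abs(gx) + np.abs(gy)

    # Dynamic programming: cost[y, x] = cheapest connected seam ending at (y, x).
    cost = energy.copy()
    for y in range(1, H):
        left  = np.r_[np.inf, cost[y - 1, :-1]]
        up    = cost[y - 1]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, up), right)

    # Backtrack from the cheapest bottom-row pixel to recover the seam.
    seam = np.zeros(H, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(H - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(W, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))

    # Delete the seam, one pixel per row.
    mask = np.ones((H, W), dtype=bool)
    mask[np.arange(H), seam] = False
    return img[mask].reshape(H, W - 1)

narrower = remove_one_vertical_seam(np.random.rand(40, 60))
```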

I hope to share more good stuff from SIGGRAPH soon.  While I was being stuffed with ham sandwiches by kindly Irish folks, a number of Adobe engineers were speaking at & exploring the show.  Todor Georgiev, one of the key minds behind the Healing Brush, has been busily gluing together his own cutting-edge optical systems.  More on that soon.

Digital imaging goes to court

CNET reported recently on a court case that involved image authentication software as well as human experts, both seeking to distinguish unretouched photographs from those created or altered using digital tools.  After disallowing the software, written by Hany Farid & his team at Dartmouth, the judge ultimately disallowed a human witness, ruling that neither one could adequately distinguish between real & synthetic images.  The story includes some short excerpts from the judge’s rulings, offering some insight into the legal issues at play (e.g. "Protected speech"–manmade imagery–"does not become unprotected merely because it resembles the latter"–illegal pornography, etc.).

As I’ve mentioned previously, Adobe has been collaborating with Dr. Farid & his team for a few years, so we wanted to know his take on the ruling.  He replied,

The news story didn’t quite get it right. Our program correctly classifies about 70% of photographic images while correctly classifying 99.5% of computer-generated images. That is, an error rate of 0.5%. We configured the classifier in this way so as to give the benefit of the doubt to the defendant. The prosecutor decided not to use our testimony because of other reasons, not because of a high error rate.

The defense argues that the lay person cannot tell the difference between photographic and CG images. Following this ruling by Gertner, we performed a study to see just how good human subjects are at distinguishing the two. They turn out to be surprisingly good.  Here is a short abstract describing our results. [Observers correctly classified 83% of the photographic images and 82% of the CG images.]

Elsewhere in the world of "Fauxtography" and image authenticity:

  • In the wake of last summer’s digital manipulation blow-up, Reuters has posted guidelines on what is–and is not–acceptable to do to an image in Photoshop. [Via]
  • Calling it "’The Most Culturally Significant Feature’ of Canon’s new 1D MkIII," Micah Marty heralds "the embedding of inviolable GPS coordinates into ‘data-verifiable’ raw files."
  • Sort of the Ur-Photoshop: This page depicts disappearing commissars and the like from Russia, documenting the Soviet government’s notorious practice of doctoring photos to remove those who’d fallen from favor. [Via]
  • These practices know no borders, as apparently evidenced by a current Iranian controversy, complete with Flash demo. [Via Tom Hogarty]
  • Of course, if you really want to fake people out, just take a half-naked photo of yourself, mail it to the newspaper, and tell them that it’s a Gucci ad. Seems to work like a charm. [Via]

[Update: PS–Not imaging but audio: Hart Shafer reports on Adobe Audition being used to confirm musical plagiarism.]