Category Archives: Image Science
Here’s some very cool imaging tech, though it’ll be interesting to see how many people will take the time to create multiple exposures, each with different controlled lighting:
If this is up your alley, check out a paper and video on the subject that some Adobe researchers put together a couple of years back.
Video: Automated reshaping of human bodies
Oh my; how long until we see the Ralph Lauren EmaciatorPro(™) Edition?
Here’s more info on the project. (And no, unlike Puppet Warp, it’s not a CS5 thing.) [Via Jerry Harris]
"Enhance!" Redux
Heh–here’s a nice little satire of phony image enhancement on TV (see previous montage):
Of course, image scientists continue to work on all sorts of new craziness, so it’s all just a matter of time… right?
Video: "A computational model of aesthetics"
People always like to joke about Photoshop eventually adding a big red “Make My Photo Good” button, automatically figuring out what looks good & what adjustments are needed. Of course, researchers are working on just that sort of thing:
As someone who aspires to be creative, I have mixed feelings. The idea of rating images according to precomputed standards of beauty makes me think of the Robin Williams character in Dead Poets Society excoriating a textbook that rated poetry along two axes:
Excrement! That’s what I think of Mr. J. Evans Pritchard! We’re not laying pipe! We’re talking about poetry. How can you describe poetry like American Bandstand? “I like Byron, I give him a 42 but I can’t dance to it!”
And yet, I find I’m intrigued by the idea, wanting to run the algorithm on my images–if only, maybe, to have fun flouting it. I also have to admit that I’d like to see the images taken by certain of my family members (not you, hon) run through such algorithms–if only to crop in on the good stuff.
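If you’re wondering what machine “taste” even looks like at its most primitive, here’s a toy scorer I sketched: it blends global contrast with a published colorfulness statistic (Hasler & Süsstrunk’s), using weights I invented on the spot. It has nothing to do with the researchers’ actual model; it exists only to show that “rate this photo” can be reduced to arithmetic, for better or worse. The filename, naturally, is hypothetical.

```python
# Toy "aesthetics" score: two crude global statistics and made-up weights.
# Purely illustrative; real research models are learned from rated photo sets.
import numpy as np
from PIL import Image

def toy_aesthetic_score(path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]

    # Global contrast: standard deviation of luminance, roughly normalized.
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    contrast = luma.std() / 128.0

    # Colorfulness (Hasler & Suesstrunk's opponent-channel statistic).
    rg, yb = r - g, 0.5 * (r + g) - b
    colorfulness = (np.hypot(rg.std(), yb.std())
                    + 0.3 * np.hypot(rg.mean(), yb.mean())) / 100.0

    # Arbitrary blend; the weights are invented, not learned.
    return 0.5 * contrast + 0.5 * min(colorfulness, 1.0)

print(toy_aesthetic_score("photo.jpg"))   # hypothetical filename
```

Run it on a few of your own shots and you’ll quickly see why Mr. Keating would not be impressed.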
[Via Jerry Harris]
Photos to sound & back again
- A technology called Photosounder can treat images as audio (demo). “Sounds, once turned into images,” they say, “can be powerfully modified to achieve effects and results that couldn’t be obtained in any other way, while images of all sorts reveal the infinite kinds of otherworldly sounds they contain.” (A bare-bones sketch of the image-to-sound idea follows this list.) [Via]
- In a related vein, scientists have turned dolphin calls into kaleidoscopic patterns. (Note the image gallery navigation controls on the right.) [Via]
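For the tinkerers, here’s the bare-bones version of the image-to-sound idea from the first item: treat a grayscale image as a spectrogram (rows as frequencies, columns as time) and resynthesize it with a bank of sinusoids. To be clear, this is just my sketch of the general spectrogram-as-score concept, not how Photosounder itself is built; the filenames, frequency range, and duration are all arbitrary choices.

```python
# Image-to-sound sketch: read a grayscale image as a spectrogram
# (rows = frequency bins, columns = time) and resynthesize it additively.
# This illustrates the general idea only, not Photosounder's actual engine.
import numpy as np
from PIL import Image
from scipy.io import wavfile

def image_to_audio(path, out_path="out.wav", duration=5.0, sr=44100,
                   f_lo=100.0, f_hi=8000.0):
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64) / 255.0
    img = img[::-1]                       # put low frequencies at the bottom
    n_bins, n_cols = img.shape
    t = np.arange(int(duration * sr)) / sr
    freqs = np.geomspace(f_lo, f_hi, n_bins)    # log spacing sounds more natural
    # Stretch each image column across its slice of the timeline.
    col = np.minimum((t / duration * n_cols).astype(int), n_cols - 1)
    audio = np.zeros_like(t)
    for k, f in enumerate(freqs):
        audio += img[k, col] * np.sin(2 * np.pi * f * t)   # brightness = loudness
    audio /= np.max(np.abs(audio)) + 1e-9
    wavfile.write(out_path, sr, (audio * 32767).astype(np.int16))

image_to_audio("picture.png")             # hypothetical filename
```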
Video: New from Adobe Labs, Content-Aware Fill in Photoshop
You like? 🙂 (Here’s some more background on the technology.) To see higher-res detail, I recommend hitting the full-screen icon or visiting the Facebook page that hosts the video.
As with all such sneak peeks, I have to be really clear in saying that this is just a demo, and as such it’s not a promise that a technology will ship inside a particular version of Photoshop. (As the late Mac columnist Don Crabb told me years ago, “There’s many a slip ‘twixt cup & lip.”) Still, it’s fun to show some of the stuff with which we’re experimenting.
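If you’d like a feel for the flavor of problem being solved, here’s the most naive “borrow texture from elsewhere” fill I could write: for each missing pixel, find the patch elsewhere in the image that best matches its known surroundings and copy from it. I want to stress that this is not how the feature shown above works (for one thing, it’s absurdly slow); it’s only a minimal sketch of the core idea of synthesizing a hole from the rest of the image.

```python
# Naive exemplar-based hole fill for a grayscale image. Quadratic-time and
# greedy; a minimal sketch of "fill the hole from the rest of the image,"
# not Photoshop's algorithm.
import numpy as np

def naive_fill(img, mask, patch=7):
    """img: HxW float array; mask: HxW bool array, True where pixels are missing."""
    out, r = img.copy(), patch // 2
    H, W = img.shape
    # Candidate source patches: regions with no missing pixels at all.
    sources = [(y, x) for y in range(r, H - r) for x in range(r, W - r)
               if not mask[y - r:y + r + 1, x - r:x + r + 1].any()]
    for y in range(r, H - r):        # a border of width r is skipped for brevity
        for x in range(r, W - r):
            if not mask[y, x]:
                continue
            tgt = out[y - r:y + r + 1, x - r:x + r + 1]
            known = ~mask[y - r:y + r + 1, x - r:x + r + 1]
            # Best source patch, judged only on the pixels we already know.
            sy, sx = min(sources, key=lambda s: np.sum(
                ((img[s[0] - r:s[0] + r + 1, s[1] - r:s[1] + r + 1] - tgt) ** 2)[known]))
            out[y, x] = img[sy, sx]
            mask[y, x] = False       # the filled pixel now counts as known
    return out
```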
PhotoSketch: Internet Image Montage
Oh, that’s rather cool, then:
I’ve seen various experiments at Adobe that fetch & automatically composite images, but the idea of basing searches on sketches is new to me. Details are in the researchers’ paper (PDF).
Almost completely unrelated, but in the spirit of cool image science, during last night’s sneak peeks at Adobe MAX, Dan Goldman showed a little taste of “PatchMatch” (“content-aware healing”) integrated into Photoshop. (As always, no promises, this is just a test, yadda yadda.)
Wide-angle image correction tech
Adobe researcher Aseem Agarwala, working with Maneesh Agrawala & Robert Carroll at Berkeley, has demonstrated techniques to enable “Content-Preserving Projections for Wide-Angle Images.” That may sound a little dry, but check out the demo video (10MB QT) to see how the work enables extremely wide-angle photography. [Via Dan Goldman]
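For a bit of background on why very wide angles are hard (my gloss, not the paper’s): the two classic mappings force a tradeoff. A standard perspective projection keeps straight lines straight but stretches everything near the edges as the field of view grows, while a stereographic (fisheye-style) projection keeps local shapes intact but bends lines. As I understand it, the new work computes an image-specific projection that navigates between the two.

```latex
% Perspective (rectilinear): a ray at angle \theta off the optical axis lands at
% radius f\tan\theta, which diverges as \theta approaches 90 degrees, so wide
% views stretch badly toward the edges.
r_{\mathrm{persp}}(\theta) = f \tan\theta
% Stereographic: locally shape-preserving, so faces and objects keep their
% proportions even at extreme angles, but straight lines bow into curves.
r_{\mathrm{stereo}}(\theta) = 2 f \tan\!\left(\frac{\theta}{2}\right)
```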
Aseem contributed the depth-of-field extension feature to Photoshop CS4. For previous entries showing advanced imaging work, check out this blog’s Image Science category.
Super cool video stabilization technology
Adobe researchers Hailin Jin and Aseem Agarwala*, collaborating with U.Wisconsin prof. Michael Gleicher & Feng Liu, have unveiled their work on “Content-Preserving Warps for 3D Video Stabilization.” In other words, their tech can give your (and my) crappy hand-held footage the look of a Steadicam shot.
Check out the demonstration video, shot at & around Adobe’s Seattle office. (Hello, Fremont Lenin!) It compares the new technique to what’s available in iMovie ’09 and other commercial tools.
As with all research papers/demos, I should point out that making technology ready for real-world use can require plenty of additional work & tuning. Still, these developments are encouraging. [Via]
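For a sense of what the baseline looks like, here’s a sketch of plain 2D stabilization: track features between frames, estimate each frame’s motion, low-pass the accumulated camera path, and re-warp every frame toward the smoothed path. The paper’s whole point is that reconstructing a 3D camera path and applying content-preserving warps beats this kind of 2D smoothing, so treat the code as the “before” picture. The OpenCV calls are standard, but the parameters, output codec, and smoothing radius are my own arbitrary choices.

```python
# Classic 2D stabilization sketch: estimate per-frame motion, smooth the
# accumulated path, re-warp. Not the paper's 3D content-preserving method.
import cv2
import numpy as np

def stabilize(path_in, path_out, radius=15):
    cap = cv2.VideoCapture(path_in)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    ok, prev = cap.read()
    frames, motions = [prev], []
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                     qualityLevel=0.01, minDistance=20)
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
        m, _ = cv2.estimateAffinePartial2D(p0[st == 1], p1[st == 1])
        motions.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
        frames.append(frame)
        prev_gray = gray
    # Accumulate (dx, dy, angle) into a camera path, then low-pass it.
    path = np.cumsum(np.array(motions), axis=0)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    smooth = np.column_stack([np.convolve(path[:, i], kernel, mode="same")
                              for i in range(3)])
    correction = smooth - path
    h, w = frames[0].shape[:2]
    out = cv2.VideoWriter(path_out, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    out.write(frames[0])
    for frame, (dx, dy, da) in zip(frames[1:], correction):
        warp = np.array([[np.cos(da), -np.sin(da), dx],
                         [np.sin(da),  np.cos(da), dy]])
        out.write(cv2.warpAffine(frame, warp, (w, h)))
    out.release()
```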
[Previously: Healing Brush & Content-Aware Scaling on (really good) drugs.]
* If you’ve created a panorama using Photoshop, you’ve used Hailin’s (image alignment) and Aseem’s (image blending) work.
Image science radness o' the day
“This is your Healing Brush.
“This is your Content-Aware Scaling.
“*This* is your Healing Brush & Content-Aware Scaling on (really good) drugs…”
Adobe researchers Eli Shechtman & Dan Goldman, working together with Prof. Adam Finkelstein from Princeton & PhD student Connelly Barnes, have introduced PatchMatch, “A Randomized Correspondence Algorithm for Structural Image Editing.”
No, I wouldn’t know a randomized correspondence algorithm for structural image editing if it bit me on the butt, either, but just check out the very cool video demo. More details are in the paper (one of the 17 papers featuring Adobe collaboration presented at SIGGRAPH this year).
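For fellow non-specialists, the core of the algorithm is surprisingly approachable. Below is a stripped-down sketch of the randomized nearest-neighbor-field search as I read the paper: start with random matches, propagate good matches from already-scanned neighbors, then do a random search around the current best at shrinking radii. Everything that makes it fast and usable for real retouching (multi-scale processing, the hole-filling and reshuffling tools built on top) is left out, and the parameter choices are mine.

```python
# Stripped-down PatchMatch: compute a nearest-neighbor field (NNF) mapping each
# patch in A to a similar patch in B. Follows the published algorithm in spirit;
# none of the engineering that makes the real thing interactive.
import numpy as np

def patch_dist(A, B, ay, ax, by, bx, p):
    return np.sum((A[ay:ay + p, ax:ax + p] - B[by:by + p, bx:bx + p]) ** 2)

def patchmatch(A, B, p=7, iters=5, seed=0):
    rng = np.random.default_rng(seed)
    Ha, Wa = A.shape[0] - p + 1, A.shape[1] - p + 1   # valid patch origins
    Hb, Wb = B.shape[0] - p + 1, B.shape[1] - p + 1
    nnf = np.stack([rng.integers(0, Hb, (Ha, Wa)),
                    rng.integers(0, Wb, (Ha, Wa))], axis=-1)
    cost = np.array([[patch_dist(A, B, y, x, *nnf[y, x], p)
                      for x in range(Wa)] for y in range(Ha)], dtype=np.float64)

    def try_improve(y, x, by, bx):
        if 0 <= by < Hb and 0 <= bx < Wb:
            d = patch_dist(A, B, y, x, by, bx, p)
            if d < cost[y, x]:
                nnf[y, x], cost[y, x] = (by, bx), d

    for it in range(iters):
        step = 1 if it % 2 == 0 else -1   # alternate scan direction each pass
        ys = range(Ha) if step == 1 else range(Ha - 1, -1, -1)
        xs = range(Wa) if step == 1 else range(Wa - 1, -1, -1)
        for y in ys:
            for x in xs:
                # Propagation: borrow the (shifted) match of the previous neighbor.
                for dy, dx in ((step, 0), (0, step)):
                    ny, nx = y - dy, x - dx
                    if 0 <= ny < Ha and 0 <= nx < Wa:
                        try_improve(y, x, nnf[ny, nx, 0] + dy, nnf[ny, nx, 1] + dx)
                # Random search around the current best, with shrinking radius.
                radius = max(Hb, Wb)
                while radius >= 1:
                    try_improve(y, x,
                                nnf[y, x, 0] + rng.integers(-radius, radius + 1),
                                nnf[y, x, 1] + rng.integers(-radius, radius + 1))
                    radius //= 2
    return nnf
```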
So, what do you think? [Via]
Adobe papers light up SIGGRAPH
I was excited to hear that researchers at Adobe contributed to 22% of all the papers accepted at SIGGRAPH this year. That’s a pretty incredible accomplishment*. In addition, Wojciech Matusik has been selected as this year’s recipient of the ACM SIGGRAPH Significant New Researcher Award. Congrats, guys!
The company has been making significant investments & attracting top talent in this area in recent years, and it’s great to see those efforts bearing fruit. It’ll be even better when we start harvesting more of this research as real-world features in Photoshop and other apps–and believe me, we’re working to do just that.
* By way of comparison, Microsoft had 6 papers accepted this year (vs. Adobe’s 17). Microsoft has 90,000 employees; Adobe has 7,000.
Adobe previews "Infinite Images" technology
Remember Shai Avidan, the co-creator of seam carving (Content-Aware Scaling) who joined Adobe last year? Just as he did at Adobe MAX last year, Shai took to the stage this year with an eye-catching demo. Collaborating with Prof. Bill Freeman and a team from MIT, Shai has been working on "Infinite Images," "a system for exploring large collections of photos in a virtual 3D space." The team writes:
Our system does not assume the photographs are of a single real 3D location, nor that they were taken at the same time. Instead, we organize the photos in themes, such as city streets or skylines, and let users navigate within each theme using intuitive 3D controls that include pan, zoom and rotate…
We present results on a collection of several million images downloaded from Flickr and broken into themes that consist of a few hundred thousand images each. A byproduct of our system is the ability to construct extremely long panoramas, as well as image taxi, a program that generates a virtual tour between user-supplied start and finish images.
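The “image taxi” bit is the part I keep chewing on. Here’s my guess (and I stress guess) at its skeleton: treat each photo as a node in a graph, connect visually similar photos, and run a shortest-path search from the start image to the finish image. The real system matches image regions so the transitions look seamless; the color-histogram “similarity” below is just a crude stand-in, and the helper names are mine.

```python
# Hypothetical "image taxi" skeleton: shortest path through a similarity graph
# of photos. The histogram distance is a crude stand-in for real region matching.
import heapq
import numpy as np
from PIL import Image

def histogram(path, bins=8):
    img = np.asarray(Image.open(path).convert("RGB")) // (256 // bins)
    idx = img[..., 0] * bins * bins + img[..., 1] * bins + img[..., 2]
    h = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
    return h / h.sum()

def image_taxi(paths, start, finish, k=5):
    hists = {p: histogram(p) for p in paths}
    dist = lambda a, b: float(np.abs(hists[a] - hists[b]).sum())
    # k-nearest-neighbor graph on histogram distance.
    edges = {p: sorted((dist(p, q), q) for q in paths if q != p)[:k] for p in paths}
    # Dijkstra from start to finish; the route is the "tour."
    best, heap = {start: 0.0}, [(0.0, start, [start])]
    while heap:
        d, node, route = heapq.heappop(heap)
        if node == finish:
            return route
        for w, q in edges[node]:
            if d + w < best.get(q, float("inf")):
                best[q] = d + w
                heapq.heappush(heap, (d + w, q, [*route, q]))
    return None
```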
To read up on some details, check out the PDF (shared via Acrobat.com):
You could also visit Shai’s site to read up on “Non-Parametric Acceleration, Super-Resolution, and Off-Center Matting,” not to mention “Part Selection with Sparse Eigenvectors”–but I’d recommend being a lot smarter than I am. 😉 (We just may have to name our next child “Eigenvector.”)
Promising video research from Adobe
"Dan Goldman is an old friend of mine from ILM," writes FX pro Stu Maschwitz. "He now works for Adobe’s top-secret G*d Dammit Put This In A Product Now division." Check out Dan’s Interactive Video Object Manipulation demo to see if you agree. (Now that Photoshop Extended can work with video, it’s fun to imagine the possibilities. No promises, of course.)
Colliding hadrons, sinking subways, & more
- Houses of science:
- The Big Picture features some gorgeous images of the Large Hadron Collider, nearly ready to create a black hole and swallow the world as we know it. (The dorkiness of their little fire-fighting vehicle could rupture spacetime, too.)
- Seed shows eerily deserted labs at night.
- Image manipulation isn’t just for political candidates anymore: even the weather isn’t safe.
- No fakery needed: Environmental Graffiti hosts the 30 Most Incredible Abstract Satellite Images of Earth.
- The British Library’s Bodies of Knowledge online exhibition is loaded with great imagery, exploring how the body has been viewed through history. [Via]
- The NYT reports on a deep-sea home for subway cars. I wonder whether global warming will someday make the idea of deliberately flooding subways seem quaint.
Hot image science o' the day
Pravin Bhat & friends at the University of Washington have put together a rather eye-popping video that demonstrates Using Photographs to Enhance Videos of a Static Scene. I think you’ll dig it. (The removal of the No Parking sign is especially impressive.) [Via Jeff Tranberry]
The work builds upon research by Adobe’s Aseem Agarwala (who was instrumental in bringing Auto-Blend to Photoshop CS3). Adobe Senior Principal Scientist (and UW prof.) David Salesin is helping facilitate more collaboration between Adobe teams & academia, recruiting full-time hires like Aseem & sponsoring visiting researchers like Hany Farid.
(Note: As always, please don’t take my mentioning of various tech demos as a hint about any specific feature showing up in a particular Adobe product. I just post things that I find interesting & inspiring.)
Previously:
Cool painting tech demo o' the day
Photoshop engineer Jerry Harris is responsible for the application’s painting tools, and he’s always got an eye open for interesting developments in the field of computerized painting. This morning he passed along a cool demo video of James McCann and Nancy Pollard’s Real-time Gradient-domain Painting technology.
In a nutshell, according to the video, "A gradient brush allows me to paint with intensity differences. When I draw a stroke, I am specifying that one side is lighter than the other." Uh, okay… And the video is a little ho-hum until the middle. That’s when things get rather cool. Check out cloning/duplicating pixels along a path, plus the interesting approach to painting a band of color.
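The “uh, okay” part clicked for me once I understood what’s underneath: you edit the gradients, and the system keeps solving a Poisson equation to turn those gradients back into an image, so every stroke blends smoothly into its surroundings. Here’s that reconstruction step in its plainest (and slowest) form; McCann & Pollard’s contribution is doing it at interactive rates, which this Jacobi loop makes no attempt at. The toy “stroke” at the end is my own contrivance.

```python
# Gradient-domain reconstruction: given a target gradient field (gx, gy),
# recover an image whose gradients match it by iterating on the Poisson
# equation. Plain Jacobi, nothing like the paper's real-time GPU solver.
import numpy as np

def reconstruct(gx, gy, iters=2000):
    H, W = gx.shape
    # Divergence of the target gradient field.
    div = np.zeros((H, W))
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    I = np.zeros((H, W))
    for _ in range(iters):
        # Jacobi sweep on the interior; boundaries stay pinned at zero for brevity.
        I[1:-1, 1:-1] = (I[2:, 1:-1] + I[:-2, 1:-1] +
                         I[1:-1, 2:] + I[1:-1, :-2] - div[1:-1, 1:-1]) / 4.0
    return I

# Toy example: take an image's own gradients, add one vertical "gradient brush"
# stroke, and rebuild. The edit spreads smoothly instead of leaving a hard seam.
img = np.random.rand(64, 64)
gx = np.diff(img, axis=1, append=img[:, -1:])
gy = np.diff(img, axis=0, append=img[-1:, :])
gx[:, 32] += 0.5
out = reconstruct(gx, gy)
```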
Imaging heavy hitters join Adobe
A number of rock stars from the world of image science have recently joined Adobe:
- That crazy-cool image resizing demo I mentioned last week continues to get all kinds of attention. I was therefore happy to learn that co-creator Shai Avidan joined the Adobe office in Newton, MA (just down the ‘pike from MIT) last Monday. Here’s a bit more info and Shai’s photo.
- Wojciech Matusik began work at Adobe in May. He’s done some really cool work in the emerging fields of multi-aperture photography, 3D TV, and much more. Like Shai, he works from the Newton office.
- Sylvain Paris is due to join Adobe in a couple of weeks. He’s worked on techniques for matching tones across photos ("Make my image pop like Ansel’s"), generating 3D data from 2D captures, and more. His paper on bilateral filtering was written with MIT colleagues Jiawen Chen (who interned this summer at Adobe) and Fredo Durand. (If the bilateral filter is new to you, there’s a brute-force sketch of it just after this list.)
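For anyone who hasn’t met it, the bilateral filter smooths an image while preserving edges: each output pixel is a weighted average of its neighbors, with weights that fall off with both spatial distance and intensity difference. The brute-force version below exists only to show the definition; the Paris/Durand work is precisely about making this fast, which this sketch does not attempt, and the sigma values are arbitrary.

```python
# Brute-force bilateral filter on a grayscale float image in [0, 1].
# O(window size) per pixel; only meant to illustrate the definition.
import numpy as np

def bilateral(img, sigma_s=3.0, sigma_r=0.1, radius=6):
    img = img.astype(np.float64)
    H, W = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))   # distance term
    padded = np.pad(img, radius, mode="edge")
    for y in range(H):
        for x in range(W):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range term: neighbors with very different intensities get tiny
            # weights, which is what keeps edges from being blurred away.
            rng = np.exp(-((window - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            w = spatial * rng
            out[y, x] = np.sum(w * window) / np.sum(w)
    return out
```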
Adobe Senior Principal Scientist David Salesin, who manages this crew, notes that "If you count their SIGGRAPH papers as well, you’ll see that current Adobe employees had 11 of the 108 papers in the conference."
Now, let me inject a disclaimer: just because a particular researcher has worked on a particular technology in his or her past life, it doesn’t follow that a specific feature will show up in a particular Adobe product. How’s that for non-committal? ;-) In any case, it’s just exciting that so many smart folks are joining the team (more brains to hijack!).
[Update: Cambridge, MA-based Xconomy provides additional context for this news.]
"Holy crap"-worthy imaging technology
Wow–now this I haven’t seen before: Israeli brainiacs Shai Avidan and Ariel Shamir have created a pretty darn interesting video that demonstrates their technique of "Seam Carving for Content-Aware Image Resizing." When scaling an image horizontally or vertically (e.g. making a panorama narrower), the technology looks for paths of pixels that can be removed while causing the least visual disruption. Just as interesting, if not more so, I think, is the way the technology can add pixels when increasing image dimensions. Seriously, just check out the video; I think you’ll be blown away. (More info is in a 20MB PDF, in which they cite work by Adobe’s Aseem Agarwala–the creator of Photoshop CS3’s Auto-Blend Layer code.) [Via Geoff Stearns]
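If you’d like to poke at the idea yourself, the core of the reduction case is compact enough to sketch: build an energy map from the image gradients, use dynamic programming to find the connected vertical path with the least total energy, and delete one pixel per row along it. This is only the textbook skeleton of the technique; the paper’s object protection, seam insertion for enlarging, and all the clever bits are left out.

```python
# Textbook seam-carving sketch: gradient-magnitude energy, dynamic programming
# to find the cheapest 8-connected vertical seam, then remove it. Reduction only.
import numpy as np

def remove_vertical_seam(img):
    gray = img.mean(axis=2)
    energy = np.abs(np.gradient(gray, axis=0)) + np.abs(np.gradient(gray, axis=1))
    H, W = energy.shape
    cost = energy.copy()
    for y in range(1, H):
        up = cost[y - 1]
        left = np.roll(up, 1);   left[0] = np.inf
        right = np.roll(up, -1); right[-1] = np.inf
        cost[y] += np.minimum(np.minimum(left, up), right)
    # Trace the cheapest seam from the bottom row back to the top.
    seam = np.empty(H, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(H - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, W)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    # Drop one pixel per row along the seam.
    keep = np.ones((H, W), dtype=bool)
    keep[np.arange(H), seam] = False
    return img[keep].reshape(H, W - 1, img.shape[2])

def carve(img, pixels):
    for _ in range(pixels):
        img = remove_vertical_seam(img)
    return img
```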
I hope to share more good stuff from SIGGRAPH soon. While I was being stuffed with ham sandwiches by kindly Irish folks, a number of Adobe engineers were speaking at & exploring the show. Todor Georgiev, one of the key minds behind the Healing Brush, has been busily gluing together his own cutting edge optical systems. More on that soon.
Digital imaging goes to court
CNET reported recently on a court case that involved image authentication software as well as human experts, both seeking to distinguish unretouched photographs from those created or altered using digital tools. After disallowing the software, written by Hany Farid & his team at Dartmouth, the judge ultimately disallowed a human witness, ruling that neither one could adequately distinguish between real & synthetic images. The story includes some short excerpts from the judge’s rulings, offering some insight into the legal issues at play (e.g. "Protected speech"–manmade imagery–"does not become unprotected merely because it resembles the latter"–illegal pornography, etc.).
As I’ve mentioned previously, Adobe has been collaborating with Dr. Farid & his team for a few years, so we wanted to know his take on the ruling. He replied,
The news story didn’t quite get it right. Our program correctly classifies about 70% of photographic images while correctly classifying 99.5% of computer-generated images. That is, an error rate of 0.5%. We configured the classifier in this way so as to give the benefit of the doubt to the defendant. The prosecutor decided not to use our testimony because of other reasons, not because of a high error rate.
The defense argues that the lay person cannot tell the difference between photographic and CG images. Following this ruling by Gertner, we performed a study to see just how well human subjects can distinguish the two. They turn out to be surprisingly good. Here is a short abstract describing our results. [Observers correctly classified 83% of the photographic images and 82% of the CG images.]
Elsewhere in the world of "Fauxtography" and image authenticity:
- In the wake of last summer’s digital manipulation blow-up, Reuters has posted guidelines on what is–and is not–acceptable to do to an image in Photoshop. [Via]
- Calling it "’The Most Culturally Significant Feature’ of Canon’s new 1D MkIII," Micah Marty heralds "the embedding of inviolable GPS coordinates into ‘data-verifiable’ raw files."
- Sort of the Ur-Photoshop: This page depicts disappearing commissars and the like from Russia, documenting the Soviet government’s notorious practice of doctoring photos to remove those who’d fallen from favor. [Via]
- These practices know no borders, as apparently evidenced by a current Iranian controversy, complete with Flash demo. [Via Tom Hogarty]
- Of course, if you really want to fake people out, just take a half-naked photo of yourself, mail it to the newspaper, and tell them that it’s a Gucci ad. Seems to work like a charm. [Via]
[Update: PS–Not imaging but audio: Hart Shafer reports on Adobe Audition being used to confirm musical plagiarism.]