Wow—this paper (don’t worry, I’m not going to read it either) promises to recreate face data from extremely low-res images. As Yonatan Zunger explains,
[I]t takes a pixelated image, and uses the fact that it knows it’s looking at a human face, and what human faces look like, to turn each pixel into a 4×4 grid of its best guess of which colors would have to have been there to both be consistent with a face shape and with the average color it saw.
On the right are the original pictures, at 32×32 resolution. On the left is what happens after they’re reduced down to 8×8, the sort of thing you would get when a camera is at the limit of its resolution. In the middle is what their algorithm recovered.
s/recreate face/find similar face in training set/
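To make the constraint in Zunger's explanation concrete, here is a minimal NumPy sketch of the "consistent with the average color it saw" part: a 32×32 guess is only admissible if each 4×4 block averages back to the corresponding 8×8 pixel. This is just the consistency check, not the paper's method; the learned face prior that picks among the many consistent images is the hard part, and the function names below are my own, not from the paper.

```python
import numpy as np

def downsample_by_averaging(hi_res: np.ndarray, factor: int = 4) -> np.ndarray:
    """Average each factor x factor block of an (H, W, 3) image."""
    h, w, c = hi_res.shape
    blocks = hi_res.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

def is_consistent(candidate_32: np.ndarray, observed_8: np.ndarray,
                  tol: float = 1e-3) -> bool:
    """Check that the 32x32 candidate averages back to the 8x8 observation."""
    return np.allclose(downsample_by_averaging(candidate_32), observed_8, atol=tol)

# Toy usage: blocky nearest-neighbor upscaling is trivially consistent,
# which shows how underdetermined the problem is -- many 32x32 images pass
# this check, and the prior (not the pixels) decides which face you get.
observed_8 = np.random.rand(8, 8, 3)
naive_guess = np.repeat(np.repeat(observed_8, 4, axis=0), 4, axis=1)
print(is_consistent(naive_guess, observed_8))  # True, but not face-like
```

The point of the toy check is the s/// above: the 8×8 pixels pin down only 64 average colors, so everything else in the "recovered" face comes from whatever faces the model was trained on.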