Fore-edge painting renders a scene on the edges of the pages of a book, and Martin Frost might be the last remaining professional fore-edge painter in the world. Here’s a peek at his vanishing craft:
Dating back centuries, the delicate art form places intricate scenes on the edges of a book's pages, cheekily hidden beneath the gold gilding. The beautiful paintings are visible only to the trained eye, but once you unlock the secret, you'll find pure magic.
Unleash the dank emotes! My teammates George & Tyler (see previous) are back at it running machine learning in your browser, this time to get you off the couch with the playful Move Mirror:
Move Mirror takes the input from your camera feed and maps it against a database of more than 80,000 images to find the best match. It's powered by TensorFlow.js—a library that runs machine learning models on-device, in your browser—which means the pose estimation happens directly in the browser, and your images are never stored or sent to a server. For a deep dive into how we built this experiment, check out this Medium post.
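The matching step can be sketched in a few lines: once pose estimation yields a set of keypoints, poses are compared by cosine similarity over normalized keypoint coordinates, so matching ignores where in the frame (and at what scale) the person appears. A minimal numpy sketch with hypothetical pose data; the 17-keypoint shape follows PoseNet's output, but the helper names here are illustrative, not Move Mirror's actual code:

```python
import numpy as np

def normalize(pose):
    # Pose as a (17, 2) keypoint array: translate to the origin and scale
    # to unit norm so matching is invariant to position and body size.
    p = pose - pose.mean(axis=0)
    return p / np.linalg.norm(p)

def best_match(query, database):
    # Cosine similarity between flattened, normalized keypoint vectors;
    # return the index of the closest stored pose.
    q = normalize(query).ravel()
    sims = [np.dot(q, normalize(d).ravel()) for d in database]
    return int(np.argmax(sims))

# Hypothetical database of five poses, plus a query that is one of them
# scaled and shifted -- normalization should make it an exact match.
rng = np.random.default_rng(0)
db = [rng.random((17, 2)) for _ in range(5)]
query = db[2] * 2.0 + 0.5
print(best_match(query, db))  # → 2
```

A real system would index the 80,000-image database with an approximate nearest-neighbor structure rather than a linear scan, but the similarity measure is the heart of it.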
Welcome to the era of the software-defined camera.
In this era, pocketable, connected cameras can reconstruct the world in three dimensions and superhuman detail, cars can perceive the objects around them without special sensors, and robots can thread the elusive needle autonomously.
Light’s highly accurate depth mapping can be used to create rich and complex environments for a wide range of applications including augmented reality.
Back in the way back, the Adobe User Ed team got in trouble for publishing a Healing Brush tutorial that demonstrated how to remove watermarks (sorry, photographers!). Now bots promise to do the same, only radically faster & better:
“Recent deep learning work in the field has focused on training a neural network to restore images by showing example pairs of noisy and clean images,” NVIDIA writes. “The AI then learns how to make up the difference. This method differs because it only requires two input images with the noise or grain.
“Without ever being shown what a noise-free image looks like, this AI can remove artifacts, noise, grain, and automatically enhance your photos.”
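The statistical trick that makes training on noisy pairs work: when the noise is zero-mean, the L2 loss against a noisy target has the same minimizer as against the clean target, because the expected value of a noisy image is the clean image. A toy numpy demonstration of that principle (a stand-in signal, not NVIDIA's actual training pipeline):

```python
import numpy as np

# Stand-in "clean image" and a generator of independently noisy copies.
rng = np.random.default_rng(42)
clean = rng.random(1000)
noisy = lambda: clean + rng.normal(0.0, 0.5, clean.shape)

# The L2-optimal per-pixel prediction given many noisy targets is their
# mean -- and with zero-mean noise, that mean converges to the clean
# signal even though no clean example was ever observed.
estimate = np.mean([noisy() for _ in range(500)], axis=0)

err_single = np.abs(noisy() - clean).mean()    # one noisy observation
err_estimate = np.abs(estimate - clean).mean() # the averaged estimate
print(err_estimate < err_single)  # → True
```

A denoising network generalizes this: instead of averaging many copies of one image, it learns the mapping from noisy input to that expected value across a whole dataset of noisy pairs.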
“Oh, is that True Love Waits conference?” my friend once snarkily asked as we drove past GPU conference attendees milling around downtown San Jose. “Is this Virgin-con?” Their dorktastic style comes to mind seeing demos for the helmet-mounted Wunder360.
Given that my trusty, if imperfect, Theta S 360º camera has gone MIA, I’m thinking about possible replacements. Having busted on the Wunder a bit, I’ll say I’m intrigued by the mapping possibilities. Given all it promises (especially relative to, say, the $499 Rylo camera), I’d worry that it’s oversold, especially at $159—but I guess we shall see.
The device promises:
Capturing 360º video with in-camera stitching, so no extra post-production software is needed;
Easy 3D scanning that lets anyone create in 3D;
AI-powered smart tracking that locks onto your favorite view;
Super-smooth stabilization, so you can say goodbye to shaky shots;
A compact, lightweight, portable body: pop the S1 in your pocket;
A 100ft waterproof case, so the S1 works with you anywhere.
Given a video as input, the fully trained PixelPlayer system splits the accompanying audio and identifies the sources of sound, then calculates the volume of each pixel in the image and "spatially localizes" it, i.e., identifies regions in the clip that generate similar sound waves.
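The "volume of each pixel" step can be sketched under one assumption: that the network has already produced a magnitude spectrogram for the clip and a soft mask per spatial location assigning audio energy to pixels. Everything below (shapes, mask normalization) is a hypothetical illustration of that bookkeeping, not PixelPlayer's actual architecture:

```python
import numpy as np

# Hypothetical shapes: a coarse 4x4 image grid and a 64x32 spectrogram.
rng = np.random.default_rng(1)
H, W, F, T = 4, 4, 64, 32
spec = rng.random((F, T))            # magnitude spectrogram (assumed given)
masks = rng.random((H, W, F, T))     # per-pixel soft masks (assumed given)
masks /= masks.sum(axis=(0, 1), keepdims=True)  # each bin fully assigned

# Per-pixel volume: total spectrogram energy assigned to that pixel.
volume = (masks * spec).sum(axis=(2, 3))        # shape (H, W)

# "Spatial localization": the loudest pixels mark where a source sits.
loudest = np.unravel_index(volume.argmax(), volume.shape)
print(volume.shape, loudest)
```

Because the masks are normalized to sum to one per time-frequency bin, the per-pixel volumes sum back to the total energy of the spectrogram; grouping pixels with similar assigned spectra is what yields the localized regions the description mentions.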