Taiwan’s Alishan Forest Railway celebrates its 106th birthday this year. Check out this behind-the-scenes look at how the Google Street View team worked with a local university to help the Street View Trekker hitch a ride on the century-old railroad.
What an amazing time we live in for visual storytellers. In my day, says Cranky Old Man Nack, we used to beg our parents for giant, heavy camcorders that we couldn’t afford. Now it’s possible to mount a phone or action camera to this lightweight, powered cable rig and capture amazing moving shots. Check it out:
Tested by athletes, filmmakers and explorers all over the world, and by brands such as Red Bull, GoPro and Nitro Circus. Wiral LITE will bring new angles to filming and make the impossible shots possible. It enables filming where drones might be difficult or even illegal to use, such as in the woods, indoors and over people.
Want to “photograph an imagined cyberpunk world in-camera without any post-production”? ARwall promises you can through “a visual effects compositing technology that combines new mixed-reality screens along with the oldest (and maybe best) trick in the filmmaking book.”
This piece delves into the inception and development process, and shows production test footage of the first-ever real-time, in-camera, real-light, real-lens, perspective-adapting, mixed-reality, rear-screen compositing technique.
Crazy to think that it’s been 8+ years since Photoshop added Content-Aware Fill—itself derived from earlier PatchMatch technology. Time & tech march onward, and new NVIDIA research promises to raise the game. As PetaPixel notes,
The difference between NVIDIA’s system and the current “Content-Aware Fill” in Photoshop is that NVIDIA’s tool doesn’t just glean information from surrounding pixels to figure out what to fill holes with — it understands what the subject should look like.
Check it out:
Researchers from NVIDIA, led by Guilin Liu, introduced a state-of-the-art deep learning method that can reconstruct a corrupted image — one that has holes or missing pixels. The method can also be used to edit images by removing content and filling in the resulting holes. Learn more about their research paper, “Image Inpainting for Irregular Holes Using Partial Convolutions.”
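The core trick in the paper is the partial convolution: the kernel is applied only to valid (non-hole) pixels, the result is renormalized by how much of the window was valid, and the hole mask shrinks layer by layer. Here’s a minimal single-channel NumPy sketch of that idea (my own toy illustration, not NVIDIA’s code):

```python
import numpy as np

def partial_conv2d(x, mask, weight, bias=0.0):
    """Toy single-channel partial convolution (stride 1, 'valid' padding).

    x:      (H, W) image, possibly with holes
    mask:   (H, W) binary mask; 1 = valid pixel, 0 = hole
    weight: (k, k) convolution kernel
    """
    k = weight.shape[0]
    out_h, out_w = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((out_h, out_w))
    new_mask = np.zeros((out_h, out_w))
    win_size = k * k
    for i in range(out_h):
        for j in range(out_w):
            xw = x[i:i + k, j:j + k]
            mw = mask[i:i + k, j:j + k]
            valid = mw.sum()
            if valid > 0:
                # Convolve only over valid pixels, scaled up to compensate
                # for the missing ones (the paper's renormalization term).
                out[i, j] = (weight * xw * mw).sum() * (win_size / valid) + bias
                # Mask update: any valid input pixel makes this output valid,
                # so holes shrink as layers stack.
                new_mask[i, j] = 1.0
    return out, new_mask
```

A nice sanity check: on a constant image, the renormalization makes the output identical whether or not the window overlaps a hole — the hole simply stops contaminating its neighbors.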
Let’s face it, a Scottish or Irish brogue objectively makes everything more awesome. (As Ron Burgundy would say, “It’s just science.”) With that in mind, I found this short guide from Captain Cornelius (what a name!) both charming & useful:
I remain a rank amateur when it comes to filming with a drone, but my skills are creeping upward when I get a chance to film. Last week we visited Point Reyes & spent a bit of time exploring the famous (and now sadly half-burnt) Point Reyes shipwreck. Besides taking a few shots with my DSLR, I was able to grab some fly-by footage, below.
They say that “Every unhappy family is unhappy in its own way,” and nearly every 360º stitching attempt with Lightroom or Camera Raw craps out in some uniquely ghoulish manner. (I’m presently gathering materials to share with my Adobe friends.) Having said that, I have a certain affection for the weird result it produced below. ¯\_(ツ)_/¯
The PTGui trial seems to handle the images fine, but on principle I don’t feel like paying $125–$250 for a feature that should work in the apps I’m already paying for.
Consequently I’m finding it much better to stitch panos directly on-device using the DJI Go app. (Even that doesn’t always work, sometimes stalling out for no discernible reason.) I’m also finding it impossible to load pano images back onto the SD card and stitch them in the app—so seize the opportunity while you can.
The stitched results often (always?) fail to register as panos in Facebook & Google Photos, so I use the free Exif Fixer utility to tweak their metadata. It’s all kind of an elaborate pain in the A, but I’m sticking with this flow until I find a smoother one.
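For the curious: what Exif Fixer does under the hood is inject Google’s GPano XMP metadata, which is what Facebook & Google Photos look for before treating a JPEG as an interactive 360º pano. Here’s a hypothetical helper (the GPano namespace and tag names are real; the function itself is just my sketch of the packet such tools embed):

```python
# GPano XMP packet for a full 360x180 equirectangular pano, where the
# cropped area equals the full pano area. Viewers key off ProjectionType
# plus the width/height fields to enable the 360 viewer.
GPANO_TEMPLATE = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:GPano="http://ns.google.com/photos/1.0/panorama/"
   GPano:ProjectionType="equirectangular"
   GPano:FullPanoWidthPixels="{w}"
   GPano:FullPanoHeightPixels="{h}"
   GPano:CroppedAreaImageWidthPixels="{w}"
   GPano:CroppedAreaImageHeightPixels="{h}"
   GPano:CroppedAreaLeftPixels="0"
   GPano:CroppedAreaTopPixels="0"/>
 </rdf:RDF>
</x:xmpmeta>"""

def make_gpano_xmp(width, height):
    """Build the GPano XMP string for an equirectangular pano of the
    given pixel dimensions (width should be 2x height for full 360x180)."""
    return GPANO_TEMPLATE.format(w=width, h=height)
```

Embedding this packet into the JPEG’s APP1 segment is the fiddly part, which is exactly why a utility like Exif Fixer (or exiftool) earns its keep.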
The researchers tested the method with a variety of body shapes, clothing, and backgrounds and found that it had an average accuracy within 5 millimeters, they will report in June at the Computer Vision and Pattern Recognition conference in Salt Lake City. The system can also reproduce the folding and wrinkles of fabric, but it struggles with skirts and long hair. With a model of you, the researchers can change your weight, clothing, and pose—and even make you perform a perfect pirouette. No practice necessary.
Check out this impressive little video from Japan. “Drone pilot Katsu FPV,” writes PetaPixel, “says the footage was shot with a 1.6-inch drone and the $80 RunCam Split Mini FPV camera, and that stabilization was applied in post.”
Google’s Arts & Culture team partnered with CyArk to travel to over 25 sites across 18 countries, using drone imagery and 3D laser scanners to capture intricate portraits of each place. You can explore the story and 3D model of each historic location—from Syria’s Al-Azem palace, to the Temple of Eshmoun in Lebanon, to the Mayan city of Chichen Itza—on the site.
Heh—this crazy project provides an extreme close-up on the incredible craftsmanship & patience of the creators of Germany’s famous miniature world. The team writes,
Street View cameras have floated on gondolas in Venice, ridden on camels in the Liwa Desert and soared on snowmobiles on Canadian slopes. But to capture the nooks and crannies in Miniatur Wunderland, we worked with our partner at Ubilabs to build an entirely new—and much smaller—device. Tiny cameras were mounted on tiny vehicles that were able to drive the roads and over the train tracks, weaving through the Wunderland’s little worlds to capture their hidden treasures.
Heh—it’s often fun to see the gap between what products are designed (or at least marketed) to do, and how they’re actually used. Saturday Night Live delightfully skewered yoga pants in this pitch for “Pro-Chiller Leggings,” which can be worn as “pants, pajamas, and a napkin” (reminding me of “a floor wax and a dessert topping!”). Now “just sit the hell down and chill”:
You’re the puppet now, dog: Designer, illustrator, and natural-born showman Dave Werner does his best Strongbad (or is it Triumph?) impression in this altogether charming (and Brony-tastic) live demo of the latest from Adobe Character Animator:
If & when Dave does Trogdor, my life will be complete. [YouTube]
A dog (clear favorite), UFO, heart, basketball, and spider join the dinosaur, chicken, alien, gingerbread man, planet, and robot. The latter six stickers have been slightly rearranged, while the new ones are at the beginning of the carousel.
Enjoy! And let us know what else you’d like to see.
As I mentioned the other day, Moment is Kickstarting efforts to create an anamorphic lens for phones like Pixel & iPhone. In the quick vid below, they explain its charms—cool lens flares, oval bokeh, and more:
Another from the “Awesome Past Lives I Never Knew My Colleagues Had” Files: I just learned that Tarik Abdel-Gawad, with whom I’ve been collaborating on AR stuff, programmed & performed the amazing “Box” projection-mapping robot demo with Bot & Dolly before Google acquired that company. It’s now a few years old but no less stunning:
Bot & Dolly produced this work to serve as both an artistic statement and technical demonstration. It is the culmination of multiple technologies, including large scale robotics, projection mapping, and software engineering. We believe this methodology has tremendous potential to radically transform theatrical presentations, and define new genres of expression.
Google had some dorky fun recently with an April Fool’s announcement of a Cloud Hummus API:
Silly, yes—but apparently not as far-fetched as you’d think. Check out computer vision being trained to identify ramen by shop:
Recently, data scientist Kenji Doi used machine learning models and AutoML Vision to classify bowls of ramen and identify the exact shop each bowl is made at, out of 41 ramen shops, with 95 percent accuracy. Sounds crazy (also delicious), especially when you see what these bowls look like. […]
You don’t have to be a data scientist to know how to use it—all you need to do is upload well-labeled images and then click a button. In Kenji’s case, he compiled a set of 48,000 photos of bowls of soup from Ramen Jiro locations, along with labels for each shop, and uploaded them to AutoML Vision.
This super fun combo of style transfer & performance capture (see video below in case you missed the sneak peek last fall) is now accepting applications for beta testers:
Project Puppetron lets you capture your own face via webcam and, through a simple setup process, create a puppet of yourself in the style of a piece of referenced art.
[Y]ou perform various facial expressions and mouth shapes for lip sync, and then select the reference art and the level of stylization you want to apply to create a fully-realized, animated puppet.
Once Project Puppetron has created your puppet, you can perform your character or modify your puppet as you would any other puppet in Character Animator. Then, bring further dimension to your character’s performance with rigging, triggerable artwork, layer cycles, etc., through the broad array of tools offered in Character Animator.
Photographer Páraic McGloughlin took Google Earth satellite photos and strung them together into “Arena,” an extremely fast-paced, bird’s-eye-view animation. (Seriously, don’t watch this if you’re sensitive to strobing.)
Starting today, you can use Google Maps to join in my amazing adventures for April Fools this week. Are you prepared for a perplexing pursuit? I’ve shared my location with you on Android, iOS and desktop (rolling out now). To start the search, simply update your app or visit google.com/maps on desktop. Then press play when you see me waving at you from the side of your screen. You can even ask the Google Assistant on your phone, Chromebook or Home device, “Hey Google, Where’s Waldo?” to start.