NASA, using a digital 3D model of the Moon built from Lunar Reconnaissance Orbiter global elevation maps and image mosaics, produced this lovely tour of our nearby neighbor. The lighting is derived from actual Sun angles during lunar days in 2018.
The filmmakers write,
The visuals were composed like a nature documentary, with clean cuts and a mostly stationary virtual camera. The viewer follows the Sun throughout a lunar day, seeing sunrises and then sunsets over prominent features on the Moon. The sprawling ray system surrounding Copernicus crater, for example, is revealed beneath receding shadows at sunrise and later slips back into darkness as night encroaches.
“So we beat on, boats against the current, borne back ceaselessly into the past…”
Sitting in my parents’ house, surrounded by my dad’s old college books and mine, I’m struck by a certain melancholy—a mix of memory, gratitude, and loss. As it happens, Margot just told me about Insta Repeat, a feed that catalogs the repetitiousness of Instagram photography. This makes me think of “vemödalen” (“the frustration of photographing something amazing when thousands of identical photos already exist”)—and just searching this blog for that term shows my current unoriginality in its use:
The one-man (I believe) band behind the Focus app for iOS continues to apply the awesome sauce—now adding the ability to create & modify light sources in portrait-mode images (which it treats as 3D). Check it out:
“I soon realized that the wide angle lens gives the iPhone an incredibly close focus point, allowing me to capture hard-to-pull-off wide-angle macro photos and videos,” Torres tells PetaPixel. “I set my iPhone to 240fps on 1080p (which my Canon 1DX Mark II can’t even handle), put on the wide angle lens, set it next to a hummingbird feeder in the cloud forests of Sumaco, and pressed record.”
Man, I sure love being a dad. Our resident railfan & little old man Henry (age 9) loves to get us out biking to watch the evening parade of trains (Cal, Amtrak, ACE, freight), and tonight we brought my drone. I’m fond of this shot, with accompaniment kindly provided by Eels:
And what can I say: our in-house editor (age 10) insisted on the closing title. 😌
And just for yuks, here’s the scene in 360º pano form:
For the past four years, The Ocean Agency has revealed the ocean to the world through Google Street View. Along the way, we’ve encountered a few unexpected guests. Follow along as our dive team encounters the world’s largest, most dangerous and most surprising sharks.
The P1000 is off the chain. “It starts a little wider than your typical smartphone camera lens,” says PopPhoto, “and can zoom far enough that you can focus on objects that are literally miles away.” Nikon says,
“We could in theory design the same spec lens for a DSLR, but it would be nearly impossible to create… [A] 3000mm lens with a maximum aperture of f/8 built for a DSLR sensor would need to have a front lens element with a diameter of about 360mm (more than 14 inches)!”
Back in the way back, the Adobe User Ed team got in trouble for publishing a Healing Brush tutorial that demonstrated how to remove watermarks (sorry, photographers!). Now bots promise to do the same, only radically faster & better:
“Recent deep learning work in the field has focused on training a neural network to restore images by showing example pairs of noisy and clean images,” NVIDIA writes. “The AI then learns how to make up the difference. This method differs because it only requires two input images with the noise or grain.
“Without ever being shown what a noise-free image looks like, this AI can remove artifacts, noise, grain, and automatically enhance your photos.”
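For the curious, here’s a toy PyTorch sketch of that idea: the training target is simply a second noisy capture of the same scene, never a clean image. The tiny network and synthetic data below are stand-ins of my own, not NVIDIA’s actual model or pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy Noise2Noise-style training step: the "label" is just another noisy view
# of the same image, so the network never sees clean ground truth.
# (Stand-in network & synthetic data, not NVIDIA's actual model or pipeline.)
denoiser = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

clean = torch.rand(8, 3, 64, 64)                  # unseen "ground truth"
noisy_a = clean + 0.1 * torch.randn_like(clean)   # two independently noisy
noisy_b = clean + 0.1 * torch.randn_like(clean)   # captures of the same image

loss = F.mse_loss(denoiser(noisy_a), noisy_b)     # noisy supervises noisy
optimizer.zero_grad()
loss.backward()
optimizer.step()
```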
You know what’s really hard? Flying steadily in one direction while smoothly sweeping the camera around to focus on a subject, maybe climbing or descending, and maybe tilting the camera, all at once. Yeah, just kidding: it’s nearly impossible.
But maybe now*, through the use of Course Lock mode & with this guidance from Drone Film Guide, I can pull it off.
In a nutshell:
Pick a heading & speed
Start flying back & forth along this fixed path while varying rotation/height/tilt
Dial down the sensitivity of your yaw control
In a second installment, Stewart goes into more detail comparing Course Lock to Tap Fly:
*“Now” is relative: Yesterday my luck finally ran out as I flew the Mavic into some telephone wires. At least it’s not at the bottom of Bixby Canyon or Three-Mile Slough, where other power lines threatened to put it on previous (mis)adventures. (“God helps old folks & fools…”) The drone took a hard bounce off the pavement, necessitating a service trip to reset the gimbal (which moves but now doesn’t respond to control inputs), but overall it’s amazingly sturdy. 💪😑
Mick Kalber was willing to stick his neck out—literally—to offer a glimpse into Hawaii’s explosive landscape. I’m struck by the visual variety of the flows (seemingly crunchy, creamy, crusted, and more):
The Volcano Goddess Pele is continually erupting hot liquid rock into the channelized rivers leading to the Pacific Ocean. Most of the fountaining activity is still confined within the nearly 200-foot high spatter cone she has built around that eruptive vent. Her fiery fountains send 6-9 million cubic meters of lava downslope every day… a volume difficult to even wrap your mind around!
Who better to sell radar detectors than the people who make radar guns?
From DeepFakes (changing faces in photos & videos) to Lyrebird (synthesizing voices) to video puppetry, a host of emerging tech threatens to further undermine trust in what’s recorded & transmitted. With that in mind, the US government’s DARPA has gotten involved:
DARPA’s MediFor program brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform.
Accordingly, I like seeing that Adobe’s jumping in to detect the work of its own & others’ tools:
Last year Google’s Aseem Agarwala & team showed off ways to synthesize super creamy slow-motion footage. Citing that work, a team at NVIDIA has managed to improve upon the quality, albeit taking more time to render results. Check it out:
[T]he team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames-per-second. Once trained, the convolutional neural network predicted the extra frames.
Hmm—is there really a big market for specialized photo-editing hardware like this? Apparently so, as Loupedeck is building a new version of its device, expanding app coverage, and promoting it with a glossy launch video:
The twin towers will also be located in Shenzhen, Guangdong, and will feature giant quadruple-height indoor drone flight testing spaces as well as a sky bridge that will be used for showing off new drones and technologies.
Per PetaPixel (which features a great gallery of images):
In all, the build took Sham about 2 hours and used 1,120 different pieces. Sham says she’s hoping to create a system in which you can create photos using the LEGO camera and a smartphone.
Sham has submitted her Hasselblad build to LEGO Ideas, LEGO’s crowdsourced system for suggesting future LEGO kits. LEGO has already selected Sham’s build as a “Staff Pick.” If Sham’s project attracts 10,000 supporters (it currently has around 500 at the time of this writing), then it will be submitted for LEGO Review, during which LEGO decision makers will hand-pick projects to become new official LEGO Ideas sets.
Cool stuff, coming soon: Basically, “upload portrait-mode image, then let Facebook extrude it into a 3D model, fill in the gaps, and display it interactively a la panoramas.”
10+ years ago, I really hoped we’d get Photoshop to understand a human face as a 3D structure that one could relight, re-pose, etc. We never got there, sadly. Last year we gave Snapseed the ability to change the orientation of a face (see GIF)—another small step in the right direction. Progress marches forward, and now USC prof. Hao Li & team have demonstrated a method for generating models with realistic skin from just ordinary input images. It’ll be fun to see where this leads (e.g. see previous).
Hmm—I really like the promise of this app (leveraging depth data to apply realistic lighting effects), but I’m finding the UI vexing & the results highly hit-or-miss. Judge for yourself:
Perhaps my earliest memory (circa age 4) is of watching a giant tornado bounce across the plains of southern Wisconsin, blazing through arcing power lines & bounding over a farmhouse as my mom debated whether to force me & my grandparents out of the car to shelter in a ditch. “It looks like a big ice cream cone!” I said.
Photographer Mike Olbinski succeeded in capturing a half-mile-wide cone of his own in this striking clip:
“God takes care of old folks and fools,” said Chuck D, and after miraculously not parking my drone at the bottom of Three Mile Slough thanks to high crosswinds & power lines, I’m grateful to somehow get it back with this footage. (Hat-tip to the presumably freaked-out bird who makes a cameo & who didn’t try to peck my bird out of the sky.)
Heh—years ago College Humor parodied Photoshop demo videos (right down to the presenter claiming to be Bryan O’Neil Hughes), but I hadn’t seen this one—in which “Hughes” is a guest of the North—until now:
I’m oddly intrigued by the immediacy of this 107-year-old archival footage showing New York City. As Khoi Vinh explains,
The footage has been altered in two subtle but powerful ways: the normally heightened playback speed of film from this era has been slowed down to a more “natural” pace; and the addition of a soundtrack of ambient city sounds, subtly timed with the action on screen.
Man, I’m really eager to see what the Micronaxx can do with this:
Tour Creator […] enables students, teachers, and anyone with a story to tell, to make a VR tour using imagery from Google Street View or their own 360 photos. The tool is designed to let you produce professional-level VR content without a steep learning curve. […]
Once you’ve created your tour, you can publish it to Poly, Google’s library of 3D content. From Poly, it’s easy to view. All you need to do is open the link in your browser or view in Google Cardboard.
In past posts I’ve talked about how our team has enabled realtime segmentation of videos, and yesterday I mentioned body-pose estimation running in a Web browser. Now that tech stack is surfacing in Google Photos, powering the new effect shown below and demoed by Sundar super briefly here.
Starting today, you may see a new photo creation that plays with pops of color. In these creations, we use AI to detect the subject of your photo and leave them in color–including their clothing and whatever they’re holding–while the background is set to black and white. You’ll see these AI-powered creations in the Assistant tab of Google Photos.
Thoughts? If you could “teach Google Photoshop,” what else would you have it create for you?
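If you want to play with the effect yourself, the compositing step is trivial once you have a subject mask from whatever segmentation model you like (Google’s exact model isn’t public, and the filenames below are hypothetical). A minimal Python sketch:

```python
import numpy as np
from PIL import Image

def color_pop(image_path, mask_path, out_path):
    """Keep the masked subject in color; render everything else in black & white."""
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32)
    # Mask: white (255) over the subject, black elsewhere -- e.g. the output of
    # any person-segmentation model you have handy.
    mask = np.asarray(Image.open(mask_path).convert("L"), dtype=np.float32) / 255.0
    gray = img.mean(axis=2, keepdims=True).repeat(3, axis=2)  # quick luminance proxy
    out = img * mask[..., None] + gray * (1.0 - mask[..., None])
    Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save(out_path)

# Hypothetical filenames, for illustration only.
color_pop("portrait.jpg", "subject_mask.png", "color_pop.jpg")
```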
I’m honestly not sure what to make of this wacky-looking new device, but it’s weird/interesting enough to share. I can pretty confidently say that no one wants to refocus photos/video after the fact (RIP, Lytro—and have you ever done this with an iPhone portrait image, or even known that you can?), but simply gathering depth data in 180º is interesting, as (maybe) is 360º timelapse. Check it out:
Scattered throughout the place — which seems to be a recreation of a real Oakland home — were cut-out squares floating in the air. When I hovered over them with a cursor, I saw thumbnails of photos and videos, all of which were supposedly taken in the room that I was in. When I clicked on the thumbnails, I teleported over to them so that I could see the photos and videos up close. One was a photo of a family, while another was a short video clip of a young couple getting ready for prom.
What an amazing time we live in for visual storytellers. In my day, says Cranky Old Man Nack, we used to beg our parents for giant, heavy camcorders that we couldn’t afford. Now it’s possible to mount a phone or action camera to this lightweight, powered cable rig and capture amazing moving shots. Check it out:
Tested by athletes, filmmakers and explorers all over the world and actors such as Redbull, GoPro and Nitro Circus. Wiral LITE will bring new angles to filming and make the impossible shots possible. It enables filming where drones might be difficult or even illegal to use, such as the woods, indoors and over people.
Crazy to think that it’s been 8+ years since Photoshop added Content-Aware Fill—itself derived from earlier PatchMatch technology. Time & tech march onward, and new NVIDIA research promises to raise the game. As PetaPixel notes,
What sets NVIDIA’s system apart from the current “Content-Aware Fill” in Photoshop is that NVIDIA’s tool doesn’t just glean information from surrounding pixels to figure out what to fill holes with — it understands what the subject should look like.
Check it out:
Researchers from NVIDIA, led by Guilin Liu, introduced a state-of-the-art deep learning method that can edit images or reconstruct a corrupted image, one that has holes or is missing pixels. The method can also be used to edit images by removing content and filling in the resulting holes. Learn more about their research paper “Image Inpainting for Irregular Holes Using Partial Convolutions.”
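As best I understand the paper, the core trick is a “partial” convolution that looks only at known pixels, renormalizes by how many valid pixels each window actually saw, and shrinks the hole mask layer by layer. Here’s a rough PyTorch sketch of that single operation (my own reading of the idea, not NVIDIA’s code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Rough sketch of a partial convolution layer, not NVIDIA's implementation."""
    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        # Fixed all-ones kernel, used only to count valid pixels in each window.
        self.register_buffer("ones", torch.ones(1, 1, kernel_size, kernel_size))
        self.window = kernel_size * kernel_size
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # mask: (N, 1, H, W) with 1.0 where pixels are known, 0.0 inside the hole.
        with torch.no_grad():
            valid = F.conv2d(mask, self.ones, stride=self.stride, padding=self.padding)
        keep = (valid > 0).float()
        out = self.conv(x * mask)                        # convolve only the known pixels
        bias = self.conv.bias.view(1, -1, 1, 1)
        out = (out - bias) * (self.window / valid.clamp(min=1.0)) + bias
        out = out * keep                                 # fully-masked windows stay zero
        return out, keep                                 # updated mask: holes shrink each layer

# Toy usage: a 64x64 RGB image whose mask has a square hole knocked out.
pconv = PartialConv2d(3, 16, kernel_size=3, padding=1)
image = torch.rand(1, 3, 64, 64)
mask = torch.ones(1, 1, 64, 64)
mask[..., 20:40, 20:40] = 0.0
features, updated_mask = pconv(image * mask, mask)
```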
Let’s face it, a Scottish or Irish brogue objectively makes everything more awesome. (As Ron Burgundy would say, “It’s just science.”) With that in mind, I found this short guide from Captain Cornelius (what a name!) both charming & useful:
I remain a rank amateur when it comes to filming with a drone, but my skills are creeping upward when I get a chance to film. Last week we visited Point Reyes & spent a bit of time exploring the famous (and now sadly half-burnt) Point Reyes shipwreck. Besides taking a few shots with my DSLR, I was able to grab some fly-by footage, below.
A few things I’ve learned:
I wish I’d taken a few minutes to learn about Point of Interest Mode, which you can invoke easily via the controller (see another great little tutorial on that). It would’ve made getting these orbiting shots far easier & the results much smoother.
They say that “Every unhappy family is unhappy in its own way,” and nearly every 360º stitching attempt with Lightroom or Camera Raw craps out in some uniquely ghoulish manner. (I’m presently gathering materials to share with my Adobe friends.) Having said that, I have a certain affection for the weird result it produced below. ¯\_(ツ)_/¯
The PTGui trial seems to handle the images fine, but on principle I don’t feel like paying $125-$250 for a feature that’s already supposed to work in the apps I’m paying for.
Consequently I’m finding it much better to stitch panos directly on-device using the DJI Go app. (Even that doesn’t always work, sometimes stalling out for no discernible reason.) I’m also finding it impossible to load pano images back onto the SD card and stitch them in the app—so seize the opportunity while you can.
The stitched results often (always?) fail to register as panos in Facebook & Google Photos, so I use the free Exif Fixer utility to tweak their metadata. It’s all kind of an elaborate pain in the A, but I’m sticking with this flow until I find a smoother one.
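For what it’s worth, my understanding is that Exif Fixer and similar utilities just inject the XMP GPano tags that Facebook & Google Photos check before treating an image as a 360º sphere. If you have exiftool installed, a rough Python equivalent (with a hypothetical filename) would be:

```python
import subprocess

# Hypothetical filename; requires exiftool to be installed on your system.
pano = "dji_pano_stitched.jpg"

subprocess.run([
    "exiftool",
    "-XMP-GPano:ProjectionType=equirectangular",  # the tag 360º viewers key off of
    "-XMP-GPano:UsePanoramaViewer=true",
    pano,
], check=True)
```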
Tangentially related: I shot this 360º amidst the redwoods where we camped. I stitched it in-app, then uploaded it to Google Maps so that I could embed the interactive pano (see fullscreen).
Check out this impressive little video from Japan. “Drone pilot Katsu FPV,” writes PetaPixel, “says the footage was shot with a 1.6-inch drone and the $80 RunCam Split Mini FPV camera, and that stabilization was applied in post.”
Rad—and I love that we can still see the Apollo 17 lander & rover on the surface!
As the visualization moves around the near side, far side, north and south poles, we highlight interesting features, sites, and information gathered on the lunar terrain.
Heh—this crazy project provides an extreme close-up on the incredible craftsmanship & patience of the creators of Germany’s famous miniature world. The team writes,
Street View cameras have floated on gondolas in Venice, ridden on camels in the Liwa Desert and soared on snowmobiles on Canadian slopes. But to capture the nooks and crannies in Miniatur Wunderland, we worked with our partner at Ubilabs to build an entirely new—and much smaller—device. Tiny cameras were mounted on tiny vehicles that were able to drive the roads and over the train tracks, weaving through the Wunderland’s little worlds to capture their hidden treasures.
As I mentioned the other day, Moment is Kickstarting efforts to create an anamorphic lens for phones like Pixel & iPhone. In the quick vid below, they explain its charms—cool lens flares, oval bokeh, and more:
“The network is the computer,” and maybe the image, too: check out this sneak peek of Adobe tech that fills large holes in images by querying a database of images to find suitable matches:
Photographer Páraic McGloughlin took Google Earth satellite photos and strung them together into “Arena,” an extremely fast-paced, bird’s-eye-view animation. (Seriously, don’t watch this if you’re sensitive to strobing.)
Well, that escalated quickly: For this new set of mobile filmmaking tools (lens, battery, gimbal) Moment hit their $50k funding goal in just over half an hour, and as of this writing they’ve easily cleared the $750k mark. Check ‘em out:
Wylie Overstreet and Alex Gorosh took a telescope around the streets of LA and invited people to look at the Moon through it. Watching people’s reactions to seeing such a closeup view of the Moon with their own eyes, perhaps for the first time, is really amazing.
By combining software-based visual tracking with the motion metadata from the hardware sensors, we built a new hybrid motion estimation for motion photos on the Pixel 2.
Some say the world will end in fire,
Some say in ice…
Photographer Thomas Blanchard “represents all four seasons by showing flowers blooming, submerged in water that freezes over, burning, and shrouded in clouds of colorful inks.” Amazing (stick with it):
From what I’ve tasted of desire
I hold with those who favor fire.
But if it had to perish twice,
I think I know enough of hate
To say that for destruction ice
Is also great
And would suffice.
Skydio boasts 13 cameras that power really impressive collision-avoidance tech, allowing it to track a person even through obstacles like woods. Check it out:
Here’s how the “Autonomy Engine” works:
The device looks bulky, but it’s said to fit into any backpack that can accommodate a 17-inch laptop. Being roughly 3x the price ($2,499), size, and weight of a Mavic Air, the Skydio makes me ask a few questions:
What’s the average utilization of any drones people buy? They seem like action-sports cameras: aspirational, highly specialized, rarely used. (Thus it made perfect sense to me that GoPro would get into the drone business, and that doing so might just compound their existing problems.)
How important is the kind of aerial selfie mode that really sets Skydio apart? That is, of all the times one wants to use a drone, what percentage involve, say, mountain biking through the woods? The obvious concern is that it falls into a real niche (the small Venn diagram overlap of “actually doing action sports” and “wanting to view from the air”).
Having said all that, the AI capabilities look like a great step forward, and I’m eager to learn more as the device starts reaching customers.
We’ve just returned from spending a week in Leadville, CO, which at 10,200 ft. is the highest incorporated city in the US. Despite the inevitable dehydration & insomnia that always afflict me there, I didn’t find the elevation quite high enough, so I brought my Mavic Pro (piloted by a custom-printed “Googler” minifig) to capture a few shots of Leadville from above. You can find some 360º panoramas (uploaded to & embedded via Google Maps) and videos below.
Quick observations in case you’d find them useful:
DJI Goggles do make it easier to fly while panning/tilting one’s camera, and I got my smoothest shots yet. On the downside, I’ve yet to figure out how to adjust the picture to look fully clear to my eyes (with or without glasses), and the fact that you apparently can’t just put them into head-tracking mode before a flight & keep them there is a pain.
360º panorama capture is totally my jam. Whereas I find capturing video stressful (is it smooth enough? will anyone find it interesting?) and editing video laborious, shooting panos is almost trivially easy. Having said that, I ran into a few snags:
Downloading & installing the needed firmware update took several tries, due partly to dodgy mountain WiFi & partly to the DJI app’s less-than-straightforward flow.
An inscrutable, non-dismissible (!) warning screen would sometimes pop up to let us know that we were a few miles from an airport. Its blocking most of the screen, coupled with the wonky path one must follow to invoke pano capture mode, meant that we blew it on at least one pano, accidentally capturing 180º instead of 360º—and facing the wrong way at that!
I found that Facebook wouldn’t recognize the panos as panos, so after some searching I found & downloaded the metadata-tweaking tool Exif Fixer. Running it on the images did the trick.
Stitching images on my iPhone X was fairly quick (taking perhaps a minute for a full 360) and did a pretty good job (PTGui, which I downloaded, seemed to do no better). On the downside it sucks that the process is modal, blocking you from using the app to fly, so you’d probably do well to wait & stitch after landing.
The exposure on what might’ve been my best pano was totally blown out for reasons I don’t understand. ¯\_(ツ)_/¯