European astronaut Thomas Pesquet returned to Earth last month after spending six months aboard the International Space Station, where he captured the first Street View imagery taken beyond our planet:
[T]he Street View team worked with NASA at the Johnson Space Center in Houston, Texas and Marshall Space Flight Center in Huntsville, Alabama to design a gravity-free method of collecting the imagery using DSLR cameras and equipment already on the ISS. Then I collected still photos in space that were then sent down to Earth, where they were stitched together to create panoramic 360-degree imagery of the ISS.
You can read about the mission on the Google Blog & check out the behind-the-scenes process here:
Dave Simons (O.G. After Effects creator) and team are rocking bells with this new tech. The video below is silent, but you can jump through it to see just how far style transfer has come in just the last year or two.
There’s just no way this ends badly. No possible way.
Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track.
Check out some wonderful work from National Geographic photographer Anand Varma. Interesting details:
A 2013 University of Toronto study concluded that if hummingbirds were the size of an average human, they’d need to drink more than one 12-ounce can of soda for every minute they’re hovering, because they burn sugar so fast.
Some hummingbirds can beat their wings 100 times per second and can sip nectar 15 times per second. I also like the locals’ name for the Cuban bee hummingbird, the world’s smallest bird: zunzuncito (little buzz buzz).
Viewer: “Where the hell should I look?”
Creator: “Where the hell do people look?”
Making compelling 360º content—like both pimpin’ & impin’—ain’t easy. Fortunately YouTube is adding some new analytical tools:
Today we’re introducing heatmaps for 360-degree and VR videos with over 1,000 views, which will give you specific insight into how your viewers are engaging with your content. With heatmaps, you’ll be able to see exactly what parts of your video are catching a viewer’s attention and how long they’re looking at a specific part of the video.
Meanwhile they’ve started a new VR Creator Lab bootcamp:
Take your VR video creation to the next level. YouTube is taking applications for a 3 month learning and production intensive for VR creators. Participants will receive advanced education from leading VR instructors, 1:1 mentoring, and $30K – $40K in funding toward the production of their dream projects.
The application window has now closed (sorry I didn’t see the news ’til now), but hopefully this will go well & future openings will emerge.
“Our virtual photographer ‘travelled’ through ~40,000 panoramas in areas like the Alps, Banff and Jasper National Parks in Canada, Big Sur in California and Yellowstone National Park,” hunting for the best compositions, writes Hui Fang of the Google Research team.
Per PetaPixel, “Once it finds a nice-looking photo, it uses post-processing techniques to improve the look of the shot like photographers do in Photoshop or Lightroom. Edits include cropping, tweaking saturation, applying HDR effects, adding dramatic lighting with ‘content-aware brightness adjustments.’”
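Apropos of that “tweaking saturation” step, here’s a toy sketch, just a pure-Python illustration and nothing like Google’s actual pipeline, of what a global saturation adjustment boils down to: blending each channel toward the pixel’s luma.

```python
# Toy illustration (not Google's code) of a global saturation tweak:
# blend each channel toward the pixel's luma. factor > 1 boosts
# saturation, factor < 1 mutes it, and factor = 0 yields grayscale.

def adjust_saturation(rgb, factor):
    """Return an 8-bit RGB pixel with saturation scaled by `factor`."""
    r, g, b = rgb
    # Rec. 601 luma weights -- the perceived brightness of the pixel.
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    clamp = lambda v: max(0, min(255, round(v)))
    return tuple(clamp(luma + (c - luma) * factor) for c in (r, g, b))

print(adjust_saturation((200, 100, 50), 1.3))  # mild saturation boost
print(adjust_saturation((200, 100, 50), 0.0))  # (124, 124, 124): grayscale
```

The content-aware adjustments PetaPixel mentions would go further, varying that factor (and brightness) by region rather than applying it uniformly.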
Potentially interesting sidenote: In 2013, before Google Photos became a standalone product, Google+ was backing up & applying semantic Auto Enhance to more than half a billion photos per day. The process mimicked the edits a skilled human would apply (e.g. treating skin differently from skies, sharpening & brightening). This all happened automatically, so almost no one noticed, and when we turned it off, almost no one cared (cf. bad wine). ¯\_(ツ)_/¯
Google’s artificial intelligence company, DeepMind, has developed an AI that has managed to learn how to walk, run, jump, and climb without any prior guidance. The result is as impressive as it is goofy.
Reminds me of this David Lewandowski insanity:
“Simpler, speedier and more reliable”—I can get behind that:
This new tool replaces the existing Google Photos desktop uploader and Drive for Mac/PC.
Backup and Sync is an app for Mac and PC that backs up files and photos safely in Google Drive and Google Photos, so they’re no longer trapped on your computer and other devices. Just choose the folders you want to back up, and we’ll take care of the rest.
Check out the help center if you need details—but generally it should be set it, forget it, get (optionally) free unlimited photo storage.
“Dude, you’re so transparent,” I once told a girl-chasing Photoshop engineer, “I can see a little checkerboard right through you.”
That came to mind seeing this project, which I find just unreasonably charming:
As part of the Stenograffia street art and graffiti festival in Russia, a collaborative of artists worked to create this phenomenal illusion that appears to “erase” a collection of graffiti from a small car and trash dumpster. With the help of a projector, the team painted the familiar grey and white checker grid found in most graphics applications that denotes a deleted or transparent area. The piece is titled “CTRL+X” in reference to the keyboard command in Photoshop for deleting a selection. You can see nearly 100 behind-the-scenes photos of their process here.
[Via some Facebook friend of whose original post I’ve lost track]
Here’s some trippy, claw-level view footage thanks to an eagle making off with researcher Matt Beedle’s GoPro:
So, how did they actually get the camera back? PetaPixel writes, “Thankfully, his father had seen the branch the eagle had landed on, and the two men searched for hours over two days before they managed to find the camera and the footage within.”