There’s almost no limit to my insane love of deliberately crude animal puppetry (cf. Triumph, The Falconer), and my son Finn & I really get a kick out of the little stop-motion chicken featured in Portland’s latest tourism ads. Check out this super fun peek into how the ads were made:
ESA astronaut Alexander Gerst… shot the photos in the timelapse by mounting a camera in the Cupola module and using an intervalometer to snap photos at regular intervals. Played back at 8 to 16 times normal speed, the timelapse above shows around 15 minutes of the rocket’s launch.
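If you’re curious about the arithmetic behind an intervalometer timelapse, here’s a quick sketch. The shooting interval and playback frame rate below are illustrative assumptions, not the actual settings from Gerst’s shoot; they’re chosen to land inside the 8–16x range mentioned above.

```python
def speedup(interval_s: float, playback_fps: float) -> float:
    """Shooting one frame every `interval_s` seconds and playing the
    frames back at `playback_fps` compresses time by this factor."""
    return interval_s * playback_fps

# Hypothetical settings: one photo every half second, 24 fps playback.
factor = speedup(0.5, 24)
print(factor)                      # 12.0 -- within the 8-16x cited
# ~15 minutes of real time then plays back in:
print(15 * 60 / factor, "seconds")  # 75.0 seconds
```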
A few months back my three-year-old nephew Austin, whose hip joints prevent him from walking, was approved to receive an electric wheelchair thanks to his mom’s tireless efforts. The machine has been a real game-changer for him, and the whole fam is delighted to see how he now zips around, getting involved in board games & other activities that had previously been out of reach.
In a similar vein, artist Sue Austin finds a new kind of freedom under the waves thanks to her winged, motorized wheelchair:
It’s a tiny bit surreal to see how freely she moves around in something that many of us associate with an absence of a particular type of movement. But as Austin explains in her 2013 TED Talk, she thinks of her wheelchair in terms of freedom of movement, which is highlighted for others by the underwater video.
Wow—check out this amazing fly-through from Oddviz:
Orphanages are dense, harmonious living spaces housing hundreds of children under the same roof. The abandoned Jewish orphanage in Ortaköy (OHR-tah-keuy), Istanbul, also known as El Orfelinato, has been home to thousands of lives over its century-old history. It holds the memory of the past in its worn stairs and layers of paint.
El Orfelinato means ‘The Orphanage’ in Spanish. The name has been used by Istanbul’s Sephardi Jewish community (Jews from Spain) for decades. Sephardi Jews have a 500-year history in Istanbul, having been forced to migrate by the mass conversions and executions carried out by the Catholic Monarchs in 15th-century Iberia.
oddviz sheds light on the visual and spatial memory of El Orfelinato, documenting it as it is with photogrammetry and presenting it in a dollhouse view.
And if that’s up your alley, check out their similar Hotel:
Pretty much like it says on the tin. PetaPixel writes,
There isn’t a filter in the app that lets you selectively see only Portrait mode photos, but the new option in the Edit menu will be present for any Portrait shot.
Download the latest version of Google Photos for iOS to get started with this new feature. Depth editing is already available on the Pixel 3, Pixel 2, and Moto phones that have depth photo support. Google says it’ll also be adding more Android devices soon.
This is easily the most awe-inspiring and jaw-dropping thing I’ve seen in months. In its low Earth orbit ~250 miles above our planet, the International Space Station takes about 90 minutes to complete one orbit of the Earth. Fewer than 600 people have ever orbited our planet, but with this realtime video by Seán Doran, you can experience what it looks like from the vantage point of the ISS for the full 90 minutes.
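For a rough sense of how fast the station is moving during those 90 minutes, you can back out the orbital speed from the figures above (approximate values; this is napkin math, not an orbital mechanics lesson):

```python
import math

# Approximate figures: Earth's mean radius, ISS altitude (~250 miles),
# and the ~90-minute orbital period mentioned above.
EARTH_RADIUS_KM = 6371
ALTITUDE_KM = 400
PERIOD_MIN = 90

# One orbit traces (roughly) a circle at Earth radius + altitude.
circumference_km = 2 * math.pi * (EARTH_RADIUS_KM + ALTITUDE_KM)
speed_km_s = circumference_km / (PERIOD_MIN * 60)

print(round(circumference_km))  # ~42,500 km per orbit
print(round(speed_km_s, 1))     # ~7.9 km/s (roughly 17,600 mph)
```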
After filming the band performing the song, director Johnny Jansen spent $680 on printing out 2,250 of the frames on regular paper with a laser printer. With a crew of 6 people, Jansen then painstakingly photographed each print in a new photo to create the stop-motion video.
“Please, please don’t come to Google and waste your time.”
I tell this to promising interview candidates. That is, I hope they come here, but it’s waaaaay too easy to fall into a velvet fog: you get free food, good money, something for your parents to brag about… but you wake up one day and realize that you’re polishing some goddamn stupid widget 9 levels deep in who-knows-what system, and you think, “Is this why I was put on earth?” This doesn’t have to happen, and indeed people often do amazing things instead—but it’s anything but guaranteed.
I always think of the amazing monologue in Walk The Line (starts around 1:30 in the clip below). If you had one song to sing before you’re dirt, are you telling me this would be it?
Created by the team of artists and engineers at Within, Wonderscope combines the power of augmented reality, voice recognition, and spatial story design to immerse children in engaging, educational, and powerful narratives. Wonderscope: for story explorers.
By using WebAssembly, Squoosh is able to use image codecs that are not typically available in the browser.
Supporting a variety of web formats like MozJPEG and WebP and traditional ones like PNG, Squoosh allows you to quickly make your images web-ready. The app is able to do 1:1 visual comparisons of the original image and its compressed counterpart, to help you understand the pros and cons of each format.
Visiting the NY Times was always among the real treats of my time working on Photoshop. I was always struck by the thoughtfulness & professionalism of the staff, but also by the gritty, brass-tacks considerations of cranking through thousands of images daily, often using some pretty dated infrastructure.
The morgue contains photos from as far back as the late 19th century, and many of its contents have tremendous historical value—some exist nowhere else in the world. In 2015, a broken pipe flooded the archival library, putting the entire collection at risk. Luckily, only minor damage was done, but the event raised the question: How can some of the company’s most precious physical assets be safely stored?
I believe we can’t abandon our sense of adventure because we lose our ability to see it, and it has become my goal to help people who live with similar challenges, and show them that anything is possible.
In 2013, I became the first blind person to kayak the entire 226 miles of the Colorado River through the Grand Canyon. But I always felt it didn’t mean anything unless I found a way to pay it forward. So I joined up with the good folks at Team River Runner, a nonprofit dedicated to providing all veterans and their families an opportunity to find health, healing, community, and purpose. Together we had the audacious goal of supporting four other blind veterans on a trip down the Grand Canyon.
The academic research they’ve shared, however, promises to go farther, enabling VR-friendly panoramas with parallax. The promise is basically “Take 30 seconds to shoot a series of images, then allow another 30 seconds for processing.” The first portion might well be automated, enabling the user to simply pan slowly across a scene.
This teaser vid shows how scenes are preserved in 3D, enabling post-capture effects like submerging them in water:
Will we see this ship in FB, and if so, when? Your guess is as good as mine, but I find the progress exciting.
Prior to joining Google, my teammate Cristian Sminchisescu & his crew developed some neat techniques (see paper) for estimating two human bodies as 3D objects, then transferring clothing from one to another. Check it out:
Pretty cool stuff, though at the moment it seems to require using a pre-captured background:
When overlaying a digital costume onto a body using pose matching, several parts of the person’s clothing or skin remain visible due to differences in shape and proportions. In this paper, we present a practical solution to these artifacts that requires minimal costume-parameterization work and a straightforward inpainting approach.
Here’s a pretty darn clever idea for navigating among apps by treating your phone as a magic window into physical space.
You use the phone’s spatial awareness to ‘pin’ applications at a certain point in space, much like placing your notebook in one corner of your desk and your calendar at another… You can create a literal landscape of apps that you switch between simply by changing the location of your phone.
Wanna feel like walking directly into the ocean? Try painstakingly isolating an object in frame after frame of video. Learning how to do this in the ’90s (using stone knives & bear skins, naturally), I just as quickly learned that I never wanted to do it again. Thankfully tools like Rotobrush have come to After Effects, but like Quick Select in Photoshop, it was always pretty naive—never knowing what it was looking at.
Upon joining Google in 2014, I saw some amazing early demos of smarter techniques to isolate objects in video. While trying (unsuccessfully) to bring the tech to Google Photos, I kept hucking research paper links over the fence to my Adobe pals saying, “Just in case you’re not already looking into this—please get on it!” I always figured they were.
Smash cut to 2018. I finally get to work with those folks I met in 2014, bringing fast segmentation to Pixel 3 (powering selfie stickers, accelerating Portrait Mode) and beyond. Meanwhile Adobe is publishing their own research and showing how it might come soon (🤞) to After Effects. Check out this rad demo:
Meanwhile, if you want to try some of this hotness today, check out Select Subject—which is likely already in your copy of Photoshop!