Monthly Archives: November 2018

“Squoosh” is a new Google app to slim down Web images

Cool—and open source. 9to5 Google writes,

By using WebAssembly, Squoosh is able to use image codecs that are not typically available in the browser.

Supporting a variety of web formats like MozJPEG and WebP and traditional ones like PNG, Squoosh allows you to quickly make your images web-ready. The app is able to do 1:1 visual comparisons of the original image and its compressed counterpart, to help you understand the pros and cons of each format.
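Squoosh's core trick, re-encoding the same image at different settings and comparing the resulting byte counts, can be sketched outside the browser too. Here's a minimal sketch using Pillow rather than Squoosh's WebAssembly codecs (the library choice, the synthetic gradient image, and the quality values are my own, purely illustrative assumptions):

```python
from io import BytesIO
from PIL import Image

def encoded_size(image, fmt, **options):
    """Encode the image in memory and return the byte count."""
    buf = BytesIO()
    image.save(buf, fmt, **options)
    return buf.tell()

# A synthetic gradient stands in for a real photo.
img = Image.new("RGB", (256, 256))
img.putdata([(x, y, (x + y) // 2) for y in range(256) for x in range(256)])

# Compare a lossless format against lossy JPEG at two quality levels.
sizes = {
    "PNG (lossless)": encoded_size(img, "PNG"),
    "JPEG q=90": encoded_size(img, "JPEG", quality=90),
    "JPEG q=30": encoded_size(img, "JPEG", quality=30),
}
for name, size in sizes.items():
    print(f"{name}: {size} bytes")
```

Squoosh does the same sort of comparison live, pairing each size readout with the 1:1 visual preview so you can judge whether the quality loss is worth the savings.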

[YouTube]

Using computer vision to unlock a wealth of photographic history

Visiting the NY Times was always among the real treats of my time working on Photoshop. I was always struck by the thoughtfulness & professionalism of the staff, but also by the gritty, brass-tacks considerations of cranking through thousands of images daily, often using some pretty dated infrastructure.

Now Google’s Cloud Vision tools are helping to tap into that infrastructure—specifically, bringing treasures of “The Morgue” back into the light by making their patchwork annotations searchable.

The morgue contains photos from as far back as the late 19th century, and many of its contents have tremendous historical value—some that are not stored anywhere else in the world. In 2015, a broken pipe flooded the archival library, putting the entire collection at risk. Luckily, only minor damage was done, but the event raised the question: How can some of the company’s most precious physical assets be safely stored?

Check it out:

[YouTube] [Via]

Blind veterans kayak the Grand Canyon, taking Street View along for the ride

This completely blows my mind. Have a happy, reflective, and grateful Veterans Day, everyone.

Check out their 360° captures on Google Street View. Blind Navy vet & expedition leader Lonnie Bedwell writes,

I believe we can’t abandon our sense of adventure because we lose our ability to see it, and it has become my goal to help people who live with similar challenges, and show them that anything is possible.

In 2013, I became the first blind person to kayak the entire 226 miles of the Colorado River through the Grand Canyon. But I always felt it didn’t mean anything unless I found a way to pay it forward. So I joined up with the good folks at Team River Runner, a nonprofit dedicated to providing all veterans and their families an opportunity to find health, healing, community, and purpose. Together we had the audacious goal of supporting four other blind veterans on a trip down the Grand Canyon.

[YouTube]

What might be next for Facebook 3D photos?

Facebook’s 3D photos (generated from portrait-mode images) have quickly proven to be my favorite feature added to that platform in years. Hover or drag over this example:

My crazy three! 😝😍 #007 #HappyHalloween

Posted by John Nack on Wednesday, October 31, 2018

The academic research they’ve shared, however, promises to go further, enabling VR-friendly panoramas with parallax. The promise is basically “Take 30 seconds to shoot a series of images, then allow another 30 seconds for processing.” The first portion might well be automated, enabling the user to simply pan slowly across a scene.

This teaser vid shows how scenes are preserved in 3D, enabling post-capture effects like submerging them in water:

Will we see this ship in FB, and if so when? Your guess is as good as mine, but I find the progress exciting.

[YouTube]

Costume-Aware Fill? Disney shows off neat AR clothing tech

Pretty cool stuff, though at the moment it seems to require using a pre-captured background:

When overlaying a digital costume onto a body using pose matching, several parts of the person’s cloth or skin remain visible due to differences in shape and proportions. In this paper, we present a practical solution to these artifacts which requires minimal costume parameterization work, and a straightforward inpainting approach.

[YouTube] [Via Steve Toh]

AR: A virtual desktop on your actual desktop?

Here’s a pretty darn clever idea for navigating among apps by treating your phone as a magic window into physical space.

You use the phone’s spatial awareness to ‘pin’ applications at a certain point in space, much like placing your notebook in one corner of your desk, and your calendar at another… You can create a literal landscape of apps that you can switch between by simply switching the location of your phone.

[Via]

Adobe previews new selection hotness

Wanna feel like walking directly into the ocean? Try painstakingly isolating an object in frame after frame of video. Learning how to do this in the ’90s (using stone knives & bear skins, naturally), I just as quickly learned that I never wanted to do it again. Thankfully tools like Rotobrush have come to After Effects, but like Quick Select in Photoshop, they’ve been pretty naive, never knowing what they’re actually looking at.

Upon joining Google in 2014, I saw some amazing early demos of smarter techniques to isolate objects in video. While trying (unsuccessfully) to bring the tech to Google Photos, I kept hucking research paper links over the fence to my Adobe pals saying, “Just in case you’re not already looking into this—please get on it!” I always figured they were.

Smash cut to 2018. I finally get to work with those folks I met in 2014, bringing fast segmentation to Pixel 3 (powering selfie stickers, accelerating Portrait Mode) and beyond. Meanwhile Adobe is publishing their own research and showing how it might come soon (🤞) to After Effects. Check out this rad demo:

Meanwhile, if you want to try some of this hotness today, check out Select Subject—which is likely already in your copy of Photoshop!

[YouTube 1 & 2]