Category Archives: Mobile

Motion Stills gains a fun new AR mode on Android

Hooray! My first real project to ship since joining my new team is here:

Today, we are excited to announce the new Augmented Reality (AR) mode in Motion Stills for Android. With the new AR mode, a user simply touches the viewfinder to place fun, virtual 3D objects on static or moving horizontal surfaces (e.g. tables, floors, or hands), allowing them to seamlessly interact with a dynamic real-world environment. You can also record and share the clips as GIFs and videos. 


For the nerdier among us, we’ve put together a Research blog post about The Instant Motion Tracking Behind Motion Stills AR, and on CNET Stephen Shankland gives a nice overview (and has been tweeting out some fun animations):

The Motion Stills app can put AR stickers into any Android device with a gyroscope, which is nothing special these days.
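The core trick described in the research post is to decouple rotation from translation: rotation comes straight from the gyroscope, while translation is estimated by tracking the touched region in 2D. Here's a toy Python sketch of that idea, with made-up focal length and angles; it is not the actual Motion Stills implementation, just an illustration of composing a gyro-derived rotation with a tracked translation to re-project a virtual object:

```python
import numpy as np

def rotation_from_gyro(rx, ry, rz):
    """Compose a 3x3 rotation matrix from integrated gyroscope angles (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    return Rz @ Ry @ Rx

def project(point_cam, f=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3D point in camera coordinates to pixel coordinates."""
    x, y, z = point_cam
    return np.array([f * x / z + cx, f * y / z + cy])

# Place a virtual object 1 m in front of the camera, then re-project it after
# the phone pans slightly (rotation from the gyro) and the tracked surface
# point shifts a little (translation from 2D feature tracking).
anchor = np.array([0.0, 0.0, 1.0])
R = rotation_from_gyro(0.0, 0.05, 0.0)   # small pan to the right
t = np.array([0.01, 0.0, 0.0])           # translation estimated by 2D tracking
pixel = project(R @ anchor + t)          # object drifts right in the image
```

Because the rotation never depends on visual tracking, the sticker stays glued to the scene even when the 2D tracker has little texture to work with.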

I’ve long been a fan of Motion Stills, posting about it for years. I’m so glad to get to work with these guys now. There’s more good stuff to come, so please let us know what you think!

(BTW, the 3D models are among the many thousands you can download for free from poly.google.com.)

Try three new experimental photo apps from Google

I’m excited for my teammates & their new launches. The team writes,

  • Storyboard (Android) transforms your videos into single-page comic layouts, entirely on device. Simply shoot a video and load it in Storyboard. The app automatically selects interesting video frames, lays them out, and applies one of six visual styles.
  • Selfissimo! (iOS, Android) is an automated selfie photographer that snaps a stylish black and white photo each time you pose. Tap the screen to start a photoshoot. The app encourages you to pose and captures a photo whenever you stop moving.
  • Scrubbies (iOS) lets you easily manipulate the speed and direction of video playback to produce delightful video loops… Shoot a video in the app and then remix it by scratching it like a DJ. Scrubbing with one finger plays the video. Scrubbing with two fingers captures the playback so you can save or share it.

Please take ‘em for a spin, then tell us what you think using the in-app feedback links. 


My new team’s new page: Check out Google Machine Perception

“So, what would you say you… do here?” Well, I get to hang around these folks and try to variously augment your reality:

Research in Machine Perception tackles the hard problems of understanding images, sounds, music and video, as well as providing more powerful tools for image capture, compression, processing, creative expression, and augmented reality.

Our technology powers products across Alphabet, including image understanding in Search and Google Photos, camera enhancements for the Pixel Phone, handwriting interfaces for Android, optical character recognition for Google Drive, video understanding and summarization for YouTube, Google Cloud, Google Photos and Nest, as well as mobile apps including Motion Stills, PhotoScan and Allo.

We actively contribute to the open source and research communities. Our pioneering deep learning advances, such as Inception and Batch Normalization, are available in TensorFlow. Further, we have released several large-scale datasets for machine learning, including: AudioSet (audio event detection); AVA (human action understanding in video); Open Images (image classification and object detection); and YouTube-8M (video labeling).



[Via Peyman Milanfar]

New for kids: Google Science Journal

I’m eager to try this out with our lads:

We’ve redesigned Science Journal as a digital science notebook, and it’s available today on Android and iOS.

With this new version of Science Journal, each experiment is a blank page that you can fill with notes and photos as you observe the world around you. Over time, we’ll be adding new note-taking tools… We’ve added three new sensors for you to play with along with the ability to take a “snapshot” of your sensor data at a single moment in time.


[Via]

Image science: Inside Portrait mode on the Pixel 2

If TensorFlow, PDAF pixels, and semantic segmentation sound like your kind of jam, check out this deep dive into mobile imaging from Google research lead Marc Levoy. He goes into some detail about how the team behind the new Pixel 2 trains a neural network, detects depth, and synthesizes pleasing, realistic bokeh even with a single-lens device. [Update: There’s a higher-level, less technical version of the post if you’d prefer.]
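As a crude illustration of the final step (use a segmentation mask, plus depth, to blur everything except the subject), here's a hypothetical numpy sketch. A uniform box blur stands in for the real depth-varying bokeh, and the mask is a toy stand-in for the network's output:

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur by averaging shifted copies (wraps at edges; fine for a demo)."""
    out = np.zeros_like(img, dtype=float)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            n += 1
    return out / n

def fake_bokeh(img, person_mask, radius=1):
    """Blur the background, then composite the sharp subject back on top."""
    blurred = box_blur(img, radius)
    return np.where(person_mask, img, blurred)

# Tiny 8x8 "image" with one bright pixel; the mask keeps that pixel sharp
# while its neighborhood gets averaged away.
img = np.zeros((8, 8))
img[1, 1] = 100.0
mask = np.zeros((8, 8), dtype=bool)
mask[1, 1] = True
result = fake_bokeh(img, mask, radius=1)
```

The real pipeline blurs with a disc kernel whose size varies with estimated depth, which is what makes the result look like optical bokeh rather than a uniform smear.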


New Live Photos hotness in Google Photos, Motion Stills

Motion Stills lets you make stabilized multi-clip movies, animated collages, loops, and more from Live Photos. Now version 2.0 for iOS adds:

  • Capture Motion Stills right inside the app.
  • Capture and save Live Photos on any device.
  • Swipe left to delete Motion Stills in the stream.
  • Export collages as GIFs.

The app’s available on Android, too. Android Police writes, “It’s essentially a GIF camera, but the app stabilizes the video while you’re recording. You can record for a few seconds, or use the fast-forward mode to speed up and stabilize longer videos.”

Not to be outdone, Google Photos on Web, iOS, and Android now displays Live Photos as well as Motion Photos from the new Pixel 2, giving you a choice of whether to display the still or moving portion of the capture. Here’s a quick sample on the Web. Note the Motion On/Off toggle up top.

I’m thrilled to have joined the team behind Motion Stills, so please let us know what you think & what else you’d like to see!

Behind the scenes of the new Pixel 2’s camera

Fun insights from my new teammates, including:
  • “You essentially have the space of a blueberry for the camera to squeeze into.”
  • The lens is actually six lenses.
  • Each pixel is split into two—useful for sensing depth.
  • The whole thing weighs .003 pounds, about the same as a paperclip.
  • HDR+ looks tile-by-tile across a range of captures shot in quick succession, moving chunks as needed to align them. This is good for “scaring ghosts.”
  • A neural network trained on 1 million images built a model for what’s person-like and should be kept in focus while blurring the background.
  • A hexapod rig is used to generate (and thus find ways to combat) various kinds of shakiness.
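That tile-by-tile alignment step can be caricatured in a few lines of numpy: for each tile of the base frame, search a small window in an alternate frame for the integer shift that minimizes the sum of squared differences. This is a toy sketch with made-up sizes, not the real HDR+ merge, which aligns and blends far more robustly:

```python
import numpy as np

def align_tile(base_tile, alt_frame, y0, x0, search=2):
    """Find the integer (dy, dx) shift that best matches base_tile against
    the alternate frame near (y0, x0), minimizing sum of squared differences."""
    th, tw = base_tile.shape
    best_ssd, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + th > alt_frame.shape[0] or x + tw > alt_frame.shape[1]:
                continue  # candidate window falls outside the frame
            ssd = np.sum((alt_frame[y:y + th, x:x + tw] - base_tile) ** 2)
            if ssd < best_ssd:
                best_ssd, best_shift = ssd, (dy, dx)
    return best_shift

# Simulate hand shake between two burst shots: the alternate frame is the
# base frame shifted down one pixel and left one pixel.
rng = np.random.default_rng(0)
base = rng.random((16, 16))
alt = np.roll(base, (1, -1), axis=(0, 1))
shift = align_tile(base[4:8, 4:8], alt, 4, 4)   # recovers the (1, -1) shift
```

Once each tile's shift is known, the burst frames can be merged tile by tile without the double edges ("ghosts") that naive averaging of a shaky burst would produce.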


[YouTube]