Monthly Archives: September 2017

BMW plans wireless charging for cars

Where phones have led, cars will follow—first becoming malleable, software-centric platforms (e.g., Tesla rolling out Autopilot and improving cars’ acceleration via software updates), integrating voice assistants, and now adopting contactless charging. Sure, poking a plug into my car takes all of 15 seconds, but I’ll admit to being a touch jealous:

When approaching the pad, the 530e’s sensors and internal screen will navigate the driver to the necessary point above the charging pad. Once in position, the pad’s integrated coil, alongside a secondary coil found inside the car, generate an alternating magnetic field that will charge the car’s 9.4kWh battery in 3.5 hours with 3.2kW of power. The entire process can even be monitored via an app.
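Those figures roughly check out. Here’s a quick back-of-the-envelope sanity check in Python—note that the ~84% transfer efficiency is my assumption (inductive charging loses some power), chosen to square the quoted 3.2kW with the quoted 3.5-hour charge time; the article doesn’t state an efficiency number:

```python
# Sanity-checking the quoted charging figures for the BMW 530e.
# Battery capacity and pad power come from the article; the
# efficiency value is an assumption, not a published spec.
battery_kwh = 9.4   # quoted battery capacity
pad_kw = 3.2        # quoted inductive pad power
efficiency = 0.84   # assumed end-to-end transfer efficiency

hours = battery_kwh / (pad_kw * efficiency)
print(f"Estimated full charge: {hours:.1f} hours")  # ≈ 3.5 hours
```

At 100% efficiency the math would give about 2.9 hours, so the quoted 3.5 hours implies losses in the mid-teens—plausible for an inductive pad.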



“I hate Google Photos”—but in a good way!

Heh—nice to see concept movies (assembled from thematically related pics/vids) really resonating with this Reddit user:

You made a grown ass man cry like a baby by automatically making a video titled “They grow up so fast”.. which has about 45 clips of videos with my daughter in it.. aged around 4-5 months to 22 months (current).

I have watched that 3 minute long video 3 times so far.. first time while I cried like a baby.. next two times with my jaw dropped due to the technology that made this possible.

I got one of these myself on Saturday, and now my mom & wife can’t stop watching our Henry Seamus grow from cooing blob to fun-sized weirdo. Cue gratuitous showing!



Canon’s “Free Viewpoint” virtual camera system looks bananas

Remember when the yellow first-down line was the height of game-time badassery? Details here are scant, but this looks like a trip:

PetaPixel writes,

The system would require a number of high-resolution cameras mounted in various places around the stadium. Each camera is connected to a network and controlled by software. Afterward, the video viewpoints are fed into an image processing engine that turns it into high-resolution 3D spatial data.



VFX history: Preserving the last working Scanimate system

“For about a decade, from 1975 to 1985,” Vice writes, “if you witnessed moving animation on television, it was either shot one frame at a time, or made using a Scanimate machine. Only ten of the devices were ever built.”

Here they drop in on engineer Dave Sieg, who has spent the last 20 years preserving the only working Scanimate. Dave discusses the technical and cultural impact of the Scanimate and what the future holds for this iconic machine.


[YouTube] [Via Margot]

Crazy Train: A wild drone flight into & under a moving train

Tommy played piano like a kid out in the rain
then he lost his leg in Dallas he was dancin’ with a train

Man, I thought that my flying a drone off the back of a boat on the Mississippi was risky—but that seems laughably sane compared to Paul Nurkkala flying his drone onto, next to, inside, and under a moving freight train.

If you can’t take the queasy-making camera moves, jump to 3:20 to go underneath & 3:30 to go inside:

PetaPixel writes,

Nurkkala specializes in flying camera drones through a first-person point-of-view using a live feed through goggles. His custom-assembled drone was equipped with a GoPro HERO5 Session action camera, which is light enough to keep the craft fast and nimble.

“I recognize that this isn’t the most ‘flowy’ video or anything, but all of the things were all in the same flight, so I wanted to show that off,” Nurkkala writes.




My brand new gig: Augmenting reality in Google Research

I’m stupid-excited to say that I’ve just joined Google’s Skynet Machine Perception team to build kickass creative, expressive experiences, delivering augmented reality to (let’s hope) a billion+ people. I told you sh*t just got real. 🙂

Now, the following career bits may be of interest only to me (and possibly my mom), but in case you’re wondering, “Wait, don’t you work on Google Photos…?”

Well, like SNL’s Stefon, “I’ve had a weird couple of years…” 


The greatly smoothed version goes basically like this:

  • I joined Google in early 2014 to work on Photos. I liked to say I was “Teaching Google Photoshop,” meaning getting computers to see & synthesize like humans (making your Assistant your artist!). Among other things, we created a brand-new image editor, did some early AR face-painting work (a year+ ahead of Snapchat et al), and made movies for tens of millions of people.
  • After a bit over a year, I wanted to explore some crazier photo- and video-related ideas (stuff not ready for Photos to include then, if ever), so I left the team & walked across the hall to work with & learn from Luke Wroblewski. Thus I was “working at Google on photos, just not Photos.” This was a subtle distinction, and as I was working on secret stuff, I didn’t spend time publicizing it. I remained closely involved with the ex-Nik Photos folks in building out Snapseed & the next rev of the new editor we’d started.
  • Meanwhile I spent the better part of the next year thinking up, prototyping, and iterating on a bunch of little photo apps. It was a tough but enlightening process. I knew we were on to something, but I also felt like Edison saying some variant of “I have not failed. I’ve just found 10,000 ways not to make a light bulb.”
  • Somewhat tired from the process & eager to make concrete contributions, I was set to join an imaging hardware team. When project plans changed, however, I agreed to help improve photography experiences on social apps including Google+.
  • Having witnessed on Photos the massive importance of speed, I teamed up with my future teammates in Research to build out the RAISR machine-learning library and ship it in Google+, saving users immense amounts of bandwidth (critical in the developing world).
  • Since then, and up until this week, I’ve been focusing on enterprise social needs. Though it wasn’t an area I sought out, I ended up really digging the experience, and I look forward to eventually sharing some of the rad stuff my team was building.
  • And then, Google bought this little company in Belarus & my old Research friends came calling…

So now we’ve come full circle, and to capture my feelings, I’ll cite SNL yet again. Wish me luck. 🙂 

Snapseed gets refreshed with presets, reorganized tools, and perspective

More power & speed for the millions of people who use Snapseed every day:

We’re excited to announce that Snapseed 2.18 has started rolling out today to users on Android and iOS. This update includes a fresh new UI, designed for faster editing with more efficient access to your favorite features.

You’ll find Looks are now available from the main screen, making it easier than ever to apply your customized filters to your photos. Looks are a powerful way to save your favorite combinations of edits and apply them to multiple images. We’ve added 11 beautiful new presets (handcrafted by the Snapseed team) to help you get started – give them a try!

We’re also bringing the Perspective tool to iOS to allow you to easily adjust skewed lines and perfect the geometry of horizons or buildings.