Like a lot of folks, I’m now constantly streaming video down & up while working from home, alongside a wife who’s doing the same, plus a couple of kids doing online learning (and, let’s be honest, a lot of YouTube & Xbox). Freeing up bandwidth to improve these experiences would be great, so I was delighted to learn that our Nest cameras can be scheduled to stop/start streaming video up to the cloud. From the Google help center:
Lower the setting so that your camera isn’t using as much data to stream video to the cloud.
Way to go, Sundar & team—I’m really happy to see all this news, including the following:
In addition to these commitments, we also increased the gift match Google offers every employee annually to $10,000 from $7,500. That means our employees can now give $20,000 to organizations in their communities, in addition to the $50 million Google.org has already donated.
Join my old friends & colleagues Phil Clevenger & Rick Miller tomorrow for what promises to be an informative online class/discussion. Topics include:
Quick history of the Lightroom UI and its influence on modern software design
The importance of choosing the right color space when editing your photos
Creating custom camera profiles for your DSLR, cellphone, and drone cameras to achieve the best color fidelity
The RAW advantage: recovering data from overexposed/underexposed images
Using the Map module and GPS coordinates for location scouting
Soft proofing your photos to determine the most appropriate print color settings
Questions & answers
About your hosts: Phil Clevenger: Senior Director, Experience Design, Adobe Experience Cloud. Original UI designer for Adobe Lightroom and named inventor on two patents for UI innovations in the Lightroom 1.0 interface.
Rick Miller: Former Sr. Solutions Engineer and color management expert at Adobe Systems (Rick’s name appeared on the credit screens for Photoshop and Premiere Pro), professional photographer, and currently a professor at USC. Rick previously taught at the Art Center College of Design in Pasadena and Cal Poly Pomona, and assisted the LAPD’s Scientific Investigation Division in the forensic application of Photoshop.
During the COVID-19 crisis, we’re committed to supporting the community with complimentary access to Unity Learn Premium for three months (March 19 through June 20). Get exclusive access to Unity experts, live interactive sessions, on-demand learning resources, and more.
“This is certainly the coolest thing I’ve ever worked on, and it might be one of the coolest things I’ve ever seen.”
My Google Research colleague Jon Barron routinely makes amazing stuff, so when he gets a little breathless about a project, you know it’s something special. I’ll pass the mic to him to explain the team’s new work on capturing multiple photos and synthesizing a 3D model from them:
I’ve been collaborating with Berkeley for the last few months and we seem to have cracked neural rendering. You just train a boring (non-convolutional) neural network with five inputs (xyz position and viewing angle) and four outputs (RGB+alpha), combine it with the fundamentals of volume rendering, and get an absurdly simple algorithm that beats the state of the art in neural rendering / view synthesis by *miles*.
You can change the camera angle, change the lighting, insert objects, extract depth maps — pretty much anything you would do with a CGI model, and the renderings are basically photorealistic. It’s so simple that you can implement the entire algorithm in a few dozen lines of TensorFlow.
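The description really is that compact. Here’s a minimal TensorFlow sketch of the core idea as Jon describes it (my own illustration, not the team’s actual code): a plain MLP takes a 5D input (xyz position plus a 2D viewing angle) and outputs RGB plus a density, and classic volume rendering composites the samples along each camera ray into a pixel color. Function names, layer sizes, and shapes here are illustrative assumptions, and the real system adds refinements this sketch omits.

```python
import tensorflow as tf

# Illustrative sketch only: a plain (non-convolutional) MLP mapping a
# 5D input -- xyz position plus a 2D viewing angle -- to 4 outputs: RGB + density.
# The width/depth here are arbitrary choices, not the paper's.
def make_nerf_mlp(hidden=256, depth=8):
    layers = [tf.keras.layers.Dense(hidden, activation="relu")
              for _ in range(depth)]
    layers.append(tf.keras.layers.Dense(4))  # outputs: r, g, b, sigma
    return tf.keras.Sequential(layers)

def render_rays(mlp, points, view_dirs, deltas):
    """Composite samples along each ray via classic volume rendering.

    points:    [n_rays, n_samples, 3] xyz sample positions along each ray
    view_dirs: [n_rays, n_samples, 2] viewing angles (theta, phi) per sample
    deltas:    [n_rays, n_samples]    distances between adjacent samples
    Returns    [n_rays, 3]            one RGB color per ray
    """
    raw = mlp(tf.concat([points, view_dirs], axis=-1))  # [n_rays, n_samples, 4]
    rgb = tf.sigmoid(raw[..., :3])                      # colors squashed to [0, 1]
    sigma = tf.nn.relu(raw[..., 3])                     # nonnegative volume density
    alpha = 1.0 - tf.exp(-sigma * deltas)               # opacity of each sample
    # Transmittance: how much light survives to reach each sample,
    # i.e. the product of (1 - alpha) over all earlier samples on the ray.
    trans = tf.math.cumprod(1.0 - alpha + 1e-10, axis=-1, exclusive=True)
    weights = alpha * trans                             # [n_rays, n_samples]
    return tf.reduce_sum(weights[..., None] * rgb, axis=-2)
```

Training would then amount to minimizing the squared error between these rendered ray colors and the corresponding pixels of the captured photos, keeping the whole pipeline differentiable end to end.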