Quick tip: Conserve your bandwidth by scheduling Nest cam downtime

Like a lot of folks I’m now constantly streaming video down & up while working from home, alongside a wife who’s doing the same plus a couple of kids using online learning (and, let’s be honest, a lot of YouTube & Xbox). Freeing up bandwidth to improve these experiences would be great, so I was delighted to learn that our Nest cameras can be scheduled to stop/start streaming video up to the cloud. From the Google help center:

  • Lower the setting so that your camera isn’t using as much data to stream video to the cloud.
  • Schedule your camera to turn off at certain times, particularly if you have a Nest Aware subscription, so your camera isn’t constantly uploading video to the Nest service. You can also try this if you’re sharing your camera publicly.

Thanks for thinking ahead, Nest team!

Free Lightroom online seminar, Friday at noon Pacific

Join my old friends & colleagues Phil Clevenger & Rick Miller tomorrow for what promises to be an informative online class/discussion. Topics include:

  • Quick history of the Lightroom UI and its influence on modern software design
  • The importance of choosing the right color space when editing your photos
  • Creating custom camera profiles for your DSLR, cellphone, and drone cameras to achieve the best color fidelity
  • The RAW advantage: recovering data from overexposed/underexposed images
  • Using the Map module and GPS coordinates for location scouting
  • Soft Proofing your photos to determine the most appropriate print color settings
  • Questions & answers

About your hosts:
Phil Clevenger:
Senior Director, Experience Design, Adobe Experience Cloud. Original UI designer for Adobe Lightroom and author on two patents for UI innovations in the Lightroom 1.0 interface.

Rick Miller:
Former Sr. Solutions Engineer and color-management expert at Adobe Systems (Rick’s name appeared in the credits for Photoshop and Premiere Pro), professional photographer, and currently a professor at USC. Rick previously taught at the Art Center College of Design in Pasadena and at Cal Poly Pomona University, and assisted the LAPD’s Scientific Investigation Division in the forensic application of Photoshop.

Oil paintings come alive in AR

A couple of years ago, Adobe unveiled some really promising style-transfer tech that could apply the look of oil paintings to animated characters:

I have no idea whether it uses any of the same tech, but now 8th Wall is bringing a similar-looking experience to augmented reality via an entirely browser-based stack—very cool:

[YouTube]

Hilariously overwrought sports commentary on banal scenes

Here’s a much-needed mental palate cleanser:

Nick Heath narrates his videos of people doing mundane things, like crossing the street, with the verve and dramatic flair of competitive sports.

They’re grouped via the #LiveCommentary tag. Enjoy some of my faves:

Free streaming classes on photography, 3D

It’s really cool to see companies stepping up to help creative people make the most of our forced downtime. PetaPixel writes,

If you’re a photographer stuck at home due to the coronavirus pandemic, Professional Photographers of America (PPA) has got your back. The trade association has made all of its 1,100+ online photography classes free for the next two weeks. […]

You can spend some of your lockdown days learning everything from how to make money in wedding photography to developing a target audience to printing in house.

Meanwhile, Unity is opening up its Learn Premium curriculum:

During the COVID-19 crisis, we’re committed to supporting the community with complimentary access to Unity Learn Premium for three months (March 19 through June 20). Get exclusive access to Unity experts, live interactive sessions, on-demand learning resources, and more.

“NeRF” promises amazing 3D capture

“This is certainly the coolest thing I’ve ever worked on, and it might be one of the coolest things I’ve ever seen.”

My Google Research colleague Jon Barron routinely makes amazing stuff, so when he gets a little breathless about a project, you know it’s something special. I’ll pass the mic to him to explain their new work, which captures multiple photos of a scene and synthesizes a 3D model from them:

I’ve been collaborating with Berkeley for the last few months and we seem to have cracked neural rendering. You just train a boring (non-convolutional) neural network with five inputs (xyz position and viewing angle) and four outputs (RGB+alpha), combine it with the fundamentals of volume rendering, and get an absurdly simple algorithm that beats the state of the art in neural rendering / view synthesis by *miles*.

You can change the camera angle, change the lighting, insert objects, extract depth maps — pretty much anything you would do with a CGI model, and the renderings are basically photorealistic. It’s so simple that you can implement the entire algorithm in a few dozen lines of TensorFlow.
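If you’re curious what that recipe looks like in code, here’s a rough sketch (in NumPy, not the team’s actual TensorFlow implementation): query a function at sampled points along each camera ray for color and density, then alpha-composite the results front to back. The `field` function below is just a dummy stand-in for the trained network, and the 3-vector view direction is a simplification of the paper’s viewing angle.

```python
import numpy as np

def field(xyz, view_dir):
    """Stand-in for the trained MLP: maps (x, y, z) plus a view direction
    to (R, G, B, density). Here it's a dummy closed-form function."""
    rgb = 0.5 + 0.5 * np.sin(xyz)               # fake view-independent color in [0, 1]
    density = np.exp(-np.sum(xyz**2, axis=-1))  # fake density, highest near the origin
    return rgb, density

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume-render one ray: sample points along it, query the field,
    and alpha-composite the samples front to back."""
    t = np.linspace(near, far, n_samples)            # sample depths along the ray
    pts = origin + t[:, None] * direction            # (n_samples, 3) sample positions
    rgb, sigma = field(pts, direction)
    delta = t[1] - t[0]                              # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)             # opacity of each ray segment
    trans = np.cumprod(1.0 - alpha + 1e-10)          # transmittance up to each sample
    trans = np.concatenate([[1.0], trans[:-1]])      # shift: light reaching sample i
    weights = alpha * trans                          # per-sample compositing weights
    return np.sum(weights[:, None] * rgb, axis=0)    # final RGB for this ray

color = render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
print(color)  # composited RGB for one ray
```

In the real system the network is trained so that rays rendered this way reproduce the input photos; repeat the per-ray rendering for every pixel of a novel camera pose and you get a new view of the scene.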

Check it out in action:

[YouTube]