Aww—check out today’s doodle:
The team writes,
Today’s stop-motion animated video Doodle celebrating Mister Rogers was created in collaboration with Fred Rogers Productions, The Fred Rogers Center, and BixPix Entertainment. Set to the iconic opening song of Mister Rogers’ Neighborhood (“Won’t You Be My Neighbor”), the Doodle aims to be a reminder of the nurturing, caring, and whimsy that made the show feel like a “television visit” between Mister Rogers and his young viewers. Everyone was welcome in this Neighborhood.
Mrs. Rogers approves:
“I’m so thrilled that Google is celebrating Fred and Mister Rogers’ Neighborhood with this charming tribute. This stroll through the Neighborhood is delightful, and Fred’s gentle kindness is beautifully captured in the Doodle.”
Here’s a peek behind the scenes:
Whoa—songs of ice & fire:
“We probably could have done this digitally, but we actually shot all of this practically in a studio,” says Alan Dye, Apple Vice President of User Interface Design, of the motion faces. “What I love about the fact that we did this is that it’s just so indicative of how the design team works. It was really about bringing together some of our various talents to create these faces. There are of course art directors, and color experts, and graphic designers, but also model makers who helped build these structures that we would eventually, you know, set on fire.”
Ah—so this is the backstory on the large installation now populating our lobby.
The flowers are built using Raspberry Pi running Android Things, our Android platform for everyday devices like home speakers, smart screens and wearables. An “alpha flower” has a camera in it and uses an embedded TensorFlow neural net to analyze which emotion it sees, and the surrounding flowers change colors based on the image the camera captures of your face. All processing is done locally, so no data is saved or sent to any servers.
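Conceptually, the alpha flower’s loop is simple: classify the emotion in the camera frame on-device, then broadcast a matching color to the surrounding flowers. Here’s a minimal Python sketch of that last step; the emotion labels and color palette are illustrative assumptions, not the project’s actual values (the open-sourced code is the real reference).

```python
# Hypothetical sketch of the "alpha flower" behavior: an on-device
# classifier produces an emotion label, and nearby flowers take on a
# matching color. No frames or labels ever leave the device.

# Assumed emotion -> RGB palette (illustrative only).
EMOTION_COLORS = {
    "joy": (255, 200, 0),        # warm yellow
    "surprise": (255, 105, 180), # pink
    "sadness": (70, 130, 180),   # steel blue
    "anger": (220, 20, 60),      # crimson
}

def color_for_emotion(emotion: str) -> tuple:
    """Map a classified emotion to an RGB color; default to soft white."""
    return EMOTION_COLORS.get(emotion, (255, 255, 255))

def update_flowers(emotion: str, flowers) -> None:
    """Push the color for the detected emotion to every surrounding flower.

    `flowers` is any iterable of objects exposing a set_color((r, g, b))
    method -- a stand-in for whatever LED driver the real build uses.
    """
    rgb = color_for_emotion(emotion)
    for flower in flowers:
        flower.set_color(rgb)
```

Note that all the interesting work (the TensorFlow inference) happens before this mapping, entirely on the Raspberry Pi, which is what lets the installation promise that no imagery is saved or sent anywhere.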
Better still, the code has been open-sourced.
“Water, fire, metal and light,” writes Apple, “were used to create these mesmerizing scenes using 4K, Slo-mo, and Time-lapse. #ShotoniPhone by Donghoon J. and Sean S.” Enjoy:
This is kind of inside-baseball, but it’s exciting for the possibilities it opens:
If you want to build and test your own experience, you can visit our developer documentation to get started. You can also express your interest in joining the Google Photos partner program if you are planning a larger integration.
Among the things apps can now do:
- Easily find photos, based on:
  - what’s in the photo
  - when it was taken
  - attributes like media format
- Upload directly to their photo library or an album
- Organize albums and add titles and locations
- Use shared albums to easily transfer and collaborate
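To make the “find photos” part concrete: the Library API exposes a `mediaItems:search` endpoint that takes content, date, and media-type filters. Here’s a small sketch that just assembles the request body; the actual HTTP call (and the OAuth token it requires) is omitted, and the specific category/year values are my own examples.

```python
# Build a filtered search body for the Google Photos Library API
# (POST https://photoslibrary.googleapis.com/v1/mediaItems:search).
# Only the payload is constructed here; authentication and the HTTP
# request itself are out of scope for this sketch.

def build_search_body(categories=None, year=None, media_type=None):
    """Assemble the JSON body for a mediaItems:search request.

    categories: e.g. ["LANDSCAPES", "SELFIES"] -- "what's in the photo"
    year:       restrict to a calendar year    -- "when it was taken"
    media_type: "PHOTO", "VIDEO", or "ALL_MEDIA" -- media format
    """
    filters = {}
    if categories:
        filters["contentFilter"] = {"includedContentCategories": categories}
    if year:
        filters["dateFilter"] = {
            "ranges": [{
                "startDate": {"year": year, "month": 1, "day": 1},
                "endDate": {"year": year, "month": 12, "day": 31},
            }]
        }
    if media_type:
        filters["mediaTypeFilter"] = {"mediaTypes": [media_type]}
    return {"pageSize": 50, "filters": filters}

body = build_search_body(categories=["LANDSCAPES"], year=2018,
                         media_type="PHOTO")
```

Posting `body` with a valid bearer token returns a paged list of matching media items; the developer documentation covers the full filter vocabulary.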
Formerly known as “Project Puppetron” (see previous re: the underlying tech), Adobe’s new Characterizer will soon be part of Character Animator CC:
You will be able to bring original art into Character Animator, record a series of sounds and facial expressions, and Characterizer will generate a new unique character… In a matter of seconds, you have a completely unique puppet that’s ready for your performance, regardless of your previous animation experience.
Here’s a quick tour from my buddy Dave:
Meanwhile After Effects is adding some fun new chops to its puppeting toolset:
[YouTube 1 & 2]
“Teaching Google Photoshop” has long been my working mantra here—i.e. getting computers to see like artists & wield their tools. In a similar vein, researchers from Adobe & MIT have teamed up on “An AI for CGI”—tech that automatically separates image elements into discrete regions for augmentation. I can’t wait to give it a try:
Photoshop’s venerable (oh God, how can it already be venerable?!) Content-Aware Fill is growing up nicely, as seen in this new sneak peek from my friend Meredith. I look forward to seeing whether the increased power/finesse more than offsets the apparent jump in complexity—as I expect it will.