Today, we’re excited to announce our latest AIY Project, the Vision Kit. It’s our first project that features on-device neural network acceleration, providing powerful computer vision without a cloud connection. […]
The provided software includes three TensorFlow-based neural network models for different vision applications. One, based on MobileNets, can recognize a thousand common objects; a second can recognize faces and their expressions; and the third is a person, cat, and dog detector. We’ve also included a tool to compile models for Vision Kit, so you can train and retrain models with TensorFlow on your workstation or any cloud service.
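MobileNets owe their on-device efficiency to depthwise-separable convolutions, which split one expensive convolution into two cheap ones. A quick back-of-envelope sketch (the layer sizes below are illustrative, not taken from the Vision Kit models):

```python
# Why MobileNets are cheap enough for on-device vision: a standard KxK
# convolution needs K*K*C_in*C_out weights, while a depthwise-separable
# one uses a depthwise pass (K*K*C_in) plus a 1x1 pointwise pass
# (C_in*C_out).

def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 256, 256
std = standard_conv_params(k, c_in, c_out)        # 589,824 weights
sep = depthwise_separable_params(k, c_in, c_out)  # 67,840 weights
print(f"standard: {std:,}  separable: {sep:,}  savings: {std / sep:.1f}x")
```

Roughly an order of magnitude fewer weights per layer, which is what makes running recognition without a cloud connection plausible.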
Ah good—I’ve figured something like this must be in development, and I’m excited at the prospect of removing some drudgery from selection & adjustment. (This is why “AI” applied to creative tools is interesting—not to take work away from artists, but to cut the crap so they can focus on, y’know, art.) Take it away, Meredith:
“Teaching Google Photoshop” has been my working mantra here—i.e. getting computers to see like artists & wield their tools. A lot of that hinges upon understanding the shape & movements of the human body. Along those lines, my Google Research teammates Tyler Zhu, George Papandreou, and co. are doing cool work to estimate human poses in video. Check out the demo below, and see their poster and paper for more details.
Rodeo has posted some interactive before/after shots on their site along with the breakdown reel. I’m kinda surprised by the number of non-CGI elements involved (e.g. the giant wireframe wrecking ball).
Does Google seem like exactly the kind of company that would celebrate the 20th anniversary of the Guggenheim Bilbao by commissioning a freerunner to launch off the iconic facades & carom around monumental works by Richard Serra? Why yes, yes it does. Explore the museum via Google Arts & Culture, and go behind the scenes of the short film below here.
Julian Tryba scripts After Effects to create carefully segmented, meticulously choreographed “layer lapses,” producing a “visual time dilation” that juxtaposes the same scene shot at different times of day. Here, just check it out:
We’re in what I’m going to call The 1996 Web Design Era of voice technology. The web was created for something practical (sharing information between scientists), but it didn’t take very long for people to come up with strange and creative things to do with it.
Get vertigo a go-go as this drone pilot goes spinning in infinity:
Orbital drone movements have the power to convert two-dimensional images into dancing focal layers escaping out of the frame. We wanted to further explore the technique with high-altitude long orbits, along with ones very close to the ground; we call them “orbital drone-lapses.” These shots are a mix of automatic and manual flights.
“The shots were done using both automatic and manual flights over the Folegandros island in Greece,” notes PetaPixel.
When a piece of metal salt is dropped into a solution of sodium silicate, a membrane of insoluble metal silicate forms. Due to osmotic pressure, water enters the membrane and breaks it, generating more insoluble membranes. This cycle repeats, and the salt grows into all kinds of interesting forms. This film records the osmotic growth of six salts inside sodium silicate solution. The growth is so life-like, it’s no wonder Stéphane Leduc thought it might have something to do with the mechanism of life over 100 years ago.
Mike Krainin & Ce Liu go into detail about how optical flow techniques are helping Google Street View produce panoramas that are not only freer of artifacts, but easier for machines to read (producing a better understanding of business names, hours, etc.):
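The details are in Google’s post, but the core intuition behind flow-based registration is easy to see in one dimension: find the offset that best aligns two overlapping captures before blending them. A toy sketch (purely illustrative; real optical flow estimates a dense per-pixel 2-D field, not a single global shift):

```python
# Toy sketch of flow-style alignment: search for the integer shift that
# best registers two overlapping 1-D "scanlines", by minimizing the mean
# squared error over their overlap.

def best_shift(a, b, max_shift=5):
    """Return the integer shift of b that best aligns it with a."""
    def cost(shift):
        overlap = [(a[i], b[i + shift])
                   for i in range(len(a))
                   if 0 <= i + shift < len(b)]
        return sum((x - y) ** 2 for x, y in overlap) / len(overlap)
    return min(range(-max_shift, max_shift + 1), key=cost)

scene = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]
shifted = scene[2:] + [0, 0]       # same scene, captured 2 samples later
print(best_shift(scene, shifted))  # -2: undoing the shift aligns them
```

Panorama stitchers apply the same idea per-pixel in 2-D, which is why the aligned result has fewer ghosting artifacts for both humans and OCR-style machine readers.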
I wonder whether these techniques might be useful to pano-stitching in apps like Photoshop & Lightroom. I’ve passed the info their way.
Adobe’s ambitious XD app has recently added a raft of new features, and here Khoi Vinh shows a compelling demo of instantly-updating artwork & on-device prototypes. (If for some reason the demo isn’t already queued to the right spot, jump to 8:21.)
A hardware glitch forced Khoi to (figuratively) tap dance during the first portion, and he offered a detailed peek behind the curtain, describing the demo team’s relentless pre-game preparation—and its limits. It’s so nice to see people really giving a damn.
Engineer and birdwatcher Eiji Nakatsu helped redesign Japan’s bullet trains based on the aerodynamics of three very different species of birds. In this short piece, my man Roman Mars from the great 99% Invisible talks about how mimicking natural designs (e.g. the leaves of the lotus) helps create more functional, less wasteful products:
Re:scam can take on multiple personas, imitating real human tendencies with humour and grammatical errors, and can engage with countless scammers at once, keeping any email conversation going for as long as possible. Re:scam turns the tables on the scammers by wasting their time and, ultimately, damaging their profits.
On extremely rare days, cold air is trapped in the canyon and topped by a layer of warm air, which, in combination with moisture and condensation, forms the phenomenon known as a full cloud inversion. In what resembles something between ocean waves and fast-moving clouds, the Grand Canyon is completely obscured by fog, making visitors feel as if they are walking on clouds.
On Google search (and soon Maps) you can see wait times for nearly a million sit-down restaurants around the world. Search for the restaurant on Google, open the business listing, and scroll down to the Popular Times section. “You can even scroll left and right to see a summary of each day’s wait times below the hour bars–so you can plan ahead to beat the crowds.”
Google mapped air quality across California, with Street View cars spending 4,000 hours driving 100,000 miles in SF, LA, and the Central Valley. Check out the preliminary results.
Did you know that your timeline on Maps makes it easy to revisit the places you’ve been, filter by activity (e.g. horseback riding), and more?
For a character so realistically petrifying, you can guess that a lot of effort went into bringing it to life. The Demogorgon wasn’t just the work of digital effects; visual effects studio Aaron Sims Creative also created prototypes using 3D printers and hand-painted the models to immaculate, creepy detail.
Heh—I’ve seen stuff like this done before, but props to Max Lanman for taking lifestyle marketing somewhere new. I love that the eBay link is http://bit.ly/luxuryisastateofmind, and that the car is now commanding a bid of $150,000!
I was really pleased to incorporate this After Effects-originated technology into Photoshop years ago, and now that it’s gone through a couple more generations of refinement (thanks in part to Character Animator), I’m excited to see that it’s now in Illustrator:
With Puppet Warp, you can now transform your vector graphics while maintaining an organic and natural look. You can reposition a character’s limbs or reshape an object. Puppet Warp isn’t limited to characters and creatures, though; it works great on lettering and icons as well.
On today’s episode of Old Man Nack’s Software Woulda-Shoulda, I’d note the inordinate amount of time I spent lobbying fruitlessly for Illustrator & Photoshop to add properties panels of the sort you’d see in Macromedia apps—but who the hell cares, it’s here now:
The new Properties Panel shows you the controls you need, when you need them. It organizes all of your panels into one location so you can access them quickly and easily, resulting in a clean, clutter-free workspace.
“With Poly,” says Google AR/VR lead Clay Bavor, “our mission is to organize the world’s 3D information and make it universally accessible and useful.”
Poly lets you quickly find 3D objects and scenes for use in your apps, and it was built from the ground up with AR and VR development in mind. It’s fully integrated with Tilt Brush and Blocks, and it also allows direct OBJ file upload, so there’s lots to discover and use.
Check it out:
Many models can be modified in case they don’t quite fit your needs, and you can share them as GIFs or explore them in VR viewers.
Remember Instagram hyperlapses—or if you’re nerdier, stabilization app Luma (acquired by Instagram)? Creator Alex Karpenko is back with Rylo, a $499 360º camera that promises great built-in stabilization & innovative software features. PetaPixel notes,
The second feature is called Follow, and that lets you track action with just a single tap on the app. The software will then adjust the orientation of the camera and keep the action in the frame.
Next up is Points, a feature which controls the camera’s perspective. When you tap on specific points of interest, Rylo produces a smooth shot that “connects each of your points.”
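A common way to connect tapped viewpoints like this is to interpolate between look directions on the sphere. Here’s a minimal sketch of that general technique (my assumption about the approach, not Rylo’s actual code; function names are hypothetical):

```python
import math

# Sketch of a "Points"-style feature: given two tapped look directions
# (unit vectors on the sphere), generate smooth in-between directions
# via spherical linear interpolation (slerp).

def slerp(p, q, t):
    """Interpolate between unit vectors p and q at fraction t in [0, 1]."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    theta = math.acos(dot)             # angle between the two directions
    if theta < 1e-9:
        return p                       # directions coincide
    s = math.sin(theta)
    w1 = math.sin((1 - t) * theta) / s
    w2 = math.sin(t * theta) / s
    return tuple(w1 * a + w2 * b for a, b in zip(p, q))

start = (1.0, 0.0, 0.0)                # looking along +x
end = (0.0, 1.0, 0.0)                  # looking along +y
mid = slerp(start, end, 0.5)
print(mid)                             # halfway direction, still unit length
```

Unlike plain linear interpolation, slerp sweeps the view at constant angular speed, which is what makes the resulting camera move feel smooth.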
Meanwhile Motorola has introduced the $299 moto 360 camera, a small pop-on addition to its phones that promises “360° photos and 4K video with 3D sound.” The size, immediacy of the phone connection, & ability to switch to the device’s regular cameras on the fly look pretty appealing.
The GoPro Fusion’s six onboard cameras can capture VR and non-VR footage in 5.2K resolution, with 360-degree audio. It also has an OverCapture feature that “punches out” a regular image from a spherical photo, and onboard stabilization allows for smooth capture. The Fusion works with the GoPro app, and the camera is waterproof to 16 feet.
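An OverCapture-style “punch-out” boils down to mapping each pixel of a flat, regular-FOV frame back onto the sphere. A minimal sketch of that projection math (the function name and 90° field of view are my illustrative assumptions, not GoPro’s implementation):

```python
import math

# Map a pixel of a flat "punched-out" frame back to spherical coordinates.
# u, v are normalized view-plane coords in [-1, 1], with the frame's
# center at (0, 0); the result is (longitude, latitude) in degrees,
# which indexes into the equirectangular spherical photo.

def punchout_direction(u, v, fov_deg=90.0):
    half = math.tan(math.radians(fov_deg) / 2)
    x, y, z = 1.0, u * half, v * half   # ray through the view plane
    lon = math.degrees(math.atan2(y, x))
    lat = math.degrees(math.atan2(z, math.hypot(x, y)))
    return lon, lat

print(punchout_direction(0.0, 0.0))  # center pixel looks straight ahead
print(punchout_direction(1.0, 0.0))  # right edge: ~45 degrees off-axis at 90° FOV
```

Run in reverse over every output pixel (with interpolation), this is how a spherical capture yields an ordinary-looking framed shot after the fact.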
Back in the day Steve DiVerdi implemented 3D, physics-based brushes in Photoshop, then joined Google Photos. Now he’s back at Adobe working on VR video tech. Check out his demo of “Sidewinder,” which leverages a Google Jump VR rig to capture numerous images, then synthesize new views to enable more interactive nav (hard to describe, easy to grok when watched):