I love the animals created for this piece for Sherwin-Williams:
“In 2009,” writes the team at Buck, “we were asked by our friends at McKinney to explore a world made of color, literally. The Sherwin colors themselves are the cast of their own story of infinite possibility, taking us places that spark our sense of curiosity, exploration, and expression.”
How lucky it was for the world that a brilliant graphics engineer (PostScript creator & Adobe co-founder John Warnock) married a graphic designer (Marva Warnock) who could provide constant input as this groundbreaking app took shape. Those were the days, when the app splash screen listed the whole team of four engineers who’d built it—one of whom was the CEO.
Watch the Illustrator story unfold, from its beginning as Adobe’s first software product, to its role in the digital publishing revolution, to becoming an essential tool for designers worldwide. Interviews include co-founder John Warnock, his wife Marva, and artists and designers Ron Chan, Bert Monroy, Dylan Roscover, and Jessica Hische.
It’s fun to see all these old friends celebrating an old friend. It takes me back to when I uploaded a copy of the VHS tape (hosted by John himself!) that shipped in the box with Illustrator 1.0:
Istanbul-based Aydın Büyüktaş creates amazing, seamlessly warped landscape composites for his ongoing Flatland project (see earlier, cruder incarnation). This shot is going to blow my train-loving 7yo’s mind.
I know this is soooo several days ago, but this interactive pix2pix demo (running atop Google’s TensorFlow machine learning framework) is good fun for making the stuff of (cute?) nightmares: Sketch a cat, handbag, or shoe, then let the system attempt a photographic rendition using a model trained on a large set of paired sketch/photo examples. Try it out and enjoy!
This is bonkers: By having your face 3D scanned, you can now have it show through a VR headset (complete with moving, blinking eyes!), like this:
The Daydream VR team explains,
The first step to removing the VR headset is to construct a dynamic 3D model of the person’s face, capturing facial variations as they blink or look in different directions. This model allows us to mimic where the person is looking, even though it’s hidden under the headset.
Next, we use an HTC Vive, modified by SMI to include eye-tracking, to capture the person’s eye-gaze from inside the headset. From there, we create the illusion of the person’s face by aligning and blending the 3D face model with a camera’s video stream. A translucent “scuba mask” look helps avoid an “uncanny valley” effect.
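The “aligning and blending” step they describe is, at heart, alpha compositing: the rendered face model is laid over the headset region of the video frame at partial opacity. A minimal NumPy sketch of that idea (the function name, array shapes, and opacity value are my own assumptions, not Google’s implementation):

```python
import numpy as np

def blend_face(frame, rendered_face, mask, opacity=0.5):
    """Composite a rendered face model over a video frame.

    frame, rendered_face: HxWx3 float arrays in [0, 1].
    mask: HxW float array, 1.0 where the headset occludes the face.
    opacity < 1 yields the translucent "scuba mask" look that the
    team says helps avoid the uncanny valley.
    """
    alpha = (mask * opacity)[..., np.newaxis]  # HxWx1, broadcasts over RGB
    return frame * (1.0 - alpha) + rendered_face * alpha

# Toy example: top row of a 2x2 frame is occluded by the headset.
frame = np.zeros((2, 2, 3))          # black video frame
face = np.ones((2, 2, 3))            # white rendered face
mask = np.array([[1.0, 1.0],
                 [0.0, 0.0]])
out = blend_face(frame, face, mask, opacity=0.5)
# Occluded pixels become a 50/50 mix; unoccluded pixels are untouched.
```

The real system of course also has to solve the hard parts — tracking the head pose and picking the right gaze state from the eye tracker — before this final blend.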
For a really funny tour, check out the Try Guys’ adventures in VR:
Heh—Team Coco in the floating, glowing house! I’ll let Conan serve up the funny from here:
“Simpsons Did It!!” So goes the cry that lets you know that your unique new idea just ain’t so unique. And so it went, at least inside my head, a couple of months ago when I sent Google’s Geo team a suggestion:
Quick, Draw! and Terrapattern make me wonder whether you could offer a simple drawing UI for Earth that would let people find things that roughly match what they draw… I can see it being a playful, serendipitous way to explore.
Ah, they told me: Sit tight, because we’re about to launch Land Lines. It lets you “Start with a line, let the planet complete the picture.” Take it for a spin yourself, or just watch how it works:
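Under the hood, this kind of draw-to-search works by comparing your stroke against a precomputed database of line shapes extracted from satellite imagery (the actual Land Lines implementation uses fast approximate nearest-neighbor search; this is just a toy sketch with names and distance metric of my own choosing):

```python
import math

def resample(points, n=16):
    """Resample a polyline to n evenly spaced points by arc length."""
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    out, j = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        while j < len(dists) - 2 and dists[j + 1] < target:
            j += 1
        seg = (dists[j + 1] - dists[j]) or 1.0
        t = (target - dists[j]) / seg
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def stroke_distance(a, b, n=16):
    """Mean point-to-point distance between two resampled strokes."""
    ra, rb = resample(a, n), resample(b, n)
    return sum(math.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(ra, rb)) / n

def best_match(drawn, candidates):
    """Index of the candidate stroke closest to the user's drawing."""
    return min(range(len(candidates)),
               key=lambda i: stroke_distance(drawn, candidates[i]))

# Toy example: a diagonal drawing vs. two candidate "landscape" lines.
drawn = [(0.0, 0.0), (1.0, 1.0)]
candidates = [[(0.0, 0.0), (1.1, 0.9)],   # roughly diagonal — should win
              [(0.0, 0.0), (2.0, 0.0)]]   # horizontal
match = best_match(drawn, candidates)
```

Scanning a whole planet’s worth of coastlines and roads this way would be hopeless with a linear scan, which is why the real thing leans on approximate nearest-neighbor indexing.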
“You Know You’re Living In A Late Culture When…,” Episode #397: Coffee Ripples “customizes coffee with high quality images in just a few seconds. Ripples are made of tiny coffee bean drops that keep the natural quality and flavor of your coffee.” Behold:
Oh, and haven’t I been blogging about printing weird stuff on food for, like, 10+ years? So I have. Now excuse me while I turn to dust.
It makes me sad that after 10 (!!) years of having 3D in Photoshop, I can’t think of a single time I’ve created good-looking text in it, much less anything else 3D of value. Given that PS includes a whole 3D engine, I hope that someday it’ll include easy ways to make attractive text.
In the meantime, amidst sometimes literally cheesy results, Art Text 3 ($29.99) produces some rather impressive pieces. Maybe Adobe could just license & bundle it as a plug-in. Hmm… (No, I don’t know anything you don’t know.)
So, do you use machine learning? Almost certainly, all the time! Sometimes it’s deliberately invisible (if we do our jobs right), while other times it’s more eye-popping. In this interesting short piece, Googlers Nat & Lo present a tour of how style transfer works:
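For the curious: the classic neural style transfer recipe (Gatys et al., which I believe is the family of techniques the video covers) matches “style” by comparing Gram matrices — channel-by-channel correlations of a CNN’s feature maps that capture texture while discarding spatial layout. A minimal NumPy sketch, assuming you already have feature maps in hand:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a CxHxW feature map: correlations between
    channels, which encode texture ("style") but not layout."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(gen_features, style_features):
    """Mean squared difference between Gram matrices — the quantity a
    style-transfer optimizer drives down for the generated image."""
    g1 = gram_matrix(gen_features)
    g2 = gram_matrix(style_features)
    return float(np.mean((g1 - g2) ** 2))

# Sanity check: a feature map compared with itself has zero style loss.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
zero = style_loss(feats, feats)
```

In the full algorithm this loss (summed over several network layers) is balanced against a content loss, and the generated image’s pixels are optimized directly.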