The $549 price tag is no joke, but for serious creators I can imagine this little guy being a delight to use:
[YouTube]
Placing this ML-driven tech atop the set of now-vintage (!) Quick Selection & Magic Wand tools should help get it discovered, and the ability to smartly add & subtract chunks of an image looks really promising. I can’t wait to put it to the test.
[YouTube]
Mark Coleran is a mograph O.G. whose “Fantasy User Interface” (“FUI”) work for movies I used to write about a lot back at Adobe. It was fun listening to him & other designers share a peek into this unique genre of visual storytelling via Adobe’s great Wireframe podcast. I think you’ll enjoy it:
Happy Sunday. 😌
“Here, we choo-choo-choose to believe in the Constitution. Isn’t that bananas?”
[YouTube]
This looks so rad. Back in the day, I really wanted a solution that would record the “bizarre, freewheeling bedtime stories” my sons & I made up every night, then let us put them into an illustrated journal. The new Recorder app solves the most critical piece of that puzzle.
The new Recorder app on Pixel 4 brings the power of search and AI to audio recording. You can record meetings, lectures, jam sessions — anything you want to save and listen to later. Recorder automatically transcribes speech and tags sounds like music, applause, and more, so you can search your recordings to quickly find the part you’re looking for. All Recorder functionality happens on-device, so your audio never leaves your phone. We’re starting with English for transcription and search, with more languages coming soon.
[YouTube]
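If you’re curious what “all on-device” transcription can look like in code, here’s a minimal, purely illustrative sketch using the open-source Vosk recognizer rather than whatever Google actually ships in Recorder; the model folder and WAV filename are stand-ins:

```python
# Minimal offline-transcription sketch using the open-source Vosk recognizer.
# Purely illustrative -- not the model or pipeline Google ships in Recorder.
import json
import wave

from vosk import KaldiRecognizer, Model


def transcribe(wav_path: str, model_dir: str = "vosk-model-small-en-us-0.15") -> str:
    """Transcribe a mono 16-bit WAV file entirely on the local machine."""
    wf = wave.open(wav_path, "rb")
    rec = KaldiRecognizer(Model(model_dir), wf.getframerate())

    pieces = []
    while True:
        data = wf.readframes(4000)
        if not data:
            break
        if rec.AcceptWaveform(data):                 # a chunk of speech was finalized
            pieces.append(json.loads(rec.Result()).get("text", ""))
    pieces.append(json.loads(rec.FinalResult()).get("text", ""))
    return " ".join(p for p in pieces if p)


if __name__ == "__main__":
    # Hypothetical recording; nothing here touches the network.
    print(transcribe("bedtime_story.wav"))
```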
Instead of squeezing models down to a couple of megs & constraining rendering to what a phone alone can do, what if you could use a full-power game engine to render gigabyte-sized models in realtime in the cloud, streaming the results onto your device to combine with the world? That’s the promise of Nvidia CloudXR (which looks similar to Microsoft Azure Remote Rendering), announced this week:
[YouTube]
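To make the split concrete, here’s a toy sketch of the general remote-rendering pattern (the device streams its pose up, rendered frames stream back); the port, the fake “renderer,” and the frame format are all invented for illustration and have nothing to do with Nvidia’s actual protocol:

```python
# Toy sketch of the remote-rendering split: the device streams its pose to a
# beefy renderer, and frames stream back. The port, the fake "renderer," and
# the frame format are invented for illustration -- not Nvidia's protocol.
import socket
import struct
import threading
import time

FRAME_W, FRAME_H = 160, 120        # keep the toy frame tiny
POSE_FMT = "6f"                    # x, y, z, yaw, pitch, roll
PORT = 50007


def toy_render_server() -> None:
    """Stands in for the GPU in the cloud: receive a pose, return a 'rendered' frame."""
    srv = socket.create_server(("127.0.0.1", PORT))
    conn, _ = srv.accept()
    with conn:
        while True:
            raw = conn.recv(struct.calcsize(POSE_FMT))
            if not raw:
                break
            x, *_ = struct.unpack(POSE_FMT, raw)
            shade = int(abs(x) * 255) % 256          # "rendering" = a solid color derived from the pose
            conn.sendall(bytes([shade]) * (FRAME_W * FRAME_H * 3))
    srv.close()


def device_loop(num_frames: int = 5) -> None:
    """Stands in for the phone/headset: send poses, composite the returned frames."""
    with socket.create_connection(("127.0.0.1", PORT)) as sock:
        for i in range(num_frames):
            pose = (i * 0.1, 0.0, 0.0, 0.0, 0.0, 0.0)   # pretend head tracking
            sock.sendall(struct.pack(POSE_FMT, *pose))
            frame = b""
            while len(frame) < FRAME_W * FRAME_H * 3:   # read one full frame back
                frame += sock.recv(65536)
            print(f"frame {i}: {len(frame)} bytes, first pixel value {frame[0]}")


if __name__ == "__main__":
    threading.Thread(target=toy_render_server, daemon=True).start()
    time.sleep(0.3)                                     # give the toy server a moment to start listening
    device_loop()
```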
I’ve cued up the most eye-popping fifteen seconds. Enjoy!
[YouTube] [Via Mark Henderson]
Scrappy VFX gangsta Daniel Hashimoto riffed on the latest Boston Dynamics robot launch vid to fun effect:
Amazingly, he gave himself just six hours to do the job, mixing in the new After Effects Content-Aware Fill & a volumetric capture of my teammate Manuel:
6 Hours of near competence:
VFX Breakdown of my #BostonDynamics Video pic.twitter.com/9L3CQawxQy
— Action Movie Dad (@ActionMovieKid) October 4, 2019
[YouTube]
Oh my—impressive:
Barack Obama’s voice was provided by Stable Voices, synthesized using an AI model trained on his speech patterns.
[YouTube]
“So simple, even a product manager can do it…” 😌 (Courtesy of my colleague Navin.) Click through for a higher-res version.
“The only problem with Microsoft,” Steve Jobs famously said, “is they just have no taste. They have absolutely no taste.” But critically:
And I don’t mean that in a small way, I mean that in a big way, in the sense that they don’t think of original ideas, and they don’t bring much culture into their products.
Here’s Marc Levoy providing a nice counterpoint, talking about art history & its relationship with modern computational photography:
[YouTube]
“The Camera Professor” (as Reddit called him) Marc Levoy gave a great overview today of his team’s work in computational photography, after which Annie Leibovitz came to the stage to discuss her craft & Pixel 4. “My IQ went up by at least 10 by the time he was done,” per the same thread. 😌 Enjoy!
(Starts around 47:12, just in case the deep link above doesn’t take you there directly)
[YouTube]
Check out this video on your compatible iPhone or Android device to don the horns & makeup of this shady lady, powered by the augmented reality tech my team has been building.
Watch @TamangPhan turn herself into the Mistress of Evil and see yourself as Maleficent with our new augmented reality technology! → https://t.co/zfMJF9MV3X#Maleficent pic.twitter.com/XgtLzupEw0
— YouTube (@YouTube) October 14, 2019
[YouTube]
A waterproof, stabilized GoPro that just happens to shoot 360º video? This seems like a serious rival for the Insta360, which I love. Check it out:
I’ve found sharing actual 360º content to be kind of a non-starter (too much of a pain, too uncertain how to consume), but being able to reframe shots in post is a ball. Here’s an Insta example from some zip lining our fam did this summer:
Peep this beautiful work from Future Deluxe:
Powered by Google’s machine learning platform – TensorFlow, Morphing Clay learns to recognise different human gestures and body movements, triggering the morphing of different pottery shapes and patterns in real time.
[Via Joost Korngold]
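In case you’re wondering what “TensorFlow recognizing gestures” tends to mean in practice, here’s a tiny hypothetical sketch: a Keras classifier over body-pose keypoints whose output drives a morph trigger. The keypoint source, gesture labels, and trigger_morph hook are all stand-ins, not Future Deluxe’s actual pipeline:

```python
# Tiny, hypothetical sketch of gesture -> morph triggering with TensorFlow/Keras.
# The keypoints, gesture labels, and morph hook are stand-ins, not the studio's real pipeline.
import numpy as np
import tensorflow as tf

NUM_KEYPOINTS = 17                      # e.g. a typical 2-D body-pose skeleton
GESTURES = ["rest", "raise_arm", "spin", "push"]

# A small dense classifier over flattened (x, y) keypoints.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_KEYPOINTS * 2,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(GESTURES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")


def trigger_morph(gesture: str) -> None:
    """Stand-in for whatever actually drives the on-screen clay."""
    print(f"morphing pottery for gesture: {gesture}")


def on_new_pose(keypoints: np.ndarray) -> None:
    """Call once per frame with a (17, 2) array of pose keypoints."""
    probs = model.predict(keypoints.reshape(1, -1), verbose=0)[0]
    gesture = GESTURES[int(np.argmax(probs))]
    if gesture != "rest":
        trigger_morph(gesture)


# Fake frame of keypoints just to show the call shape (a trained model is assumed).
on_new_pose(np.random.rand(NUM_KEYPOINTS, 2).astype("float32"))
```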
I’m so proud of Margot “Hollywood” Nack 😝🤘 and her team for helping make this happen! 🤖🔥 And if you’re a fellow Old, you might join me in marveling at Premiere Pro’s long journey from ’90s has-been to industrial-strength collaborative platform.
Get an inside look from Director Tim Miller and the creators of Paramount’s Terminator: Dark Fate as they take you behind the scenes into their workflow & production for editing and creating visual effects using Premiere Pro, After Effects, and other tools inside the Adobe Creative Cloud suite.
[YouTube]
“When—not if—I die in a fiery crash on Highway 101,” I’ve long told people, “you’ll remember that I called it that I’d be rubbernecking at some awesome aircraft buzzing overhead.” I’ve always figured it’d be a Blue Angel or An-124 or something, but now I can imagine it being Uncle Larry (who helps fund electric-plane startup Kitty Hawk), softly zipping by in one of these chariots of the future:
Sign me up for the last bit in particular:
Project Heaviside is Kitty Hawk’s latest high-performance electric VTOL vehicle. It is designed to be fast, small and exceedingly quiet, taking advantage of new possibilities to free people from traffic.
The Heaviside vehicle is roughly 100 times quieter than a regular helicopter. Once in the air, the vehicle blends into the background noise of a city or suburb, barely discernible to the human ear. Heaviside can travel from San Jose to San Francisco in 15 minutes and uses less than half the energy of a car.
[YouTube]
You might find this creepy as hell—and you might be right—but I guarantee you this will become commonplace, in one way or another (e.g. I walk into a cafe at work, know who all these people are, and know whom to talk to and why).
Velaga and Nefedov scraped photos of investors from Signal, a directory of venture capitalists in different industries, as well as Google Images. They declined to specify how many photos they have, though they said it is over 1,000. […]
When we tried out AngelFace at Vox’s office, the results were not impressive. The app didn’t recognize Casey Newton (not surprising) or Benchmark’s Bill Gurley (quite surprising). But whether it works is almost beside the point: peer-to-peer facial recognition software is still in its early stages and it’s likely to get more prevalent as time goes on. As Velaga said, “Anybody can make this technology — the tech we use, someone could figure it out watching YouTube videos.”
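If you’re curious just how low the bar has gotten, here’s a hedged little sketch of the core matching step using the open-source face_recognition library; the filenames and names are made up, and this obviously isn’t AngelFace’s actual code:

```python
# Minimal peer-to-peer-style face matching sketch using the open-source
# face_recognition library. Filenames and names are made up; not AngelFace's code.
import face_recognition

# A tiny "directory" of known investors (hypothetical headshot files).
known = {
    "Jane Investor": "headshots/jane_investor.jpg",
    "Joe Partner": "headshots/joe_partner.jpg",
}
known_names = list(known)
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in known.values()
]

# Someone you just spotted across the cafe (hypothetical snapshot).
query = face_recognition.load_image_file("snapshots/across_the_cafe.jpg")
for encoding in face_recognition.face_encodings(query):
    distances = face_recognition.face_distance(known_encodings, encoding)
    best = distances.argmin()
    if distances[best] < 0.6:                      # the library's usual match tolerance
        print(f"Looks like {known_names[best]} (distance {distances[best]:.2f})")
    else:
        print("No match in the directory")
```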
Thrilling stuff for Honda’s celebration of the Japanese Grand Prix:
Campaign writes,
The ad covers Honda’s success on the racetrack through the years, from the RA272’s victory at 1965’s Mexican Grand Prix to Max Verstappen’s race-winning pass at this year’s Austrian Grand Prix. A giant animated bull also features as a nod to the brand’s partnership with Scuderia Toro Rosso and Aston Martin Red Bull Racing.
[YouTube]
…and got 3D-captured via Google’s volumetric scanning array. “Days of Miracles & Wonder,” part 9,217…
Or, if you prefer, duct-tape a drone there—or a 360º cam, or all three! Stewart Carroll shows off fun ways to simulate biking like a bat out of hell, at substantially lower risk to one’s health (if perhaps not to one’s gear):
[YouTube]
Facebook Research recently unveiled Fashion++, “An AI system that proposes easy changes to a person’s outfit to make it more fashionable.”
Not to be outdone, my extended team is rolling out a Google Lens feature called “style ideas”:
If you see a leopard print skirt you like on social media, take a screenshot and use Lens in Google Photos to see how other people have styled similar looks. See a winter coat that catches your eye in a store, but need some inspiration on how to rock it? Just open Lens and point your camera.
This remains, as always, “the strangest life I’ve ever known…” ¯\_(ツ)_/¯
Eight or so years ago, my little son Finn was fascinated by this animation. (As we lay in the dark at bedtime, he noted out of the blue, “The dream is collapsing.”) Now that he’s 11, we can enjoy the original film & revisit the steampunk version together.
A peek behind the scenes:
[Via]
Pretty much like it says on the tin. Enjoy!
The 9-minute performance features 640 motorized LED spheres, an ancient Chinese weaving machine and a modern dancer. German motor winch producer KINETIC LIGHTS provided the vertical hoist systems for the LED spheres and control software. Russian RADUGADESIGN animated a complementing video backdrop and CPG Concepts from Hong Kong provided the dance choreography for British dancer Rose Alice.
Heh—I pass this along as a pitch-perfect stylistic riff on Downton Abbey—which, as expected, my mom loved. 😌
I also loved their “Spike TV” version from a few years back. “And their. lives. SUCK!!”
[YouTube]