“Using a GoPro Hero4 Session,” PetaPixel writes, “the clip puts you in the driver’s seat of a Hot Wheels car as it hurtles down 8 expertly crafted pieces of track connected by ‘teleporting tunnels…’ In all the car traversed about 200ft of track, most of it in the 4th section. As for the underwater drive, it’s real too.”
A little story of perseverance & defiance from my past…
When I got out of school with a History degree, I wasn’t exactly highly employable, so I used my self-taught design & coding skills to talk my way into an internship in NY. Most people at the agency were cool, but the admins were kind of petty tyrants who liked to deny things to the interns just to keep us in our (unpaid) places. That included denying us phones and nametags for our desks.
Not digging the indignity, one weekend I came into the office, stuck a full-time colleague’s nametag into a scanner, then brought it into Illustrator & generated a template from which I could bang out my own versions. I proceeded to create a ton of absurd variations—e.g. “Unmoved Mover,” “HMFIC,” etc.—that I then cycled through displaying. One variation said “Johnny Folk Hero,” and I ended up leaving it up for a while.
Later, a woman came back from a meeting laughing: “Someone was in there asking whether a graphics intern could do a project. ‘Isn’t there a Johnny something?’ she asked. ‘Oh, John Nack?’ ‘No, it’s like… something Native American, I think—Johnny Folk Hero or something??’”
This, as you might imagine, kind of made my life. 🙂
“Using traditional cameras and algorithms,” MIT News reports, “[Interactive Dynamic Video] looks at the tiny, almost invisible vibrations of an object to create video simulations that users can virtually interact with.” Check it out:
“This technique lets us capture the physical behavior of objects, which gives us a way to play with them in virtual space,” says CSAIL PhD student Abe Davis, who will be publishing the work this month for his final dissertation. “By making videos interactive, we can predict how objects will respond to unknown forces and explore new ways to engage with videos.”
Back in 2014, right after we’d both joined Google, my friend Alex Powell (who used to lead animation tools at DreamWorks) and I explored at length what it would mean to turn photos and videos into paintings. Some of that exploration paid off later in work like Halloweenify face painting.
Now it’s exciting to see how the industry has evolved, using machine learning to style images as paintings. Evidently the band Drive Like Maria is veeeery patient and dedicated to getting this result. PetaPixel writes,
“We figured that if we’d process 600 pictures each (Bert, Bjorn, and myself) it would take about 5 hours per person to process everything at 30 seconds per picture,” the band’s guitarist Nitzan Hoffmann told us. “By the time we started processing the Android version of Prisma was also available so we could use iPhones and Android phones at the same time.”
Despite some issues transferring JPEGs to and from the phones while keeping them in some kind of order, they eventually managed to convert every frame—all 1,828 of them—into a “painting” by hand before plugging them back into Premiere Pro. A little bit of fiddling later, they had their music video!