“What if you could hear color?” asks Play a Kandinsky, an interactive machine learning experiment created by Google Arts & Culture and Centre Pompidou. “Explore Vassily Kandinsky’s synesthesia and ‘play’ his pioneering masterpiece, Yellow-Red-Blue, with the help of machine learning.”
Visitors are guided to click on different colors in an animated canvas. There, they’ll learn what each hue represented to the artist—yellow sounded like trumpets to him, red was the color of violins playing, and looking at blue would elicit a melody of organs in his head.
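The experiment itself leans on models trained around Kandinsky’s writings and period recordings, but the core interaction is easy to picture: a clicked color gets matched to the instrument he associated with that hue. Here’s a purely illustrative sketch of that idea; the reference colors and function names are my own assumptions, not the project’s code:

```python
# Illustrative sketch only: map a clicked RGB value to the instrument
# Kandinsky associated with that hue (yellow -> trumpets, red -> violins,
# blue -> organ). Reference colors are assumptions, not the experiment's data.
import math

REFERENCE_COLORS = {
    "trumpets": (230, 200, 40),   # yellow
    "violins":  (200, 30, 40),    # red
    "organ":    (40, 60, 180),    # blue
}

def instrument_for_color(rgb):
    """Return the instrument whose reference color is nearest to the clicked pixel."""
    return min(
        REFERENCE_COLORS,
        key=lambda name: math.dist(rgb, REFERENCE_COLORS[name]),
    )

print(instrument_for_color((250, 210, 60)))  # -> "trumpets"
```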
“This progress is absolute insanity,” says the narrator, and I’d readily agree. Watch how this tech auto-inflates a 2D sketch into 3D & applies rigging.
The real footage in this video was captured by several cameras that are part of the rover’s entry, descent, and landing suite. The views include a camera looking down from the spacecraft’s descent stage (a kind of rocket-powered jet pack that helps fly the rover to its landing site), a camera on the rover looking up at the descent stage, a camera on the top of the aeroshell (a capsule protecting the rover) looking up at that parachute, and a camera on the bottom of the rover looking down at the Martian surface.
If that’s up your alley, check out this 4K video showing images of the red planet (captured earlier):
Elderfox Studios took photographic footage captured by various Mars rovers and compiled it into an absolutely astonishing 4K rendered video that reveals the surface of Mars. The original photos used in this short but stunning documentary were from NASA, JPL-Caltech, MSSS, Cornell University and ASU.
We’ve put a Spot in an art gallery, mounted it with a .68cal paintball gun, and given the internet the ability to control it. We’re livestreaming Spot as it frolics and destroys the gallery around it. Spot’s Rampage is piloted by YOU! Spot is remote-controlled over the internet, and we will select random viewers to take the wheel.
The high-key nutty (am I saying that right, kids?) thing is that they’ve devised a whole musical persona to go with it, complete with music videos:
L.L.A.M.A. is the first ever Lego mini-figure to be signed to a major label and the building toy group’s debut attempt at creating its own star DJ/producer.
A cross between a helmet headed artist like Marshmello and a corporate synergy-prone artificial entity like Lil Miquela, L.L.A.M.A., which stands for “Love, Laughter and Music Always” (not kidding), is introducing himself to the world today with a debut single, “Shake.”
It appears that this guy & pals fly around on giant luckdragon-style copies of our goldendoodle Seamus, and I am here for that.
Do I seem like the kind of guy who’d have tiny Lego representations of himself, his wife, our kids (the Micronaxx), and even our dog? What a silly question. 😌
I had a ball zipping around Death Valley, unleashing our little crew on sand dunes, lonesome highways, and everything in between. In particular I was struck by just how often I got more usable shallow depth-of-field images from my iPhone (which, like my Pixel, lets me edit the blur post-capture) than from my trusty, if aging, DSLR & L-series lens.
Jens writes that the melting snowflake video was shot on his Sony a6300 with either the Sony 90mm macro lens or the Laowa 60mm 2:1 macro lens. He does list the Sony a7R IV as his “main camera,” but it’s still impressive that this high-resolution video came from one of Sony’s entry-level offerings.
In one of my very earliest interactions with Adobe (in 1999, I believe, before I worked there), a PM called me with questions about how my design team collaborated across offices. Now, 20+ years later, I find myself married to an Adobe PM charged with enhancing just that. 😌
Check out some of the latest progress they’re making with PS, AI, and the mobile drawing app Fresco:
Invite to Edit in Photoshop, Illustrator and Fresco
The Invite to Edit feature in Photoshop, Illustrator, and Fresco allows asynchronous editing on all surfaces across the desktop, iPad, and iPhone (Fresco). Now collaborators can edit a shared cloud document, one at a time. Just save your .PSD or .AI files as cloud documents and send invitations for others to edit them. You can also edit files that have been shared with you. In addition, you can access your shared cloud documents on assets.adobe.com and the Creative Cloud Desktop app.
Collaborators will not be able to work on the file live alongside you, but they will be able to open up your work, make changes of their own, save it, and have those changes sync back to your machine. If someone is already editing the file, the new user will be given the choice to either make a copy or wait until the current editor is finished. It’s not quite Google Docs-style editing for Photoshop, but it should be easier than emailing a file back and forth.
A week ago I found myself shivering in the ghost town of Rhyolite, Nevada, alongside Adobe’s Russell Brown as we explored the possibilities of shooting 360º & traditional images at night. I’d totally struck out days earlier at the Trona Pinnacles as I tried to capture 360º star trails via either the Ricoh Theta Z or the Insta360 One X2, but this time Russell kindly showed me how to set up the Theta for interval shooting & additive exposure. I’m kinda pleased with the results:
Stellar times chilling (literally!) with Russell Preston Brown. 💫
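For the curious, the additive-exposure trick boils down to stacking the interval shots so that each pixel keeps the brightest value it ever recorded, letting the stars draw their trails across the frame. Here’s a rough off-camera sketch using Pillow and NumPy; the folder path and the max/“lighten” blend are my assumptions about how the camera’s interval compositing behaves, not its actual processing:

```python
# Sketch: stack interval shots into a star-trail composite by keeping the
# brightest value each pixel ever saw (a "lighten"/max blend).
# The folder path is a placeholder; the blend choice is an assumption.
from pathlib import Path

import numpy as np
from PIL import Image

frames = sorted(Path("theta_interval_shots").glob("*.jpg"))

trail = None
for path in frames:
    frame = np.asarray(Image.open(path), dtype=np.uint8)
    trail = frame if trail is None else np.maximum(trail, frame)

Image.fromarray(trail).save("star_trails.jpg")
```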
Inspired by the awesome work of photogrammetry expert Azad Balabanian, I used my drone at the Trona Pinnacles to capture some video loops as I sat atop one of the structures. My VFX-expert friend & fellow Google PM Bilawal Singh Sidhu used it to whip up this fun, interactive 3D portrait:
The file is big enough that I’ve had some trouble loading it on my iPhone. If that affects you as well, check out this quick screen recording:
The facial fidelity isn’t on par with the crazy little 3D prints of my head I got made 15 (!) years ago, but for footage coming from an automated flying robot, I’ll take it. 🤘😛
I’m excited to see this great team growing, especially as they’ve expanded the Photoshop imaging franchise to mobile & Web platforms. Check out some of the open roles:
No markers, no mocap cameras, no suit, no keyframing. This take uses 3 DSLR cameras, though, and is pretty far from being real-time. […]
Under the hood, it uses the #OpenPose ML network for 2D tracking of joints on each camera, and then a custom Houdini setup for triangulating the results into 3D, stabilizing them, and driving the rig (volumes, CHOPs, #kinefx, FEM – you name it 🙂)
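If you’re wondering what “triangulating the results into 3D” actually involves: once each DSLR is calibrated (a 3×4 projection matrix per camera), every joint’s 2D detections can be lifted to a single 3D point with a standard direct linear transform. Here’s a bare-bones sketch of that generic multi-view step; the matrices and pixel coordinates are placeholders, and this is textbook geometry rather than the author’s Houdini setup:

```python
# Sketch: lift one joint's 2D detections from several calibrated cameras
# into a 3D point via the direct linear transform (DLT).
# Projection matrices and pixel coordinates are placeholders; the real
# pipeline used OpenPose detections and a custom Houdini setup.
import numpy as np

def triangulate_joint(projections, points_2d):
    """projections: list of 3x4 camera matrices; points_2d: list of (x, y) pixels."""
    rows = []
    for P, (x, y) in zip(projections, points_2d):
        rows.append(x * P[2] - P[0])  # each view adds two linear constraints
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]              # homogeneous solution = smallest singular vector
    return X[:3] / X[3]     # back to Euclidean coordinates

# Usage with three cameras (P0, P1, P2 and the pixel values are illustrative):
# point_3d = triangulate_joint([P0, P1, P2], [(512, 300), (498, 310), (505, 295)])
```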
“If you want to be a better photographer, stand in front of more interesting things.” Seems like solid advice, especially when one gets the chance to sit atop the pinnacles of an ancient seabed & orbit them with a drone.
Greetings from Death Valley! I’ve been so busy running around with Adobe’s Russell Brown & some amazing models that I’ve had no time to post. I’m having a lot of fun using my new extended selfie stick & creating faux-drone shots like these, which I think you may really dig: