With my little nephew zooming towards school in his electric wheelchair, I’ve been thinking more about ways to make learning environments more open to everyone, and I’m grateful to see folks doing rad work like this:
Haunting work from Saad Moosajee:
Moosajee tells Colossal that the animation comprises more than 3,000 individual frames. Using 3-D and 2-D animation techniques, Moosajee and the team layered over the frames, integrating crowd simulation, charcoal washing, fire simulation, and stop motion powder texturing.
Over the last couple of years I’ve pointed out a number of cool projects (e.g. driving image search via your body movements) powered by my teammates’ efforts to 1) deliver great machine learning models, and 2) enable Web browsers to run them efficiently. Now they’ve released BodyPix 2.0 (see live demo):
We are excited to announce the release of BodyPix, an open-source machine learning model which allows for person and body-part segmentation in the browser with TensorFlow.js. With default settings, it estimates and renders person and body-part segmentation at 25 fps on a 2018 15-inch MacBook Pro, and 21 fps on an iPhone X.
BodyPix 2.0 has been released, including multi-person segmentation support and a new live demo!
— TensorFlow (@TensorFlow) November 18, 2019
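For folks who want to kick the tires, here’s a rough sketch of what using it can look like. This is just a sketch, not the team’s own example: it assumes the tfjs & body-pix `<script>` tags are already on the page (exposing the `bodyPix` global), and the `maskPeople` helper name is mine, not part of the library.

```javascript
// Minimal sketch of BodyPix 2.0 in the browser. Assumes these script tags
// are already on the page:
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/body-pix@2.0"></script>
// `maskPeople` is a hypothetical helper name for this example.
async function maskPeople(imageEl, canvasEl) {
  // Load the model with default settings.
  const net = await bodyPix.load();
  // segmentPerson returns a per-pixel mask covering all people in the image.
  const segmentation = await net.segmentPerson(imageEl);
  // Convert the segmentation into a mask image and composite it over the input.
  const mask = bodyPix.toMask(segmentation);
  bodyPix.drawMask(canvasEl, imageEl, mask, 0.7 /* opacity */);
  return segmentation;
}
```

You’d call it with an `<img>` or `<video>` element plus a `<canvas>` to draw into; 2.0 also adds multi-person variants (e.g. `segmentMultiPerson`) beyond what’s shown here.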
Spoiler alert… the phrase “hands down” comes into play. 😌
“Hey, y’all got a water desalination plant, ‘cause I’m salty as hell.” 🙃
First, some good news: Lightroom is planning to improve the workflow of importing images from an SD card:
I know that this is something that photographers deeply wanted, starting in 2010. I just wonder whether—nearly 10 years since the launch of iPad—it matters anymore.
My failure, year in & year out, to solve the problem at Adobe is part of what drove me to join Google in 2014. But even back then I wrote,
I remain in sad amazement that 4.5 years after the iPad made tablets mainstream, no one—not Apple, not Adobe, not Google—has, to the best of my knowledge, implemented a way to let photographers do what they beat me over the head for years requesting:
- Let me leave my computer at home & carry just my tablet & camera
- Let me import my raw files (ideally converted to vastly smaller DNGs), swipe through them to mark good/bad/meh, and non-destructively edit them, singly or in batches, with full raw quality.
- When I get home, automatically sync all images + edits to/via the cloud and let me keep editing there or on my Mac/PC.
This remains a bizarre failure of our industry.
Of course this wasn’t lost on the Lightroom team, but for a whole bunch of reasons, it’s taken this long to smooth out the flow, and during that time capture & editing have moved heavily to phones. Tablets represent a single-digit percentage of Snapseed session time, and I’ve heard the same from the makers of other popular editing apps. As phones improve & dedicated-cam sales keep dropping, I wonder how many people will now care.
On we go.
Here’s a 2-minute concept video followed by a 6-minute on-stage demo.
Certainly looks a lot more pleasant than this classic nightmare 🙃:
Being able to place PSD layers in space & then draw a path to animate an object along it (see attached screenshot) is pretty slick. The Aero iOS app is available now, with desktop & hopefully Android to follow.
Two things I should know by now:
The recent episode about The Great Bitter Lake Association—covering the camaraderie that emerged among sailors in the “Yellow Fleet” that became trapped by the Six-Day War & then stuck in the Suez for years—is totally fascinating. I hope you listen & enjoy. If nothing else, enjoy the homemade postage they created amongst themselves (which eventually became recognized by Egypt & thus usable for sending mail), the discovery of which sets the whole recounting in motion.
Dad-joke of a name notwithstanding 😌, this tech looks pretty slick:
To be clear, this method is not the same as Photoshopping an image to add in contrast and artificially enhance the colors that are absorbed most quickly by the water. It’s a “physically accurate correction,” and the results truly speak for themselves.
And as some wiseass in the comments remarks, “I can’t believe we’ve polluted our waters so much there are color charts now lying on the ocean floor.”
GANimals, man… GANimals. ¯\_(ツ)_/¯
“Imagine your Labrador’s smile on a lion or your feline’s finicky smirk on a tiger,” NVIDIA writes. “A team of NVIDIA researchers has defined new AI techniques that give computers enough smarts to see a picture of one animal and recreate its expression and pose on the face of any other creature.”