With creation tools in Google Earth, you can draw your own placemarks, lines, and shapes, then attach your own custom text, images, and videos to these locations. You can organize your story into a narrative and collaborate with others, and when you’ve finished, you can share it. By clicking the new “Present” button, you can fly your audience from place to place in your custom-made Google Earth narrative.
Take a look at how students & others are using it:
Here’s a 60-second-ish tour of the actual creation process:
With my little nephew zooming towards school in his electric wheelchair, I’ve been thinking more about ways to make learning environments more open to everyone, and grateful to see folks doing rad work like this:
Moosajee tells Colossal that the animation comprises more than 3,000 individual frames. Using 3-D and 2-D animation techniques, Moosajee and the team layered over the frames, integrating crowd simulation, charcoal washing, fire simulation, and stop-motion powder texturing.
We are excited to announce the release of BodyPix, an open-source machine learning model which allows for person and body-part segmentation in the browser with TensorFlow.js. With default settings, it estimates and renders person and body-part segmentation at 25 fps on a 2018 15-inch MacBook Pro, and 21 fps on an iPhone X.
BodyPix 2.0 has been released, including multi-person segmentation support and a new live demo!
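For the curious, here’s roughly what driving BodyPix 2.0 looks like in the browser. This is just a sketch based on the library’s published API (the element IDs are placeholders, and it assumes the TensorFlow.js and BodyPix scripts have already been loaded on the page, e.g. via CDN `<script>` tags):

```javascript
// Sketch of BodyPix 2.0 usage in the browser. Assumes the tfjs and
// @tensorflow-models/body-pix scripts are loaded, exposing a global
// `bodyPix` object. Element IDs ('person', 'output') are placeholders.
async function runBodyPix() {
  // Load the pretrained model (downloads weights on first call)
  const net = await bodyPix.load();

  // Grab a source image from the page
  const img = document.getElementById('person');

  // Segment the person: returns a per-pixel mask of person vs. background
  const segmentation = await net.segmentPerson(img);

  // Convert the segmentation to a drawable mask and composite it
  // over the image on a canvas, at 70% mask opacity
  const mask = bodyPix.toMask(segmentation);
  const canvas = document.getElementById('output');
  bodyPix.drawMask(canvas, img, mask, 0.7);
}
```

You’d call `runBodyPix()` from your page once the image has loaded; the 2.0 API also offers `segmentMultiPerson` for the new multi-person support.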
My failure, year in & year out, to solve the problem at Adobe is part of what drove me to join Google in 2014. But even back then I wrote,
I remain in sad amazement that 4.5 years after the iPad made tablets mainstream, no one—not Apple, not Adobe, not Google—has, to the best of my knowledge, implemented a way to let photographers do what they spent years beating me over the head requesting:
Let me leave my computer at home & carry just my tablet & camera
Let me import my raw files (ideally converted to vastly smaller DNGs), swipe through them to mark good/bad/meh, and non-destructively edit them, singly or in batches, with full raw quality.
When I get home, automatically sync all images + edits to/via the cloud and let me keep editing there or on my Mac/PC.
This remains a bizarre failure of our industry.
Of course this wasn’t lost on the Lightroom team, but for a whole bunch of reasons, it’s taken this long to smooth out the flow, and during that time capture & editing have moved heavily to phones. Tablets represent a single-digit percentage of Snapseed session time, and I’ve heard the same from the makers of other popular editing apps. As phones improve & dedicated-cam sales keep dropping, I wonder how many people will now care.
Here’s a 2-minute concept video followed by a 6-minute on-stage demo.
Certainly looks a lot more pleasant than this classic nightmare 🙃:
Being able to place PSD layers in space & then draw a path to animate an object along it (see attached screenshot) is pretty slick. The Aero iOS app is available now, with desktop & hopefully Android to follow.
Sometimes a 99% Invisible episode seems worthy but so arcane that I just won’t make time to listen to it. When I eventually do, though, I’m richly rewarded.
The recent episode about The Great Bitter Lake Association—covering the camaraderie that emerged among sailors in the “Yellow Fleet” that became trapped by the Six-Day War & then stuck in the Suez for years—is totally fascinating. I hope you listen & enjoy. If nothing else, enjoy the homemade postage they created amongst themselves (which eventually became recognized by Egypt & thus usable for sending mail), the discovery of which sets the whole recounting in motion.
To be clear, this method is not the same as Photoshopping an image to add contrast and artificially enhance the colors that are absorbed most quickly by the water. It’s a “physically accurate correction,” and the results truly speak for themselves.
And as some wiseass in the comments remarks, “I can’t believe we’ve polluted our waters so much there are color charts now lying on the ocean floor.”
“Imagine your Labrador’s smile on a lion or your feline’s finicky smirk on a tiger,” NVIDIA writes. “A team of NVIDIA researchers has defined new AI techniques that give computers enough smarts to see a picture of one animal and recreate its expression and pose on the face of any other creature.”
Happy Veterans Day, everyone. I’m proud of my first-responder brother (who volunteers his time to drive an ambulance in rural Illinois), and of my employer for helping vets & others better serve their communities:
A challenging, but often unrecognized, aspect of this work is the preparation required ahead of potential disasters. Therefore, Google.org is giving a $1 million grant to Team Rubicon to build out teams of volunteers, most of them military veterans, who will work alongside first responders on disaster-preparedness operations.
Anything that finally lets regular people tap into the vast (and vastly untapped) power of Illustrator’s venerable gradient mesh is a win, and this tech promises to let vector shapes function as light emitters that help cast shadows:
Requisite (?) Old Man Nack moment: though I have no idea if/how the underlying tech relates, I’m reminded of the Realtime Gradient-Domain Painting work that onetime Adobe researcher Jim McCann published back in 2008.
Photogrammetry (building 3D from 2D inputs—in this case several source images) is what my friend learned in the Navy to refer to as “FM technology”: “F’ing Magic.”
Side note: I know that saying “Time is a flat circle” is totally worn out… but, like, time is a flat circle, and what’s up with Adobe style-transfer demos showing the same (?) fishing village year after year? Seriously, compare 2013 to 2019. And what a super useless superpower I have in remembering such things. ¯\_(ツ)_/¯
Back in 2011, my longtime Photoshop boss Kevin Connor left Adobe & launched a startup (see NYT article) with Prof. Hany Farid to help news organizations, law enforcement, and others detect image manipulation. They were ahead of their time, and since then the problem of “fake news” has only gotten worse.
This new iOS & Android app (not yet available, though you can sign up for prerelease access) promises to analyze images, suggest effects, and keep the edits adjustable (though it’s not yet clear whether they’ll be editable as layers in “big” Photoshop).
I’m reminded of really promising Photoshop Elements mobile concepts from 2011 that went nowhere; of the Fabby app some of my teammates created before being acquired by Google; and of all I failed to enable in Google Photos. “Poo-tee-weet?” ¯\_(ツ)_/¯ Anyway, I’m eager to take it for a spin.