After nearly ten years in the market (tempus fugit, baby…), ol’ Phil could use a makeover, and it looks like some nice finesse (controlling which areas are used for sampling) is waiting in the wings:
[YouTube]
(Tangential reminder: You can build hand tracking into your mobile app right now using tech from my team.)
[YouTube]
Racks on racks on racks o’ cloud GPUs mean we’re gearing up to deliver some awesome experiences. Check out these PM gigs:
If you apply, please tell ‘em who sent ya. 😌
Tawanda Kanhema is a Zimbabwe-born PM working in Silicon Valley who’s spending his own time & money using equipment borrowed from Google & Insta360 to help map his home country. According to NPR,
In 2018, Kanhema applied to borrow a 360-degree camera through Google’s Street View camera loan program, and in the fall, he took a two-week trek through Zimbabwe with the equipment. There was a speedboat ride across the Zambezi River and a safari trip through a national park. Most of his time, though, was spent driving down the streets of Harare and the highways that connect Zimbabwe’s major cities. […]
Most recently, in March, the Mushkegowuk Council, in northern Ontario, paid him to document the network of ice roads that connect indigenous communities in the area… Next up, Kanhema says he might head to Greenland, or maybe Alaska or Mozambique — and you’ll be able to see everything he has seen by clicking on Google Maps.
Rock on, man!
[Via]
Much like his big bro did last year (see below), our son Henry stepped up to el micrófono to tell tales of meteorological mayhem. It took a village, with mom scoring a green-screen kit from her Adobe video pals & me applying some AR effects created by my talented teammates.
Here’s a behind-the-scenes peek at our advanced VFX dojo/living room. 😌
Blankets become backpacks, safety cards adorn cases, and even chunks of fuselage become sculptures as Lufthansa gives retired airline ephemera a new ticket to ride:
[YouTube]
All through October. Looks like fun—and great to see how far the world has come from the “Thoughts On Flash”/“Sympathy for the Devil” days.
We've partnered up with @Apple for The Big Draw!
Join free art sessions October 1–31 at Apple Stores to get hands-on experience with #AdobeFresco on iPad and instruction from top talent: https://t.co/ArOZTwF6wD #TodayatApple #TheBigDraw pic.twitter.com/eXlyqX7ghB
— Adobe Drawing (@AdobeDrawing) September 17, 2019
This purchase is made up of a 1,600-megawatt (MW) package of agreements and includes 18 new energy deals. Together, these deals will increase our worldwide portfolio of wind and solar agreements by more than 40 percent, to 5,500 MW—equivalent to the capacity of a million solar rooftops. Once all these projects come online, our carbon-free energy portfolio will produce more electricity than places like Washington D.C. or entire countries like Lithuania or Uruguay use each year.
Our latest agreements will also spur the construction of more than $2 billion in new energy infrastructure, including millions of solar panels and hundreds of wind turbines spread across three continents. In all, our renewable energy fleet now stands at 52 projects, driving more than $7 billion in new construction and thousands of related jobs.
“With great power…” I’m pleased to see some of my collaborators in augmented reality working to help fight deceptive content:
To make this dataset, over the past year we worked with paid and consenting actors to record hundreds of videos. Using publicly available deepfake generation methods, we then created thousands of deepfakes from these videos. The resulting videos, real and fake, comprise our contribution, which we created to directly support deepfake detection efforts. As part of the FaceForensics benchmark, this dataset is now available, free to the research community, for use in developing synthetic video detection methods.
I see that for today’s Doodle, Google commissioned a baker…
…to help produce this fun, pretzel-y creation:
It reminds me fondly of when I first joined the company & my son Finn ate his snack into the shape of the logotype. 🥰🥨
[YouTube]
🎶 DUN-duhn dun dun, duh dun… DUN-duhn dun dun, duh dun… 🎶🤖💀
We’re giving you the chance to remix the Terminator: Dark Fate trailer with Adobe Premiere Pro or Premiere Rush. Use source files from the film, plus a collection of free Adobe Stock assets — this is your moment to choose your fate.
The prizes don’t suck:
$10,000 cash
One-year Creative Cloud membership
Private screening for you + 50 friends
Chance to showcase your work at Adobe MAX
There are additional prizes for a young creator & runners-up. Let’s see what you can do!
Potentially cool idea:
Onyx puts the world’s smartest trainer in your pocket. With just the camera on your phone it counts your reps, corrects your form, brings tracking to nearly any exercise, and provides audio workouts personalized to your performance in real time.
[YouTube]
Looks fun, though I’d have had no idea how to create these: “open the app to the camera, navigate to the 3D option in the dropdown menu, and voila.”
The Verge writes,
Starting today, people with an iPhone X or newer can use “3D Camera Mode” to capture a selfie and apply 3D effects, lenses, and filters to it.
Snap first introduced the idea for 3D effects with Snaps when it announced its latest version of Spectacles, which include a second camera to capture depth. The effects and filters add things like confetti, light streaks, and miscellaneous animations.
[YouTube]
Look Ma, no depth sensor required.
People seem endlessly surprised not only that one is allowed to use an iPhone at Google, but that we also build great cross-platform tech for developers (e.g. ML Kit). In that vein I’m delighted to say that my team has now released an iOS version (supporting iPhone 6s and above) of the Augmented Faces tech we first released for ARCore for Android earlier this year:
It provides a high-quality, 468-point 3D mesh that lets users attach fun effects to their faces — all without a depth sensor on their smartphone. With the addition of iOS support rolling out today, developers can now create effects for more than a billion users. We’ve also made the creation process easier for both iOS and Android developers with a new face effects template.
Here’s a quick overview from my teammate Sam:
[YouTube]
Neat idea from El Pollo Loco + Snapchat:
El Pollo Loco… is looking to bring back lost Latino-themed murals in downtown Los Angeles, if only in virtual form. Beginning Sunday, open the Snapchat smartphone app, tap on the background to activate the World Lenses feature, and point the phone at the now blank wall. With that, the old murals come back to life on the screen.
[Via]
This slick tool helps retarget footage to “cinematic 16:9, square 1:1, or vertical 9:16, without losing track of your subject.” PetaPixel writes,
If you’re working with a timeline that includes multiple clips, there’s also an “Auto Reframe Sequence” option that allows you to select the aspect ratio you want and apply it to every clip in your timeline at once. Best of all, the effect isn’t only applied to the video footage, titles and motion graphics are also resized to fit the new aspect ratio.
Check it out:
[YouTube]
I know this will seem like small beans—literally—but over time it’ll be a big deal, and not just because it’s an instance of the engine I’m working to enhance.
Through Lens, you’ll get meal recommendations based on your tastes, dietary preferences, and allergies, along with a personalized score for products like Uncle Ben’s Ready Rice, Flavored Grains, Flavor Infusions, and beans.
VentureBeat goes on to note,
The growing list of things Lens can recognize covers over 1 billion products… The new feature follows a Lens capability that highlights top meals at a restaurant and a partnership with Wescover that supplies information about art and design installations. Lens also recently gained the ability to split a bill or calculate a tip after a meal; [and] to overlay videos atop real-world publications.
Check out the latter, from a couple of months ago. As I say, big things have small beginnings.
Unlock exclusive #NBAFinals content with Google Lens when you scan the Warriors or Raptors logo! Here's how: https://t.co/OOX0ckNy0o pic.twitter.com/fMhkauQcSY
— NBA (@NBA) May 31, 2019
I’m not kidding (or shilling for my employer) when I tell you that:
Now a new 10” sibling (with video chat!) has joined the product lineup, and it can do tons of stuff. Check it out:
[YouTube]
For years I’ve chuckled imagining David Lynch saying, “You did [X improbable thing] on your effing telephone.” In this case, it’s tech from 6D enabling a bunch of people to crowd-source a 3D environmental model in a matter of minutes using just their effing telephones:
[YouTube]
Insta360 camera + a 3D-printed custom airframe + one big-ass pneumatic cannon? Yeah, that’ll produce some fun footage. Enjoy!
[YouTube] [Via Bilawal Sidhu]
The Google Photos team is adding the ability to search for text in images, then copy & paste it. Stay tuned for details.
You spotted it! Starting this month, we’re rolling out the ability to search your photos by the text in them.
Once you find the photo you’re looking for, click the Lens button to easily copy and paste text. Take that, impossible wifi passwords 😏
— Google Photos (@googlephotos)
Ooh, I might need to try this with the Micronaxx:
Students can take a photo of a question or use their voice to ask a question, and we’ll find the most relevant resources from across the web. If they’re struggling to understand textbook content or handouts, they can take a picture of the page and check out alternative explanations of the same concepts.
Back in 2013 I was really taken with how Microsoft’s Photosynth technology could generate interactive hyperlapses for reliving walks, bike rides, etc., and I was sad when the tech died pretty soon after. Wearable cameras just weren’t ubiquitous, affordable, and high quality at the time.
Are those times a-changin’? Maybe: The incredibly tiny, albeit not incredibly cheap, Insta360 GO wearable cam promises to capture stabilized hyperlapses, as shown in the demo below. It seems that most commentators are focusing on the device’s 30-second video limit, but that doesn’t bother me. Honestly, as much as I really love the DJI Osmo I got at the end of last year, I’ve barely put it to use: I just haven’t needed a handheld, non-phone, non-360º way to capture video. The GO, by contrast, promises ultra lightweight wearability, photo capture, and a slick AirPods-style case for both recharging & data transfer. Check it out:
[YouTube]
It’s sort of hilarious to me that a 30-year-old silent file format has given its contentiously pronounced name to a genre of little looping animations—which in this case aren’t even silent! But who cares, just enjoy these fun little sequences: