The team shot outdoor scenes in Kiev, Ukraine, before recreating the entire town on a set inside the country’s largest airplane hangar. The “ground,” however, was built six feet off the floor, to allow space for trampolines built into the sidewalks. […]
For a scene where he falls sideways beside a woman on a bench, two practical shots were merged into a single one. The actor bounces off a specially crafted surface, and the camera was turned 90 degrees to film the woman, who was strapped into a bench built into a wall. The entire production was shot in just 12 days, a feat that required 200 artists and technicians.
Jeff’s mane is a little thin on top, and Gregg is more follicularly challenged. So, when Jeff returned from vacation to Taiwan, he was rather unhappy to find that Quick Selection was selecting only his head, missing the wispy bits of hair on top. As he proclaimed while making a quick whiteboard self-portrait, “I need to keep all the hair I’ve got!”
Smash-cut forward 13 years (cripes…), and researchers are developing a way to use multiple cameras to capture one’s hair, then reconstruct it in 3D (!). Check it out:
Here’s an example of what happens when a team leverages deep learning to make light fields practical. It’s gonna be really fun to try bringing these capture & display capabilities to mass scale.
More than 10,000 artworks from 208 partners worldwide have been captured with Art Camera and digitized in ultra-high resolution, from the fluffy fabric from which Vivienne Westwood tailored the Keith Haring “Witches” dress, to the almost photographic View of Delft by Vermeer. You can see these works in intricate detail simply by browsing on the Google Arts & Culture app. Explore Art Zoom online at g.co/ArtZoom, or download our free app for iOS or Android.
It’s crazy to think that this stuff works in real time on a telephone; just 7 years ago, here’s how Content-Aware Fill looked when applied to video:
Do I know what the hell is going on here? No, of course not! (Have you ever met me? 😌) But thankfully my colleagues Noah, Richard, and co. do, and their work promises a way to capture & display rich, dimensional photos (see the interactive example that lets you play with parallax & see depth; more are on the site). Check it out:
The glue my team developed to connect & coordinate machine learning, computer vision, and other processes is now available for developers:
The main use case for MediaPipe is rapid prototyping of applied machine learning pipelines with inference models and other reusable components. MediaPipe also facilitates the deployment of machine learning technology into demos and applications on a wide variety of different hardware platforms (e.g., Android, iOS, workstations).
If you’ve tried any of the Google AR examples I’ve posted in the last year+ (Playground, Motion Stills, YouTube Stories or ads, etc.), you’ve already used MediaPipe, and now you can use it to remove some drudgery when creating your own apps.
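For the curious, a MediaPipe pipeline is just a graph of “calculator” nodes described in a text proto. Here’s a minimal sketch modeled on the framework’s hello-world example (the PassThroughCalculator simply copies packets from input to output; real pipelines chain together calculators for things like TFLite inference, tracking, and rendering):

```
# Minimal MediaPipe graph config (pbtxt), adapted from the hello-world example.
# Packets flow from the "in" stream through a single calculator to the "out" stream.
input_stream: "in"
output_stream: "out"

node {
  calculator: "PassThroughCalculator"   # copies each input packet to its output
  input_stream: "in"
  output_stream: "out"
}
```

The same graph definition can then be run by the framework’s runtimes on Android, iOS, or a workstation, which is what makes it handy for moving a prototype from a desktop onto a phone.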
People have been trying to combine the power of vector & raster drawing/editing for decades. (Anybody else remember Creature House Expression, published by Fractal & then acquired by Microsoft? Congrats on also being old! 🙃) It’s a tough line to walk, and the forthcoming Adobe Fresco app is far from Adobe’s first bite at the apple (I remember you, Fireworks).
Back in 2010, I transitioned off of Photoshop proper & laid out a plan by which different mobile apps/modules (painting, drawing, photo library) would come together to populate a shared, object-centric canvas. Rather than build the monolithic (and now forgotten) Photoshop Touch that we eventually shipped, I’d advocated for letting Adobe Ideas form the drawing module, Lightroom Mobile form the library, and a new Photoshop-derived painting/bitmap editor form the imaging module. We could do the whole thing on a new imaging stack optimized around mobile GPUs.
Obviously that went about as well as conceptually related ’90s-era attempts at OpenDoc et al.—not because it’s hard to combine disparate code modules (though it is!), but because it’s really hard to herd cats across teams, and I am not Steve Fucking Jobs.
Sadly, I’ve learned, org charts do matter, insofar as they represent alignment of incentives & rewards—or lack thereof. “If you want to walk fast, walk alone; if you want to walk far, walk together.” And everyone prefers “innovate” vs. “integrate,” and then for bonus points they can stay busy for years paying down the resulting technical debt. “…Profit!”
But who knows—maybe this time crossing the streams will work. Or, see you again in 5-10 years the next time I write this post. 😌
Today, we’re introducing AR Beauty Try-On, which lets viewers virtually try on makeup while following along with YouTube creators to get tips, product reviews, and more. Thanks to machine learning and AR technology, it offers realistic, virtual product samples that work on a full range of skin tones. Currently in alpha, AR Beauty Try-On is available through FameBit by YouTube, Google’s in-house branded content platform.
M·A·C Cosmetics is the first brand to partner with FameBit to launch an AR Beauty Try-On campaign. Using this new format, brands like M·A·C will be able to tap into YouTube’s vibrant creator community, deploy influencer campaigns to YouTube’s 2 billion monthly active users, and measure their results in real time.
As I noted the other day with AR in Google Lens, big things have small beginnings. Stay tuned!
Hey, I’m as surprised as you probably are. 🙃 And yet here we are:
What if creating games could be as easy and fun as playing them? What if you could enter a virtual world with your friends and build a game together in real time? Our team within Area 120, Google’s workshop for experimental projects, took on this challenge. Our prototype is called Game Builder, and it is free on Steam for PC and Mac.
Whoa—what an eerie, funky thing to undertake: using a computer to anticipate a specific person’s idiosyncratic (yet predictable!) hand gestures based just on a recording of their speech:
Our speakers come from a diverse set of backgrounds: television show hosts, university lecturers and televangelists. They span at least three religions and discuss a large range of topics from commentary on current affairs through the philosophy of death, chemistry and the history of rock music, to readings in the Bible and the Qur’an.
As always, “This is the strangest life I’ve ever known…”
Earlier this week I was messing around with Apple’s new Reality Composer tool, thinking about fun Lego-themed interactive scenes I could whip up for the kids. After 10+ fruitless minutes of trying to get off-the-shelf models into USDZ format, however, I punted—at least for the time being. Getting good building blocks into one’s scene can still be a pain.
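If you want to try the same thing, Xcode ships a command-line converter, and a basic invocation looks roughly like this (the file names here are just placeholders); the fiddly part tends to be getting materials & textures to survive the trip:

```
# Convert an OBJ model to USDZ via Xcode's usdz_converter (file names are placeholders)
xcrun usdz_converter LegoBrick.obj LegoBrick.usdz
```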
This new 3D scanner app promises to make the digitization process much easier. I haven’t gotten to try it, but I’d love to take it for a spin:
Starting in July, new photos and videos from Drive won’t automatically show in Photos. Similarly, new photos and videos in Photos will not be added to the Photos folder in Drive. Photos and videos you delete in Drive will not be removed from Photos. Similarly, items you delete in Photos will not be removed from Drive. This change is designed to help prevent accidental deletion of items across products.
FWIW I bailed on this integration a long while back. Instead I now import images from my SLR & Insta360 to my Mac; edit/convert the selects to JPEG via Lightroom Classic & the Insta app; then drag the JPEGs into photos.google.com in a browser window (so they’re grouped with my phone pics/vids); and finally back up the originals to an external HD. It’s not exactly elegant, but it’s simple enough and it works.
We’ll see whether Facebook reverses course & deletes this deepfake now that it involves Mark Zuckerberg:
Oh my god. Artists uploaded a deep fake of Mark Zuckerberg to Instagram, saying he's in control of billions of people's stolen data and ready to control the future. Facebook previously said it would not delete similar videos under its policies. We'll see https://t.co/ufwV7zMyed pic.twitter.com/CBfVtGoaQd
When someone edits a text transcript of the video, the software combines all this collected data — the phonemes, visemes, and 3D face model — to construct new footage that matches the text input. This is then pasted onto the source video to create the final result.
In tests in which the fake videos were shown to a group of 138 volunteers, some 60 percent of participants thought the edits were real. That may sound quite low, but only 80 percent of that same group thought the original, unedited footage was also legitimate.
See previous: “Audio Photoshopping” using Adobe VoCo:
For the last couple of years I’ve randomly observed colleagues waving at Frankenstein-looking circuit boards & displays. As with many odd things at Google, I’ve channeled Bob Dylan—“Don’t Think Twice, It’s All Right”—and moved right along.
Now some of the fruits of that labor are coming to light, as the new Nest Hub Max has been announced, complete with the ability to recognize gestures (e.g. to stop music or an alarm) and faces (to show you personalized info like your calendar). Dieter Bohn offers a nice overview here:
It’s a small step, to be sure, but I’m excited to see that lensing a Raptors or (for good people 🙃) Warriors logo lets you see animated results, scores, stats, and more. Things are gonna get really interesting from here.
Ooh—I’ll have to show this story to my coin- and detector-loving 9yo son Henry.
Although Peter has been doing this for decades, his success rate of uncovering historic finds has grown since the launch of Google Earth, which helps him research farmlands to search and saves him from relying on outdated aerial photography. In December 2014, Peter noticed a square mark in a field with Google Earth. It was in this area where the Weekend Wanderers discovered the £1.5 million cache of Saxon coins. The coins are now on display in the Buckinghamshire County Museum by the queen’s decree.