Search for Halloween, jack-o’-lantern, human skeleton, cat, dog, or German Shepherd in the Google App or on your mobile browser (Android or iOS) and you’ll find these de-fright-ful AR characters on Google. Tap “View in 3D” to see each one up close and then bring it into your space with AR. Don’t forget to take pictures or videos!
As a kid, I spent hours fantasizing about the epic films I could make, if only I could borrow my friend’s giant camcorder & some dry ice. Apple 💯 has their finger on the pulse of such aspirational souls in this new ad:
It’s pretty insane to see what talented filmmakers can do with just a phone (or rather, a high-end camera/computer/monitor that happens to make phone calls) and practical effects:
Apple has posted an illuminating behind-the-scenes video for this piece. PetaPixel writes,
In one clip they show how they dropped the phone directly into rocks that they had fired upwards using a piston, and in another, they use magnets and iron filings with the camera very close to the surface. Going one step further, they use ferrofluid to create ripples that flow wildly on camera.
I’ve long, long been a fan of using brush strokes on paths to create interesting glyphs & lettering. I used to contort all kinds of vectors into Illustrator brushes, and as it happens, 11 years ago today I was sharing an interesting tutorial on creating smoky text:
Now Adobe engineers are looking to raise the game—a lot.
Combining user-drawn stroke inputs, the choice of brush, and the typographic properties of the text object, Project Typographic Brushes brings paint-style brushes and new type families to life in seconds.
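If you’re curious how path-following brushes work in general, the classic approach is to sample points along a curve and stamp overlapping marks whose size follows the stroke. Here’s a toy sketch of that idea (mine, not Adobe’s implementation):

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative sketch (not Adobe's code): approximate a "brush along a
# path" by stamping marks at samples of a parametric curve, with mark
# size modulated along the stroke the way pressure/taper would be.
t = np.linspace(0, 2 * np.pi, 200)
x, y = t, np.sin(t)                              # the path: a simple sine curve
taper = np.sin(np.linspace(0, np.pi, t.size))    # thick middle, thin ends

fig, ax = plt.subplots()
ax.scatter(x, y, s=200 * taper, c="black", alpha=0.3)  # overlapping stamps
ax.set_aspect("equal")
plt.savefig("brush_stroke.png")
```

A real brush engine would also rotate and deform the brush artwork along the path’s tangent, but the stamping idea is the same.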
My longstanding dream (dating back to the Bush Administration!) to have face relighting in Photoshop has finally come true—and then some. In case you missed it last week, check out Conan O’Brien meeting machine learning via Photoshop:
On PetaPixel, Allen Murabayashi from PhotoShelter shows what it can do on a portrait of Joe Biden—presenting this power as a potential cautionary tale:
Here’s a more in-depth look (starting around 1:46) at controlling the feature, courtesy of NVIDIA, whose StyleGAN tech powers the feature:
I love the fact that the Neural Filters plug-in provides a playground within Photoshop for integrating experimental new tech. Who knows what else might spring from Adobe-NVIDIA collaboration—maybe scribbling to create a realistic landscape, or even swapping expressions among pets (!?):
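For the technically curious: attribute edits like these generally work by nudging a latent code along a learned direction in the generator’s latent space. Here’s a toy sketch of the idea; the `generate` stub below is a stand-in of my own, not NVIDIA’s actual model:

```python
import numpy as np

# Toy illustration of latent-space editing (my sketch; the real feature
# uses NVIDIA's pretrained StyleGAN). generate() is a stand-in for a
# generator that maps a 512-D latent vector to an RGB image.
def generate(w: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(abs(int(w.sum() * 1e6)) % (2**32))
    return rng.random((256, 256, 3))

w = np.random.default_rng(0).standard_normal(512)  # latent code for one face
d = np.random.default_rng(1).standard_normal(512)  # a learned "lighting" direction
d /= np.linalg.norm(d)

# Sliding alpha moves the code along the direction, changing (roughly)
# just that one attribute while keeping the identity fixed.
for alpha in (-2.0, 0.0, 2.0):
    img = generate(w + alpha * d)
    print(f"alpha={alpha:+.1f} -> image {img.shape}")
```

With a real pretrained StyleGAN, `d` would be a direction found to correlate with lighting angle, and the slider in the UI would map to `alpha`.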
Man, these are stunning—and they’re all done in camera:
First coated in black, the anonymous subjects in Tim Tadder’s portraits are cloaked with hypnotic swirls and thick drips of bright paint. To create the mesmerizing images, the Encinitas, California-based photographer and artist pours a mix of colors over his sitters and snaps a precisely-timed shot to capture each drop as it runs down their necks or splashes from their chins.
Make plans to join us for a uniquely immersive and engaging digital experience, guaranteed to inspire. Three full days of luminary speakers, celebrity appearances, musical performances, global collaborative art projects, and 350+ sessions — and all at no cost.
A couple of years ago, adventure photographer and Visit Austria creator Peter Maier captured a stunning rainstorm timelapse titled ‘Tsunami from Heaven’… It was captured from the Alpengasthof Bergfried hotel in Carinthia, Austria, and shows a sudden cloudburst (AKA microburst or downburst) soaking an area around Lake Millstatt.
The notion of a metaverse, “a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space,” has long beguiled those of us captivated by augmented reality. Now Snap has been doing the hard work of making this more real, being able to scan & recognize one’s surroundings and impose a “persistent, shared AR world built right on top of your neighborhood.” Check it out:
This experience (presently available on just one street in London, but presumably destined to reach many others) builds on the AR Landmarkers work the company did previously. (As it happens, I think David Salesin—who led Adobe Research for many years—contributed to this effort during his stopover at Snap before joining Google Research.)
I’m delighted to share that my team’s work to add 3D & AR automotive results to Google Search—streaming in cinematic quality via cloud rendering—has now been announced! Check out the demo starting around 36:30:
You can easily check out what the car looks like in different colors, zoom in to see intricate details like buttons on the dashboard, view it against beautiful backdrops and even see it in your driveway. We’re experimenting with this feature in the U.S. and working with top auto brands, such as Volvo and Porsche, to bring these experiences to you soon.
Cloud streaming enables us to take file size out of the equation, so we can serve up super detailed visuals from models that are hundreds of megabytes in size:
The feature is currently in testing in the US, so there’s a chance you can experience it via Android right now (with iOS support planned). We hope to make it widely available soon, and I can’t wait to hear what you think!
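If you’re wondering how streaming sidesteps file size, here’s a toy sketch of the general pattern (my own illustration, not Google’s pipeline): the heavyweight model never leaves the server; the client just sends its camera pose and receives compressed frames:

```python
import io
import numpy as np
from PIL import Image

# Toy sketch of cloud rendering (my illustration, not Google's code):
# the multi-hundred-MB 3D model lives server-side; the client only ever
# receives small compressed frames rendered for its camera pose.
def render_frame(camera_pose: tuple[float, float, float]) -> np.ndarray:
    """Stand-in for a real renderer holding a huge model."""
    yaw, pitch, zoom = camera_pose
    shade = (np.sin(yaw) + np.cos(pitch)) * zoom
    return np.full((240, 320, 3), int(abs(shade) * 40) % 255, dtype=np.uint8)

def stream_frame(camera_pose) -> bytes:
    """What actually crosses the network: a JPEG, not the model."""
    buf = io.BytesIO()
    Image.fromarray(render_frame(camera_pose)).save(buf, format="JPEG")
    return buf.getvalue()

payload = stream_frame((0.5, 0.1, 2.0))
print(f"{len(payload)} bytes per frame, regardless of model size")
```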
This is one of the far-flung projects I’ve been glad to help support. New features keep arriving, like this one that’s available on Pixel and coming soon to iOS & Android:
When a friend has chosen to share their location with you, you can easily tap on their icon and then on Live View to see where and how far away they are, with overlaid arrows and directions that help you know where to go.
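For fun, here’s the napkin math behind an overlay like that (my sketch, not Google’s code): given your coordinates and your friend’s, the great-circle distance and initial compass bearing fall out of the haversine formula and `atan2`:

```python
import math

# Distance (haversine) and initial compass bearing between two lat/lng
# points -- the two numbers a "friend overlay" needs. My own sketch.
def distance_and_bearing(lat1, lon1, lat2, lon2):
    R = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)

    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))

    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

d, b = distance_and_bearing(37.7749, -122.4194, 37.8044, -122.2712)
print(f"{d/1000:.1f} km away, heading {b:.0f} degrees")  # SF to Oakland
```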
It’s also getting smarter about recognizing landmarks:
Soon, you’ll also be able to see nearby landmarks so you can quickly and easily orient yourself and understand your surroundings. Live View will show you how far away certain landmarks are from you and what direction you need to go to get there.
We recently launched Touch-to-fill for passwords on Android to prevent phishing attacks. To improve security on iOS too, we’re introducing a biometric authentication step before autofilling passwords. On iOS, you’ll now be able to authenticate using Face ID, Touch ID, or your phone passcode. Additionally, Chrome Password Manager allows you to autofill saved passwords into iOS apps or browsers if you enable Chrome autofill in Settings.
“Are they gonna use the Snapchat dancing hot dog to steer them or what?” — Henry Nack, age 11, bringing the 🔥 feature requests 😌
Funded by the US military and developed by a Seattle-based company called Command Sight, the new goggles will allow handlers to see through a dog’s eyes and give directions while staying out of sight and at a safe distance.
While looking through the dog’s eyes thanks to the goggles’ built-in camera, the handler can direct the dog by controlling an augmented reality visual indicator seen by the dog wearing the goggles.
That’s what I said—or at least what I thought—upon seeing Paul next to me in line for coffee at Google. I’d known his name & work for decades, especially via my time PM’ing features related to HDR imaging—a field in which Paul is a pioneer.
Anyway, Paul & his team have been at Google for the last couple of years, and he’ll be giving a keynote talk at VIEW 2020 on Oct 18th. “You can now register for free access to the VIEW Conference Online Edition,” he notes, “to livestream its excellent slate of animation and visual effects presentations.”
In this talk I’ll describe the latest work we’ve done at Google and the USC Institute for Creative Technologies to bridge the real and virtual worlds through photography, lighting, and machine learning. I’ll begin by describing our new DeepView solution for Light Field Video: Immersive Motion Pictures that you can move around in after they have been recorded. Our latest light field video techniques record six-degrees-of-freedom virtual reality where subjects can come close enough to be within arm’s reach. I’ll also present how Google’s new Light Stage system paired with Machine Learning techniques is enabling new techniques for lighting estimation from faces for AR and interactive portrait relighting on mobile phone hardware. I will finally talk about how both of these techniques may enable the next advances in virtual production filmmaking, infusing both light fields and relighting into the real-time image-based lighting techniques now revolutionizing how movies and television are made.
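One foundational idea here, familiar from the light-stage literature (this is my sketch of the published technique, not Google’s code), is that light transport is linear: a portrait under any environment is just a weighted sum of one-light-at-a-time (OLAT) photos, with weights sampled from the environment map:

```python
import numpy as np

# Reflectance-field relighting, as in the light-stage papers (my toy
# illustration). Because light transport is linear, a portrait under a
# new environment = sum over stage lights of (environment intensity in
# that light's direction) * (OLAT photo from that light).
n_lights, h, w = 8, 4, 4                     # tiny toy dimensions
olat = np.random.rand(n_lights, h, w, 3)     # one photo per stage light
env_weights = np.random.rand(n_lights, 3)    # RGB env map sampled per light

# relit[y, x, c] = sum_i env_weights[i, c] * olat[i, y, x, c]
relit = np.einsum("ic,ihwc->hwc", env_weights, olat)
print(relit.shape)  # (4, 4, 3): the portrait relit under the new environment
```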
A new project using sonification turns astronomical images from NASA’s Chandra X-Ray Observatory and other telescopes into sound. This allows users to “listen” to the center of the Milky Way as observed in X-ray, optical, and infrared light. As the cursor moves across the image, sounds represent the position and brightness of the sources.
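The mapping is simple enough to sketch (my toy version, not NASA’s actual pipeline): sweep across the image column by column, turning vertical position into pitch and brightness into volume:

```python
import numpy as np
from scipy.io import wavfile

# Toy sonification sketch (mine, not NASA's pipeline): each image
# column becomes a moment in time; a source's vertical position maps to
# pitch and its brightness maps to volume.
rng = np.random.default_rng(42)
image = rng.random((64, 256))          # stand-in for a telescope image
sr, col_dur = 22_050, 0.02             # sample rate; seconds per column

chunks = []
for col in image.T:                    # one column per time step
    t = np.linspace(0, col_dur, int(sr * col_dur), endpoint=False)
    row = int(np.argmax(col))          # brightest source in this column
    freq = 200 + 1800 * (1 - row / image.shape[0])          # higher up = higher pitch
    chunks.append(col[row] * np.sin(2 * np.pi * freq * t))  # brightness = volume

wavfile.write("sonified.wav", sr, np.concatenate(chunks).astype(np.float32))
```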
Seems like the Illustrator feature sneak-peeked last year is about to be released. Of course, I wouldn’t be a salty B 🙃 if I didn’t slip in some reference to all this going back ~15 years to Adobe Kuler & Illustrator Live Color & whatnot. (Okay, there—personal brand promise kept!)
Okay, not wars—how about enamel pins? Color me a little skeptical that the augmented reality portion of these pins will get much use, but hey, if it’s just a nice little bonus on something people already wanted, what the heck?
The tech itself relies on not one, but two neural networks: one to remove “foreign” shadows that are cast by unwanted objects like a hat or a hand held up to block the sun in your eyes, and the other to soften natural facial shadows and add “a synthetic fill light” to improve the lighting ratio once the unwanted shadows have been removed.
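The “lighting ratio” bit is worth a quick back-of-the-envelope illustration (the numbers below are made up; the real values come from Google’s trained network):

```python
# Lighting ratio = key-side illuminance vs. fill-side illuminance, the
# quantity the second network improves. Toy numbers for illustration.
key, fill = 8.0, 1.0                    # harsh: bright side vs. shadow side
print(f"before: {key / fill:.0f}:1")    # 8:1, dramatic/unflattering contrast
synthetic_fill = 3.0                    # what the network's added fill contributes
print(f"after:  {key / (fill + synthetic_fill):.0f}:1")  # 2:1, classic portrait ratio
```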
I have been waiting, I kid you not, since the Bush Administration to have an easy way to adjust lighting on faces. I just didn’t expect it to appear on my telephone before it showed up in Photoshop, but ¯\_(ツ)_/¯. Anyway, check out what you can now do on Pixel 4 & 5 devices:
This feature arrives, as PetaPixel notes, as one of several new Suggestions:
Nestled into a new ‘Suggestions’ tab that shows up first in the Photos editor, the options displayed there “[use] machine learning to give you suggestions that are tailored to the specific photo you’re editing.” For now, this only includes three options—Color Pop, Black & White, and Enhance—but more suggestions will be added “in the coming months” to deal specifically with portraits, landscapes, sunsets, and beyond.
Lastly, the photo editor overall has gotten its first major reorganization since we launched it in 2015: