“Lost your keys? Lost your job?” asks illustrator Don Moyer. “Look at the bright side. At least you’re not plagued by pterodactyls, pursued by giant robots, or pestered by zombie poodles. Life is good!”
Once the deal closes, BRIO XR will be joining an unparalleled community of engineers and product experts at Adobe – visionaries who are pushing the boundaries of what’s possible in 3D and immersive creation. Our BRIO XR team will contribute to Adobe’s Creative Cloud 3D authoring and experience design teams. Simply put, Adobe is the place to be, and in fact, it’s a place I’ve long set my sights on joining.
Adam Buxton recorded a conversation with his 5-year-old daughter discussing her thoughts on Princess Leia’s famous slave outfit. She is hilarious all by herself, but when he got The Brothers McLeod to animate her words, it all turned into pure comedic gold.
Can machines generate art like a human would? They already are.
Join us on March 30th, at 9AM Pacific for a live chat about what’s on the frontier of machine learning and art. Our team of panelists will break down how text prompts in machine learning models can create artwork like a human might, and what it all means for the future of artistic expression.
Aaron Hertzmann is a Principal Scientist at Adobe, Inc., and an Affiliate Professor at the University of Washington. He received a BA in Computer Science and Art & Art History from Rice University in 1996, and a PhD in Computer Science from New York University in 2001. He was a professor at the University of Toronto for 10 years, and has worked at Pixar Animation Studios and Microsoft Research. He has published over 100 papers in computer graphics, computer vision, machine learning, robotics, human-computer interaction, perception, and art. He is an ACM Fellow and an IEEE Fellow.
Ryan is a Machine Learning Engineer/Researcher at Adobe with a focus on multimodal image editing. He has been creating generative art using machine learning for years, but is most known for his recent work with CLIP for text-to-image systems. With a Bachelor’s in Psychology from the University of Utah, he is largely self-taught.
We are excited to announce that Photoshop now has full support for the WebP file format! WebP files can now be opened, created, edited, and saved in Photoshop without the need for a plug-in or preference setting.
To open a WebP file, simply select and open it as you would any other supported file or document. Beyond opening, you can now create, edit, and save WebP files: once you’re done editing your document, choose Save As or Save a Copy and select WebP from the file format drop-down menu.
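If you want to script the same round trip outside Photoshop, the Pillow library (an assumption here — it’s not part of Photoshop, just a common Python imaging package with WebP support) can open and save WebP files too. A minimal sketch:

```python
from PIL import Image

# Create a small test image and save it as WebP.
# lossless=True preserves pixels exactly; for lossy compression,
# drop it and pass e.g. quality=80 instead.
img = Image.new("RGB", (64, 64), color=(200, 30, 30))
img.save("swatch.webp", format="WEBP", lossless=True)

# Reopen the file and confirm the format round-tripped.
reopened = Image.open("swatch.webp")
print(reopened.format, reopened.size)  # WEBP (64, 64)
```

The filename `swatch.webp` is just an illustrative placeholder.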
V7 Labs has created new artificial intelligence-based (AI) software, delivered as a Google Chrome extension, that can detect artificially generated profile pictures — like the ones above — with a claimed 99.28% accuracy.
Creator Alberto Rizzoli walks through the flow in this video (more detailed than the one below).
[Adobe] announced a tool that allows consumers to point their phone at a product image on an ecommerce site—and then see the item rendered three-dimensionally in their living space. Adobe says the true-to-life size precision—and the ability to pull multiple products into the same view—set its AR service apart from others on the market. […]
Chang Xiao, the Adobe research scientist who created the tool, said many of the AR services currently on the market provide only rough estimates of a product’s size. Adobe encodes dimension information in an invisible marker embedded in the photos, which its computer vision algorithms can translate into more precisely sized projections.
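Adobe hasn’t published how its invisible marker actually works, so purely as a toy illustration of the idea — hiding machine-readable dimensions inside an ordinary-looking photo — here’s a sketch that packs two 16-bit measurements into the least-significant bits of the first few pixels (the real system would need to survive scaling, compression, and perspective, which this does not):

```python
import numpy as np

def embed_dimensions(pixels, width_mm, height_mm):
    """Toy 'invisible marker': hide two 16-bit dimensions (in mm)
    in the least-significant bits of an image's first 32 pixels."""
    bits = [(width_mm >> i) & 1 for i in range(16)] + \
           [(height_mm >> i) & 1 for i in range(16)]
    out = pixels.copy()
    flat = out.ravel()
    # Clear each pixel's lowest bit, then write one payload bit into it.
    flat[:32] = (flat[:32] & 0xFE) | np.array(bits, dtype=flat.dtype)
    return out

def read_dimensions(pixels):
    """Recover the two 16-bit dimensions from the pixel LSBs."""
    bits = pixels.ravel()[:32] & 1
    width_mm = sum(int(b) << i for i, b in enumerate(bits[:16]))
    height_mm = sum(int(b) << i for i, b in enumerate(bits[16:]))
    return width_mm, height_mm

photo = np.full((100, 100), 128, dtype=np.uint8)  # stand-in product photo
tagged = embed_dimensions(photo, width_mm=450, height_mm=820)
print(read_dimensions(tagged))  # (450, 820)
```

Flipping only the lowest bit changes each affected pixel’s value by at most 1, which is why the marker is invisible to the eye.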
The coaster was constructed from just under 90,000 individual Legos, and Chairudo estimates that it took him about 800 hours to build. The mammoth replica is more than 21 feet long, four feet wide, and almost five feet tall, with a total track length of 85 feet. It’s so big, Chairudo had to rent a separate room just to construct it.
Using Lego bricks, a Raspberry Pi mini-computer, an Arduino microcontroller, some off-the-shelf components like lenses, and 3D-printed components, IBM scientist Yuksel Temiz built a fully functional microscope to help him with his work. The materials cost around $300 and the microscope performs as well as scopes many times more expensive.
Google Photos’ portrait blur feature on Android will soon be able to blur backgrounds in a wider range of photos, including pictures of pets, food, and — my personal favorite — plants… Google Photos has previously been able to blur the background in photos of people. But with this update, Pixel owners and Google One subscribers will be able to use it on more subjects. Portrait blur can also be applied to existing photos as a post-processing effect.
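Google’s actual implementation is depth-aware and far more sophisticated, but the core compositing step — keep the subject sharp, blur only the background — can be sketched in a few lines of NumPy. Everything here (the naive box blur, the hand-drawn mask) is a stand-in for illustration:

```python
import numpy as np

def box_blur(img, radius):
    """Naive separable box blur; a stand-in for the real lens-style blur."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge").astype(float)
    # Horizontal pass, then vertical pass, each averaging a k-wide window.
    h = np.stack([padded[:, i:i + img.shape[1]] for i in range(k)]).mean(axis=0)
    return np.stack([h[i:i + img.shape[0], :] for i in range(k)]).mean(axis=0)

# Toy grayscale photo: a bright square "subject" on a dark background.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0

# Subject mask: 1.0 where the pet/plant/person is, 0.0 for background.
# (In the real feature, a segmentation model produces this mask.)
mask = np.zeros_like(img)
mask[10:30, 10:30] = 1.0

# Portrait effect: subject pixels pass through untouched; background
# pixels are taken from the blurred copy.
result = mask * img + (1 - mask) * box_blur(img, radius=3)
```

Because the blur runs on the whole frame and the mask only selects which copy wins per pixel, this same recipe applies as a post-processing effect to any existing photo for which you can produce a mask — which is exactly why the feature can extend from people to pets, food, and plants.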
Finnish photographer Juha Tanhua has been shooting an unusual series of “space photos.” While the work may look like astrophotography images of stars, galaxies, and nebulae, they were actually captured with a camera pointed down, not up. Tanhua created the images by capturing gasoline puddles found on the asphalt of parking lots.
Check out the post for more images & making-of info.
My now-teammates’ work on Neural Filters is exactly what made me want to return to the ‘Dobe, and I’m thrilled to get to build upon what they’ve been doing. It’s great to see Fast Company recognizing this momentum:
For putting Photoshop wizardry within reach
Adobe’s new neural filters use AI to bring point-and-click simplicity to visual effects that would formerly have required hours of labor and years of image-editing expertise. Using them, you can quickly change a photo subject’s expression from deadpan to cheerful. Or adjust the direction that someone is looking. Or colorize a black-and-white photo with surprising subtlety. Part of Adobe’s portfolio of “Sensei” AI technologies, the filters use an advanced form of machine learning known as generative adversarial networks. That lets them perform feats such as rendering parts of a face that weren’t initially available as you edit a portrait. Like all new Sensei features, the neural filters were approved by an Adobe ethics committee and review board that assess AI products for problems stemming from issues such as biased data. In the case of these filters, this process identified an issue with how certain hairstyles were rendered and fixed it before the filters were released to the public.
Waaaay back in the way back, we had fun enabling “Safe, humane tourist-zapping in Photoshop Extended,” using special multi-frame processing techniques to remove transient elements in images. Those techniques have remained obscure yet powerful. In this short tutorial, Julieanne Kost puts ’em to good use:
In this video (Combining Video Frames to Create Still Images), we’re going to learn how to use Smart Objects in combination with Stack Modes to combine multiple frames from a video (exported as an image sequence) into a single still image that appears to be a long exposure, yet still freezes motion.
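Photoshop’s Stack Modes do this with a couple of clicks, but the underlying math is simple enough to sketch in NumPy (a toy illustration, not Adobe’s implementation): the Median mode discards transient elements that appear in only a minority of frames, while the Mean mode averages frames into a long-exposure look.

```python
import numpy as np

# A tiny stand-in for an exported image sequence: five 4x4 grayscale
# frames of a static scene (value 100) with a transient "tourist"
# (value 255) passing through a different pixel in each frame.
frames = np.full((5, 4, 4), 100, dtype=np.uint8)
for t in range(5):
    frames[t, 1, t % 4] = 255

# Stack Mode "Median": the transient occupies each pixel in only a
# minority of frames, so the per-pixel median recovers the clean scene.
clean = np.median(frames, axis=0).astype(np.uint8)
print(clean[1])  # all 100s — tourist removed

# Stack Mode "Mean": averaging simulates a long exposure, leaving a
# faint trail along the transient's path.
long_exposure = frames.mean(axis=0)
```

The same median trick is what makes the old “tourist-zapping” technique work: shoot a handful of frames from a tripod, stack them, and anything that moves between frames simply vanishes.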
Roughly forever ago, when I was pushing the idea of extending the Photoshop compositing pipeline to include plug-in modules (which we did kinda succeed with in the form of 3D layers—now sadly ripped out), I loved the idea of layers that could emit & control light. We didn’t get there, of course, but I’m happy to see folks like Boris FX offering some cool new controls:
In Death Valley a couple of weeks ago, my 12yo Mini Me Henry & I had fun creating little narratives in the sand. I have to say, it’s pretty cool how far a kid can get these days with a telephone & a handful of plastic bricks! Here’s a little gallery we made together.
Elsewhere, I’m perpetually amazed at what folks can do with enough time, talent, and willpower: