Outdoor Photographer reviews Adobe Super Resolution

Great to see Adobe AI getting some love:

Adobe Super Resolution technology is the best solution I’ve yet found for increasing the resolution of digital images. It doubles the linear resolution of your file, quadrupling the total pixel count while preserving fine detail. Super Resolution is available in both Adobe Camera Raw (ACR) and Lightroom and is accessed via the Enhance command. And because it’s built-in, it’s free for subscribers to the Creative Cloud Photography Plan.
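To make the math concrete, here’s a quick sketch (the 24-megapixel source size is just an illustrative example, not anything specific to ACR):

```python
# Super Resolution doubles each linear dimension, so pixel count goes up 4x.
width, height = 6000, 4000                      # hypothetical 24 MP source
new_width, new_height = width * 2, height * 2   # 12000 x 8000 after Enhance
print(f"{width * height / 1e6:.0f} MP -> {new_width * new_height / 1e6:.0f} MP")
# prints: 24 MP -> 96 MP
```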

Check out the whole article for details.

Snapchat rolls out Landmarker creation tools; Disney deploys them

Despite Pokémon Go’s continuing (and to me, slightly baffling) success, I’ve long been much more bullish on Snap than Niantic for location-based AR. That’s in part because of their very cool world lens tech, which they’ve been rolling out more widely. Now they’re opening up the creation flow:

“In 2019, we started with templates of 30 beloved sites around the world which creators could build upon called Landmarkers… Today, we’re launching Custom Landmarkers in Lens Studio, letting creators anchor Lenses to local places they care about to tell richer stories about their communities through AR.”

Interesting stats:

At its Lens Fest event, the company announced that 250,000 lens creators from more than 200 countries have made 2.5 million lenses that have been viewed more than 3.5 trillion times. Meanwhile, on Snapchat’s TikTok clone Spotlight, the app awarded 12,000 creators a total of $250 million for their posts. The company says that more than 65% of Spotlight submissions use one of Snapchat’s creative tools or lenses.

On a related note, Disney is now using the same core tech to enable group AR annotation of the Cinderella Castle. Seems a touch elaborate:

  • Park photographer takes your pic
  • That pic ends up in your Disney app
  • You point that app at the castle
  • You see your pic on the castle
  • You then take a pic of your pic on the castle… #YoDawg 🙃

NASA celebrates Hubble’s 32nd birthday with a lovely photo of five clustered galaxies

Honestly, from DALL•E innovations to classic mind-blowers like this, I feel like my brain is cooking in my head. 🙃 Take ‘er away, science:

Bonus madness (see thread for details):

A free online face-swapping tool

My old boss on Photoshop, Kevin Connor, used to talk about the inexorable progression of imaging tools from the very general (e.g. the Clone Stamp) to the more specific (e.g. the Healing Brush). In the process, high-complexity, high-skill operations were rendered far more accessible—arguably to a fault. (I used to joke that, believe it or not, drop shadows were cool before Photoshop made them easy. ¯\_(ツ)_/¯)

I think of that observation when seeing things like the Face Swap tool from Icons8. What once took considerable time & talent in an app like Photoshop is now trivially fast (and free!) to do. “Days of Miracles & Wonder,” though we hardly even wonder now. (How long will it take DALL•E to go from blown minds to shrugged shoulders? But that’s a subject for another day.)

Substance for Unreal Engine 5

I’m no 3D artist (had I but world enough and time…), but I sure love their work & anything that makes it faster and easier. Perhaps my most obscure point of pride from my Photoshop years is that we added per-layer timestamps into PSD files, so that Pixar could more efficiently render content by noticing which layers had actually been modified.
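The trick is just timestamp-based cache invalidation. Here’s a minimal sketch of how a renderer might exploit per-layer timestamps (the data structures are hypothetical stand-ins, not the actual PSD format or Pixar’s pipeline):

```python
# Sketch: skip re-rendering any layer whose timestamp hasn't changed.

def render_changed_layers(layers: list[dict], cache: dict) -> dict:
    """Re-render only the layers modified since the last pass."""
    for layer in layers:
        cached = cache.get(layer["name"])
        if cached and cached["timestamp"] == layer["timestamp"]:
            continue  # unchanged since last render; reuse cached pixels
        cache[layer["name"]] = {
            "timestamp": layer["timestamp"],
            "pixels": expensive_render(layer),
        }
    return cache

def expensive_render(layer: dict) -> str:
    return f"rendered:{layer['name']}"  # placeholder for the real work

cache: dict = {}
layers = [{"name": "background", "timestamp": 100},
          {"name": "character", "timestamp": 205}]
render_changed_layers(layers, cache)   # first pass renders both layers
layers[1]["timestamp"] = 310           # only "character" was edited
render_changed_layers(layers, cache)   # second pass re-renders just that one
```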

Anyway, now that Adobe has made a much bigger bet on 3D tooling, it’s great to see new Substance support coming to Unreal Engine:

The Substance 3D plugin (BETA) enables the use of Substance materials directly in Unreal Engine 5 and Unreal Engine 4. Whether you are working on games or visualization, or deploying across mobile, desktop, or XR, Substance delivers a unique experience with optimized features for enhanced productivity.

Work faster, be more productive: Substance parameters allow for real-time material changes and texture updates.

Substance 3D for Unreal Engine 5 contains the plugin for Substance Engine.

Access over 1,000 high-quality, tweakable, and export-ready 4K materials with presets in the Substance 3D Assets library. You can explore community-contributed assets in the community assets library.

The Substance Assets platform is a vast library containing high-quality PBR-ready Substance materials and is accessible directly in Unreal through the Substance plugin. These customizable Substance files can easily be adapted to a wide range of projects.

Frame.io is now available in Premiere & AE

To quote this really cool Adobe video PM who also lives in my house 😌, and who just happens to have helped bring Frame.io into Adobe,

Super excited to announce that Frame.io is now included with your Creative Cloud subscription. Frame panels are now included in After Effects and Premiere Pro. Check it out!

From the integration FAQ:

Frame.io for Creative Cloud includes real-time review and approval tools with commenting and frame-accurate annotations, accelerated file transfers for fast uploading and downloading of media, 100GB of dedicated Frame.io cloud storage, the ability to work on up to 5 different projects with another user, free sharing with an unlimited number of reviewers, and Camera to Cloud.


Behind Peacemaker’s joyously bizarre title sequence

I generally enjoyed HBO’s Peacemaker series, albeit with a caveat I shared with the kids: even I found the profanity excessive, and “too much salt spoils the soup.” The whacked-out intro music & choreography, though, I loved without reservation:

Here the creators give a peek into how it was made:

And here a dance troupe in Bangladesh puts their spin on it:

https://twitter.com/JamesGunn/status/1507405773252575234

DALL•E 2 looks too amazing to be true

There’s no way this is real, is there?! I think it must use NFW technology (No F’ing Way), augmented with a side of LOL WTAF. 😛

Here’s an NYT video showing the system in action:

The NYT article offers a concise, approachable description of how the approach works:

A neural network learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of avocado photos, for example, it can learn to recognize an avocado. DALL-E looks for patterns as it analyzes millions of digital images as well as text captions that describe what each image depicts. In this way, it learns to recognize the links between the images and the words.

When someone describes an image for DALL-E, it generates a set of key features that this image might include. One feature might be the line at the edge of a trumpet. Another might be the curve at the top of a teddy bear’s ear.

Then, a second neural network, called a diffusion model, creates the image and generates the pixels needed to realize these features. The latest version of DALL-E, unveiled on Wednesday with a new research paper describing the system, generates high-resolution images that in many cases look like photos.

Though DALL-E often fails to understand what someone has described and sometimes mangles the image it produces, OpenAI continues to improve the technology. Researchers can often refine the skills of a neural network by feeding it even larger amounts of data.
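If you think in code, here’s a deliberately crude toy sketch of that two-stage pipeline. Every name below is made up for illustration; real diffusion models use a trained neural network to predict and remove noise over many steps:

```python
import random

# Toy two-stage pipeline, per the NYT description:
# 1) map the text prompt to a set of target image features (a "prior");
# 2) a diffusion-style loop turns noise into an "image" matching them.

def text_to_features(prompt: str, size: int = 8) -> list[float]:
    """Stand-in for the prior: prompt -> abstract feature vector."""
    random.seed(prompt)  # deterministic toy "embedding" of the prompt
    return [random.random() for _ in range(size)]

def diffusion_sample(features: list[float], steps: int = 50) -> list[float]:
    """Stand-in for diffusion: start from noise, nudge toward features."""
    image = [random.random() for _ in features]  # pure noise
    for _ in range(steps):
        # A real model predicts and removes noise with a neural network;
        # here we just move a little closer to the target each step.
        image = [x + 0.1 * (f - x) for x, f in zip(image, features)]
    return image

image = diffusion_sample(text_to_features("a teddy bear playing a trumpet"))
```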

I can’t wait to try it out.

MyStyle promises smarter facial editing based on knowing you well

A big part of my rationale in going to Google eight (!) years ago was that a lot of creativity & expressivity hinge on having broad, even mind-of-God knowledge of one’s world (everywhere you’ve been, who’s most important to you, etc.). Given access to one’s whole photo corpus, a robot assistant could thus do amazing things on one’s behalf.

In that vein, MyStyle proposes to do smarter face editing (adjusting expressions, filling in gaps, upscaling) by being trained on 100+ images of an individual face. Check it out:
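For a rough intuition of what training on 100+ photos of one person buys you, here’s a toy sketch of the “personalized prior” idea (purely illustrative numbers and names, not MyStyle’s actual code):

```python
# Toy sketch: nudge a generic face model (here just a vector of numbers)
# toward one person's photo set, then edit within that personalized space.

def personalize(prior: list[float], photos: list[list[float]],
                lr: float = 0.05, steps: int = 200) -> list[float]:
    """Fine-tune the prior toward the statistics of one person's photos."""
    for step in range(steps):
        photo = photos[step % len(photos)]
        # Move each "weight" slightly toward the observed photo.
        prior = [w + lr * (p - w) for w, p in zip(prior, photo)]
    return prior

generic_prior = [0.5, 0.5, 0.5]                  # stand-in generic face model
my_photos = [[0.8, 0.2, 0.6], [0.7, 0.3, 0.5]]   # stand-in personal photos
my_prior = personalize(generic_prior, my_photos)
# Expression edits, inpainting, and upscaling now happen in a space that
# already "knows" what this particular face looks like.
```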

Adobe is acquiring BRIO XR

Exciting news!

Once the deal closes, BRIO XR will be joining an unparalleled community of engineers and product experts at Adobe – visionaries who are pushing the boundaries of what’s possible in 3D and immersive creation. Our BRIO XR team will contribute to Adobe’s Creative Cloud 3D authoring and experience design teams. Simply put, Adobe is the place to be, and in fact, it’s a place I’ve long set my sights on joining.  

AI: Live panel discussion Wednesday about machine learning & art

Two of my Adobe colleagues (Aaron Hertzmann & Ryan Murdock) are among those set to discuss these new frontiers on Wednesday:

Can machines generate art like a human would? They already are. 

Join us on March 30th, at 9AM Pacific for a live chat about what’s on the frontier of machine learning and art. Our team of panelists will break down how text prompts in machine learning models can create artwork like a human might, and what it all means for the future of artistic expression. 

Bios:

Aaron Hertzmann

Aaron Hertzmann is a Principal Scientist at Adobe, Inc., and an Affiliate Professor at University of Washington. He received a BA in Computer Science and Art & Art History from Rice University in 1996, and a PhD in Computer Science from New York University in 2001. He was a professor at the University of Toronto for 10 years, and has worked at Pixar Animation Studios and Microsoft Research. He has published over 100 papers in computer graphics, computer vision, machine learning, robotics, human-computer interaction, perception, and art. He is an ACM Fellow and an IEEE Fellow.

Ryan Murdock

Ryan is a Machine Learning Engineer/Researcher at Adobe with a focus on multimodal image editing. He has been creating generative art using machine learning for years, but is most known for his recent work with CLIP for text-to-image systems. With a Bachelor’s in Psychology from the University of Utah, he is largely self-taught.

Fantastic Shadow Beasts (and Where To Find Them)

I’m not sure who captured this image (conservationist Beverly Joubert, maybe?), or whether it’s indeed the National Geographic Picture of the Year, but it’s stunning no matter what. Take a close look:

Elsewhere I love this compilation of work from “Shadowologist & filmmaker” Vincent Bal:

A post shared by WELCOME TO THE UNIVERSE OF ART (@artistsuniversum) on Instagram.

Photoshop adds full support for WebP

Huzzah! Here’s the scoop:

——–

Full support for WebP

We are excited to announce that Photoshop now has full support for the WebP file format! WebP files can now be opened, created, edited, and saved in Photoshop without the need for a plug-in or preference setting.


To open a WebP file, simply select and open the file in the same manner as you would any other supported file or document. In addition to open capabilities, you can now create, edit, and save WebP files. Once you are done editing your document, open Save As or Save a Copy and select WebP from the options provided in the file format drop-down menu to save your WebP file.

To learn more, check out Work with WebP files in Photoshop.

Adobe demos new screen-to-AR shopping tech

Cool idea:

[Adobe] announced a tool that allows consumers to point their phone at a product image on an ecommerce site—and then see the item rendered three-dimensionally in their living space. Adobe says the true-to-life size precision—and the ability to pull multiple products into the same view—set its AR service apart from others on the market. […]


Chang Xiao, the Adobe research scientist who created the tool, said many of the AR services currently on the market provide only rough estimations of the size of the product. Adobe is able to encode dimensions information in its invisible marker code embedded in the photos, which its computer vision algorithms can translate into more precisely sized projections.
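In other words, the marker turns size estimation into size decoding. Here’s a minimal sketch of that idea; the marker format and every function name below are invented for illustration:

```python
# Toy sketch: decode physical dimensions from an embedded marker, then
# scale a unit-sized 3D model so it appears true-to-life in the scene.

def decode_marker(marker_bits: tuple[int, int, int]) -> dict[str, float]:
    """Hypothetical decoder: the marker encodes width/height/depth in cm."""
    w_cm, h_cm, d_cm = marker_bits  # a real decoder would read pixel data
    return {"width": float(w_cm), "height": float(h_cm), "depth": float(d_cm)}

def true_to_life_scale(dims_cm: dict[str, float],
                       cm_per_unit: float = 100.0) -> dict[str, float]:
    """Per-axis scale factors for a 1x1x1 model, given real dimensions."""
    return {axis: cm / cm_per_unit for axis, cm in dims_cm.items()}

dims = decode_marker((80, 75, 90))   # e.g. an armchair
scale = true_to_life_scale(dims)     # {'width': 0.8, 'height': 0.75, ...}
```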

Giant Lego “wooden” rollercoaster

Did you think there is a world in which I don’t post this, my friend?
Happily, there is no such world. 😝

Per Digital Trends,

The coaster was constructed from just under 90,000 individual Legos, and Chairudo estimates that it took him about 800 hours to build. The mammoth replica is more than 21 feet long, four feet wide, and almost five feet tall, with a total track length of 85 feet. It’s so big, Chairudo had to rent a separate room just to construct it.

Google Photos is bringing Portrait Blur to Android subscribers

Nice to see my old team’s segmentation tech roll out more widely.

The Verge writes,

Google Photos’ portrait blur feature on Android will soon be able to blur backgrounds in a wider range of photos, including pictures of pets, food, and — my personal favorite — plants… Google Photos has previously been able to blur the background in photos of people. But with this update, Pixel owners and Google One subscribers will be able to use it on more subjects. Portrait blur can also be applied to existing photos as a post-processing effect.
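Under the hood, this kind of post-processing effect boils down to “segment, blur, composite.” Here’s a rough sketch using Pillow and NumPy, assuming you already have a subject mask from a segmentation model (Google’s actual pipeline is, of course, far more sophisticated):

```python
import numpy as np
from PIL import Image, ImageFilter

def portrait_blur(image: Image.Image, mask: np.ndarray,
                  radius: int = 8) -> Image.Image:
    """Blur the background of `image`, keeping the subject sharp.

    `mask` is a float array in [0, 1] with 1.0 where the subject is;
    a real pipeline would get it from a learned segmentation model.
    """
    blurred = image.filter(ImageFilter.GaussianBlur(radius))
    fg = np.asarray(image).astype(float)
    bg = np.asarray(blurred).astype(float)
    alpha = mask[..., None]                  # broadcast over RGB channels
    out = alpha * fg + (1.0 - alpha) * bg    # composite sharp over blurred
    return Image.fromarray(out.astype(np.uint8))
```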

Asphalt Galaxies

PetaPixel writes,

Finnish photographer Juha Tanhua has been shooting an unusual series of “space photos.” While the work may look like astrophotography images of stars, galaxies, and nebulae, they were actually captured with a camera pointed down, not up. Tanhua created the images by capturing gasoline puddles found on the asphalt of parking lots.

Check out the post for more images & making-of info.

Adobe makes list of “10 most innovative companies in AI of 2022”

My now-teammates’ work on Neural Filters is exactly what made me want to return to the ‘Dobe, and I’m thrilled to get to build upon what they’ve been doing. It’s great to see Fast Company recognizing this momentum:

For putting Photoshop wizardry within reach

Adobe’s new neural filters use AI to bring point-and-click simplicity to visual effects that would formerly have required hours of labor and years of image-editing expertise. Using them, you can quickly change a photo subject’s expression from deadpan to cheerful. Or adjust the direction that someone is looking. Or colorize a black-and-white photo with surprising subtlety. Part of Adobe’s portfolio of “Sensei” AI technologies, the filters use an advanced form of machine learning known as generative adversarial networks. That lets them perform feats such as rendering parts of a face that weren’t initially available as you edit a portrait. Like all new Sensei features, the neural filters were approved by an Adobe ethics committee and review board that assess AI products for problems stemming from issues such as biased data. In the case of these filters, this process identified an issue with how certain hairstyles were rendered and fixed it before the filters were released to the public.

Tutorial: Combining video frames to make long exposures

Waaaay back in the way back, we had fun enabling “Safe, humane tourist-zapping in Photoshop Extended,” using special multi-frame processing techniques to remove transient elements in images. Those techniques have remained obscure yet powerful. In this short tutorial, Julieanne Kost puts ’em to good use:

In this video (Combining Video Frames to Create Still Images), we’re going to learn how to use Smart Objects in combination with Stack Modes to combine multiple frames from a video (exported as an image sequence) into a single still image that appears to be a long exposure, yet still freezes motion.
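If you’d rather script it, the same stacking idea takes a few lines of NumPy. This sketch assumes a frames folder of same-sized PNGs exported from your video; a per-pixel mean gives the long-exposure look, while a per-pixel median is the old “tourist zapper”:

```python
from pathlib import Path

import numpy as np
from PIL import Image

# Load the exported image sequence into one (num_frames, H, W, 3) array.
frames = np.stack([np.asarray(Image.open(p), dtype=float)
                   for p in sorted(Path("frames").glob("*.png"))])

long_exposure = frames.mean(axis=0)        # motion smears together
clean_plate = np.median(frames, axis=0)    # transient "tourists" vanish

Image.fromarray(long_exposure.astype(np.uint8)).save("long_exposure.png")
Image.fromarray(clean_plate.astype(np.uint8)).save("clean_plate.png")
```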

“Fiat Lux”

An incandescent labor of love:

There are 686 light painting photographs that make up the 11-scene project. Each of these long exposure light painting photographs is straight out of the camera, and they’re arranged side by side to create motion.

[Via Russell Brown]

Are you a badass Web developer? Join us!

My team is working to build some seriously exciting, AI-driven experiences & deliver them via the Web. We’re looking for a really savvy, energetic partner who can help us explore and ship novel Web-based interfaces that reach millions of people. If that sounds like you or someone you know, please read on.

———-

Key Responsibilities:

  • Implement the features and user interfaces of our AI-driven product
  • Work closely with UX designers, product managers, machine learning scientists, and ML engineers to develop dynamic and compelling UI experiences
  • Architect efficient and reusable front-end systems that drive complex web/mobile applications

Must Have:

  • BS/MS in Computer Science or a related technical field
  • Expert-level experience with JavaScript/TypeScript and frameworks such as Web Components, LitElement, ReactJS, Redux, RxJS, Materialize, jQuery, and NodeJS
  • Expert-level experience with HTML, CSS, and JavaScript, including concepts like asynchronous programming, closures, and types
  • Strong experience working with build tools such as Rush, Webpack, npm
  • Strong experience with cross-browser support, caching and optimization techniques for faster page load times, browser APIs, and front-end performance
  • Familiar with scripting languages, such as Python
  • Ability to take a project from scoping requirements through launch
  • Experience communicating with users, other technical teams, and management to collect requirements and describe software product features and technical designs

Valley of Fire

My son Henry & I were super hyped to join Russell Brown & his merry band last Monday at Nevada’s deeply weird International Car Forest of the Last Church for some fire photography featuring pyrotechnic artist Joseph Kerr. As luck would have it, I had to send Henry on ahead with little notice, pressing my DSLR on him before he left. Happily, I think he did a great job capturing the action!

Russell of course caught some amazing moments (see his recent posts), and you might enjoy this behind-the-scenes footage from Rocky Montez-Carr (aka Henry’s kindly chauffeur 😌🙏):