“Why Adobe?” My thoughts in Insider

I, along with a number of colleagues, had the opportunity the other day to speak to Rachel DuRose of Insider (formerly Business Insider) about why we work at Adobe—and why a number of us have returned. In case you’re interested, here are my summarized comments:

——-

“I joined Adobe in 2000. I was working on web animation tools and after a couple years on that, a job opened on Photoshop. I ended up going to Google in 2014 because they were making a huge push into computational photography.”

“I guess a key difference for me between companies is that Google got into photography kind of as a hobby, and for Adobe it’s really the bread and butter of the company. Adobe people tend to come to projects because they really care about the specific mission — people tend to commit to a project for quite some time.”

“I came back in March of last year because I saw what Adobe had been doing around AI and machine learning. I was excited to come back and try to navigate that emerging world and figure out how we make these things useful and meaningful to a lot of folks and also do it responsibly so that it aligns with our values.” 

“In my first tenure and in my return, imaging and the creative parts of Adobe remained the bedrock of the company identity, so I think that’s a through line. I guess the contrast, if there is one, is that now the company has expanded into all these things it really didn’t do before.” 

“Every job is called ‘work’ for a reason. It’s gonna be challenging and frustrating and a million other things, but the caring part, I think, is the distinctive one. I’m cool with people swearing because they care. I’m cool with people who are unreasonably committed to getting something right, or going that extra mile.” 

New stock photos are 100% AI-generated

PetaPixel reports,

PantherMedia, the first microstock agency in Germany, […] partnered with VAIsual, a technology company that pioneers algorithms and solutions to generate synthetic licensed stock media. The two have come together to offer the first set of 100% AI-generated, licensable stock photos of “people.”

None of the photos are of people who actually exist.

The “first” claim seems odd to me, as Generated.photos has been around for quite some time—albeit not producing torsos. That site offers an Anonymizer service that can take in your image, then generate multiple faces that vaguely approximate your characteristics. Here’s what it made for me:

Now I’m thinking of robots replacing humans in really crummy stock-photo modeling jobs, bringing to mind Mr. “Rob Ott” sliding in front of the camera:

Different Strokes: 3D surface analysis helps computers identify painters

Researchers at NVIDIA & Case Western Reserve University have developed an algorithm that can distinguish different painters’ brush strokes “at the bristle level”:

Extracting topographical data from a surface with an optical profiler, the researchers scanned 12 paintings of the same scene, painted with identical materials, but by four different artists. Sampling small square patches of the art, approximately 5 to 15 mm, the optical profiler detects and logs minute changes on a surface, which can be attributed to how someone holds and uses a paintbrush. 

They then trained an ensemble of convolutional neural networks to find patterns in the small patches, sampling between 160 and 1,440 patches for each of the artists. Using NVIDIA GPUs with cuDNN-accelerated deep learning frameworks, the algorithm matches the samples back to a single painter.

The team tested the algorithm against 180 patches of an artist’s painting, matching the samples back to a painter at about 95% accuracy. 
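If you're curious how such a patch-based approach hangs together, here's a toy sketch in PyTorch. To be clear, the architecture, patch sizes, and voting scheme below are my own guesses for illustration, not the researchers' actual code:

```python
# Toy sketch of patch-based painter attribution -- NOT the researchers' code.
# Architecture, sizes, and the ensemble vote are assumptions for illustration.
import torch
import torch.nn as nn

NUM_ARTISTS = 4  # the study compared four painters working from one scene

class PatchCNN(nn.Module):
    """Tiny CNN that classifies a single-channel height-map patch by artist."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, NUM_ARTISTS)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def attribute_painting(models, patches):
    """Ensemble vote: average per-patch softmax scores over all models."""
    with torch.no_grad():
        probs = torch.stack([m(patches).softmax(dim=1) for m in models])
    return probs.mean(dim=(0, 1)).argmax().item()  # most likely artist index

# Usage: score a bag of topography patches sampled from one painting.
models = [PatchCNN().eval() for _ in range(5)]    # an (untrained) ensemble
patches = torch.randn(180, 1, 64, 64)             # e.g. 180 surface patches
print(attribute_painting(models, patches))
```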

Notre-Dame goes VR

(No, not that Notre Dame—the cathedral undergoing restoration.) This VR tour looks compelling:

Equipped with an immersive device (VR headset and backpack), visitors will be able to move freely in a 500 sqm space in Virtual Reality. Guided by a “Compagnon du Devoir,” they will travel through different centuries and will explore several eras of Notre Dame de Paris and its environment, recreated in 3D.

Thanks to scientific surveys, and precise historical data, the cathedral and its surroundings have been precisely reproduced to enhance the visitor’s immersion and engagement in the experience.

Check out the short trailer below:

Milky Way Bridge

A year ago, I was shivering out amidst the Trona Pinnacles with Russell Brown, working to capture some beautiful celestial images at oh-dark-early. I’m wholly unsurprised to learn that he knows photographer Michael Shainblum, who went to even more extraordinary lengths to capture this image of the Milky Way together with the Golden Gate Bridge:

PetaPixel writes,

“I think this was the perfect balance of a few different things,” he explains. “The fog was thick and low enough to really block out most of the light pollution from the city, but the fog had also traveled so far inland that it covered most of the eastern bay as well. The clouds above just the eastern side around the cities may have also helped. The last thing is the time of evening and time of the season. I was photographing the Milky Way late at night as it started to glide across the western sky, away from the city.”

Photography: Keyframe mode on Skydio 2 looks clever & fun

File under, “OMG, Duh, Why Didn’t We Think Of/Get This Sooner?” The Verge writes,

With Skydio’s self-flying drone, you don’t need to sketch or photograph those still frames, of course. You simply fly there. You fly the drone to a point in 3D space, press a button when the drone’s camera is lined up with what you want to see in the video, then fly to the next, virtually storyboarding your shot with every press.
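Skydio hasn't published how its planner works, but the core trick (fit a smooth path through the camera positions you pin in space) can be sketched with an ordinary spline. The waypoints below are made up, and the real system also handles orientation, timing, and obstacle avoidance:

```python
# Toy sketch of the keyframing idea: fit a smooth path through camera
# positions the pilot "pinned" in 3D space. Waypoints are invented; this is
# only the path-smoothing intuition, not Skydio's actual planner.
import numpy as np
from scipy.interpolate import CubicSpline

# Keyframes: (x, y, z) positions where the pilot pressed the button.
keyframes = np.array([
    [0.0, 0.0, 2.0],
    [5.0, 2.0, 3.5],
    [8.0, 6.0, 3.0],
    [4.0, 9.0, 5.0],
])

t = np.linspace(0.0, 1.0, len(keyframes))     # one parameter per keyframe
spline = CubicSpline(t, keyframes, axis=0)    # one cubic per coordinate

samples = np.linspace(0.0, 1.0, 200)
path = spline(samples)                        # 200 smooth points to fly
velocity = spline(samples, 1)                 # derivative = heading hint
print(path.shape, velocity.shape)             # (200, 3) (200, 3)
```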

Here’s some example output:

Check out the aforementioned Verge article for details on how the mode works (and sometimes doesn’t). Now I just need to get Russell Brown or someone (but let’s be honest, it’s Russell 😉) to expense one of these things so I can try it out.

Design: “The Fish & The Furious”

I know that we’re on a pretty dark timeline sometimes, but these little bits of silly (?) human ingenuity keep me going:

Stephen Colbert & crew had some good fun with the news:

Rad scans: Drones & trees

Earlier this week I was amazed to see the 3D scan that Polycam founder Chris Heinrich was able to achieve by flying around LA & capturing ~100 photos of a neighborhood, then generating 3D results via the new Web version of Polycam:

I must try to replicate this myself!

You can take the results for a (literal) spin here, though note that they didn’t load properly on my iPhone.

As you may have seen in Google Earth & elsewhere, scanning & replicating amorphous organic shapes like trees remains really challenging:

It’s therefore all the more amazing to see the incredible results these exacting artists are able to deliver when creating free-to-use (!) assets for Unreal Engine:

[Via Michael Klynstra]

Photography: “A Choice of Weapons”

Nearly 16 (!) years ago I noted the passing of “novelist, self-taught pianist, semi-pro basketball player, composer, director of Shaft–who somehow still found time to be a groundbreaking photojournalist at Life for more than 20 years” Gordon Parks. Now HBO is streaming “A Choice of Weapons: Inspired By Gordon Parks,” covering his work & that of those he’s inspired to bear witness:

Dreaming of a Neural Christmas

Oh, global warming, you old scamp…

Illinois stayed largely snow-free during our recent visit, but I had some fun screwing around with Photoshop’s new Landscape Mixer Neural Filter, giving the place a dusting of magic:

Just for the lulz, I tried applying the filter to a 360º panorama I’d captured via my drone. The results don’t entirely withstand scrutiny (try showing the pano below in full-screen mode & examine the buildings), but they’re fun—and good grief, we can now do all this in literally one click!

For the sake of comparison, here’s the unmodified original:

Cinematography: Ireland in IMAX ☘️

Liam Neeson narrates:

Ireland invites giant screen audiences on a joyful adventure into the Emerald Isle’s immense natural beauty, rich history, language, music and arts. Amid such awe-inspiring locations as Giant’s Causeway, Skellig Michael and the Cliffs of Moher, the film follows Irish writer Manchán Magan on his quest to reconnect Irish people from around the world with their land, language, and heritage.

Of course, my wry Irishness compels me to share Conan O’Brien’s classic counterpoint from the blustery Cliffs of Moher…

…and Neeson’s Irish home-makeover show on SNL. 😛

Lego 3D: Spaceship Spaceship Spaceship!

Here’s a fun, <60s year-end gift from 3D artist Tomas Kral, made with the help of Adobe Substance 3D:

New 3D GAN spins your head right ’round, baby

SYNTHESIZE ALL THE CATS!!

This new witchcraft “synthesizes not only high-resolution, multi-view-consistent images in real time, but also produces high-quality 3D geometry.” Plus it makes a literally dizzying array of gatos!

Unsupervised generation of high-quality multi-view-consistent images and 3D shapes using only collections of single-view 2D photographs has been a long-standing challenge. Existing 3D GANs are either compute-intensive or make approximations that are not 3D-consistent; the former limits quality and resolution of the generated images and the latter adversely affects multi-view consistency and shape quality. In this work, we improve the computational efficiency and image quality of 3D GANs without overly relying on these approximations. For this purpose, we introduce an expressive hybrid explicit-implicit network architecture that, together with other design choices, synthesizes not only high-resolution multi-view-consistent images in real time but also produces high-quality 3D geometry. By decoupling feature generation and neural rendering, our framework is able to leverage state-of-the-art 2D CNN generators, such as StyleGAN2, and inherit their efficiency and expressiveness. We demonstrate state-of-the-art 3D-aware synthesis with FFHQ and AFHQ Cats, among other experiments.
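If the “hybrid explicit-implicit” phrasing is opaque, here’s the decoupling idea in toy form: a 2D network emits explicit feature planes, and a tiny implicit decoder turns features sampled at 3D points into color and density for volume rendering. This is my simplification for intuition, not the paper’s code:

```python
# Toy sketch of the decoupled design: a 2D generator (standing in for a
# StyleGAN2 backbone) emits explicit feature planes; a small MLP decodes
# sampled features into RGB + density. Shapes/names are my simplifications.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriPlaneGenerator(nn.Module):
    def __init__(self, z_dim=64, feat=16, res=32):
        super().__init__()
        self.res, self.feat = res, feat
        # Stand-in for a 2D CNN generator: latent -> 3 feature planes.
        self.to_planes = nn.Linear(z_dim, 3 * feat * res * res)
        self.decoder = nn.Sequential(  # tiny implicit decoder
            nn.Linear(feat, 32), nn.ReLU(), nn.Linear(32, 4))

    def sample_features(self, planes, pts):
        # Project each 3D point onto the XY, XZ, YZ planes; sum the features.
        coords = [pts[..., [0, 1]], pts[..., [0, 2]], pts[..., [1, 2]]]
        out = 0
        for plane, uv in zip(planes, coords):
            grid = uv.view(1, -1, 1, 2)            # grid_sample wants 4D
            out = out + F.grid_sample(plane, grid, align_corners=False)
        return out.squeeze(-1).squeeze(0).permute(1, 0)   # (N, feat)

    def forward(self, z, pts):
        planes = self.to_planes(z).view(3, 1, self.feat, self.res, self.res)
        feats = self.sample_features(list(planes), pts)
        return self.decoder(feats)  # per-point RGB + density; a volume
                                    # renderer would integrate these along rays

g = TriPlaneGenerator()
points = torch.rand(1000, 3) * 2 - 1           # query points in [-1, 1]^3
print(g(torch.randn(64), points).shape)        # torch.Size([1000, 4])
```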

“Zuck on a Truck” is pitch-dark & I’m here for it

“It’s a Decoration Insurrection!”

I loved Jimmy Kimmel’s savage “Elf on the Shelf” parody:

“With the power of Facebook’s massive database, your personal Mark Zuckerberg knows absolutely everything. Zuck on a Truck can tell if you’ve been naughty or nice. He knows every website you’ve ever visited, every place you’ve ever lived, every friend you’ve ever made, every love you’ve ever lost, every schoolmate you’ve stalked — Zuck on a Truck even knows when you’ll die!”

The whole thing rapidly darkens as FB connects all the naughty children to take down democracy—all the while taking no responsibility:

Enjoy (starts around 10:44):

Disney Research introduces “Rendering With Style”

The imagineers (are they still called that?) promise a new way to create photorealistic full-head portrait renders from captured data without the need for artist intervention.

Our method begins with traditional face rendering, where the skin is rendered with the desired appearance, expression, viewpoint, and illumination. These skin renders are then projected into the latent space of a pre-trained neural network that can generate arbitrary photo-real face images (StyleGAN2).

The result is a sequence of realistic face images that match the identity and appearance of the 3D character at the skin level, but is completed naturally with synthesized hair, eyes, inner mouth and surroundings.
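That “projected into the latent space” step is essentially GAN inversion. Here’s a minimal sketch of the idea, assuming some fixed latent-to-image generator you supply; Disney’s actual pipeline is far more sophisticated (a pre-trained StyleGAN2, perceptual losses, and more):

```python
# Minimal sketch of projecting a render into a GAN's latent space: optimize
# a latent code so the generator's output matches the target skin render.
# `generator` is any fixed latent -> image model you provide; a pixel-wise
# loss stands in for the richer losses a production pipeline would use.
import torch

def project(generator, target, latent_dim=512, steps=500, lr=0.01):
    """Find a latent whose generated image approximates `target` (1,3,H,W)."""
    w = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = generator(w)                                # latent -> image
        loss = torch.nn.functional.mse_loss(img, target)  # pixel loss only
        loss.backward()
        opt.step()
    return w.detach()  # decode again for the final, GAN-completed frame
```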

“Flee,” a beautifully animated new documentary

This looks gripping:

Sundance Grand Jury Prize winner FLEE tells the story of Amin Nawabi as he grapples with a painful secret he has kept hidden for 20 years, one that threatens to derail the life he has built for himself and his soon-to-be husband. Recounted mostly through animation to director Jonas Poher Rasmussen, he tells for the first time the story of his extraordinary journey as a child refugee from Afghanistan.

Lightroom is looking for a new PM Director

Talk about an amazing gig that comes around once in ~forever:

Lightroom is the world’s top photography ecosystem with everything needed to edit, manage, store and share your images across a connected system on any device and the web! The Digital Imaging group at Adobe includes the Photoshop and Lightroom ecosystems.

We are looking for the next leader of the Lightroom ecosystem to build and execute the vision and strategy to accelerate the growth of Lightroom’s mobile, desktop, web and cloud products into the future. This person should have a proven track record as a great product management leader, deep empathy for customers, and a passion for photography. This is a high-visibility role on one of Adobe’s most important projects.

Jump in or tell a promising friend!

Adobe releases Arm version of Lightroom for Windows and macOS - The Verge

NVIDIA GauGAN enables photo creation through words

“Days of Miracles & Wonder,” part ∞:

Rather than needing to draw out every element of an imagined scene, users can enter a brief phrase to quickly generate the key features and theme of an image, such as a snow-capped mountain range. This starting point can then be customized with sketches to make a specific mountain taller or add a couple trees in the foreground, or clouds in the sky.

It doesn’t just create realistic images — artists can also use the demo to depict otherworldly landscapes.

Here’s a 30-second demo:

And here’s a glimpse at Tatooine:

An awesome Adobe PM opportunity: AI for video

Check out this chance to work with some of my favorite engineering collaborators:

We believe AI will revolutionize the next generation of creative tools, automating repetitive tasks while leaving creative agency with the user. This is your opportunity to join a world-class team of researchers, engineers, and software developers who are inventing the next generation of creative tools powered by machine learning. In this role you will help Adobe Research launch new products and experiences that bring AI to video creative tools.

Specific responsibilities:

— Launch products enhanced by machine learning.

— Interact with customers as often as possible—through user interviews, user testing, social media, and wherever else you can find them—to understand unspoken, unmet needs. Advocate for the customer.

— Identify and quantify market fit and opportunities across user segments and platforms. 

— Guide our scientists and engineers with your deep understanding of market and technology trends, and the competitive landscape.

— Define a comprehensive product roadmap and strategy.

— Communicate our product strategy to executives.

— Form and test data-driven hypotheses about which choices will increase market impact. 

— Prioritize what needs to get done versus what could be done.

If you might be a good fit, please throw your hat in the ring, or tell a friend who might want to jump in!

Design: Google’s “Dragonscale” solar roofs

For the last few years I’ve been curiously watching what I affectionately call “nerd terrariums” being erected on Google’s main campus. Now the team behind their unique roof designs is providing some insight into how they work:

These panels coupled with the pavilion-like rooflines let us capture the power of the sun from multiple angles. Unlike a flat roof, which generates peak power at the same time of the day, our dragonscale solar skin will generate power during an extended amount of daylight hours… When up-and-running, Charleston East and Bay View will have about 7 megawatts of installed renewable power—generating roughly 40% of their energy needs.

Four construction team members install BIPV at Google’s Bay View office development.
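If you want intuition for that “multiple angles” claim, here’s a back-of-envelope toy model (the facet tilts and the simple east-to-west sun arc are entirely made up):

```python
# Back-of-envelope 2D model of why multi-angle panels flatten the power
# curve: a panel's output ~ cos(angle between sun direction and its normal),
# clipped at zero. All numbers here are invented for illustration.
import numpy as np

hours = np.linspace(6, 18, 121)             # 6:00 to 18:00, every 0.1 h
sun_angle = np.radians((hours - 6) * 15)    # 0 = east horizon, pi = west

def output(panel_angle_deg):
    """Relative output for a panel whose normal points panel_angle_deg
    above the eastern horizon (90 = straight up, i.e. a flat roof)."""
    return np.clip(np.cos(sun_angle - np.radians(panel_angle_deg)), 0, None)

flat = output(90)
mixed = (output(60) + output(90) + output(120)) / 3  # east / up / west facets

print(f"noon peak: flat {flat.max():.2f} vs mixed {mixed.max():.2f}")
print(f"7 am:      flat {flat[10]:.2f} vs mixed {mixed[10]:.2f}")
# The mixed roof gives up a little of its noon peak to generate noticeably
# more in the early morning and evening: a longer, flatter power curve.
```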

Check out a quick overview—literally:

“Kurt Vonnegut: Unstuck In Time”

I was such a happy dad recently when my 12yo Henry (who, being an ADD guy like me, often finds long texts to be a slog) got completely engrossed in the graphic novel version of Slaughterhouse-Five and read it in an evening. Meanwhile his older brother was working his way through Cat’s Cradle—one of my all-time faves.

Now I’m pleased to see the arrival of Unstuck In Time, a new documentary covering Vonnegut’s life & work:

Tangentially (natch), this brought to mind the Vonnegut scenes in Back To School—where I first heard the phrase “F me?!”

Oh, and then there’s one of my favorite encapsulations of life wisdom—a commencement address wrongly attributed to Vonnegut, and tonally right in his wheelhouse.

Color in Skyfall

Per Daring Fireball:

Devan Scott put together a wonderful, richly illustrated thread on Twitter contrasting the use of color grading in Skyfall and Spectre. Both of those films were directed by Sam Mendes, but they had different cinematographers — Roger Deakins for Skyfall, and Hoyte van Hoytema for Spectre. Scott graciously and politely makes the case that Skyfall is more interesting and fully-realized because each new location gets a color palette of its own, whereas the entirety of Spectre is in a consistent color space.

Click or tap on through to the thread; I think you’ll enjoy it.