Monthly Archives: January 2022

PMs: Come intern with my team this summer!

We have a great opportunity for a current MBA student who’s interested in focusing on product management. Here’s what I wrote for the job description:

Creative self-expression is at a generational crossroads: as AI gives apps superhuman perception of the world, creative tools can offer expressive superpowers to anyone. This is your opportunity to join a world-class team of researchers, engineers, and software developers who are inventing the next generation of creative tools powered by machine learning. In this role you will help Adobe Research define and launch new products and experiences that bring AI to a wide range of photographers, illustrators, and hobbyists.

We’re seeking a high-energy Product Manager MBA Intern who combines a deep curiosity about creative imaging with solid experience conducting research, collaborating with cross-functional teams, and delivering solutions to ambiguous challenges.

What you’ll do:

  • Partner with design & engineering colleagues to define product concepts that address a range of user needs around creative imaging.
  • Test, validate, and iterate on these concepts via user research and engagement.
  • Help define and deliver on opportunities to acquire users and drive revenue across products.
  • Identify and quantify market fit and opportunities across user segments and platforms.

If this sounds like a good fit for your skills & availability, please drop us a line. And if you know someone who might be, please share the link. Thanks!

“Why Adobe?” My thoughts in Insider

I, along with a number of colleagues, had the opportunity the other day to speak to Rachel DuRose of Insider (formerly Business Insider) about why we work at Adobe—and why a number of us have returned. In case you’re interested, here are my summarized comments:

——-

“I joined Adobe in 2000. I was working on web animation tools and after a couple years on that, a job opened on Photoshop. I ended up going to Google in 2014 because they were making a huge push into computational photography.”

“I guess a key difference for me between companies is that Google got into photography kind of as a hobby, and for Adobe it’s really the bread and butter of the company. Adobe people tend to come to projects because they really care about the specific mission — people tend to commit to a project for quite some time.”

“I came back in March of last year because I saw what Adobe had been doing around AI and machine learning. I was excited to come back and try to navigate that emerging world and figure out how we make these things useful and meaningful to a lot of folks and also do it responsibly so that it aligns with our values.” 

“In my first tenure and in my return, imaging and the creative parts of Adobe remained the bedrock of the company identity, so I think that’s a through line. I guess the contrast, if there is one, is that now the company has expanded into all these things it really didn’t do before.” 

“Every job is called ‘work’ for a reason. It’s gonna be challenging and frustrating and a million other things, but the caring part, I think, is the distinctive one. I’m cool with people swearing because they care. I’m cool with people who are unreasonably committed to getting something right, or going that extra mile.” 

New stock photos are 100% AI-generated

PetaPixel reports,

PantherMedia, the first microstock agency in Germany, […] partnered with VAIsual, a technology company that pioneers algorithms and solutions to generate synthetic licensed stock media. The two have come together to offer the first set of 100% AI-generated, licensable stock photos of “people.”

None of the photos are of people who actually exist.

The “first” claim seems odd to me, as Generated.photos has been around for quite some time—albeit not producing torsos. That site offers an Anonymizer service that can take in your image, then generate multiple faces that vaguely approximate your characteristics. Here’s what it made for me:

Now I’m thinking of robots replacing humans in really crummy stock-photo modeling jobs, bringing to mind Mr. “Rob Ott” sliding in front of the camera:

Different Strokes: 3D surface analysis helps computers identify painters

Researchers at NVIDIA & Case Western Reserve University have developed an algorithm that can distinguish different painters’ brush strokes “at the bristle level”:

Extracting topographical data from a surface with an optical profiler, the researchers scanned 12 paintings of the same scene, painted with identical materials, but by four different artists. Sampling small square patches of the art, approximately 5 to 15 mm, the optical profiler detects and logs minute changes on a surface, which can be attributed to how someone holds and uses a paintbrush. 

They then trained an ensemble of convolutional neural networks to find patterns in the small patches, sampling between 160 and 1,440 patches for each of the artists. Using NVIDIA GPUs with cuDNN-accelerated deep learning frameworks, the algorithm matches the samples back to a single painter.

The team tested the algorithm against 180 patches of an artist’s painting, matching the samples back to a painter at about 95% accuracy. 
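
Conceptually, that pipeline boils down to an image classifier over surface-topography patches. Here’s a minimal sketch of the idea (my own illustration, not the researchers’ code): a single small CNN rather than their ensemble, with the 64×64 patch size, single-channel height maps, and four-artist setup all assumed for the example.

```python
# Minimal sketch (not the researchers' code): a small CNN that classifies
# square surface-height patches by artist. Patch size (64x64), one "height"
# channel, and the four-artist setup are assumptions for illustration.
import torch
import torch.nn as nn

class StrokePatchClassifier(nn.Module):
    def __init__(self, num_artists: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 channel: profiler height map
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, num_artists),                  # one logit per candidate painter
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: a batch of 8 hypothetical 64x64 topography patches.
patches = torch.randn(8, 1, 64, 64)
logits = StrokePatchClassifier()(patches)
predicted_artist = logits.argmax(dim=1)  # per-patch attribution
```

Per-patch predictions like these could then be pooled across a painting (e.g., by majority vote) to attribute the whole work.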

Notre-Dame goes VR

(No, not that Notre Dame—the cathedral undergoing restoration.) This VR tour looks compelling:

Equipped with an immersive device (VR headset and backpack), visitors will be able to move freely in a 500 sqm space in Virtual Reality. Guided by a “Compagnon du Devoir,” they will travel through different centuries and will explore several eras of Notre Dame de Paris and its environment, recreated in 3D.

Thanks to scientific surveys and precise historical data, the cathedral and its surroundings have been faithfully reproduced to enhance the visitor’s immersion and engagement in the experience.

Check out the short trailer below:

Milky Way Bridge

A year ago, I was shivering out amidst the Trona Pinnacles with Russell Brown, working to capture some beautiful celestial images at oh-dark-early. I’m wholly unsurprised to learn that he knows photographer Michael Shainblum, who went to even more extraordinary lengths to capture this image of the Milky Way together with the Golden Gate Bridge:

PetaPixel writes,

“I think this was the perfect balance of a few different things,” he explains. “The fog was thick and low enough to really block out most of the light pollution from the city, but the fog had also traveled so far inland that it covered most of the eastern bay as well. The clouds above just the eastern side around the cities may have also helped. The last thing is the time of evening and time of the season. I was photographing the Milky Way late at night as it started to glide across the western sky, away from the city.”

Photography: Keyframe mode on Skydio 2 looks clever & fun

File under, “OMG, Duh, Why Didn’t We Think Of/Get This Sooner?” The Verge writes,

With Skydio’s self-flying drone, you don’t need to sketch or photograph those still frames, of course. You simply fly there. You fly the drone to a point in 3D space, press a button when the drone’s camera is lined up with what you want to see in the video, then fly to the next, virtually storyboarding your shot with every press.
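
Under the hood, the concept is just camera keyframing: record a pose each time the button is pressed, then fly a smoothed path through those poses. Here’s a minimal sketch under my own assumptions (naive linear interpolation of position and yaw; Skydio’s actual planner presumably does far fancier spline smoothing plus obstacle avoidance):

```python
# Minimal sketch (assumptions, not Skydio's implementation): store a camera
# keyframe at each button press, then densely sample a path between them.
from dataclasses import dataclass

@dataclass
class Keyframe:
    x: float
    y: float
    z: float
    yaw_deg: float  # heading the camera faced when the button was pressed

def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def sample_path(keyframes: list, steps_per_segment: int = 30) -> list:
    """Return a densely sampled flight path passing through every keyframe."""
    path = []
    for k0, k1 in zip(keyframes, keyframes[1:]):
        for i in range(steps_per_segment):
            t = i / steps_per_segment
            path.append(Keyframe(
                lerp(k0.x, k1.x, t),
                lerp(k0.y, k1.y, t),
                lerp(k0.z, k1.z, t),
                lerp(k0.yaw_deg, k1.yaw_deg, t),  # naive: ignores 0/360 wraparound
            ))
    path.append(keyframes[-1])
    return path

# Example: three "button presses" become a smooth ~61-point path.
shot = sample_path([Keyframe(0, 0, 10, 0), Keyframe(5, 2, 12, 45), Keyframe(10, 0, 8, 90)])
```

Swapping the straight-line lerp for a spline (e.g., Catmull-Rom) is what would give you the silky camera moves you see in the footage below.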

Here’s some example output:

Check out the aforementioned Verge article for details on how the mode works (and sometimes doesn’t). Now I just need to get Russell Brown or someone (but let’s be honest, it’s Russell 😉) to expense one of these things so I can try it out.

Design: “The Fish & The Furious”

I know that we’re on a pretty dark timeline sometimes, but these little bits of silly (?) human ingenuity keep me going:

Stephen Colbert & crew had some good fun with the news:

Rad scans: Drones & trees

Earlier this week I was amazed to see the 3D scan that Polycam founder Chris Heinrich was able to achieve by flying around LA & capturing ~100 photos of a neighborhood, then generating 3D results via the new Web version of Polycam:

I must try to replicate this myself!

You can take the results for a (literal) spin here, though note that they didn’t load properly on my iPhone.

As you may have seen in Google Earth & elsewhere, scanning & replicating amorphous organic shapes like trees remains really challenging:

It’s therefore all the more amazing to see the incredible results these exacting artists are able to deliver when creating free-to-use (!) assets for Unreal Engine:

[Via Michael Klynstra]

Photography: “A Choice of Weapons”

Nearly 16 (!) years ago I noted the passing of “novelist, self-taught pianist, semi-pro basketball player, composer, director of Shaft–who somehow still found time to be a groundbreaking photojournalist at Life for more than 20 years” Gordon Parks. Now HBO is streaming “A Choice of Weapons: Inspired By Gordon Parks,” covering his work & that of those he’s inspired to bear witness:

Dreaming of a Neural Christmas

Oh, global warming, you old scamp…

Illinois stayed largely snow-free during our recent visit, but I had some fun screwing around with Photoshop’s new Landscape Mixer Neural Filter, giving the place a dusting of magic:

Just for the lulz, I tried applying the filter to a 360º panorama I’d captured via my drone. The results don’t entirely withstand scrutiny (try showing the pano below in full-screen mode & examine the buildings), but they’re fun—and good grief, we can now do all this in literally one click!

For the sake of comparison, here’s the unmodified original: