Category Archives: Photography

Interesting, interactive mash-ups powered by AI

Check out how StyleMapGAN (paper, PDF, code) enables combinations of human & animal faces, vehicles, buildings, and more. Unlike simple copy-paste-blend, this technique permits interactive morphing between source & target pixels:

From the authors, a bit about what’s going on here:

Generative adversarial networks (GANs) synthesize realistic images from random latent vectors. Although manipulating the latent vectors controls the synthesized outputs, editing real images with GANs suffers from i) time-consuming optimization for projecting real images to the latent vectors, ii) or inaccurate embedding through an encoder. We propose StyleMapGAN: the intermediate latent space has spatial dimensions, and a spatially variant modulation replaces AdaIN. It makes the embedding through an encoder more accurate than existing optimization-based methods while maintaining the properties of GANs. Experimental results demonstrate that our method significantly outperforms state-of-the-art models in various image manipulation tasks such as local editing and image interpolation. Last but not least, conventional editing methods on GANs are still valid on our StyleMapGAN. Source code is available.
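For intuition about the key idea in that abstract, here's a toy sketch of my own (not the authors' code): classic AdaIN modulates a normalized feature map with a single scale/shift pair, applied identically everywhere, while a spatially variant modulation supplies a separate scale/shift per position — which is what lets you edit one region of an image without touching the rest.

```python
# Toy illustration (my own, not the StyleMapGAN implementation).
# A "feature map" here is just a 1D list of floats for simplicity.

def mean_std(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var ** 0.5

def adain(feature, gamma, beta, eps=1e-5):
    """Classic AdaIN: one scalar gamma/beta modulates every position."""
    m, s = mean_std(feature)
    return [gamma * (x - m) / (s + eps) + beta for x in feature]

def spatial_modulation(feature, gamma_map, beta_map, eps=1e-5):
    """Spatially variant modulation: gamma/beta vary per position,
    so different regions can take on different styles."""
    m, s = mean_std(feature)
    return [g * (x - m) / (s + eps) + b
            for x, g, b in zip(feature, gamma_map, beta_map)]

feature = [1.0, 2.0, 3.0, 4.0]
uniform = adain(feature, gamma=2.0, beta=0.5)
# Left half styled one way, right half another -- the basis of local editing.
local = spatial_modulation(feature, [2.0, 2.0, 0.5, 0.5], [0.5, 0.5, -1.0, -1.0])
```

A real implementation would of course operate on 4D tensors with per-channel statistics, but the contrast between one global (gamma, beta) and a full map of them is the whole trick.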

A little fun with Bullet Time

During our epic Illinois-to-California run down Route 66 in March, my son Henry and I had fun capturing all kinds of images, including via my Insta360 One X2 camera. Here are a couple of “bullet time” slow-mo vids I thought were kind of fun. The first comes from the Round Barn in Arcadia, OK…

…and the second from the Wigwam Motel in Holbrook, AZ (see photos):

It’s a bummer that the optical quality here suffers from having the company’s cheap-o lens guards applied. (Without the guards, one errant swipe of the selfie stick can result in permanent scratches to the lens, necessitating shipment back to China for repairs.) They say they’re working on more premium glass ones, for which they’ll likely get yet more of my dough. ¯\_(ツ)_/¯

What a difference four years makes in iPhone cameras

“People tend to overestimate what can be done in one year and to underestimate what can be done in five or ten years,” as the old saying goes. Similarly, it can be hard to notice one’s own kid’s progress until confronted with an example of that kid from a few years back.

My son Henry has recently taken a shine to photography & has been shooting with my iPhone 7 Plus. While passing through Albuquerque a few weeks back, we ended up shooting side by side—him with the 7, and me with an iPhone 12 Pro Max (four years newer). We share a camera roll, and as I scrolled through I was really struck seeing the output of the two devices placed side by side.

I don’t hold up any of these photos (all unedited besides cropping) as art, but it’s fun to compare them & to appreciate just how far mobile photography has advanced in a few short years. See gallery for more.

Tutorial: Light painting tips from Russell Brown

Back in February I got to try my hand at some long-exposure phone photography in Death Valley with Russell Brown, interspersing chilly morning & evening shoots with low-key Adobe interviewing. 😌

Here’s a long-exposure 360º capture I made with Russell’s help in the ghost town of Rhyolite, NV:

Stellar times chilling (literally!) with Russell Preston Brown. 💫

Posted by John Nack on Thursday, February 4, 2021

Russell never stops learning & exploring, and here he shares some of his recent findings, using a neutral density filter on a phone to prevent blown-out highlights:


A post shared by Russell Preston Brown (@dr_brown)

Getting our kicks

After driving 2,000+ miles down Route 66 and beyond in six days—the last of which also included getting onboarded at Adobe!—I’ve only just begun to breathe & go through the titanic number of photos and videos my son & I captured. I’ll try to share more good stuff soon, but in the meantime you might get a kick (heh) out of this little vid, captured via my Insta360 One X2:

Now one of these days I just need to dust off my After Effects skills enough to nuke the telltale pole shadows. Someday…!

2-minute tour: ProRAW + Lightroom

Over the last 20 years or so, photographers have faced a slightly Faustian bargain: shoot JPEG & get the benefits of a camera manufacturer’s ability to tune output with a camera’s on-board smarts; or shoot raw and get more dynamic range and white balance flexibility—at the cost of losing that tuning and having to do more manual work.

Fortunately Adobe & Apple have been collaborating for many months to get Apple’s ProRAW variant of DNG supported in Camera Raw and Lightroom, and here Russell Brown provides a quick tour of how capture and editing work:


A post shared by Russell Preston Brown (@dr_brown)

Happy St. Paddy’s from one disgruntled leprechaun

We can’t celebrate in person with pals this year, but here’s a bit of good cheer from our wee man (victim of the old “raisin cookie fake-out”):

Saturday, March 16, 2019

Meanwhile, I just stumbled across this hearty “Sláinte” from Bill Burr. 😌

And on a less goofball note,

May the road rise to meet you,
May the wind be always at your back.
May the sun shine warm upon your face,
The rains fall soft upon your fields.
And until we meet again,
May God hold you in the palm of His hand.

☘️ J.

Insta360 GO 2: Finally a wearable cam that doesn’t suck?

Photo-taking often presents a Faustian bargain: be able to relive memories later, but at the cost of being less present in the experience as it happens. When my team researched why people do & don’t take photos, wanting to be present & not intrusive/obnoxious were key reasons not to bring out a camera.

So what if you could wear not just a lightweight, unobtrusive capture device, but actually wear a photographer—an intelligence that could capture the best moments, leaving your hands & mind free in the moment? Even naive, interval-based capture could produce a really interesting journey through space, as Blaise Agüera y Arcas demonstrated at Microsoft back in 2013:

It’s a long-held dream that products like Google’s Clips camera (which Blaise led at Google) have tried to achieve, thus far without any notable success. Clips proved to be too large & heavy for many people to wear comfortably, and training an AI model to find “good” moments ends up being much harder than one might imagine. Google discontinued Clips, though as a consolation prize I ended up delighting my young son by bringing home reams of unused printed circuit boards (which for some reason resembled the Millennium Falcon). Meanwhile Microsoft discontinued Photosynth.

The need remains & the dream won’t die, however, so I was excited ~18 months ago when Insta360 introduced the GO, a $199 “20-gram steadicam.” It promised ultra-lightweight wearability, photo capture, and a slick AirPods-style case for both recharging & data transfer. The wide FOV promised post-capture reframing driven by (you guessed it) mythical AI that could select the best moments.

Others (including many on the Insta forum) were skeptical, but I was enamored enough that my wife bought me one for Christmas. Sadly, buying Insta products is a little like Russian Roulette (e.g. I have loved the One X & subsequent X2, while the One R has been a worthless paperweight), and the GO ended up on the bummer side of the ledger. I found it way too hard to reliably start/stop & to transfer data. It’s been another paperweight.

To their possible credit (TBD), though, Insta has persisted with the product and has released the GO 2—now more expensive ($299) but promising a host of improvements (wireless preview & transfer, better storage & battery, etc.). Check it out:

“Looks perfect for a proctologist, which is where Insta can shove it,” said one salty user on the Insta forum. Will it finally work well? I don’t know—but I’m just hungry/sucker enough to pull the trigger, F around & find out. Hopefully it’ll arrive in advance of the road trip I’m planning with my son, so stay tuned for real-world findings.

Meanwhile, here’s a review I found thorough & informative—and not least in its innovative use of gummi bears as a unit of measure 🙃:

Oh, and I did not order the forthcoming Minion mod (a real thing, they swear):

Lego: Nanonaxx Conquer Death Valley

Do I seem like the kind of guy who’d have tiny Lego representations of himself, his wife, our kids (the Micronaxx), and even our dog? What a silly question. 😌

I had a ball zipping around Death Valley, unleashing our little crew on sand dunes, lonesome highways, and everything in between. In particular I was struck by just how often I got more usable shallow depth-of-field images from my iPhone (which, like my Pixel, lets me edit the blur post-capture) than from my trusty, if aging, DSLR & L-series lens.

Anyway, in case this sort of thing is up your alley, please enjoy the results.

Snowflakes materialize in reverse

“Enjoy your delicious moments,” say the somewhat Zen pizza boxes from our favorite local joint. In that spirit, let’s stay frosty:

PetaPixel notes,

Jens writes that the melting snowflake video was shot on his Sony a6300 with either the Sony 90mm macro lens or the Laowa 60mm 2:1 macro lens. He does list the Sony a7R IV as his “main camera,” but it’s still impressive that this high-resolution video was shot thanks to one of Sony’s entry-level offerings.

3D dronie!

Inspired by the awesome work of photogrammetry expert Azad Balabanian, I used my drone at the Trona Pinnacles to capture some video loops as I sat atop one of the structures. My VFX-expert friend & fellow Google PM Bilawal Singh Sidhu used it to whip up this fun, interactive 3D portrait:

The file is big enough that I’ve had some trouble loading it on my iPhone. If that affects you as well, check out this quick screen recording:

The facial fidelity isn’t on par with the crazy little 3D prints of my head I got made 15 (!) years ago—but for footage coming from an automated flying robot, I’ll take it. 🤘😛

“The World Deserves Witnesses”

Lovely work from Leica, testifying to the power of presence, capture, and connection.

Per PetaPixel,

“A witness, someone who sees what others simply watch,” the company writes in a description of the campaign. “When Leica invented the first 35mm camera in 1914, it allowed people to capture their world and the world around them and document its events, no matter how small or big they were. Today, as for more than one century, Leica keeps celebrating the witnesses, the ones who see the everyday beauty, grace and poetry, and the never ending irony and drama of our human condition, and bring their cameras to the eye in order to frame it and fix it forever.”

Facebook improves automatic image description

I love seeing progress towards making the world more accessible, and tech that’s good for inclusion can also benefit all users & businesses. Here the researchers write,

To make our models work better for everyone, we fine-tuned them so that data was sampled from images across all geographies, and using translations of hashtags in many languages. We also evaluated our concepts along gender, skin tone, and age axes. The resulting models are both more accurate and culturally and demographically inclusive — for instance, they can identify weddings around the world based (in part) on traditional apparel instead of labeling only photos featuring white wedding dresses.

PetaPixel writes,

Facebook says that this new model is more reliably able to recognize more than 1,200 concepts, which is more than 10 times as many as the original version launched in 2016.

From refugee to… squirrel photographer?

Our kids were born with such voluminous, Dizzy Gillespie-grade cheeks that we immediately dubbed them “The Squirrels,” and we later gave our van the license plate SQRLPOD. This has nothing to do with anything, but I thought of it fondly upon seeing this charming 1-minute portrait:

Niki Colemont is a wildlife photographer and a survivor who fled the Rwandan genocide at just four years old, arriving in Belgium as a refugee. The National Geographic 2019 finalist photographer finds peace today in photographing squirrels, whom he considers “the perfect models.”

Drone rescues drone

“For he is truly his brother’s keeper, and the finder of lost children…”

Photographer Ty Poland tells the story, including this MacGyver-y bit:

Next up, we needed a lasso. Thankfully our good friend Martin Sanchez had an extra pair of shoes in the trunk. On top of that, he had just polished off a fresh iced coffee from Dunkin with a straw. With these two ingredients, we were able to construct an open lasso. By simply putting the straw over the shoelace and adding a small key chain for weight, we were able to center the lasso to the Mavic 2 Pro for the rescue.

Samsung adds one-tap object erasing

If you want to be successful, says Twitter founder Evan Williams, “Take a human desire, preferably one that has been around for a really long time…Identify that desire and use modern technology to take out steps.” 

My old Photoshop boss Kevin Connor liked to cite the Healing Brush as an example of how tech kept evolving to offer more specialized, efficient solutions (in this case, from the more general Clone Stamp to something purpose-built). Content-Aware Fill, which we shipped back in 2010, was another such optimization, and now its use is getting even more specialized/direct.

PetaPixel writes,

Samsung added Object Eraser, a tool powered by AI that appears to work by combining object recognition with something like Adobe’s Content-Aware Fill. In any photo captured on an S21 series phone, simply tap the button to activate Object Eraser, then tap on the people you want to remove, and the phone automatically does all the work.
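That description boils down to two steps: derive a mask for the tapped object, then synthesize plausible pixels underneath it. As a back-of-envelope illustration of the fill step only (nothing like Samsung’s or Adobe’s actual algorithms), you can propagate known neighboring values into the masked hole:

```python
# Toy stand-in for the "erase" step: given a grayscale image (nested lists)
# and a mask marking the tapped object, repeatedly fill each masked pixel
# with the mean of its already-known 4-neighbors, growing inward.

def erase(image, mask):
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    unknown = {(r, c) for r in range(h) for c in range(w) if mask[r][c]}
    while unknown:
        filled = set()
        for r, c in unknown:
            nbrs = [img[rr][cc]
                    for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= rr < h and 0 <= cc < w and (rr, cc) not in unknown]
            if nbrs:
                img[r][c] = sum(nbrs) / len(nbrs)
                filled.add((r, c))
        if not filled:  # fully masked image: nothing to propagate from
            break
        unknown -= filled
    return img

# A flat 5.0 background with a "blemish" at the center; erasing restores it.
cleaned = erase([[5.0, 5.0, 5.0], [5.0, 9.0, 5.0], [5.0, 5.0, 5.0]],
                [[0, 0, 0], [0, 1, 0], [0, 0, 0]])
```

Real content-aware fill is far smarter—it copies coherent texture patches rather than blurring neighbors inward—but the mask-then-synthesize pipeline is the same shape.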

Night Photo Summit coming soon

I was excited to learn today that Adobe’s Russell Brown, together with a large group of other experts, is set to teach night photography techniques February 12-14:

28 speakers in 6 categories will present 30+ talks over three days. Our goal: “Inspiring night photographers across the galaxy.”

Check out the schedule & site for more. Meanwhile I’m hoping to get out to the desert with Russell in a couple of weeks, and I hope to help produce some really cool stuff. 🤞





A post shared by National Parks At Night (@nationalparksatnight)

Behind the scenes: Drone light painting

I’m a longtime admirer of Reuben Wu’s beautiful light painting work, and planning to head to Death Valley next month, I thought I’d try to learn more about his techniques. Happily he’s shared a quick, enlightening (heh) peek behind the scenes of his process:

I also enjoyed this more detailed how-to piece from Daniel James. He’s convinced me to spring for the Lume Cube Mavic Pro kit, though I welcome any additional input!

🎄Blinding Lights & Beach Boys

Although we can’t travel far this holiday season (bye bye, Mendocino! see you… someday?), we are, in the words of Tony Stark, “bringing the party to you” via our decked-out VW Westy (pics). I’m having fun experimenting with my new Insta360 One X2, mounting it on a selfie stick & playing with timelapse mode. Here’s a taste of the psychedelic stylings, courtesy of The Weeknd…

…and Brian Wilson:

Astro Adobeans

A couple of my old Adobe pals (who happen to dwell in the dark, beautiful wilderness around Santa Cruz) have been sharing some great astrophotography-related work lately.

First, Bryan O’Neil Hughes shares tips on photographing the heavens, including the Jupiter-Saturn Conjunction and the Ursids Meteor Shower:

Meanwhile Michael Lewis has been capturing heavenly shots which, in the words of my then ~4-year-old son, “make my mind blow away.” Check out his Instagram feed for images like this:

And if you’re shooting with a phone—especially with a Pixel—check out these tips from former Pixel imaging engineer Florian Kainz (who’s now also at Adobe—hat trick!).

Google Photos rolls out Cinematic Photos & more

Nearly 20 years ago, on one of my first customer visits as a Photoshop PM, I got to watch artists use PS + After Effects to extract people from photo backgrounds, then animate the results. The resulting film—The Kid Stays In The Picture—lent its name to the distinctive effect (see previous).

Now I’m delighted that Google Photos is rolling out similar output to its billion+ users, without requiring any effort or tools:

We use machine learning to predict an image’s depth and produce a 3D representation of the scene—even if the original image doesn’t include depth information from the camera. Then we animate a virtual camera for a smooth panning effect—just like out of the movies.
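The depth-then-pan recipe above can be sketched in miniature. Here’s my own toy illustration (a guess at the general idea, not Google’s pipeline): as the virtual camera moves, near pixels shift farther across the frame than distant ones, and nearer pixels occlude what lands behind them:

```python
# Toy depth-driven parallax on a single scanline (my illustration, not
# Google's Cinematic Photos code). Each pixel shifts horizontally by an
# amount inversely proportional to its depth.

def parallax_shift(row, depth_row, camera_offset, background=0):
    out = [background] * len(row)
    # Paint far pixels first so nearer ones occlude them after shifting.
    order = sorted(range(len(row)), key=lambda x: -depth_row[x])
    for x in order:
        new_x = x + round(camera_offset / depth_row[x])
        if 0 <= new_x < len(out):
            out[new_x] = row[x]
    return out

row       = [10, 20, 30, 40]   # pixel values along one scanline
depth_row = [1,  1,  2,  2]    # first two pixels are twice as close
shifted = parallax_shift(row, depth_row, camera_offset=2)
# Near pixels move 2 steps, far pixels 1 -> [0, 0, 10, 20]
```

The production version renders a full 3D mesh from the predicted depth map and inpaints the disocclusions this crude pixel-pushing leaves as holes.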

Photos is also rolling out new collages, like this:

And they’re introducing new themes in the stories-style Memories section up top as well:

Now you’ll see Memories surface photos of the most important people in your life…  And starting soon, you’ll also see Memories about your favorite things—like sunsets—and activities—like baking or hiking—based on the photos you upload.


Tilt-shift takes off

Remember Obama’s first term, when faked tilt-shift photos were so popular that Instagram briefly offered a built-in feature for applying the look? The effect got burned out, but I found it surprisingly fun to see it return in this short video.

In a brief interview, Sofia-based photographer Pavel Petrov shares some behind-the-scenes details.

I have used Adobe Premiere Pro for post processing with some compound blur (for the narrow depth of field) and some oversaturation and speed up to 300%.

Google Photos gets HDR & sky palette transfer on Pixel

A couple of exciting new features have landed for Pixel users. My colleague Navin Sarma writes,

Sky palette transfer in Photos – Sky palette transfer allows users to quickly improve their images that contain sky, achieving a dramatic, creative, and professional effect. It localizes the most dramatic changes to color and contrast to the sky, and tapers the effect to the foreground. It’s especially powerful to improve images of sunsets or sunrises, or where there are complex clouds and contrasty light. 

Dynamic/HDR in Photos – The “Dynamic” suggestion is geared towards landscape and “still life” photography, where images can benefit from enhanced brightness, contrast, and color. This effect uses local tone mapping, which allows more control of where brightness and contrast changes occur, making it especially useful in tricky lighting situations. You can use this effect on any photo by using the “Dynamic” suggestion, or navigating to Adjust and moving the “HDR” slider.
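The “local tone mapping” that note mentions is the interesting part: instead of applying one brightness curve to the whole frame, each pixel’s adjustment depends on its neighborhood. A minimal sketch (my own illustration, not Google’s algorithm):

```python
# Minimal local tone mapping sketch on one scanline of values in [0, 1].
# Each neighborhood's exposure is pushed toward middle gray while the
# pixel's detail (its offset from the local mean) is preserved, so
# shadows get lifted without flattening already-bright regions.

def local_mean(values, i, radius=1):
    lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
    window = values[lo:hi]
    return sum(window) / len(window)

def local_tone_map(values, strength=0.5, mid=0.5):
    out = []
    for i, v in enumerate(values):
        m = local_mean(values, i)
        gain = 1.0 + strength * (mid - m)   # dark neighborhoods: gain > 1
        out.append(min(1.0, max(0.0, m * gain + (v - m))))
    return out

# A dark region next to a bright one: shadows rise, highlights compress.
scan = [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]
mapped = local_tone_map(scan)
```

A global curve with the same average lift would either leave the shadows murky or clip the highlights; operating on local means is what makes the effect useful “in tricky lighting situations,” as the feature description puts it.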

Animation: Cutout LeBron flies in the real world

Artist Rudy Willingham has developed a clever, laborious way of turning video frames into physical cutouts & then overlaying them on interesting backgrounds. Check it out:






A post shared by Rudy Willingham (@rudy_willingham)

I think I like this set even more:





A post shared by Rudy Willingham (@rudy_willingham)


Inside iPhone’s “Dark Universe”

As a kid, I spent hours fantasizing about the epic films I could make, if only I could borrow my friend’s giant camcorder & some dry ice. Apple 💯 has their finger on the pulse of such aspirational souls in this new ad:

It’s pretty insane to see what talented filmmakers can do with just a phone (or rather, a high-end camera/computer/monitor that happens to make phone calls) and practical effects:

Apple has posted an illuminating behind-the-scenes video for this piece. PetaPixel writes,

In one clip they show how they dropped the phone directly into rocks that they had fired upwards using a piston, and in another, they use magnets and iron filings with the camera very close to the surface. One step further, they use ferrofluid to create rapidly flowing ripples that flow wildly on camera.

Check it out:

Photoshop’s new Smart Portrait is pretty amazing

My longstanding dream (dating back to the Bush Administration!) to have face relighting in Photoshop has finally come true—and then some. In case you missed it last week, check out Conan O’Brien meeting machine learning via Photoshop:

On PetaPixel, Allen Murabayashi from PhotoShelter shows what it can do on a portrait of Joe Biden—presenting this power as a potential cautionary tale:

Here’s a more in-depth look (starting around 1:46) at controlling the feature, courtesy of NVIDIA, whose StyleGAN tech powers the feature:

I love the fact that the Neural Filters plug-in provides a playground within Photoshop for integrating experimental new tech. Who knows what else might spring from Adobe-NVIDIA collaboration—maybe scribbling to create a realistic landscape, or even swapping expressions among pets (!?):

Photoshop’s Sky Replacement feature was well worth the wait

Although I haven’t yet gotten to use it extensively, I’m really enjoying the newly arrived Sky Replacement feature in Photoshop. Check out a quick before/after on a tiny planet image:

Eye-popping portraits emerge as paint cascades down the human face

Man, these are stunning—and they’re all done in camera:

First coated in black, the anonymous subjects in Tim Tadder’s portraits are cloaked with hypnotic swirls and thick drips of bright paint. To create the mesmerizing images, the Encinitas, California-based photographer and artist pours a mix of colors over his sitters and snaps a precisely-timed shot to capture each drop as it runs down their necks or splashes from their chins.

You can find more of the artist’s work on Behance and Instagram.

Photographic downfall: “Tsunami from Heaven”

This is lovely—especially from a safe, dry distance:

PetaPixel writes,

A couple of years ago, adventure photographer and Visit Austria creator Peter Maier captured a stunning rainstorm timelapse titled ‘Tsunami from Heaven’… It was captured from the Alpengasthof Bergfried hotel in Carinthia, Austria, and shows a sudden cloudburst (AKA microburst or downburst) soaking an area around Lake Millstatt.

Google & researchers demo AI-powered shadow removal

Speaking of Google photography research (see previous post about portrait relighting), I’ve been meaning to point to the team’s collaboration with MIT & Berkeley. As PetaPixel writes,

The tech itself relies on not one, but two neural networks: one to remove “foreign” shadows that are cast by unwanted objects like a hat or a hand held up to block the sun in your eyes, and the other to soften natural facial shadows and add “a synthetic fill light” to improve the lighting ratio once the unwanted shadows have been removed.
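The “synthetic fill light” and “lighting ratio” language maps onto classic portrait lighting: the ratio of key-lit to shadowed skin. Here’s a back-of-envelope toy of my own (not the paper’s neural networks) showing how blending shadowed pixels toward the lit exposure softens that ratio:

```python
# Toy synthetic fill light (my illustration, not the Google/MIT/Berkeley
# models). shadow_map[i] in [0, 1]: 1 = fully shadowed. `fill` controls
# how much of the missing light is synthetically restored.

def add_fill_light(pixels, shadow_map, fill=0.5):
    lit = max(pixels)  # crude stand-in for the fully lit skin tone
    return [p + fill * s * (lit - p) for p, s in zip(pixels, shadow_map)]

face = [0.8, 0.8, 0.3, 0.3]      # right side of the face in shadow
shadows = [0.0, 0.0, 1.0, 1.0]
softened = add_fill_light(face, shadows)
# Lighting ratio drops from 0.8/0.3 (about 2.7:1) to 0.8/0.55 (about 1.5:1)
```

The hard problems the networks solve are upstream of this arithmetic: estimating the shadow map on a real face, and distinguishing “foreign” shadows to remove outright from facial shadows to merely soften.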

Here’s a nice summary from Two-Minute Papers:

Interactive Portrait Light comes to Google Photos on Pixel; editor gets upgraded

I have been waiting, I kid you not, since the Bush Administration to have an easy way to adjust lighting on faces. I just didn’t expect it to appear on my telephone before it showed up in Photoshop, but ¯\_(ツ)_/¯. Anyway, check out what you can now do on Pixel 4 & 5 devices:

This feature arrives, as PetaPixel notes, as one of several new Suggestions:

Nestled into a new ‘Suggestions’ tab that shows up first in the Photos editor, the options displayed there “[use] machine learning to give you suggestions that are tailored to the specific photo you’re editing.” For now, this only includes three options—Color Pop, Black & White, and Enhance—but more suggestions will be added “in the coming months” to deal specifically with portraits, landscapes, sunsets, and beyond.

Lastly, the photo editor overall has gotten its first major reorganization since we launched it in 2015: