Category Archives: Photography

Free streaming classes on photography, 3D

It’s really cool to see companies stepping up to help creative people make the most of our forced downtime. PetaPixel writes,

If you’re a photographer stuck at home due to the coronavirus pandemic, Professional Photographers of America (PPA) has got your back. The trade association has made all of its 1,100+ online photography classes free for the next two weeks. […]

You can spend some of your lockdown days learning everything from how to make money in wedding photography to developing a target audience to printing in-house.


Meanwhile Unity is opening up its Learn Premium curriculum:

During the COVID-19 crisis, we’re committed to supporting the community with complimentary access to Unity Learn Premium for three months (March 19 through June 20). Get exclusive access to Unity experts, live interactive sessions, on-demand learning resources, and more.


“NeRF” promises amazing 3D capture

“This is certainly the coolest thing I’ve ever worked on, and it might be one of the coolest things I’ve ever seen.”

My Google Research colleague Jon Barron routinely makes amazing stuff, so when he gets a little breathless about a project, you know it’s something special. I’ll pass the mic to him to explain their new work around capturing multiple photos, then synthesizing a 3D model:

I’ve been collaborating with Berkeley for the last few months and we seem to have cracked neural rendering. You just train a boring (non-convolutional) neural network with five inputs (xyz position and viewing angle) and four outputs (RGB+alpha), combine it with the fundamentals of volume rendering, and get an absurdly simple algorithm that beats the state of the art in neural rendering / view synthesis by *miles*.

You can change the camera angle, change the lighting, insert objects, extract depth maps — pretty much anything you would do with a CGI model, and the renderings are basically photorealistic. It’s so simple that you can implement the entire algorithm in a few dozen lines of TensorFlow.
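
If you’re curious what “absurdly simple” looks like, here’s a rough sketch of that recipe (mine, not the team’s code, and skipping details like the positional encoding the real system uses): a plain MLP maps the 5D input to color plus density, and classic volume-rendering quadrature composites the samples along each ray.

```python
import tensorflow as tf

# Sketch of the recipe Jon describes (not the authors' code): a plain MLP
# maps (x, y, z, viewing angle) to RGB + density, and standard volume-
# rendering quadrature composites the samples along each camera ray.

def make_nerf_mlp(hidden=256, depth=8):
    inputs = tf.keras.Input(shape=(5,))  # xyz position + 2D viewing angle
    h = inputs
    for _ in range(depth):
        h = tf.keras.layers.Dense(hidden, activation="relu")(h)
    rgb = tf.keras.layers.Dense(3, activation="sigmoid")(h)  # color
    sigma = tf.keras.layers.Dense(1, activation="relu")(h)   # density
    return tf.keras.Model(inputs, [rgb, sigma])

def render_ray(rgb, sigma, deltas):
    """Composite N samples along one ray into a single RGB value.
    rgb: [N, 3], sigma: [N, 1], deltas: [N, 1] distances between samples."""
    alpha = 1.0 - tf.exp(-sigma * deltas)            # opacity of each segment
    # Transmittance: how much light survives to reach each sample.
    trans = tf.math.cumprod(1.0 - alpha + 1e-10, axis=0, exclusive=True)
    weights = alpha * trans                          # per-sample contribution
    return tf.reduce_sum(weights * rgb, axis=0)      # final pixel color
```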

Check it out in action:

[YouTube]

“Virus” tintype animation

TBH the last thing I want is for coronavirus talk to infect (ahem) my escapist art-posting, but I’ve gotta give Markus Hofstätter props for the sheer effort he put into making this 7-frame animation using the archaic tintype process (or as my wife asked, lacking all context, “Why did that dude put a picture into a panini press?”). You can watch his process from the beginning (and check out PetaPixel for the full story), or just jump to the finished animation at the end:


[YouTube]

Google AI helps upscale “Lunar Rover Grand Prix” to 4K and 60fps

So cool! I’d never actually watched these Apollo 16 clips on their own, unedited & with original dialog intact.

PetaPixel writes,

For this particular project, Shiryaev used the stabilized version of the footage that NASA itself released in July of 2019 as a baseline. He then fed it through the same AI software that he’s been using to upscale all of the videos he’s released: Google’s DAIN to interpolate frames and achieve 60fps, and Topaz Labs’ Gigapixel AI to upscale each frame and achieve 4K resolution.
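
DAIN and Gigapixel AI are standalone tools rather than scriptable libraries, so as a crude stand-in, here’s the same two-stage idea (interpolate to 60fps, then upscale to 4K) using ffmpeg’s built-in motion interpolation. Expect far rougher results than the AI tools above; the filenames are made up.

```python
import subprocess

# Rough stand-in for the two-stage pipeline described above, using ffmpeg's
# motion-compensated interpolation and Lanczos scaling in place of DAIN and
# Gigapixel AI. Filenames are hypothetical.
subprocess.run([
    "ffmpeg", "-i", "apollo16_stabilized.mp4",
    "-vf", ("minterpolate=fps=60:mi_mode=mci,"   # synthesize in-between frames
            "scale=3840:2160:flags=lanczos"),    # naive upscale to 4K
    "-c:a", "copy",                              # leave the audio untouched
    "apollo16_4k60.mp4",
], check=True)
```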

More about the mission from NASA:

[YouTube 1 & 2]

Quick Comparison: Pixel 4 vs. iPhone 11 at Night

[Please note: I don’t work on the Pixel team, and these opinions are just those of a guy with a couple of phones in hand, literally shooting in the dark.]

In Yosemite Valley on Friday night, I did some quick & unscientific but illuminating (oh jeez) tests shooting with a Pixel 4 & iPhone 11 Pro Max. I’d had fleeting notions of trying some proper astrophotography (side note: see these great tips from Pixel engineer & ILM vet Florian Kainz), but between the moon & the clouds, I couldn’t see a ton of stars. Therefore I mostly held up both phones, pressed the shutter button, and held my breath.

Check out the results in this album. You can see which camera produced which images by tapping each image, then tapping the little comment icon. I haven’t applied any adjustments.

Overall I’m amazed at what both devices can produce, but I preferred the Pixel’s interpretations. They were darker, but truer to what my eyes perceived, and very unlike the otherworldly, day-for-night iPhone renderings (which persisted despite a few attempts I made to set focus, then drag down the exposure before shooting).

Check out the results, judge for yourself, and let me know what you think.


Oh, and for a much more eye-popping Pixel 4 result, check out this post from Adobe’s Russell Brown.

VFX: Titanic in reverse

What a fascinating 90-second peek into a clever trick that saved millions of dollars in production costs on Titanic. As a friend asks, “I wonder what became of all those reverse WHITE STAR LINE sweaters?”

“Améliorer!” is the new “Enhance!”

I’ve long heard that 19th-century audiences would faint or jump out of their seats upon seeing gripping, O.G. content like “Train Enters Station.” If that’s true, imagine the blown minds that would result from this upgraded footage. Colossal writes,

Shiryaev first used Topaz Lab’s Gigapixel AI to upgrade the film’s resolution to 4K, followed by Google’s DAIN, which he used to create and add frames to the original file, bringing it to 60 frames per second.

Check out the original…

…and the enhanced version:

Update: Conceptually related:

[YouTube]

“Fishception!”

“It’s like a big fish made out of fish,” my 10yo son Henry just noted. “Fishception!”

Kottke, who says “Scary Sea Monster Really Just Hundreds of Tiny Fish in a Trench Coat,” notes:

“Try rewatching the video, picking one fish and following it the entire time. Then pick another fish and watch the video again. The juvenile striped eel catfish seem to cycle through positions within the school as the entire swarm moves forward.”

Like riders in a peloton, each taking their turn braving danger at the front.

[YouTube]

“How focal length can change your face” — and what can be done about it

Quick, interesting animation:

https://www.instagram.com/p/B7PGR45oDFg/

In a recent experiment, Prague-based photographer Dan Vojtech decided to shoot the same self-portrait at a range of focal lengths and log the effect on his face. The difference between 20mm and 200mm is unbelievable. So next time someone says that the camera adds 10 pounds, they’re not entirely wrong – it all depends on the equipment used.
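
The geometry behind this is easy to sanity-check: to keep a face the same size in frame, a wider lens has to get much closer, and up close the roughly 12cm from ears to nose tip becomes a big fraction of the camera distance. A toy calculation (my illustrative numbers, not Vojtech’s setup):

```python
# Toy perspective math behind the effect (illustrative numbers, not from
# the video): keeping the face the same size in frame means the camera
# distance scales with focal length, and the nose sits ~12cm closer to
# the camera than the plane of the ears.

def nose_magnification(focal_mm, meters_per_mm=0.015, nose_offset_m=0.12):
    """How much larger the nose projects relative to the ear plane."""
    camera_dist = focal_mm * meters_per_mm    # e.g. 20mm lens -> 0.3m away
    return camera_dist / (camera_dist - nose_offset_m)

for f in (20, 50, 200):
    print(f"{f}mm: nose appears {nose_magnification(f):.2f}x larger")
# 20mm: 1.67x, 50mm: 1.19x, 200mm: 1.04x (hence the distortion up close)
```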

Interestingly, a couple of years back some Adobe & Google researchers unveiled work on “Perspective-Aware Manipulation of Portrait Photos”:

[YouTube] [Via Peyman Milanfar]

Canon promises AI assistance for Lightroom culls

TL;DR: If this works, I’ll be pleasantly shocked.

I left Adobe in early 2014 in part due to a mix of fear & excitement about what Google was doing with AI & photography. Normal people generally just want help selecting the best images, making them look good, and maybe creating an album/book/movie from them. Accordingly, in 2013 Google+ launched automatic filtering that attempted to show just one’s best images, along with Auto Enhancement of every image & “Auto Awesomes” (animations, collages, etc.) derived from them. I couldn’t get any of this going at Adobe, and it seemed that Google was on the march (having just bought Nik Software, too), so over I went.

Unfortunately it’s really hard to know what precisely constitutes a “good” image (think shifting emotional valences vs. technical qualities). For consumers one can de-dupe somewhat (showing just one or two images from a burst) and try to screen out really blurry, badly lit images. Even so, even consumers distrust this kind of filtering & always want to look behind the curtain to ensure that the computer hasn’t missed something. Therefore when G+ Photos transitioned into just Google Photos, the feature was dropped & no one said boo. Automatic curation is still used to suggest things like books & albums, but as you may have seen when it’s applied to your own images, results can be hit or miss. 

So will pros trust such tech to help them sort through hundreds of similar images? Well… maybe? Canon’s prepping a subscription-based plug-in for the job:

The plugin is powered by the Canon Computer Vision AI engine and uses technical models to select photos based on a number of criteria: sharpness, noise, exposure, contrast, closed eyes, and red eyes. These “technical models” have customizable settings to give you some ability to control the process.
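
For a sense of how even the simple criteria could work, here’s a toy scoring function (definitely not Canon’s engine, just two classic heuristics blended into one cull score):

```python
import cv2

# Toy stand-in for the kind of "technical model" scoring described above
# (NOT Canon's engine): sharpness via variance of the Laplacian, exposure
# via distance from mid-gray.

def technical_score(path, w_sharp=0.7, w_expo=0.3):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # higher = crisper
    sharpness = min(sharpness / 1000.0, 1.0)            # rough 0..1 normalize
    exposure = 1.0 - abs(gray.mean() - 128) / 128       # 1.0 = mid-gray
    return w_sharp * sharpness + w_expo * exposure

# Cull a burst by keeping the top-scoring frames:
# keepers = sorted(burst_paths, key=technical_score, reverse=True)[:2]
```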

Here it is in action:


[YouTube]

Merry Christmas!

Hey everyone—happy holidays & Merry Christmas from me, Margot, Seamus, and the Micronaxx to you & yours. Thanks so much for being a reader (“the few, the ostensibly proud” 😛), and here’s to making more funky, inspiring discoveries in the new year. Meanwhile, here’s a quick glimpse of our tour of the holiday lights in Los Gatos (complete with throbbing fonky beatz in the tunnel of lights 🙃).

[Instagram: “Merry Christmas, everyone! ☺️🎧🤘”, a post shared by John Nack (@jnack)]

Making 3D models via your drone

I’ll admit that I haven’t yet taken the plunge into photogrammetry, but this tutorial makes me think I just might be able to do it. (And as we close out 2019, let’s take a moment to note how bonkers it is that for the price of a few hundred dollars in flying gear, just about anyone can generate 3D geometry and share it to just about any device sporting a Web browser!)

[YouTube]

Low-light shooting tips for the Insta360

I know this post might be of super niche interest, but I’m going to try out its recommendations tonight when we drive through holiday lights. I think the flowcharts basically boil down to “Go manual, keep ISO at 400 or lower, and bump it up/down to get the exposure right. Oh, and set shutter speed to 2x frame rate for no motion & 4x for moderate motion.” Any shooting tips you may have to share are most welcome as well!
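
In other words, the shutter rule of thumb reduces to a one-liner (my paraphrase of the video’s flowcharts, not official Insta360 guidance):

```python
# The shutter rule of thumb above, as a quick calculator (my paraphrase of
# the video's flowcharts, not official Insta360 guidance).

def suggested_shutter(frame_rate_fps, motion="none"):
    multiplier = {"none": 2, "moderate": 4}[motion]
    return f"1/{frame_rate_fps * multiplier}s"

print(suggested_shutter(30, "none"))      # 1/60s for static scenes at 30fps
print(suggested_shutter(30, "moderate"))  # 1/120s when there's some motion
```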

[YouTube]

AI: Diving into Portrait mode improvements

“FM technology: that stands for F’ing Magic…” So said the old Navy radio repair trainer, and it comes to mind reading about how the Google camera team used machine learning plus a dual-lens setup to deliver beautiful portraiture on the Pixel 4:

https://ai.googleblog.com/2019/12/improvements-to-portrait-mode-on-google.html

With the Pixel 4, we have made two more big improvements to this feature, leveraging both the Pixel 4’s dual cameras and dual-pixel auto-focus system to improve depth estimation, allowing users to take great-looking Portrait Mode shots at near and far distances. We have also improved our bokeh, making it more closely match that of a professional SLR camera.
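
To make the two-step idea concrete, here’s a bare-bones sketch with OpenCV’s classic block matcher standing in for Google’s learned depth (their pipeline fuses dual-pixel and dual-camera cues and is far more robust; the filenames here are hypothetical):

```python
import cv2
import numpy as np

# Bare-bones depth-then-blur sketch (nothing like Google's learned pipeline):
# estimate disparity from a stereo pair, keep the subject sharp, blur the
# rest to fake bokeh. Filenames are made up.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
color = cv2.imread("left.png")

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # 4 frac bits

subject = np.percentile(disparity[disparity > 0], 90)  # crude "subject" pick
in_focus = (np.abs(disparity - subject) < 8)[..., None]

blurred = cv2.GaussianBlur(color, (31, 31), 0)         # fake bokeh
cv2.imwrite("portrait.png", np.where(in_focus, color, blurred))
```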


Google Pixel introduces post-capture Portrait blur

🎉

Now, you can turn a photo into a portrait on Pixel by blurring the background post-snap. So whether you took the photo years ago, or you forgot to turn on portrait mode, you can easily give each picture an artistic look with Portrait Blur in Google Photos.

I’m also pleased to see that the realtime portrait-blurring tech my team built has now come to Google Duo for use during video calls.

Stroboscopic NYC: “A Crowdsourced Hyperlapse”

Prepare for retinal blast-off (and be careful if you’re sensitive to flashing lights).

What happens when everything in the world has been photographed? From multiple angles, multiple times per day? Eventually we’ll piece those photos and videos together to be able to see the entire history of a location from every possible angle.

“I sifted through probably ~100,000 photos on Instagram using location tags and hashtags, then sorted, and then hand-animated in After Effects to create a crowdsourced hyperlapse video of New York City,” Morrison tells PetaPixel. “I think the whole project took roughly 200 hours to create!”

Happy Thanksgiving

Hey gang—I’m working my way out of the traditional tryptophan-induced haze enough to wish you a slightly belated Happy Thanksgiving. I hope you were able to grab a restful few days. Amidst bleak (for Cali) weather I was able to capture a few fun tiny planet shots (see below) and learn about how to attach a 360º cam to a drone (something I’ve not yet been brave/foolhardy enough to try):

Planet PA


[YouTube]

Google Earth adds new storytelling tools

I’m delighted to see new ways to pair one’s own images with views of our planet:

With creation tools in Google Earth, you can draw your own placemarks, lines and shapes, then attach your own custom text, images, and videos to these locations. You can organize your story into a narrative and collaborate with others. And when you’ve finished your story, you can share it with others. By clicking the new “Present” button, your audience will be able to fly from place to place in your custom-made Google Earth narrative.

Take a look at how students & others are using it:

Here’s a 60-second-ish tour of the actual creation process:

[YouTube]

Bittersweet Symphony: Lightroom improves iPad import

“Hey, y’all got a water desalination plant, ’cause I’m salty as hell.” 🙃

First, some good news: Lightroom is planning to improve the workflow of importing images from an SD card.

I know that this is something that photographers deeply wanted, starting in 2010. I just wonder whether—nearly 10 years since the launch of iPad—it matters anymore.

My failure, year in & year out, to solve the problem at Adobe is part of what drove me to join Google in 2014. But even back then I wrote,

I remain in sad amazement that 4.5 years after the iPad made tablets mainstream, no one—not Apple, not Adobe, not Google—has, to the best of my knowledge, implemented a way to let photographers do what they’d spent years beating me over the head requesting:

  • Let me leave my computer at home & carry just my tablet & camera
  • Let me import my raw files (ideally converted to vastly smaller DNGs), swipe through them to mark good/bad/meh, and non-destructively edit them, singly or in batches, with full raw quality.
  • When I get home, automatically sync all images + edits to/via the cloud and let me keep editing there or on my Mac/PC.

This remains a bizarre failure of our industry.

Of course this wasn’t lost on the Lightroom team, but for a whole bunch of reasons, it’s taken this long to smooth out the flow, and during that time capture & editing have moved heavily to phones. Tablets represent a single-digit percentage of Snapseed session time, and I’ve heard the same from the makers of other popular editing apps. As phones improve & dedicated-cam sales keep dropping, I wonder how many people will now care.

On we go.

[YouTube]

“Sea-thru”: AI-driven underwater color correction

Dad-joke of a name notwithstanding 😌, this tech looks pretty slick:

PetaPixel writes,

To be clear, this method is not the same as Photoshopping an image to add in contrast and artificially enhance the colors that are absorbed most quickly by the water. It’s a “physically accurate correction,” and the results truly speak for themselves.
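
Under the hood, methods in this family start from a physical model of how water attenuates and scatters light. Sea-thru itself uses a revised, range-dependent version of that model, but the classic formulation is simple enough to sketch:

```python
import numpy as np

# Sketch of the classic underwater image-formation model that work like
# Sea-thru builds on (Sea-thru itself uses a revised, wavelength- and
# range-dependent model):
#   I = J * exp(-beta * z) + B * (1 - exp(-beta * z))
# I: captured color, J: true color, z: distance to the scene (range map),
# beta: per-channel attenuation, B: backscatter color.

def recover_true_color(I, z, beta, B):
    """I: [H,W,3] in 0..1, z: [H,W] meters, beta: [3], B: [3]."""
    t = np.exp(-beta[None, None, :] * z[..., None])  # transmission per channel
    J = (I - B * (1.0 - t)) / np.maximum(t, 1e-3)    # undo scatter, then absorb
    return np.clip(J, 0.0, 1.0)
```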

And as some wiseass in the comments remarks, “I can’t believe we’ve polluted our waters so much there are color charts now lying on the ocean floor.”


[YouTube]

New Adobe tech can relight structures & synthesize shadows

Photogrammetry (building 3D from 2D inputs—in this case several source images) is what my friend learned in the Navy to refer to as “FM technology”: “F’ing Magic.”

Side note: I know that saying “Time is a flat circle” is totally worn out… but, like, time is a flat circle, and what’s up with Adobe style-transfer demos showing the same (?) fishing village year after year? Seriously, compare 2013 to 2019. And what a super useless superpower I have in remembering such things. ¯\_(ツ)_/¯ 


[YouTube] [Via]

Adobe announces Photoshop Camera

This new iOS & Android app (not yet available, though you can sign up for prerelease access) promises to analyze images, suggest effects, and keep the edits adjustable (though it’s not yet clear whether they’ll be editable as layers in “big” Photoshop).

I’m reminded of really promising Photoshop Elements mobile concepts from 2011 that went nowhere; of the Fabby app some of my teammates created before being acquired by Google; and of all I failed to enable in Google Photos. “Poo-tee-weet?” ¯\_(ツ)_/¯ Anyway, I’m eager to take it for a spin.


[YouTube]

“How Art Inspired the Google Pixel 4 Camera”

“The only problem with Microsoft,” Steve Jobs famously said, “is they just have no taste. They have absolutely no taste.” But critically:

And I don’t mean that in a small way, I mean that in a big way, in the sense that they don’t think of original ideas, and they don’t bring much culture into their products.

Here’s Marc Levoy providing a nice counterpoint, talking about art history & its relationship with modern computational photography:

[YouTube]

Pixel 4: Check out amazing astrophotography, super zoom, and more

“The Camera Professor” (as Reddit called him) Marc Levoy gave a great overview today of his team’s work in computational photography, after which Annie Leibovitz came to the stage to discuss her craft & Pixel 4. “My IQ went up by at least 10 by the time he was done,” per the same thread. 😌 Enjoy!

(Starts around 47:12, just in case the deep link above doesn’t take you there directly)


[YouTube]