I’ve never seen this experimental documentary, but I can dig the steampunk gloss it’s been given courtesy of machine learning:
For comparison, check out the original scene:
[Via Earth Oliver]
I, along with a number of colleagues, had the opportunity the other day to speak to Rachel DuRose of Insider (formerly Business Insider) about why we work at Adobe—and why a number of us have returned. In case you’re interested, here are my summarized comments:
“I joined Adobe in 2000. I was working on web animation tools and after a couple years on that, a job opened on Photoshop. I ended up going to Google in 2014 because they were making a huge push into computational photography.”
“I guess a key difference for me between companies is that Google got into photography kind of as a hobby, and for Adobe it’s really the bread and butter of the company. Adobe people tend to come to projects because they really care about the specific mission — people tend to commit to a project for quite some time.”
“I came back in March of last year because I saw what Adobe had been doing around AI and machine learning. I was excited to come back and try to navigate that emerging world and figure out how we make these things useful and meaningful to a lot of folks and also do it responsibly so that it aligns with our values.”
“In my first tenure and in my return, imaging and the creative parts of Adobe remained the bedrock of the company identity, so I think that’s a through line. I guess the contrast, if there is one, is that now the company has expanded into all these things it really didn’t do before.”
“Every job is called ‘work’ for a reason. It’s gonna be challenging and frustrating and a million other things, but the caring part, I think, is the distinctive one. I’m cool with people swearing because they care. I’m cool with people who are unreasonably committed to getting something right, or going that extra mile.”
PantherMedia, the first microstock agency in Germany, […] partnered with VAIsual, a technology company that pioneers algorithms and solutions to generate synthetic licensed stock media. The two have come together to offer the first set of 100% AI-generated, licensable stock photos of “people.”
None of the photos are of people who actually exist.
The “first” claim seems odd to me, as Generated.photos has been around for quite some time—albeit not producing torsos. That site offers an Anonymizer service that can take in your image, then generate multiple faces that vaguely approximate your characteristics. Here’s what it made for me:
Now I’m thinking of robots replacing humans in really crummy stock-photo modeling jobs, bringing to mind Mr. “Rob Ott” sliding in front of the camera:
Heh—happy Friday from these little guys:
Spoiler: the trick here is a pair of simulations stitched together, like a physics Texas Switch: “Each sequence is obtained by joining two simulations, both starting from the time in which the balls are arranged regularly. One simulates forward in time, one backwards.”
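The trick is easy to sketch in code. Here's a toy one-dimensional version (my own illustration, not the researchers' actual simulation): run one simulation forward from the ordered state, run a second with reversed velocities, then play the second one backwards and splice the two together.

```python
import random

def step(pos, vel, dt=0.02):
    """Advance particles one time step, bouncing elastically off walls at 0 and 1."""
    new_pos, new_vel = [], []
    for p, v in zip(pos, vel):
        p += v * dt
        if p < 0 or p > 1:          # bounce off the wall
            v = -v
            p = max(0.0, min(1.0, p))
        new_pos.append(p)
        new_vel.append(v)
    return new_pos, new_vel

def simulate(pos, vel, n_steps):
    frames = [pos]
    for _ in range(n_steps):
        pos, vel = step(pos, vel)
        frames.append(pos)
    return frames

# Start from a perfectly regular arrangement with random velocities.
random.seed(0)
start = [i / 10 for i in range(10)]
vel = [random.uniform(-1, 1) for _ in start]

forward = simulate(start, vel, 100)
# Reverse the velocities to simulate "backwards" in time from the same state.
backward = simulate(start, [-v for v in vel], 100)

# Stitch: play the backward run in reverse, then the forward run.
# The result looks like chaos resolving into perfect order, then dissolving again.
sequence = backward[::-1] + forward[1:]
```

(The wall-clamping here isn't perfectly time-reversible, but it conveys the gist: nothing in the video runs "backwards" at all; both halves are ordinary forward simulations that merely share a starting frame.)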
Researchers at NVIDIA & Case Western Reserve University have developed an algorithm that can distinguish different painters’ brush strokes “at the bristle level”:
Extracting topographical data from a surface with an optical profiler, the researchers scanned 12 paintings of the same scene, painted with identical materials, but by four different artists. Sampling small square patches of the art, approximately 5 to 15 mm, the optical profiler detects and logs minute changes on a surface, which can be attributed to how someone holds and uses a paintbrush.
They then trained an ensemble of convolutional neural networks to find patterns in the small patches, sampling between 160 and 1,440 patches for each of the artists. Using NVIDIA GPUs with cuDNN-accelerated deep learning frameworks, the algorithm matches the samples back to a single painter.
The team tested the algorithm against 180 patches of an artist’s painting, matching the samples back to a painter at about 95% accuracy.
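To make the pipeline concrete, here's a heavily simplified sketch (mine, not NVIDIA's code): sample small patches from the surface scan, classify each patch independently, then let the patches vote on the painting's author. A single mean-height feature stands in for the CNN ensemble, and the two "artists" are hypothetical.

```python
import random
from collections import Counter

def sample_patches(surface, patch=8, n=64):
    """Sample n small square patches from a 2-D topography map (list of rows)."""
    h, w = len(surface), len(surface[0])
    out = []
    for _ in range(n):
        y = random.randrange(h - patch)
        x = random.randrange(w - patch)
        out.append([row[x:x + patch] for row in surface[y:y + patch]])
    return out

def patch_feature(p):
    """Toy stand-in for a CNN: mean surface height of the patch."""
    vals = [v for row in p for v in row]
    return sum(vals) / len(vals)

def attribute(surface, artist_means):
    """Classify each patch independently, then majority-vote the whole painting."""
    votes = []
    for p in sample_patches(surface):
        f = patch_feature(p)
        votes.append(min(artist_means, key=lambda a: abs(artist_means[a] - f)))
    return Counter(votes).most_common(1)[0][0]

# Two hypothetical artists whose strokes leave different average ridge heights.
random.seed(1)
artist_means = {"A": 0.2, "B": 0.8}
painting_by_A = [[random.gauss(0.2, 0.05) for _ in range(64)] for _ in range(64)]
print(attribute(painting_by_A, artist_means))  # prints "A"
```

The aggregation step is the interesting part: no single patch is conclusive, but hundreds of weak per-patch guesses combine into a confident attribution.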
Equipped with an immersive device (VR headset and backpack), visitors will be able to move freely in a 500 sqm space in Virtual Reality. Guided by a “Compagnon du Devoir” they will travel through different centuries and will explore several eras of Notre Dame de Paris and its environment, recreated in 3D.
Thanks to scientific surveys, and precise historical data, the cathedral and its surroundings have been precisely reproduced to enhance the visitor’s immersion and engagement in the experience.
Check out the short trailer below:
A year ago, I was shivering out amidst the Trona Pinnacles with Russell Brown, working to capture some beautiful celestial images at oh-dark-early. I’m wholly unsurprised to learn that he knows photographer Michael Shainblum, who went to even more extraordinary lengths to capture this image of the Milky Way together with the Golden Gate Bridge:
“I think this was the perfect balance of a few different things,” he explains. “The fog was thick and low enough to really block out most of the light pollution from the city, but the fog had also traveled so far inland that it covered most of the eastern bay as well. The clouds above just the eastern side around the cities may have also helped. The last thing is the time of evening and time of the season. I was photographing the Milky Way late at night as it started to glide across the western sky, away from the city.”
File under, “OMG, Duh, Why Didn’t We Think Of/Get This Sooner?” The Verge writes,
With Skydio’s self-flying drone, you don’t need to sketch or photograph those still frames, of course. You simply fly there. You fly the drone to a point in 3D space, press a button when the drone’s camera is lined up with what you want to see in the video, then fly to the next, virtually storyboarding your shot with every press.
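The keyframing idea is simple enough to sketch (a toy illustration of the concept, not Skydio's actual planner): record a 3D position at each button press, then interpolate a continuous path through those sparse waypoints for the camera to retrace.

```python
def interpolate_path(keyframes, steps_per_segment=10):
    """Linearly interpolate a flight path through recorded 3-D keyframes.

    keyframes: list of (x, y, z) positions captured at each button press.
    (A real planner would use smooth splines and obstacle avoidance --
    this just shows the basic idea of sparse keyframes becoming a path.)
    """
    path = []
    for a, b in zip(keyframes, keyframes[1:]):
        for i in range(steps_per_segment):
            t = i / steps_per_segment
            path.append(tuple(p + t * (q - p) for p, q in zip(a, b)))
    path.append(keyframes[-1])
    return path

# Three hypothetical button presses: low over the start, up and across.
waypoints = [(0, 0, 2), (5, 0, 3), (5, 5, 3)]
path = interpolate_path(waypoints)
```
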
Here’s some example output:
Check out the aforementioned Verge article for details on how the mode works (and sometimes doesn’t). Now I just need to get Russell Brown or someone (but let’s be honest, it’s Russell 😉) to expense one of these things so I can try it out.
I know that we’re on a pretty dark timeline sometimes, but these little bits of silly (?) human ingenuity keep me going:
I am excited to share a new study led by Shachar Givon & @MatanSamina w/ Ohad Ben Shahar: Goldfish can learn to navigate a small robotic vehicle on land. We trained goldfish to drive a wheeled platform that reacts to the fish’s movement (https://t.co/ZR59Hu9sib). pic.twitter.com/J5BkuGlZ34
— Ronen Segev (@ronen_segev) January 3, 2022
Stephen Colbert & crew had some good fun with the news:
Earlier this week I was amazed to see the 3D scan that Polycam founder Chris Heinrich was able to achieve by flying around LA & capturing ~100 photos of a neighborhood, then generating 3D results via the new Web version of Polycam:
I must try to replicate this myself!
You can take the results for a (literal) spin here, though note that they didn’t load properly on my iPhone.
As you may have seen in Google Earth & elsewhere, scanning & replicating amorphous organic shapes like trees remains really challenging:
It’s therefore all the more amazing to see the incredible results these exacting artists are able to deliver when creating free-to-use (!) assets for Unreal Engine:
[Via Michael Klynstra]
Nearly 16 (!) years ago I noted the passing of “novelist, self-taught pianist, semi-pro basketball player, composer, director of Shaft, who somehow still found time to be a groundbreaking photojournalist at Life for more than 20 years” Gordon Parks. Now HBO is streaming “A Choice of Weapons: Inspired By Gordon Parks,” covering his work & that of those he’s inspired to bear witness:
It’s wild what computers can now do—and wild how we just take much of it for granted.
Oh, global warming, you old scamp…
Illinois stayed largely snow-free during our recent visit, but I had some fun screwing around with Photoshop’s new Landscape Mixer Neural Filter, giving the place a dusting of magic:
Just for the lulz, I tried applying the filter to a 360° panorama I’d captured via my drone. The results don’t entirely withstand a lot of scrutiny (try showing the pano below in full-screen mode & examine the buildings), but they’re fun—and good grief, we can now do all this in literally one click!
For the sake of comparison, here’s the unmodified original:
As usual, I’m channeling Towelie in admitting I have no idea what’s going on right now (or at best only an inkling), but check out some recent witchcraft that takes in text & simple strokes, then synthesizes multiple kinds of outputs using a single model:
And as long as we’re talking hallucination:
Let’s all ease into 2022 with a sensible breakfast of pure, brick-flavored weirdness: