Monthly Archives: January 2022
“Pass The Ball” collaborative animation
Love this. Per Colossal,
Forty months in the making, “Pass the Ball” is a delightful and eccentric example of the creative possibilities of collaboration […] Each scenario was created by one of 40 animators around the world, who, as the title suggests, “pass the ball” to the next person, resulting in a varied display of styles and techniques from stop-motion to digital.
“Bear with me…” Insane FPV drone footage.
Wait for it…
Opal Camera promises DSLR-quality web conferencing for $299
I’ve long envied friends like Adobe design director Matthew Richmond & principal scientist Marc Levoy who have the time, equipment, and energy to rig up high-end cameras for videoconferencing. Now Opal promises similar quality for the low (?) price of $299. Check out The Verge’s review, available in robo-spoken form here if you’d prefer:
Xenophilia: A new documentary on the origins of “Alien”
I’m surrounded by more than a few Xenomorph-obsessed folks, and this work should be right up their alleys—and potentially mine:
The (Crazy) Horse has seen the “Barn”
I’ve long harbored a love of Neil Young’s deep weirdness, and this new documentary (covering the making of Crazy Horse’s new album Barn) sounds really promising:
PMs: Come intern with my team this summer!
We have a great opportunity for a current MBA student who’s interested in focusing on product management. Here’s what I wrote for the job description:
Creative self-expression is at a generational crossroads: as AI gives apps superhuman perception of the world, creative tools can offer expressive superpowers to anyone. This is your opportunity to join a world-class team of researchers, engineers, and software developers who are inventing the next generation of creative tools powered by machine learning. In this role you will help Adobe Research define and launch new products and experiences that bring AI to a wide range of photographers, illustrators, and hobbyists.
We’re seeking a high-energy Product Manager MBA Intern who combines a deep curiosity about creative imaging with solid experience conducting research, collaborating with cross-functional teams, and delivering solutions to ambiguous challenges.
What you’ll do:
- Partner with design & engineering colleagues to define product concepts that address a range of user needs around creative imaging.
- Test, validate, and iterate on these concepts via user research and engagement.
- Help define and deliver on opportunities to acquire users and drive revenue across products.
- Identify and quantify market fit and opportunities across user segments and platforms.
If this sounds like a good fit for your skills & availability, please drop us a line. And if you know someone who might be, please share the link. Thanks!
The “Netaverse”: Brooklyn BB goes 3D
I’m intrigued but not quite sure how to feel about this. Precisely tracking groups of fast-moving human bodies & producing lifelike 3D copies in realtime is obviously a stunning technical coup—but is watching the results something people will prefer to high-def video of the real individuals & all their expressive nuances? I have no idea, but I’d like to know more.
Baraka goes AI
I’ve never seen this experimental documentary, but I can dig the steampunk gloss it’s been given courtesy of machine learning:
For comparison, check out the original scene:
[Via Earth Oliver]
“Why Adobe?” My thoughts in Insider
I, along with a number of colleagues, had the opportunity the other day to speak to Rachel DuRose of Insider (formerly Business Insider) about why we work at Adobe—and why a number of us have returned. In case you’re interested, here are my summarized comments:
———
“I joined Adobe in 2000. I was working on web animation tools and after a couple years on that, a job opened on Photoshop. I ended up going to Google in 2014 because they were making a huge push into computational photography.”
“I guess a key difference for me between companies is that Google got into photography kind of as a hobby, and for Adobe it’s really the bread and butter of the company. Adobe people tend to come to projects because they really care about the specific mission — people tend to commit to a project for quite some time.”
“I came back in March of last year because I saw what Adobe had been doing around AI and machine learning. I was excited to come back and try to navigate that emerging world and figure out how we make these things useful and meaningful to a lot of folks and also do it responsibly so that it aligns with our values.”
“In my first tenure and in my return, imaging and the creative parts of Adobe remained the bedrock of the company identity, so I think that’s a through line. I guess the contrast, if there is one, is that now the company has expanded into all these things it really didn’t do before.”
“Every job is called ‘work’ for a reason. It’s gonna be challenging and frustrating and a million other things, but the caring part, I think, is the distinctive one. I’m cool with people swearing because they care. I’m cool with people who are unreasonably committed to getting something right, or going that extra mile.”
New stock photos are 100% AI-generated
PetaPixel reports,
PantherMedia, the first microstock agency in Germany, […] partnered with VAIsual, a technology company that pioneers algorithms and solutions to generate synthetic licensed stock media. The two have come together to offer the first set of 100% AI-generated, licensable stock photos of “people.”
None of the photos are of people who actually exist.
The “first” claim seems odd to me, as Generated.photos has been around for quite some time—albeit not producing torsos. That site offers an Anonymizer service that can take in your image, then generate multiple faces that vaguely approximate your characteristics. Here’s what it made for me:
Now I’m thinking of robots replacing humans in really crummy stock-photo modeling jobs, bringing to mind Mr. “Rob Ott” sliding in front of the camera:
Bouncing balls, satisfying patterns
Heh—happy Friday from these little guys:
Kottke writes,
Spoiler: the trick here is a pair of simulations stitched together, like a physics Texas Switch: “Each sequence is obtained by joining two simulations, both starting from the time in which the balls are arranged regularly. One simulates forward in time, one backwards.”
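Time-reversibility does all the work here. Below is a minimal Python sketch of the trick, assuming frictionless, perfectly elastic wall bounces and ignoring ball-to-ball collisions: run one simulation from the ordered grid with random velocities, run a second with those velocities negated, then play the second one in reverse ahead of the first.

```python
import numpy as np

def simulate(pos, vel, steps, dt=0.01):
    """Integrate balls bouncing elastically off the walls of a unit box,
    returning one position snapshot per step."""
    pos, vel = pos.copy(), vel.copy()
    frames = []
    for _ in range(steps):
        pos += vel * dt
        hit = (pos < 0.0) | (pos > 1.0)   # which coordinates left the box
        vel[hit] *= -1.0                  # reflect those velocity components
        pos = np.clip(pos, 0.0, 1.0)
        frames.append(pos.copy())
    return frames

rng = np.random.default_rng(0)

# Start from the "regular" arrangement: a neat 5x5 grid of balls.
grid = np.stack(np.meshgrid(np.linspace(0.2, 0.8, 5),
                            np.linspace(0.2, 0.8, 5)), -1).reshape(-1, 2)
vel = rng.normal(size=grid.shape)

forward = simulate(grid, vel, 300)    # pattern dissolving into chaos
backward = simulate(grid, -vel, 300)  # same start, velocities negated
clip = backward[::-1] + forward       # chaos -> pattern -> chaos
```

Played straight through, `clip` looks like randomness spontaneously snapping into formation and then dissolving again, with the seam hidden at the moment of perfect order.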
Different Strokes: 3D surface analysis helps computers identify painters
Researchers at NVIDIA & Case Western Reserve University have developed an algorithm that can distinguish different painters’ brush strokes “at the bristle level”:
Extracting topographical data from a surface with an optical profiler, the researchers scanned 12 paintings of the same scene, painted with identical materials, but by four different artists. Sampling small square patches of the art, approximately 5 to 15 mm, the optical profiler detects and logs minute changes on a surface, which can be attributed to how someone holds and uses a paintbrush.
They then trained an ensemble of convolutional neural networks to find patterns in the small patches, sampling between 160 and 1,440 patches for each of the artists. Using NVIDIA GPUs with cuDNN-accelerated deep learning frameworks, the algorithm matches the samples back to a single painter.
The team tested the algorithm against 180 patches of an artist’s painting, matching the samples back to a painter at about 95% accuracy.
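The post doesn't spell out the researchers' architecture, so here's only a rough Python/PyTorch sketch of the general recipe (patch-level classification, ensemble averaging, then a majority vote across patches). The tiny network, the 64×64 patch resolution, and the five-model ensemble are all my assumptions:

```python
import torch
import torch.nn as nn

NUM_ARTISTS = 4  # the study compared four painters of the same scene

class PatchNet(nn.Module):
    """Tiny illustrative CNN that classifies a single topography patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, NUM_ARTISTS)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def attribute(models, patches):
    """Average the ensemble's per-patch probabilities, then take a
    majority vote across patches to name a single painter."""
    with torch.no_grad():
        probs = torch.stack([m(patches).softmax(dim=1) for m in models]).mean(0)
    votes = probs.argmax(dim=1)  # one predicted painter per patch
    return torch.bincount(votes, minlength=NUM_ARTISTS).argmax().item()

# E.g., 180 single-channel height-map patches from one painting:
models = [PatchNet().eval() for _ in range(5)]  # trained weights assumed
patches = torch.randn(180, 1, 64, 64)
print("Predicted painter:", attribute(models, patches))
```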
Notre-Dame goes VR
(No, not that Notre Dame—the cathedral undergoing restoration.) This VR tour looks compelling:
Equipped with an immersive device (VR headset and backpack), visitors will be able to move freely in a 500-square-meter space in virtual reality. Guided by a “Compagnon du Devoir,” they will travel through different centuries and explore several eras of Notre-Dame de Paris and its environment, recreated in 3D.
Thanks to scientific surveys and precise historical data, the cathedral and its surroundings have been precisely reproduced to enhance the visitor’s immersion and engagement in the experience.
Check out the short trailer below:
Adobe PM & eng roles to define the future of video
This group is whipping up some serious magic. In case you or someone you know might be a good fit for one of their open roles, check ’em out:
Milky Way Bridge
A year ago, I was shivering out amidst the Trona Pinnacles with Russell Brown, working to capture some beautiful celestial images at oh-dark-early. I’m wholly unsurprised to learn that he knows photographer Michael Shainblum, who went to even more extraordinary lengths to capture this image of the Milky Way together with the Golden Gate Bridge:
PetaPixel writes,
“I think this was the perfect balance of a few different things,” he explains. “The fog was thick and low enough to really block out most of the light pollution from the city, but the fog had also traveled so far inland that it covered most of the eastern bay as well. The clouds above just the eastern side around the cities may have also helped. The last thing is the time of evening and time of the season. I was photographing the Milky Way late at night as it started to glide across the western sky, away from the city.”
Photography: Keyframe mode on Skydio 2 looks clever & fun
File under, “OMG, Duh, Why Didn’t We Think Of/Get This Sooner?” The Verge writes,
With Skydio’s self-flying drone, you don’t need to sketch or photograph those still frames, of course. You simply fly there. You fly the drone to a point in 3D space, press a button when the drone’s camera is lined up with what you want to see in the video, then fly to the next, virtually storyboarding your shot with every press.
Here’s some example output:
Check out the aforementioned Verge article for details on how the mode works (and sometimes doesn’t). Now I just need to get Russell Brown or someone (but let’s be honest, it’s Russell 😉) to expense one of these things so I can try it out.
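While I wait on that expense report: Skydio hasn’t published how its planner works, but the core idea (fit a smooth curve through the keyframed camera poses, then fly and film along it) is easy to sketch in Python with SciPy. The keyframe positions, yaw values, and timing below are invented, and a real planner would also handle yaw wrap-around and obstacle avoidance:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical keyframes: (x, y, z) in meters plus camera yaw in degrees,
# one row per press of the keyframe button.
keyframes = np.array([
    [ 0.0,  0.0, 2.0,   0.0],
    [ 5.0,  3.0, 4.0,  45.0],
    [ 9.0, -1.0, 6.0,  90.0],
    [12.0,  4.0, 3.0, 135.0],
])
times = np.array([0.0, 4.0, 8.0, 12.0])  # seconds at which to hit each pose

# A C2-continuous cubic spline through the keyframes yields the smooth,
# steady camera move that would be hard to fly by hand.
path = CubicSpline(times, keyframes, axis=0)

t = np.linspace(times[0], times[-1], 240)  # sample the path at 20 Hz
poses = path(t)          # (240, 4): position + yaw targets over time
velocities = path(t, 1)  # first derivative doubles as velocity targets
```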
Design: “The Fish & The Furious”
I know that we’re on a pretty dark timeline sometimes, but these little bits of silly (?) human ingenuity keep me going:
I am excited to share a new study led by Shachar Givon & @MatanSamina w/ Ohad Ben Shahar: Goldfish can learn to navigate a small robotic vehicle on land. We trained goldfish to drive a wheeled platform that reacts to the fish’s movement (https://t.co/ZR59Hu9sib). pic.twitter.com/J5BkuGlZ34
— Ronen Segev (@ronen_segev) January 3, 2022
Stephen Colbert & crew had some good fun with the news:
NVIDIA Canvas update: 4x higher res & new materials
Possibly my #1 reason to want to return to in-person work: getting to use apps like this (see previous overview) on a suitably configured workstation (which I lack at home).
Rad scans: Drones & trees
Earlier this week I was amazed to see the 3D scan that Polycam founder Chris Heinrich was able to achieve by flying around LA & capturing ~100 photos of a neighborhood, then generating 3D results via the new Web version of Polycam:
I must try to replicate this myself!
You can take the results for a (literal) spin here, though note that they didn’t load properly on my iPhone.
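Polycam hasn’t shared its pipeline, but at the heart of any photogrammetry tool is triangulation: once the same point has been matched in two photos taken from known camera poses, its 3D position falls out of a small linear system. A toy NumPy sketch, with the projection matrices and test point invented for illustration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point that projects to
    pixel x1 under 3x4 camera matrix P1 and to x2 under P2."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null vector of A = homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # 1 m to the right
X_true = np.array([0.2, 0.1, 5.0])
x1 = X_true[:2] / X_true[2]                            # projection in camera 1
x2 = (X_true[:2] + np.array([-1.0, 0.0])) / X_true[2]  # projection in camera 2

print(triangulate(P1, P2, x1, x2))  # ~[0.2, 0.1, 5.0]
```

Repeat that across thousands of matched features and ~100 photos, solving for the camera poses along the way (bundle adjustment), and you have the skeleton of the neighborhood scan above.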
As you may have seen in Google Earth & elsewhere, scanning & replicating amorphous organic shapes like trees remains really challenging:
It’s therefore all the more amazing to see the incredible results these exacting artists are able to deliver when creating free-to-use (!) assets for Unreal Engine:
[Via Michael Klynstra]
Photography: “A Choice of Weapons”
Nearly 16 (!) years ago I noted the passing of “novelist, self-taught pianist, semi-pro basketball player, composer, director of Shaft—who somehow still found time to be a groundbreaking photojournalist at Life for more than 20 years” Gordon Parks. Now HBO is streaming “A Choice of Weapons: Inspired By Gordon Parks,” covering his work & that of those he’s inspired to bear witness:
AI: A video disappearing act
It’s wild what computers can now do—and wild how we just take much of it for granted.
Dreaming of a Neural Christmas
Oh, global warming, you old scamp…
Illinois stayed largely snow-free during our recent visit, but I had some fun screwing around with Photoshop’s new Landscape Mixer Neural Filter, giving the place a dusting of magic:
Just for the lulz, I tried applying the filter to a 360º panorama I’d captured via my drone. The results don’t entirely withstand close scrutiny (try showing the pano below in full-screen mode & examining the buildings), but they’re fun—and good grief, we can now do all this in literally one click!
For the sake of comparison, here’s the unmodified original:
Hallucination station: New image-synthesis tech from NVIDIA
As usual, I’m channeling Towlie in admitting I have no idea what’s going on right now—or at best just an inkling of one—but check out some recent witchcraft that takes in text & simple strokes, then synthesizes multiple kinds of outputs using a single model:
And as long as we’re talking hallucination:
Animation: Lego Super Mario Breakfast
Let’s all ease into 2022 with a sensible breakfast of pure, brick-flavored weirdness: