Wherever you are, I hope you’re feeling warm & loved, as I’m thankful to be feeling. And if you’ve gotta deal with some crazy family BS, well, maybe this will help. 😌 Hang in there.
Cool to see the latest performance-capture tech coming to Adobe’s 2D animation app:
Man, I love stuff like Project Relate, and it’s fun to see some of my old teammates featured here. This is the stuff that’s really worth a damn, IMHO.
This is amazingly well done…
and the behind-the-scenes tour is a delight:
I was such a happy dad recently when my 12yo Henry (who, being an ADD guy like me, often finds long texts to be a slog) got completely engrossed in the graphic novel version of Slaughterhouse-Five and read it in an evening. Meanwhile his older brother was working his way through Cat’s Cradle—one of my all-time faves.
Now I’m pleased to see the arrival of Unstuck In Time, a new documentary covering Vonnegut’s life & work:
Tangentially (natch), this brought to mind the Vonnegut scenes in Back To School—where I first heard the phrase “F me?!”
Oh, and then there’s one of my favorite encapsulations of life wisdom—a commencement address wrongly attributed to Vonnegut, and tonally right in his wheelhouse:
Per Daring Fireball:
Devan Scott put together a wonderful, richly illustrated thread on Twitter contrasting the use of color grading in Skyfall and Spectre. Both of those films were directed by Sam Mendes, but they had different cinematographers — Roger Deakins for Skyfall, and Hoyte van Hoytema for Spectre. Scott graciously and politely makes the case that Skyfall is more interesting and fully-realized because each new location gets a color palette of its own, whereas the entirety of Spectre is in a consistent color space.
Click or tap on through to the thread; I think you’ll enjoy it.
Check out the latest demo of voice, character motion, and conversation all apparently synthesized in real time.
Now I want to see how it could handle powering a vampire doll, complete with accent. 🧛🏻‍♀️
Let’s start Monday with some moments of Zen. 😌
This is delightful (“and not super weird!” 😛).
Congrats to Eric Chan & the whole crew for making Time’s list:
Most of the photos we take these days look great on the small screen of a phone. But blow them up, and the flaws are unmistakable. So how do you clean up your snaps to make them poster-worthy? Adobe’s new Super Resolution feature, part of its Lightroom and Photoshop software, uses machine learning to boost an image’s resolution up to four times its original pixel count. It works by looking at its database of photos similar to the one it’s upscaling, analyzing millions of pairs of high- and low-resolution photos (including their raw image data) to fill in the missing data. The result? Massive printed smartphone photos worthy of a primo spot on your living-room wall. —Jesse Will
[Via Barry Young]
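The training setup Time describes—millions of pairs of high- and low-resolution photos—can be sketched in miniature. This toy (not Adobe’s actual model) shows where those pairs come from: you downsample a high-res image to make its low-res twin, and the model’s job is to learn the detail the downsampling threw away. The naive baseline here (nearest-neighbor pixel copying) is exactly what learned super-resolution has to beat:

```python
# Toy sketch of super-resolution training data (not Adobe's method):
# (low, high) pairs are made by downsampling high-res images.

def downsample(img, factor=2):
    # Average each factor x factor block -> the low-res training input.
    out = []
    for y in range(0, len(img), factor):
        row = []
        for x in range(0, len(img[0]), factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def upscale_nearest(img, factor=2):
    # The naive baseline a learned model must beat: just copy pixels.
    out = []
    for row in img:
        wide = [v for v in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out

# A flat, blocky image survives the round trip...
blocky = [[0, 0, 4, 4], [0, 0, 4, 4], [8, 8, 12, 12], [8, 8, 12, 12]]
# ...but a detailed one doesn't: the fine gradients are gone,
# and that lost detail is what the ML model learns to hallucinate back.
detailed = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
```

The point of the pair construction is that the “answer key” (the original high-res image) is free: any photo library yields unlimited training examples.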
Perhaps a bit shockingly, I’ve somehow been only glancingly familiar with the artist’s career, and I really enjoyed this segment—including a quick visual tour spanning his first daily creation through the 2-minute piece he made before going to the hospital for the birth of his child (!), to the work he sold on Tuesday for nearly $30 million (!!).
Type the name of something (e.g. “beautiful flowers”), then use a brush to specify where you want it applied. Here, just watch this demo:
The project is open source, compliments of the creators of ArtBreeder.
Today we are introducing Pet Portraits, a way for your dog, cat, fish, bird, reptile, horse, or rabbit to discover their very own art doubles among tens of thousands of works from partner institutions around the world. Your animal companion could be matched with ancient Egyptian figurines, vibrant Mexican street art, serene Chinese watercolors, and more. Just open the rainbow camera tab in the free Google Arts & Culture app for Android and iOS to get started and find out if your pet’s look-alikes are as fun as some of our favorite animal companions and their matches.
Check out my man Seamus:
Hmm—I want to get excited here, but as I’ve previously detailed, I’m finding it tough.
Pokémon Go remains the one-hit wonder of the location-based content/gaming space. That’s still true 5+ years after its launch, during which time Niantic has launched & killed Harry Potter: Wizards Unite, Microsoft has done the same with Minecraft Earth, and Google has (AFAIK) followed suit with its location-based gaming API. I’m not sure we’ll turn a corner until real AR glasses arrive.
Still & all, here it is:
The Niantic Lightship Augmented Reality Developer Kit, or ARDK, is now available for all AR developers around the world at Lightship.dev. To celebrate the launch, we’re sharing a glimpse of the earliest AR applications and demo experiences from global brand partners and developer studios from across the world.
We’re also announcing the formation of Niantic Ventures to invest in and partner with companies building the future of AR. With an initial $20 million fund, Niantic Ventures will invest in companies building applications that share our vision for the Real-World Metaverse and contribute to the global ecosystem we are building. To learn more about Niantic Ventures, go to Lightship.dev.
It’s cool that “The Multiplayer API is free for apps with fewer than 50,000 monthly active users,” and even above that number, it’s free to everyone for the first six months.
In traditional graphics work, vectorizing a bitmap image produces a bunch of points & lines that the computer then renders as pixels, producing something that approximates the original. Generally there’s a trade-off between editability (relatively few points, requiring a lot of visual simplification, but easy to see & manipulate) and fidelity (tons of points, high fidelity, but heavy & hard to edit).
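That editability-vs-fidelity trade-off is easy to see with a classic line-simplification algorithm (Ramer–Douglas–Peucker, used here as an illustration; it’s not what any particular vectorizer in the post uses). A loose tolerance yields few, easy-to-edit points that only roughly fit the curve; a tight tolerance keeps nearly every point for high fidelity:

```python
import math

def perp_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x0 - x1, y0 - y1)
    return abs(dy * x0 - dx * y0 + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def simplify(points, eps):
    # Ramer-Douglas-Peucker: if the farthest interior point deviates
    # more than eps from the end-to-end line, keep it and recurse;
    # otherwise collapse the whole run to its two endpoints.
    if len(points) < 3:
        return points[:]
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]
    left = simplify(points[:idx + 1], eps)
    right = simplify(points[idx:], eps)
    return left[:-1] + right

# 63 samples of a sine wave: a "bitmap-traced" curve to simplify.
curve = [(x / 10, math.sin(x / 10)) for x in range(63)]
editable = simplify(curve, eps=0.3)    # few points, rough fit
faithful = simplify(curve, eps=0.001)  # many points, close fit
```

Dialing `eps` is exactly the knob a vectorizer exposes: the same input, traded between a handful of manipulable points and a dense, faithful (but unwieldy) path.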
Importing images into a generative adversarial network (GAN) works in a similar way: pixels are converted into vectors which are then re-rendered as pixels—and guess what, it’s a generally lossy process where fidelity & editability often conflict. When the importer tries to come up with a reasonable set of vectors that fit the entire face, it’s easy to end up with weird-looking results. Additionally, changing one attribute (e.g. eyebrows) may cause changes to others (e.g. hairline). I saw a case once where making someone look another direction caused them to grow a goatee (!).
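Why fitting “a reasonable set of vectors for the entire face” goes wrong can be shown with a deliberately tiny stand-in for a generator (all numbers here are hypothetical; a real GAN is nonlinear and vastly larger). The generator maps a small latent vector to a bigger “image”; when the target image isn’t something the generator can actually produce, even the best-fitting latent leaves a residual error—the lossiness described above:

```python
# Toy "GAN inversion" sketch: a linear generator G(z) = W @ z mapping
# a 2-dim latent to a 4-pixel image. Hypothetical numbers, not a real GAN.

W = [(1, 0), (0, 1), (1, 1), (1, -1)]  # each row produces one "pixel"
target = [1, 2, 3, 0]                  # an image outside the generator's range

def generate(z):
    return [row[0] * z[0] + row[1] * z[1] for row in W]

def invert(x):
    # Least-squares fit of the latent via the normal equations
    # (W^T W) z = W^T x, solved directly for the 2x2 case.
    a = sum(r[0] * r[0] for r in W)
    b = sum(r[0] * r[1] for r in W)
    d = sum(r[1] * r[1] for r in W)
    u = sum(r[0] * xi for r, xi in zip(W, x))
    v = sum(r[1] * xi for r, xi in zip(W, x))
    det = a * d - b * b
    return [(d * u - b * v) / det, (a * v - b * u) / det]

z = invert(target)
recon = generate(z)
err = sum((r - t) ** 2 for r, t in zip(recon, target))
# err stays > 0: the best whole-image fit still can't match the target.
```

Note also that each latent coordinate touches several “pixels” at once (row `(1, 1)` mixes both), which is the toy version of entanglement: nudge one attribute and others move with it—hence the surprise goatee.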
My teammates’ FaceStudio effort proposes to address this problem by sidestepping the challenge of fitting the entire face, instead letting you broadly select a region and edit just that. Check it out:
For about two and a half minutes you’re gonna say, “Dude, this is the most boring content you’ve ever posted; thanks for wasting my time!” And then you’ll see why I posted it. 🙃
Sure, face swapping and pose manipulation on humans is cool and all, but our industry’s next challenge must be beak swapping and wing manipulation. 😅🐦
Okay, not directly, but generally dead-on:
“We are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.” – George Orwell, 1946.
“…or at reorg time.” — JNack
By analyzing various artists’ distinctive treatment of facial geometry, researchers in Israel devised a way to render images with both their painterly styles (brush strokes, texture, palette, etc.) and shape. Here’s a great six-minute overview:
90 seconds well spent with the sensei:
And here’s how Camera Raw can feed into SOs:
Turning bursts of what would have been outtakes into compelling little animations: that’s the promise of Project In-Between.