All posts by jnack

“Diffused Reality” lecture this Thursday

Photographer Dan Marcolina has been pushing the limits of digital creation for many years, and on Feb. 9 at 11am Eastern time, he’s scheduled to present a lecture. You can register here & check out details below:

—————————

Dan will demonstrate how to use an AI workflow to create dynamic, personalized imagery using your own photos. Additional information on Augmented Reality and thoughts from Dan’s 35-year design career will also be presented.

What attendees will learn:

  • Tips from Dan’s book iPhone Obsessed, revealing how to best shoot and process photos on your phone for use in the AI re-imagination process
  • The AI photo re-creation workflow with tips and tricks to get started quickly, showing how a single source image can be crafted to create new meaning.
  • The post-processing steps of upscaling, clean-up, manipulation, and color correction used to obtain a gallery-ready image.
  • As a bonus, he’ll show a bit of how he created the augmented reality aspect of the show.

Anyone interested in image creation, photography, illustration, painting, storytelling, design or who is curious about AI/AR and the future of photography will gain valuable insights from the presentation.

3D capture comes to Adobe Substance 3D Sampler 4.0

Photogrammetrize all the things!!

Unleash the power of photogrammetry in Adobe Substance 3D Sampler 4.0 with the new 3D Capture tool! Create accurate and detailed 3D models of real-world objects with ease. Simply drag and drop a series of photos into Sampler and let it automatically extract the subject from its background and generate a 3D textured model. It’s a fast and handy way to create 3D assets for your next project.

Here’s the workflow in more detail:

And here’s info on capture tools:

“The impossibilities are endless”: Yet more NeRF magic

Last month Paul Trillo shared some wild visualizations he made by walking around Michelangelo’s David, then synthesizing 3D NeRF data. Now he’s upped the ante with captures from the Louvre:

Over in Japan, Tommy Oshima used the tech to fly around, through, and somehow under a playground, recording footage via a DJI Osmo + iPhone:

As I mentioned last week, Luma Labs has enabled interactive model embedding, and now they’re making the viewer crazy-fast:

Me talk generative imaging one day

I got my professional start at AGENCY.COM, a big dotcom-era startup co-founded by creative whirlwind Kyle Shannon. Kyle has been exploring AI imaging like mad, and recently he’s organized an AI Artists Salon that anyone is welcome to join in person (Denver) or online:

The AI Artists Salon is a collaborative group of creatively-minded people, and we welcome anyone curious about the tsunami of inspiring generative technologies already rocking our world. See Community Links & Resources.

On Tuesday evening I had the chance to present some ideas & progress that have inspired me—nothing confidential about Adobe work, of course, but hopefully illuminating nonetheless. If you’re interested, check it out (and pro tip: if you set playback to 1.5x speed or higher, I sound a lot sharper & funnier!).

Luma enables embedding of interactive NeRF captures

Pretty cool!

Here’s an example made from a quick capture I did of my friend (nothing special, but amazing what one can get simply by walking in a circle while recording video):

The world’s first (?) NeRF-powered commercial

Karen X. Cheng, back with another 3D/AI banger:

As luck (?) would have it, the commercial dropped on the third anniversary of my former teammate Jon Barron & collaborators bringing NeRFs into existence:

The Chainsmokers meet Stable Diffusion

“HEY MAN, you ever drop acid?? No? Well I do, and it looks *just like this*!!” — an excitable Googler when someone wallpapered a big meeting room in giant DeepDream renderings

In a similar vein, have fun tripping balls with AI, courtesy of Remi Molettee:

Bonus: Journey gets the treatment:

Bonus bonus: Journey gets rather hilariously silenced:

“PERSEVERE”: A giant statement of encouragement

The ongoing California storms have beaten the hell out of beloved little communities like Capitola, where the pier & cute seaside bungalows have gotten trashed. I found this effort by local artist Brighton Denevan rather moving:

PERSEVERE 💙 • 1-6-2023 • 3-5:30 • 8 Miles (@brighton.denevan on TikTok)

The Santa Cruz Sentinel writes,

In the wake of the recent devastating storm damage to businesses in Capitola Village, local artist Brighton Denevan spent a few hours Friday on Capitola Beach sculpting the word “persevere” repeatedly in the sand to highlight a message of resilience and toughness that is a hallmark of our community. “The idea came spontaneously a few hours before low tide,” Denevan said. “After seeing all the destruction, it seemed like the right message for the moment.” Denevan has been drawing on paper since the age of 5; he picked up the rake and took to the beach canvas in 2020, and each year he has done more projects. Last year, he created more than 200 works in the sand locally and across the globe.

CGI: Primordial soup for you!

Check out these gloriously detailed renderings from Markos Kay. I just wish the pacing were a little more chill so I could stare longer at each composition!

Colossal notes,

Kay has focused on the intersection of art and science in his practice, utilizing digital tools to visualize biological or primordial phenomena. “aBiogenesis” focuses a microscopic lens on imagined protocells, vesicles, and primordial foam that twists and oscillates in various forms.

The artist has prints available for sale in his shop, and you can find more work on his website and Behance.

AI-painted animation: “Help Changes Everything”

In this beautiful work from Paul Trillo & co., AI extends—instead of replaces—human creativity & effort:

Here’s a peek behind the scenes:

AI: From dollhouse to photograph

Check out Karen X. Cheng’s clever use of simple wooden props + depth-to-image synthesis to create 3D renderings:

She writes,

1. Take reference photo (you can use any photo – e.g. your real house, it doesn’t have to be dollhouse furniture)
2. Set up Stable Diffusion Depth-to-Image (google “Install Stable Diffusion Depth to Image YouTube”)
3. Upload your photo and then type in your prompts to remix the image

We recommend starting with simple prompts, and then progressively adding extra adjectives to get the desired look and feel. Using this method, @justinlv generated hundreds of options, and then we went through and cherrypicked our favorites for this video
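
If you’d rather wire up step 2 in code than follow a YouTube walkthrough, here’s a minimal sketch using Hugging Face’s diffusers library and the stabilityai/stable-diffusion-2-depth checkpoint (my assumptions; the video doesn’t prescribe a particular setup):

```python
# Minimal depth-to-image sketch (assumed setup: diffusers +
# stabilityai/stable-diffusion-2-depth; not Karen's exact pipeline).
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from PIL import Image

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("dollhouse.jpg")  # your reference photo

# The pipeline estimates a depth map from the reference photo, so the
# prompt restyles the scene while its geometry stays locked in place.
result = pipe(
    prompt="cozy modern living room, warm lamplight, photorealistic",
    image=init_image,
    negative_prompt="blurry, distorted",
    strength=0.7,  # how far the remix may stray from the original
).images[0]
result.save("remixed.png")
```

That locked-in depth is why the dollhouse’s layout survives every remix, no matter how wild the prompt gets.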

“For All Mankind”

Hey friends—Happy New Year! I hope you’ve been able to get a little restful downtime, as I’ve done. I thought it’d be nice to ease back into things with these lovely titles from For All Mankind, which I’ve belatedly started watching & which I’m quite enjoying. The work is by Imaginary Forces, whom I’ve admired ever since seeing founder Kyle Cooper speak in the 90’s:

From the creators:

Lines deviate and converge in a graphic, tactile world that pays homage to the past while hinting at the “what if?” future explored throughout the series. Like the show logo itself, these lines weave and merge to create stylised representations of human exploration—badges, almost— ultimately reminding us of the common thread we share.

AI Snoop Dogg has arrived

…y’know, for all of you who were waiting. 🙄

I’m not sure what to say about “The first rap fully written and sung by an AI with the voice of Snoop Dogg,” except that now I really want the ability to drop in collaborations by other well-known voices—e.g. Christopher Walken.

Maybe someone can now lip-sync it with the faces of YoDogg & friends:

Heinz AI Ketchup

Life’s like a mayonnaise soda…
What good is seeing eye chocolate…

Lou Reed

The marketers at Heinz had a little fun noticing that an AI image-making app (DALL•E, I’m guessing) tended to interpret requests for “ketchup” in the style of Heinz’s iconic bottle. Check it out:

❄️ Good tidings to you & yours

Whether or not you’re celebrating Christmas, I hope that you’re having a restful day & keeping warm with family & friends. Enjoy a couple of tidbits from the Nacks—including some Lego stop motion…

…and that heartwarming Christmas classic, Die Hard. 😅

ArtStation, Kickstarter, and others share their AI art policies

The whole community of creators, including toolmakers, continues to feel its way forward in the fast-moving world of AI-enabled image generation. For reference, here are some of the statements I’ve been seeing:

  • ArtStation has posted guidance on “Use of AI Software on ArtStation.”
    • Projects tagged using “NoAI” will automatically be assigned an HTML “NoAI” meta tag.
    • The tag isn’t applied by default, as the site wants creators to choose whether or not their work is eligible for use in training. (See the detection sketch after this list.)
  • Kickstarter has shared “Our Current Thinking on the Use of AI-Generated Image Software and AI Art.”
    • “Kickstarter must, and will always be, on the side of creative work and the humans behind that work. We’re here to help creative work thrive.”
    • Key questions they’ll ask include “Is a project copying or mimicking an artist’s work?” and “Does a project exploit a particular community or put anyone at risk of harm?”
  • From 3dtotal Publishing:
    • “3dtotal has four fundamental goals. One of them is to support and help the artistic community, so we cannot support AI art tools as we feel they hurt this community.”
  • Clip Studio Paint will no longer implement an image generator function:
    • “We received a lot of feedback from the community and will no longer implement the image generator palette.”
    • They “fear that this will make Clip Studio Paint artwork synonymous with AI-generated work” and are choosing to prioritize other features.
  • The Society of Illustrators has shared their thoughts:
    • “We oppose the commercial use of Artificially manufactured images and will not allow AI into our annual competitions at all levels.”
    • “AI was trained using copyrighted images. We will oppose any attempts to weaken copyright protections, as that is the cornerstone of the illustration community.”
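
To make ArtStation’s mechanism above concrete: a page-level meta tag lets any scraper that chooses to honor it skip the work. Here’s a minimal detection sketch in Python, assuming the directive appears as a robots-style tag such as <meta name="robots" content="noai"> (my reading of the announcement; verify the exact markup against ArtStation’s docs):

```python
# Minimal sketch: detect a "NoAI" meta tag in fetched HTML.
# Assumes a robots-style form like <meta name="robots" content="noai">;
# the exact markup is an assumption, not confirmed from ArtStation.
from html.parser import HTMLParser

class NoAIMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.no_ai = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = (attrs.get("name") or "").lower()
        content = (attrs.get("content") or "").lower()
        if name == "robots" and "noai" in content:
            self.no_ai = True

html = '<html><head><meta name="robots" content="noai"></head></html>'
parser = NoAIMetaParser()
parser.feed(html)
print(parser.no_ai)  # True -> a respectful crawler should skip this page
```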

More NeRF magic: From Michelangelo to NYC

This stuff—creating 3D neural models from simple video captures—continues to blow my mind. First up is Paul Trillo visiting the David:

Then here’s AJ from the NYT doing a neat day-to-night transition:

And lastly, Hugues Bruyère used a 360° camera to capture this scene, then animated it in post (see thread for interesting details):

“The Book of Leaves”

Obsessive (in a good way) photographer & animator Brett Foxwell has gathered & sequenced thousands of individual leaves into a mesmerizing sequence:

This is the complete leaf sequence used in the accompanying short film LeafPresser. While collecting leaves, I conceived that the leaf shape of every single plant type I could find would fit somewhere into a continuous animated sequence of leaves if that sequence were expansive enough. If I didn’t have the perfect shape, it meant I just had to collect more leaves.

[Via]

A cool, quick demo of Midjourney->3D

Numerous apps are promising pure text-to-geometry synthesis, as Luma AI shows here:

On a more immediately applicable front, though, artists are finding ways to create 3D (or at least “two-and-a-half-D”) imagery right from the output of apps like Midjourney. Here’s a quick demo using Blender:
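
If you want to try the underlying trick yourself, the first step is extracting a depth map from a single image. Here’s a minimal sketch using the MiDaS model via torch.hub (my choice of tool; the demo doesn’t specify one). The resulting map can then drive a Displace modifier on a subdivided plane in Blender:

```python
# Minimal single-image depth sketch using MiDaS via torch.hub
# (an assumed toolchain; the Blender demo doesn't prescribe one).
import torch
import numpy as np
from PIL import Image

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = np.array(Image.open("midjourney_render.png").convert("RGB"))
batch = transform(img)  # resize + normalize into a model-ready tensor

with torch.no_grad():
    prediction = midas(batch)
    # Resize the predicted depth back to the source resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().numpy()

# Normalize to 8-bit and save; in Blender, plug this into a Displace
# modifier (or a projected-camera setup) to get the parallax effect.
depth = 255 * (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
Image.fromarray(depth.astype(np.uint8)).save("depth_map.png")
```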

In a semi-related vein, I used CapCut to animate a tongue-in-cheek self portrait from my friend Bilawal:

[Via Shi Yan]

Helping artists control whether AI trains on their work

I believe strongly that creative tools must honor the wishes & rights of creative people. Hopefully that sounds thuddingly obvious, but it’s been less obvious how to get to a better state than the one we now inhabit, where a lot of folks are (quite reasonably, IMHO) up in arms about AI models having been trained on their work, without their consent. People broadly agree that we need solutions, but getting to them—especially via big companies—hasn’t been quick.

Thus it’s great to see folks like Mat Dryhurst & Holly Herndon driving things forward, working with Stability.ai and others to define opt-out/-in tools & get buy-in from model trainers. Check out the news:

Here’s a concise explainer vid from Mat:

Adobe celebrates its 40th anniversary

It’s wild to look back & realize that I’ve spent roughly a third of my life at this special place, making amazing friends & even meeting my future wife (and future coworker!) on a customer visit. I feel like I should have more profundity to offer, and maybe I will soon, but at the moment I just feel grateful—including for the banger of a party the company threw last week in SF.

Here’s a fun little homage to history, made now via Photoshop 1.0. (I still kinda wish I hadn’t been talked into donating my boxed copy of 1.0 to the Smithsonian! The ‘Dobe giveth…)

PDF to cloud to homegrown tech titan: Adobe celebrates 40th anniversary

DALL•E/Stable Diffusion Photoshop panel gains new features

Our friend Christian Cantrell (20-year Adobe vet, now VP of Product at Stability.ai) continues his invaluable work to plug the world of generative imaging directly into Photoshop. Check out the latest, available for free here:

More NeRF magic: Dolly zoom & beyond

It’s insane to me how much these emerging tools democratize storytelling idioms—and then take them far beyond previous limits. Recently Karen X. Cheng & co. created some wild “drone” footage simply by capturing handheld footage with a smartphone:

Now they’re creating an amazing dolly zoom effect, again using just a phone. (Click through to the thread if you’d like details on how the footage was (very simply) captured.)
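
For the curious, the geometry behind a dolly zoom is simple: to keep a subject the same size on screen while the camera moves, the field of view has to widen as you close in (and narrow as you pull back). Here’s that relationship as a quick sketch (just the math; not how Karen’s team captured their footage):

```python
import math

def dolly_zoom_fov(subject_width: float, distance: float) -> float:
    """Horizontal FOV (in degrees) that keeps a subject of the given
    width filling the same fraction of the frame at a given distance."""
    return math.degrees(2 * math.atan(subject_width / (2 * distance)))

# Keeping a 4 m-wide subject identically framed while dollying in from
# 10 m to 2 m forces the FOV open from ~22.6 to 90 degrees.
for d in (10, 5, 2):
    print(f"{d} m -> {dolly_zoom_fov(4, d):.1f} deg")
```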

Meanwhile, here’s a deeper dive on NeRF and how it’s different from “traditional” photogrammetry (e.g. in capturing reflective surfaces):

Disney demos new aging/de-aging tech

Check out the latest magic, as described by Gizmodo:

To make an age-altering AI tool that was ready for the demands of Hollywood and flexible enough to work on moving footage or shots where an actor isn’t always looking directly at the camera, Disney’s researchers, as detailed in a recently published paper, first created a database of thousands of randomly generated synthetic faces. Existing machine learning aging tools were then used to age and de-age these thousands of non-existent test subjects, and those results were then used to train a new neural network called FRAN (face re-aging network).

When FRAN is fed an input headshot, instead of generating an altered headshot, it predicts what parts of the face would be altered by age, such as the addition or removal of wrinkles, and those results are then layered over the original face as an extra channel of added visual information. This approach accurately preserves the performer’s appearance and identity, even when their head is moving, when their face is looking around, or when the lighting conditions in a shot change over time. It also allows the AI generated changes to be adjusted and tweaked by an artist, which is an important part of VFX work: making the alterations perfectly blend back into a shot so the changes are invisible to an audience.
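
To make the “extra channel” idea concrete, here’s a toy PyTorch sketch of the residual approach described above: the network predicts per-pixel deltas that get added back onto the original frame, rather than generating a whole new face. (This is my illustrative reconstruction from the description, not Disney’s code; SmallUNet is a stand-in for FRAN’s actual architecture.)

```python
import torch
import torch.nn as nn

class SmallUNet(nn.Module):
    """Stand-in for FRAN: maps an RGB frame plus input/target ages to a
    per-pixel RGB delta (wrinkles etc. to add or remove)."""
    def __init__(self):
        super().__init__()
        # 3 RGB channels + 2 age channels (input age, target age),
        # broadcast across the image as simple conditioning.
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, frame, age_in, age_out):
        b, _, h, w = frame.shape
        ages = torch.stack([age_in, age_out], dim=1)      # (b, 2)
        ages = ages[:, :, None, None].expand(b, 2, h, w)  # broadcast
        delta = self.net(torch.cat([frame, ages], dim=1))
        # The key design choice: output = input + predicted change, so
        # identity and performance are preserved, and an artist can
        # scale or mask `delta` per shot before compositing.
        return frame + delta, delta

model = SmallUNet()
frame = torch.rand(1, 3, 256, 256)  # a normalized video frame
aged, delta = model(frame, torch.tensor([0.3]), torch.tensor([0.7]))
```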

AI-made avatars for LinkedIn, Tinder, and more

As I say, another day, another specialized application of algorithmic fine-tuning. Per Vice:

For $19, a service called PhotoAI will use 12-20 of your mediocre, poorly-lit selfies to generate a batch of fake photos specially tailored to the style or platform of your choosing. The results speak to an AI trend that seems to regularly jump the shark: A “LinkedIn” package will generate photos of you wearing a suit or business attire…

…while the “Tinder” setting promises to make you “the best you’ve ever looked”—which apparently means making you into an algorithmically beefed-up dudebro with sunglasses. 

Meanwhile, the quality of generated faces continues to improve at a blistering pace:

Crowdsourced AI Snoop Doggs (is a real headline you can now read)

The Doggfather recently shared a picture of himself (rendered presumably via some Stable Diffusion/DreamBooth personalization instance)…

…thus inducing fans to reply with their own variations (click tweet above to see the thread). Among the many fun Snoop Doggs (or is it Snoops Dogg?), I’m partial to Cyberpunk…

…and Yodogg:

Some amazing AI->parallax animations

Great work from Guy Parsons, combining Midjourney with CapCut:

And from the replies, here’s another fun set:

Check out frame interpolation from Runway

I meant to share this one last month, but there’s just no keeping up with the pace of progress!

My initial results are on the uncanny side, but more skillful practitioners like Paul Trillo have been putting the tech to impressive use:
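
Runway’s tool is a one-click affair, but if you’re curious what frame interpolation means under the hood, here’s a classical optical-flow sketch using OpenCV. (Runway’s actual method is a learned model; this is just the textbook idea, and it assumes modest motion between frames.)

```python
# Classical frame interpolation sketch: estimate dense optical flow
# between two frames, then warp halfway to synthesize the in-between.
# Illustrative only; Runway's feature uses a learned model instead.
import cv2
import numpy as np

a = cv2.imread("frame_a.png")
b = cv2.imread("frame_b.png")
gray_a = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
gray_b = cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)

# Dense flow mapping coordinates in frame A to their positions in B.
flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Remap frame B halfway back along the flow: for uniform motion this
# lands content exactly midway between A and B (small-motion approx).
h, w = gray_a.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x + 0.5 * flow[..., 0]).astype(np.float32)
map_y = (grid_y + 0.5 * flow[..., 1]).astype(np.float32)
midpoint = cv2.remap(b, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("frame_mid.png", midpoint)
```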