Photographer Dan Marcolina has been pushing the limits of digital creation for many years, and on Feb. 9 at 11am Eastern time, he’s scheduled to present a lecture. You can register here & check out details below:
Dan will demonstrate how to use an AI workflow to create dynamic, personalized imagery using your own photos. Additional information on Augmented Reality and thoughts from Dan’s 35-year design career will also be presented.
What attendees will learn:
Tips from Dan’s book iPhone Obsessed, revealing how best to shoot and process photos on your cell for use in the AI re-imagination process
The AI photo re-creation workflow with tips and tricks to get started quickly, showing how a single source image can be crafted to create new meaning.
The post-processing steps of upscaling, clean-up, manipulation, and color correction needed to obtain a gallery-ready image.
As a bonus, he’ll show a bit of how he created the augmented-reality aspect of the show.
Anyone interested in image creation, photography, illustration, painting, storytelling, design or who is curious about AI/AR and the future of photography will gain valuable insights from the presentation.
Unleash the power of photogrammetry in Adobe Substance 3D Sampler 4.0 with the new 3D Capture tool! Create accurate and detailed 3D models of real-world objects with ease. Simply drag and drop a series of photos into Sampler and let it automatically extract the subject from its background and generate a 3D textured model. It’s a fast and handy way to create 3D assets for your next project.
I got my professional start at AGENCY.COM, a big dotcom-era startup co-founded by creative whirlwind Kyle Shannon. Kyle has been exploring AI imaging like mad, and recently he’s organized an AI Artists Salon that anyone is welcome to join in person (Denver) or online:
The AI Artists Salon is a collaborative group of creatively minded people, and we welcome anyone curious about the tsunami of inspiring generative technologies already rocking our world. See Community Links & Resources.
On Tuesday evening I had the chance to present some ideas & progress that have inspired me—nothing confidential about Adobe work, of course, but hopefully illuminating nonetheless. If you’re interested, check it out (and pro tip: if you set playback to 1.5x speed or higher, I sound a lot sharper & funnier!).
The ongoing California storms have beaten the hell out of beloved little communities like Capitola, where the pier & cute seaside bungalows have gotten trashed. I found this effort by local artist Brighton Denevan rather moving:
In the wake of the recent devastating storm damage to businesses in Capitola Village, local artist Brighton Denevan spent a few hours Friday on Capitola Beach sculpting the word “persevere” repeatedly in the sand to highlight a message of resilience and toughness that is a hallmark of our community. “The idea came spontaneously a few hours before low tide,” Denevan said. “After seeing all the destruction, it seemed like the right message for the moment.” Denevan has been drawing on paper since the age of 5; he picked up the rake and went out to the beach canvas in 2020, and each year he’s done more projects. Last year, he created more than 200 works in the sand locally and across the globe.
Kay has focused on the intersection of art and science in his practice, utilizing digital tools to visualize biological or primordial phenomena. “aBiogenesis” focuses a microscopic lens on imagined protocells, vesicles, and primordial foam that twists and oscillates in various forms.
The artist has prints available for sale in his shop, and you can find more work on his website and Behance.
Check out Karen X. Cheng’s clever use of simple wooden props + depth-to-image synthesis to create 3D renderings:
1. Take reference photo (you can use any photo – e.g. your real house, it doesn’t have to be dollhouse furniture)
2. Set up Stable Diffusion Depth-to-Image (google “Install Stable Diffusion Depth to Image YouTube”)
3. Upload your photo and then type in your prompts to remix the image
We recommend starting with simple prompts, and then progressively adding extra adjectives to get the desired look and feel. Using this method, @justinlv generated hundreds of options, and then we went through and cherry-picked our favorites for this video.
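If you’d rather script that depth-to-image step than run a local install, here’s a rough sketch using the Hugging Face diffusers library. To be clear, this isn’t from Karen’s thread: the checkpoint name, strength, and prompts are just my illustrative choices.

```python
# Rough sketch: Stable Diffusion depth-to-image via the diffusers library.
# The model name, prompts, and strength below are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",  # depth-conditioned SD 2 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("reference_photo.jpg")  # your dollhouse (or real-house) shot

# Depth is estimated from the reference automatically; the prompt "remixes" style
# and materials while the depth map preserves the scene's layout.
result = pipe(
    prompt="cozy modern living room, warm afternoon light, photorealistic",
    negative_prompt="blurry, low quality",
    image=init_image,
    strength=0.7,  # how far the result may drift from the original appearance
).images[0]

result.save("remixed.png")
```

Per Karen’s advice above, start with a simple prompt, then keep appending adjectives until the render lands where you want it.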
Hey friends—Happy New Year! I hope you’ve been able to get a little restful downtime, as I’ve done. I thought it’d be nice to ease back into things with these lovely titles from For All Mankind, which I’ve belatedly started watching & which I’m quite enjoying. The work is by Imaginary Forces, whom I’ve admired ever since seeing founder Kyle Cooper speak in the 90’s:
From the creators:
Lines deviate and converge in a graphic, tactile world that pays homage to the past while hinting at the “what if?” future explored throughout the series. Like the show logo itself, these lines weave and merge to create stylised representations of human exploration—badges, almost— ultimately reminding us of the common thread we share.
I’m not sure what to say about “The first rap fully written and sung by an AI with the voice of Snoop Dogg,” except that now I really want the ability to drop in collaborations by other well-known voices—e.g. Christopher Walken.
Maybe someone can now lip-sync it with the faces of YoDogg & friends:
The whole community of creators, including toolmakers, continues to feel its way forward in the fast-moving world of AI-enabled image generation. For reference, here are some of the statements I’ve been seeing:
Obsessive (in a good way) photographer & animator Brett Foxwell has gathered & sequenced thousands of individual leaves into a mesmerizing sequence:
This is the complete leaf sequence used in the accompanying short film LeafPresser. While collecting leaves, I conceived that the leaf shape of every single plant type I could find would fit somewhere into a continuous animated sequence of leaves if that sequence were expansive enough. If I didn’t have the perfect shape, it meant I just had to collect more leaves.
Numerous apps are promising pure text-to-geometry synthesis, as Luma AI shows here:
On a more immediately applicable front, though, artists are finding ways to create 3D (or at least “two-and-a-half-D”) imagery right from the output of apps like Midjourney. Here’s a quick demo using Blender:
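I haven’t seen the project file behind that demo, but the usual trick goes something like this: estimate a depth map from the Midjourney image (e.g. with MiDaS), use it to displace a subdivided plane in Blender, project the original image back onto that plane, and fly a camera past it for parallax. Here’s a minimal sketch via Blender’s Python API; the paths and settings are illustrative assumptions, and the actual demo may do it differently.

```python
# Minimal "2.5D from a single image" sketch for Blender's Python console.
# Assumes you've already generated a depth map (white = near, black = far).
import bpy

# A finely subdivided plane acts as the photo surface.
bpy.ops.mesh.primitive_plane_add(size=2)
plane = bpy.context.active_object
subsurf = plane.modifiers.new("Subdivision", type='SUBSURF')
subsurf.subdivision_type = 'SIMPLE'
subsurf.levels = subsurf.render_levels = 6

# Load the depth map as a displacement texture.
depth_img = bpy.data.images.load("/path/to/depth_map.png")  # illustrative path
depth_tex = bpy.data.textures.new("DepthTex", type='IMAGE')
depth_tex.image = depth_img

disp = plane.modifiers.new("Displace", type='DISPLACE')
disp.texture = depth_tex
disp.strength = 0.3  # how far the depth map pushes the surface

# From here: texture the plane with the original Midjourney image and animate
# a slow camera push-in to get the parallax ("two-and-a-half-D") effect.
```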
In a semi-related vein, I used CapCut to animate a tongue-in-cheek self portrait from my friend Bilawal:
I believe strongly that creative tools must honor the wishes & rights of creative people. Hopefully that sounds thuddingly obvious, but it’s been less obvious how to get to a better state than the one we now inhabit, where a lot of folks are (quite reasonably, IMHO) up in arms about AI models having been trained on their work, without their consent. People broadly agree that we need solutions, but getting to them—especially via big companies—hasn’t been quick.
Thus it’s great to see folks like Mat Dryhurst & Holly Herndon driving things forward, working with Stability.ai and others to define opt-out/-in tools & get buy-in from model trainers. Check out the news:
It’s wild to look back & realize that I’ve spent roughly a third of my life at this special place, making amazing friends & even meeting my future wife (and future coworker!) on a customer visit. I feel like I should have more profundity to offer, and maybe I will soon, but at the moment I just feel grateful—including for the banger of a party the company threw last week in SF.
Here’s a fun little homage to history, made now via Photoshop 1.0. (I still kinda wish I hadn’t been talked into donating my boxed copy of 1.0 to the Smithsonian! The ‘Dobe giveth…)
Raise your hand if you’re a Day 1.0 @Photoshop fan 🙋♀️
Our friend Christian Cantrell (20-year Adobe vet, now VP of Product at Stability.ai) continues his invaluable work plugging the world of generative imaging directly into Photoshop. Check out the latest, available for free here:
It’s insane to me how much these emerging tools democratize storytelling idioms—and then take them far beyond previous limits. Recently Karen X. Cheng & co. created some wild “drone” footage simply by capturing handheld footage with a smartphone:
Now they’re creating an amazing dolly zoom effect, again using just a phone. (Click through to the thread if you’d like details on how the footage was (very simply) captured.)
Meanwhile, here’s a deeper dive on NeRF and how it’s different from “traditional” photogrammetry (e.g. in capturing reflective surfaces):
Check out the latest magic, as described by Gizmodo:
To make an age-altering AI tool that was ready for the demands of Hollywood and flexible enough to work on moving footage or shots where an actor isn’t always looking directly at the camera, Disney’s researchers, as detailed in a recently published paper, first created a database of thousands of randomly generated synthetic faces. Existing machine learning aging tools were then used to age and de-age these thousands of non-existent test subjects, and those results were then used to train a new neural network called FRAN (face re-aging network).
When FRAN is fed an input headshot, instead of generating an altered headshot, it predicts what parts of the face would be altered by age, such as the addition or removal of wrinkles, and those results are then layered over the original face as an extra channel of added visual information. This approach accurately preserves the performer’s appearance and identity, even when their head is moving, when their face is looking around, or when the lighting conditions in a shot change over time. It also allows the AI generated changes to be adjusted and tweaked by an artist, which is an important part of VFX work: making the alterations perfectly blend back into a shot so the changes are invisible to an audience.
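If that “predict the changes, then layer them over the original” idea is hard to picture, here’s a tiny conceptual sketch of a delta-plus-mask network. To be clear, this is not Disney’s code or FRAN’s real architecture; the layers, channel counts, and compositing below are my own assumptions, just to show the shape of the idea.

```python
# Conceptual sketch only: a network that predicts per-pixel age deltas plus a mask,
# then layers them over the original face (rather than regenerating the whole image).
import torch
import torch.nn as nn

class TinyReAger(nn.Module):
    def __init__(self):
        super().__init__()
        # Stand-in for a U-Net: input = RGB face + one target-age channel.
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4, 3, padding=1),  # 3 delta channels + 1 mask channel
        )

    def forward(self, face, target_age):
        # Broadcast the target age as an extra conditioning channel.
        age_map = torch.full_like(face[:, :1], target_age)
        out = self.net(torch.cat([face, age_map], dim=1))
        delta = torch.tanh(out[:, :3])    # predicted per-pixel change (wrinkles, etc.)
        mask = torch.sigmoid(out[:, 3:])  # where that change should be applied
        # Layer the predicted change over the original, preserving identity elsewhere.
        return (face + mask * delta).clamp(0, 1)

# e.g. aged = TinyReAger()(face_batch, target_age=0.7)  # age normalized to [0, 1]
```

Because the network only outputs the change, an artist can dial the mask or delta up or down per shot, which matches the “adjusted and tweaked by an artist” workflow described above.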
As I say, another day, another specialized application of algorithmic fine-tuning. Per Vice:
For $19, a service called PhotoAI will use 12-20 of your mediocre, poorly-lit selfies to generate a batch of fake photos specially tailored to the style or platform of your choosing. The results speak to an AI trend that seems to regularly jump the shark: A “LinkedIn” package will generate photos of you wearing a suit or business attire…
…while the “Tinder” setting promises to make you “the best you’ve ever looked”—which apparently means making you into an algorithmically beefed-up dudebro with sunglasses.
Meanwhile, the quality of generated faces continues to improve at a blistering pace: