This research inspired us to use Jacquard technology to create a soft, interactive patch or sleeve that allows people to access digital, health and security services with simple gestures. This woven technology can be worn or positioned on a variety of surfaces and locations, adjusting to the needs of each individual.
We teamed up with Garrison Redd, a Para powerlifter and advocate in the disability community, to test this new idea.
I hadn’t heard of Disney Gallery: The Mandalorian, but evidently it revealed more details about the Luke Skywalker scene. In response, according to Screen Rant,
VFX team Corridor Crew took the time to share their thoughts on the show’s process. From what they determined, Hamill was merely on set to provide some reference points for the creative team and the stand-in actor, Max Lloyd-Jones. The Mandalorian used deepfake technology to pull together Hamill’s likeness, and they combed through countless hours of Star Wars footage to find the best expressions.
I found the 6-minute segment pretty entertaining & enlightening. Check it out:
I keep meaning to pour one out for my nearly-dead homie, Photoshop 3D (post to follow, maybe). We launched it back in 2007 thinking that widespread depth capture was right around the corner. But “Being early is the same as being wrong,” as Marc Andreessen says, and we were off by a decade (before iPhones started putting depth maps into images).
Now, though, the world is evolving further, and researchers are enabling apps to perceive depth even in traditional 2D images—no special capture required. Check out what my colleagues have been doing together with university collaborators:
By now you’ve probably seen this big gato bounding around:
I’ve been wondering how it was done (e.g. was it something from Snap, using the landmarker tech that’s enabled things like Game of Thrones dragons to scale the Flatiron Building?). Fortunately the Verge provides some insights:
In short, what’s going on is that an animation of the virtual panther, which was made in Unreal Engine, is being rendered within a live feed of the real world. That means camera operators have to track and follow the animations of the panther in real time as it moves around the stadium, like camera operators would with an actual living animal. To give the panther virtual objects to climb on and interact with, the stadium is also modeled virtually but is invisible.
This tech isn’t baked into an app, meaning you won’t be pointing your phone’s camera in the stadium to get another angle on the panther if you’re attending a game. The animations are intended to air live. In Sunday’s case, the video was broadcast live on the big screens at the stadium.
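The core trick described above, rendering a virtual character into a live feed while an invisible model of the real stadium hides it behind real-world geometry, can be sketched in a few lines. This is just a conceptual depth-compositing sketch, not the actual Unreal Engine pipeline; the function and array names are mine.

```python
import numpy as np

def composite_ar_frame(camera_frame, char_rgb, char_alpha, char_depth, occluder_depth):
    """Composite a rendered character into a live camera frame.

    The 'occluder' depth comes from an invisible model of the real
    stadium: wherever that geometry is closer to the camera than the
    character, the character is hidden behind the real-world object.
    """
    # Character is visible only where it sits in front of the invisible stadium model.
    visible = (char_depth < occluder_depth).astype(np.float32)
    alpha = (char_alpha * visible)[..., None]          # H x W x 1
    return (alpha * char_rgb + (1.0 - alpha) * camera_frame).astype(camera_frame.dtype)

# Tiny 2x2 example: the character occupies the left column but is
# behind the stadium geometry in the top-left pixel.
frame   = np.zeros((2, 2, 3), dtype=np.float32)        # live feed (black)
char    = np.ones((2, 2, 3), dtype=np.float32)         # white "panther" layer
alpha   = np.array([[1.0, 0.0], [1.0, 0.0]])           # character covers left column
c_depth = np.array([[5.0, 5.0], [5.0, 5.0]])           # character distance
o_depth = np.array([[3.0, 9.0], [9.0, 9.0]])           # stadium distance

out = composite_ar_frame(frame, char, alpha, c_depth, o_depth)
print(out[..., 0])  # only the bottom-left pixel shows the character
```

In the real broadcast the camera is tracked so the invisible stadium model stays registered with the live feed; here the two depth buffers stand in for that registration.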
I look forward to the day when this post is quaint, given how frequently we’re all able to glimpse things like this via AR glasses. I give it 5 years, or maybe closer to 10—but let’s see.
Adobe is looking for a product manager to help build a world-class mobile camera app for Adobe—powered by machine learning, computer vision, and computational photography, and available on all platforms. This effort, led by Adobe VP and Fellow Marc Levoy, who is a pioneer in computational photography, will begin as part of our Photoshop Camera app. It will expand its core photographic capture capabilities, adding new computational features, with broad appeal to consumers, hobbyists, influencers, and pros. If you are passionate about mobile photography, this is your opportunity to work with a great team that will be changing the camera industry.
Adobe is looking for a product manager to help build a world-class community and education experience within the Lightroom ecosystem of applications! We’re looking for someone to help create an engaging, rewarding, and inspiring community to help photographers connect with each other and increase customer satisfaction and retention, as well as create a fulfilling in-app learning experience. If you are passionate about photography, building community, and driving customer success, this is your opportunity to work with a great team that is driving the future of photography!
Adobe is looking to hire a QA Technical Artist (contract role) to work with the Product Management team for Adobe Stager, our 3D staging and rendering application. The QA Technical Artist will analyze and contribute to the quality of the application through daily art production and involvement with product feedback processes. We are looking for a candidate interested in working on state-of-the-art 3D software while revolutionizing how it can be approachable for new generations of creators.
The song lyrics are in neither Italian nor English, though at first they sound like the latter. It turns out that Celentano’s words are in no language—they are gibberish, except for the phrase “all right!” In a television clip filmed several years later, Celentano explains (in Italian) to a “student” why he wrote a song that “means nothing.” He says that the song is about “our inability to communicate in the modern world,” and that the word “prisencolinensinainciusol” means “universal love.” […]
“Prisencolinensinainciusol” is such a loving presentation of silliness. Would any grown performer allow themselves this level of playfulness now? Wouldn’t a contemporary artist feel obliged to add a tinge of irony or innuendo to make it clear that they were “knowing” and “sophisticated”? It’s not clear what would be gained by darkening this piece of cotton candy, or what more you could know about it: it is perfect as is.
Sounds like an interesting opportunity to nerd out (in the best sense) on October 4-5:
Adobe Developers Live brings together Adobe developers and experience builders with diverse backgrounds and a singular purpose – to create incredible end-to-end experiences. This two-day conference will feature important developer updates, technical sessions and community networking opportunities.
There’s also a planned hackathon:
The hackathon brings together Adobe developers from across the global Adobe Experience Cloud community with Adobe engineering teams to connect, collaborate, contribute, and create solutions using the latest Experience Cloud products and tooling.
The plants featured in Neil Bromhall’s timelapses are grown in a blackened, windowless studio with a grow light serving as artificial sunlight.
“Plants require periods of day and night for photosynthesis and to stimulate the flowers and leaves to open,” the photographer tells PetaPixel. “I use heaters or coolers and humidifiers to control the studio condition for humidity and temperature. You basically want to recreate the growing conditions where the plants naturally thrive.”
Lighting-wise, Bromhall uses a studio flash to precisely control his exposure regardless of the time of day. The grow light grows the plants while the flash illuminates the photos.
Seeking an experienced software engineer with expertise in 3D graphics research and engineering, a passion for interdisciplinary collaboration, and a deep sense of software craftsmanship to participate in the design and implementation of our next-generation 3D graphics software.
Seeking an experienced Senior Software Engineer with a deep understanding of 3D graphics application engineering, familiarity with CPU and GPU architectures, and a deep sense of software craftsmanship to participate in the design and implementation of our next-generation collaborative 3D graphics software.
We’re hiring a Senior 3D Artist to work closely with an important strategic partner. You will act as the conduit between the partner and our internal product development teams. You have a deep desire to experiment with new technologies and design new and efficient workflows. The role is full-time and based in Portland or San Francisco. Also open to other west coast cities such as Seattle and Los Angeles.
Click on the above links to see full job descriptions and apply online. Don’t see what you’re looking for? Send us your profile or portfolio. We are always looking for talented engineers and other experts in the 3D field. We may have a future need for contractors or special projects.
On the reasonable chance that you’re interested in my work, you might want to bookmark (or at least watch) this one. Two-Minute Papers shows how NVIDIA’s StyleGAN research (which underlies Photoshop’s Smart Portrait Neural Filter) has been evolving, recently being upgraded with Alias-Free GAN (which very nicely reduces funky artifacts—e.g. a “sticky beard” and “boiling” regions such as hair):
Side note: I continue to find the presenter’s enthusiasm utterly infectious: “Imagine saying that to someone 20 years ago. You would end up in a madhouse!” and “Holy mother of papers!”
This is why I’m glad that the Sacramento delta (where we lived in a van down by the river last night) remains, to the best of my knowledge, gator-free: otherwise my drone might’ve met this kind of colorful fate:
The New York Public Library has shared some astronomical drawings by E.L. Trouvelot done in the 1870s, comparing them to contemporary NASA images. They write,
Trouvelot was a French immigrant to the US in the 1800s, and his job was to create sketches of astronomical observations at Harvard College’s observatory. Building off of this sketch work, Trouvelot decided to do large pastel drawings of “the celestial phenomena as they appear…through the great modern telescopes.”
Hmm—I’m not sure what to think about this & would welcome your thoughts. Promising to “Give people an idea of your appearance, while still protecting your true identity,” this Anonymizer service will take in your image, then generate multiple faces that vaguely approximate your characteristics:
Here’s what it made for me:
I find the results impressive but a touch eerie, and as I say, I’m not sure how to feel. Is this something you’d find useful (vs., say, just using something other than a photograph as your avatar)?
You might remember the portrait relighting features that launched on Google Pixel devices last year, leveraging some earlier research. Now a number of my former Google colleagues have created a new method for figuring out how a portrait is lit, then imposing new light sources in order to help it blend into new environments.
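The basic idea, estimating how a subject is lit and then re-rendering it under a new light so it blends into a new environment, can be illustrated with a toy Lambertian shading model. This is a rough conceptual sketch, not the actual method, and the function name and parameters are my own invention.

```python
import numpy as np

def relight_lambertian(albedo, normals, light_dir, ambient=0.1):
    """Re-render a surface under a new directional light.

    albedo:    H x W x 3 reflectance recovered from the portrait
    normals:   H x W x 3 unit surface normals
    light_dir: 3-vector pointing toward the new light source
    """
    l = np.asarray(light_dir, dtype=np.float32)
    l /= np.linalg.norm(l)
    # Lambert's cosine law: shading = max(n . l, 0)
    shading = np.clip(normals @ l, 0.0, None)[..., None]
    return albedo * (ambient + (1.0 - ambient) * shading)

# One pixel facing the camera (+z) and one facing sideways (+x),
# lit from straight ahead: the frontal pixel is fully lit, the
# side-facing pixel gets only the ambient term.
albedo  = np.ones((1, 2, 3), dtype=np.float32)
normals = np.array([[[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]], dtype=np.float32)
out = relight_lambertian(albedo, normals, light_dir=[0.0, 0.0, 1.0])
```

The research handles far harder problems (estimating the lighting, normals, and reflectance from a single photo, and non-Lambertian skin), but the relighting step conceptually reduces to swapping the light term in a shading equation like this one.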
Two-Minute Papers has put together a nice, accessible summary of how it works:
Going back seven years or so, to when we were working on a Halloween face-painting feature for Google Photos (a sort of ur-AR), I’ve been occasionally updating a Pinterest board full of interesting augmentations done to human faces. I’ve particularly admired the work of Yulia Brodskaya, a master of paper quilling. Here’s a quick look into her world:
This fruit of a collaborative creation process, all keyed off of a single scene file, is something to behold, especially when viewed on a phone (where it approximates scrolling through a magical world):
For Dynamic Machines, I challenged 3D artists to guide a chrome ball from point A to point B in the most creative way possible. Nearly 2,000 artists entered, and in this video, the Top 100 renders are featured from an incredible community of 3D artists!
Heh—my Adobe video eng teammate Eric Sanders passed along this fun poster (artist unknown):
It reminds me of a silly thing I made years ago when our then-little kids had a weird fixation on light fixtures. Oddly enough, this remains the one & presumably only piece of art I’ll ever get to show Matt Groening, as I got to meet him at dinner with Lynda Weinman back then. (Forgive the name drop; I have so few!)
My vices boil down largely to buying Bailey’s and semi-goofball cameras. I might need to combine the latter (but not the former!) in replicating a technique like this, which cuts between a chest-mounted Insta360 GO2 and a pole-mounted Insta360 One X2:
Back in 1999, before I worked at Adobe, a PM there called me to inquire about my design agency’s needs as we worked across teams and offices spread over multiple time zones. In the intervening years the company has tried many approaches, some more successful than others (what up, Version Cue! yeah, now who feels old…), but now they’re making the biggest bet I’ve seen:
With over a million users across media and entertainment companies, agencies, and global brands, Frame.io streamlines the video production process by enabling video editors and key project stakeholders to seamlessly collaborate using cloud-first workflows.
Creative Cloud customers, from video editors, to producers, to marketers, will benefit from seamless collaboration on video projects with Frame.io workflow functionality built natively in Adobe Creative Cloud applications like Adobe Premiere Pro, Adobe After Effects, and Adobe Photoshop.
I can’t wait to see how all this plays out—and if you’re looking for the ear of a PM on point who’d like to hear your thoughts, well, there’s one who lives in my house. 🙂
“Be bold, and mighty forces will come to your aid.” – Goethe
So I said nearly 15 (!) years ago (cripes…) when we launched the first Photoshop public beta. Back then the effort required moving heaven and earth, whereas now it’s a matter of “oh hai, click that little icon that you probably neglect in your toolbar; here be goodies.” Such is progress, as the extraordinary becomes the ordinary. Anyhoo:
Photoshop Beta is debuting this month. It is a new way Creative Cloud members can give feedback to the Photoshop team. Photoshop Beta is an exciting opportunity to test and provide feedback about stability, performance, and occasionally new features by using a version of Photoshop before it is released.
To get Photoshop Beta, Creative Cloud members can install it from the Beta section of the Creative Cloud desktop app. Look for Photoshop Beta and simply click Install.
To provide feedback, head over to the Photoshop Ecosystem Adobe Community and create a new post using the “Beta” topic. Stay tuned for a brand-new forum experience for the Photoshop Beta coming soon.
I was such a die-hard Apple dead-ender in the ’90s that I’d often fruitlessly pitch Macs to anyone who’d listen (and many who wouldn’t). My roommate would listen to my rants about the vile inelegance of Windows, then gently shake his head and say, “Look, I get it. But the Mac is like a monorail: it’s sleek, it’s beautiful, and it’s just stuck on some little loop.” Then off he went to buy a new gaming PC.
This funny, informative video lays out the actual mechanics & economics that explain why such “futuristic” designs have rarely made sense in the real world. Check it out.
I swear I spent half of last summer staring at tiny 3D Naomi Osaka volleying shots on my desktop. I remain jealous of my former teammates who got to work with these athletes (and before them, folks like Donald Glover as Childish Gambino), even though doing so meant dealing with a million Covid safety protocols. Here’s a quick look at how they captured folks flexing & flying through space:
Heh—I was amused to hear generative apps’ renderings of human faces—often eerie, sometimes upsetting—described as turning people into “rotten fruits.”
This reminded me of a recurring sketch from Conan O’Brien’s early work, which featured literal rotting fruit acting out famous films—e.g. Apocalypse Now, with Francis Ford Coppola sitting there to watch:
No, I don’t know what this has to do with anything—except now I want to try typing “rotting fruit” plus maybe “napalm in the morning” into a generative engine just to see what happens. The horror… the horror!
“A strange mixture between Futurama & Evil Los Angeles… The worst of urban planning and capitalism, plus some slavery for good measure. Welcome to Dubai, everyone.”
This darkly funny piece presents some eye-opening info on a petrodollar playground literally sinking into the sea. Along the way it draws comparisons to past misallocations of every sort of capital (e.g. as in Communist Romania, “Smooth-brained dictator + construction = dumb shit.”)
I should hasten to say that I have never visited Dubai & have no connection to anyone involved with it.
In the magical, frequently bizarre world of generative adversarial networks, changing one attribute will often accidentally affect other “entangled” ones (e.g. I’ve seen a change of gaze cause people to grow beards!). This new tech promises better isolation of—and thus control over—things like hair style, lighting, skin tone, and more.
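One crude way to picture the disentanglement problem: if each attribute corresponds to a direction in the GAN’s latent space, a “raw” direction for one attribute may overlap with another’s, so editing one drags the other along. Projecting out that overlap is a toy stand-in for what this research does in a far more principled way; the function and variable names here are purely illustrative.

```python
import numpy as np

def disentangled_edit(w, direction, other_directions, strength=1.0):
    """Move a latent code along one attribute direction after projecting
    out its overlap with other attribute directions, so the edit doesn't
    accidentally change those attributes too."""
    d = np.asarray(direction, dtype=np.float64).copy()
    for other in other_directions:
        o = np.asarray(other, dtype=np.float64)
        o /= np.linalg.norm(o)
        d -= (d @ o) * o             # remove the entangled component
    d /= np.linalg.norm(d)
    return w + strength * d

# Toy 3-D latent space: the raw "hair style" direction leaks into the
# "beard" axis (the gaze-causes-beards effect); projection removes it.
w = np.zeros(3)
hair  = np.array([1.0, 0.2, 0.0])    # hypothetical, slightly entangled
beard = np.array([0.0, 1.0, 0.0])
edited = disentangled_edit(w, hair, [beard], strength=2.0)
print(edited)   # the beard component (index 1) stays 0
```

Real latent spaces have hundreds of dimensions and the attribute directions aren’t handed to you, but the payoff is the same: moving along one axis without dragging the “entangled” ones along.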
Back in 2013 I found myself on a bus full of USC film students, and I slowly realized that the guy seated next to me had created the Take On Me vid. Not long after I was at Google & my friend recreated the effect in realtime AR. Perhaps needless to say, they didn’t do anything with it. ¯\_(ツ)_/¯
In any event, now Action Movie Dad Daniel Hashimoto has created a loving homage as a tutorial video (!).
Dropping by the Tehachapi Loop (“the Eighth Wonder of the railroading world”) last year en route to Colorado was a highlight of the journey and one of my son Henry’s greatest railfanning experiences ever—which is really saying something!
This year Hen & I set out for some sunset trainspotting. We were thrilled to see multiple trains passing each other & looping over one another via the corkscrew tracks. Giant hat tip to the great Wynton Marsalis & co. for the “Big Train” accompaniment here:
As a papa-razzo, I was especially pleased to have the little chap rocking my DSLR while I flew, capturing some fun shots:
Tangential bonus(-ish): Here’s a little zoom around Red Rocks outside Vegas the next day:
This simple but excellent question was put to me once by Merlin Mann. I’ve reflected on it many times over the years, and I’d ask it of promising candidates in job interviews. I’m asking myself now, as I mark one more revolution around the Sun.
Some people say “Money.” Okay, sure… but why?
Others say “Time.” That’s maybe closer to my heart—but again, to what end? What are you/we doing with the time we have now?
For me the answer has always been “Impact.” I don’t know whether that’s “right” (if such a thing exists), but it captures my eternal desire to make a positive dent in the universe, as Steve Jobs would put it. I want to leave things better than I found them—happier, more beautiful, more fun—for my family, friends, and the creative world at large.
Maybe better answers exist—Love, Courage, Wisdom; I want them all in great abundance. From those things would flow impact & all other goodness.
Indeed, but sitting in any big fat company, where any of one’s individual efforts is likely to have only a passing impact on the macro trends (growth, stock price, compensation), can be like living in Shawshank: even when free to do otherwise, you keep asking permission, even to pee.
On my way through the in door at Google, a burned-out PM who was about to depart told me about learned helplessness & Brownian motion (think dust particles in a room—energetic, but not actually going anywhere). “You know that guy Reek on Game of Thrones—just psychologically broken? Yeah, that’s what you become here.” Comedic exaggeration aside, he wasn’t wholly wrong.
All this comes to mind as a friend at Google shared a cautionary tale by Søren Kierkegaard that Thomas J. Watson, CEO of IBM in its glory days, used to tell:
There was once a wild goose.
In the autumn, about the time for migration, it became aware of some tame geese. It became enamored by them, thought it a shame to fly away from them, and hoped to win them over so that they would decide to go along with it on the flight. To that end it became involved with them in every possible way. It tried to entice them to rise a little higher and then again a little higher in their flight, that they might, if possible, accompany it in the flight, saved from the wretched, mediocre life of waddling around on the earth as respectable, tame geese.
At first, the tame geese thought it very entertaining and liked the wild goose. But soon they became very tired of it, drove it away with sharp words, censured it as a visionary fool devoid of experience and wisdom.
Alas, unfortunately the wild goose had become so involved with the tame geese that they had gradually gained power over it, their opinion meant something to it – and gradually the wild goose became a tame goose.
In a certain sense there was something admirable about what the wild goose wanted. Nevertheless, it was a mistake, for – this is the law – a tame goose never becomes wild, but a wild goose can certainly become tame.
If what the wild goose tried to do is to be commended in any way, then it must above all watch out for one thing – that it hold on to itself.
As soon as it notices that the tame geese have any kind of power over it, then away, away in migratory flight.
Or as Frederick Douglass said, “I prayed for twenty years but received no answer until I prayed with my legs.”
Now let’s go into the weekend, sticking it to The Man with some Arcade Fire.
They heard me singing and they told me to stop
Quit these pretentious things and just punch the clock
Filmmaker Ryan McIntyre recently had the opportunity to use the Phantom TMX 7510 slow-motion camera’s 100,000 frames per second and combined it with a Laowa 24mm 2x Macro Probe lens to capture spectacular footage of vintage flashbulbs bursting brightly.
I quite enjoyed this Talk at Google by mathematician & concert pianist (what a slouch!) Eugenia Cheng. Wait, wait, don’t go—I swear it’s infinitely more down-to-earth & charming than one would think. Among other things she uses extremely accessible math (er, “maths” 🙄) to illuminate touchy subjects like societal privilege, diet, and exercise. It’s also available in podcast form.
Emotions are powerful. In newspaper headlines and on social media, they have become the primary way of understanding the world. With her new book “The Art of Logic: How to Make Sense in a World that Doesn’t”, Eugenia has set out to show how mathematical logic can help us see things more clearly – and know when politicians and companies are trying to mislead us. This talk, like the book, is filled with useful real-life examples of logic and illogic at work and an essential guide to decoding modern life.
Break on through to the other side with the Photoshop master:
Learn the secret of turning daytime images into nighttime images with this advanced Adobe Photoshop tip and technique. This tutorial discusses painting techniques, masking, Levels controls, and Sky replacement.
“I will give you Del’s body, and it’s a great body, because you can study the effects of smoking, alcohol, cocaine, and heroin on the brain. All I need is the skull.”
So said Charna Halpern, the longtime creative partner of improv legend Del Close, who insisted that his skull be donated for use on stage (e.g. in Hamlet). To say that he sounds like a character would be an incredible understatement, and this new documentary about his life & work looks rather amazing:
A few years ago I found myself wasting my life in the bowels of Google’s enterprise apps group. (How & why that happened is a long, salty story—but like everything good & bad, the chapter passed.) In the course of that we found ourselves talking with IT folks at Ocado, a company that’s transformed itself from an online grocer into a provider of really interesting robotics. Check out this rather eye-popping demonstration of how their bots fulfill orders at crazy speed:
Last summer my former teammates got all kinds of clever in working around Covid restrictions—and the constraints of physics and 3D capture—to digitize top Olympic athletes performing their signature moves. I wish they’d share the behind-the-scenes footage, as it’s legit fascinating. (Also great: seeing Donald Glover, covered in mocap ping pong balls for the making of Pixel Childish Gambino AR content, sneaking up behind my colleague like some weird-ass phantom. 😝)
Anyway, after so much delay and uncertainty, I’m happy to see those efforts now paying off in the form of 3D/AR search results. Check it out: