Last summer my former teammates got all kinds of clever in working around Covid restrictions—and the constraints of physics and 3D capture—to digitize top Olympic athletes performing their signature moves. I wish they’d share the behind-the-scenes footage, as it’s legit fascinating. (Also great: seeing Donald Glover, covered in mocap ping pong balls for the making of Pixel Childish Gambino AR content, sneaking up behind my colleague like some weird-ass phantom. 😝)
Anyway, after so much delay and uncertainty, I’m happy to see those efforts now paying off in the form of 3D/AR search results. Check it out:
One of my favorite flexes while working on Google Photos was to say, “Hey, you remember the liquid-metal guy in Terminator 2? You know who wrote that? This guy,” while pointing to my ex-Adobe teammate John Schlag. I’d continue to go down the list—e.g. “You know who won an Oscar for rigging at DreamWorks? This guy [points at Alex Powell].” I did this largely to illustrate how insane it was to have such a murderers’ row of talent working on whatever small-bore project Photos had in mind. (Sorry, it was a very creatively disappointing time.)
Anyway, John S., along with Michael Natkin (who went on to spend a decade+ making After Effects rock), contributed to this great oral history of the making of Terminator 2. It’s loaded with insights & behind-the-scenes media I’d never seen before. Enjoy!
Back in the 90’s I pleaded with Macromedia to enable a “Flash Interchange Format” that would allow me to combine multiple apps in making great animated content. They paid this no attention, and that’s part of why I joined Adobe & started working on things like integrating After Effects with LiveMotion—a code path that helps connect AE with other apps even two+ decades (!) later.
Point is, I’ve always loved aligning tools in ways that help creators combine apps & reach an audience. While at Google I worked with Adobe folks on 3D data exchange, and now I’m happy to see that Adobe is joining the new Open 3D Foundation, meant to “accelerate developer collaboration on 3D engine development for AAA-games and high-fidelity simulations.”
Amazon… is contributing an updated version of the Amazon Lumberyard game engine as the Open 3D Engine (O3DE), under the permissive Apache 2.0 license. The Open 3D Engine enables developers and content creators to build 3D experiences unencumbered by commercial terms.
As for Adobe’s role,
“Adobe is proud to champion the Open 3D Foundation as a founding member. Open source technologies are critical to advance sustainability across 3D industries and beyond. We believe collaborative and agnostic toolsets are the key to not only more healthy and innovative ecosystems but also to furthering the democratization of 3D on a global scale.” — Sebastien Deguy, VP of 3D & Immersive at Adobe.
We introduce HuMoR: a 3D Human Motion Model for Robust Estimation of temporal pose and shape. Though substantial progress has been made in estimating 3D human motion and shape from dynamic observations, recovering plausible pose sequences in the presence of noise and occlusions remains a challenge. For this purpose, we propose an expressive generative model in the form of a conditional variational autoencoder, which learns a distribution of the change in pose at each step of a motion sequence. Furthermore, we introduce a flexible optimization-based approach that leverages HuMoR as a motion prior to robustly estimate plausible pose and shape from ambiguous observations. Through extensive evaluations, we demonstrate that our model generalizes to diverse motions and body shapes after training on a large motion capture dataset, and enables motion reconstruction from multiple input modalities including 3D keypoints and RGB(-D) videos.
NVIDIA Research is revving up a new deep learning engine that creates 3D object models from standard 2D images — and can bring iconic cars like the Knight Rider’s AI-powered KITT to life — in NVIDIA Omniverse.
A single photo of a car, for example, could be turned into a 3D model that can drive around a virtual scene, complete with realistic headlights, tail lights and blinkers.
I’m thrilled that a bunch of Google friends (including Dan Goldman, who was instrumental in bringing Content-Aware Fill to Photoshop) have gotten to reveal Project Starline, their effort to deliver breakthrough 3D perception & display to bring people closer together:
Imagine looking through a sort of magic window, and through that window, you see another person, life-size and in three dimensions. You can talk naturally, gesture and make eye contact.
To make this experience possible, we are applying research in computer vision, machine learning, spatial audio and real-time compression. We’ve also developed a breakthrough light field display system that creates a sense of volume and depth that can be experienced without the need for additional glasses or headsets.
Check out this quick tour, even if it’s hard to use regular video to convey the experience of using the tech:
I hope that Dan & co. will be able to provide some peeks behind the scenes, including at how they captured video for testing and demos. (Trust me, it’s all way weirder & more fascinating than you’d think!)
It’s really cool to see the Goog leveraging its immense corpus of not just 2D or 3D but actually 4D (time-based) data to depict our planetary home.
In the biggest update to Google Earth since 2017, you can now see our planet in an entirely new dimension — time. With Timelapse in Google Earth, 24 million satellite photos from the past 37 years have been compiled into an interactive 4D experience. Now anyone can watch time unfold and witness nearly four decades of planetary change. […]
Elsewhere I put my pal Seamus (who’s presently sawing logs on the couch next to me) through NVIDIA’s somewhat wacky GANimal prototype app, attempting to mutate him into various breeds—with semi-Brundlefly results. 👀
I was sorry to see the announcement that Google’s Poly 3D repository is going away, but I’m happy to see the great folks at Sketchfab stepping up to help creators easily migrate their content:
Poly-to-Sketchfab will help members of the Poly community easily transfer their models to Sketchfab before Poly closes its doors this summer. We’re happy to welcome the Poly community to Sketchfab and look forward to exploring their 3D creations.
Our Poly-to-Sketchfab app connects to both your Poly and Sketchfab accounts, presents you with a list of models that can be transferred, and then copies the models that you select from Poly to Sketchfab.
Inspired by the awesome work of photogrammetry expert Azad Balabanian, I used my drone at the Trona Pinnacles to capture some video loops as I sat atop one of the structures. My VFX-expert friend & fellow Google PM Bilawal Singh Sidhu used it to whip up this fun, interactive 3D portrait:
Imagine loading multi-gigabyte 3D models nearly instantaneously into your mobile device, then placing them into your driveway and stepping inside. That’s what we’ve now enabled via Google Search on Android:
Take it for a spin via the models listed below, and please let us know what you think!
Granted, it was a little confusing to explain that I knew the voice of the cartoon forklift & that he was actually a brainy Italian guy who worked at Pixar—but it worked. In any case, Guido Quaroni—who spent 20 years at Pixar & was always a fantastic host during Adobe customer visits—has now joined the Big Red A:
“I’ve been a customer of Adobe’s software for a number of years, and I always admired Adobe’s commitment to provide top of the line tools to creatives,” said Quaroni. “When I heard about Adobe’s renewed interest in entering into the 3D market, given how much more pervasive the consumption of 3D content is becoming, I knew it was something I wanted to be a part of. I’m excited to be joining the Adobe team to help accelerate and grow their 3D offerings for creatives worldwide.”
I remain proud to have delivered, at Guido’s urging, perhaps the most arcane feature request ever: he asked for per-layer timestamps in Photoshop so that Pixar’s rendering pipeline could discern which layers had actually been changed by artists, thereby saving a lot of rendering time. We got this done, and somehow it gives me roughly as much pleasure as having delivered a photo editor that’s used by hundreds of millions of people every month. 😌
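For flavor, here’s roughly how that kind of timestamp check works in a render pipeline. This is a hypothetical sketch, not Pixar’s actual code or Photoshop’s API; the layer names and timestamp structures are made up for illustration.

```python
# Hypothetical sketch of the optimization described above: use per-layer
# modification timestamps to re-render only the layers artists actually
# changed. Data structures here are illustrative, not Photoshop's format.

def layers_to_rerender(layers, last_render_times):
    """Return names of layers modified since they were last rendered.

    layers: dict of layer name -> last-modified timestamp (epoch seconds)
    last_render_times: dict of layer name -> timestamp of last render
    """
    stale = []
    for name, modified in layers.items():
        rendered = last_render_times.get(name)
        if rendered is None or modified > rendered:
            stale.append(name)
    return stale

# Example: only "cheek_spec" was touched since the last render pass,
# so it is the only layer the pipeline needs to re-render.
layers = {"base_color": 100.0, "cheek_spec": 250.0, "eye_gloss": 90.0}
last_render = {"base_color": 200.0, "cheek_spec": 200.0, "eye_gloss": 200.0}
print(layers_to_rerender(layers, last_render))  # ['cheek_spec']
```

The win is exactly the one Guido described: on a file with dozens of texture layers, the renderer touches only the handful that changed.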
Anyway, here’s to great things for Guido, Adobe, and 3D creators everywhere!
As part of Fiat Chrysler’s Virtual Showroom CES event, you can experience the new innovative 2021 Jeep Wrangler 4xe by scanning a QR code with your phone. You can then see an Augmented Reality (AR) model of the Wrangler right in front of you—conveniently in your own driveway or in any open space. Check out what the car looks like from any angle, in different colors, and even step inside to see the interior with incredible details.
A bit on how it works:
The Cloud AR tech uses a combination of edge computing and AR technology to offload the computing power needed to display large 3D files, rendered by Unreal Engine, and stream them down to AR-enabled devices using Google’s Scene Viewer. Using powerful rendering servers with gaming-console-grade GPUs, memory, and processors located geographically near the user, we’re able to deliver a powerful but low friction, low latency experience.
This rendering hardware allows us to load models with tens of millions of triangles and textures up to 4k, allowing the content we serve to be orders of magnitude larger than what’s served on mobile devices (i.e., on-device rendered assets).
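To make the “orders of magnitude” point concrete, here’s a rough back-of-envelope sketch; every number below is an illustrative assumption, not a measured figure from the FCA deployment.

```python
# Rough back-of-envelope (all numbers are illustrative assumptions) for
# why streaming rendered frames beats shipping the model to the device:
# the asset's size drops out of the user's critical path entirely.
model_mb = 500            # assumed size of a detailed car model
link_mbps = 40            # assumed mobile downlink, megabits/second

download_s = model_mb * 8 / link_mbps      # time to fetch the whole model
print(f"download whole model: {download_s:.0f} s")   # 100 s

# Streaming instead sends only encoded frames; the first frame can arrive
# as soon as the remote GPU renders and encodes it.
frame_kb = 100            # assumed encoded frame size
first_frame_s = frame_kb * 8 / (link_mbps * 1000)
print(f"first streamed frame: {first_frame_s*1000:.0f} ms")  # 20 ms
```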
And to try it out:
Scan the QR code below, or check out the FCA CES website. Depending on your OS, device, and network strength, you will see either a photorealistic, cloud-streamed AR model or an on-device 3D car model, both of which can then be placed in your physical environment.
I’m delighted to share that my team’s work to add 3D & AR automotive results to Google Search—streaming in cinematic quality via cloud rendering—has now been announced! Check out the demo starting around 36:30:
You can easily check out what the car looks like in different colors, zoom in to see intricate details like buttons on the dashboard, view it against beautiful backdrops and even see it in your driveway. We’re experimenting with this feature in the U.S. and working with top auto brands, such as Volvo and Porsche, to bring these experiences to you soon.
Cloud streaming enables us to take file size out of the equation, so we can serve up super detailed visuals from models that are hundreds of megabytes in size:
Right now the feature is in testing in the US, so there’s a chance you can experience it via Android right now (with iOS planned soon). We hope to make it available widely soon, and I can’t wait to hear what you think!
Some of the creatures include the Aegirocassis, a sea creature that existed 480 million years ago; a creepy-looking ancient crustacean; and a digital remodel of the whale skeleton currently on view in the Natural History Museum’s Hintze Hall.
Exceedingly tangentially: who doesn’t love a good coelacanth reference?
My mom loves to remind me about how she sweltered, hugely pregnant with me, through a muggy Illinois summer while listening to cicadas drone on & on. Now I want to bring a taste of the 70’s back to her via Google’s latest AR content.
You can now search for all these little (and not-so-little) guys via your Android or iPhone and see them in your room:
Artist/technologist Erik Natzke has kept me inspired for the better part of 20 years. His work played a key role in sending me down a multi-year rabbit hole trying to get Flash (and later HTML) to be a live layer type within Photoshop and other Adobe apps. The creative possibilities were tremendous, and though I’ll always be sad we couldn’t make it happen, I’m glad we tried & grateful for the inspiration.
Anyway, since going independent following a multi-year stint at Adobe, Erik has been sharing delightful AR explorations—recently featuring virtual Legos interacting with real-time depth maps of a scene. He’s been sharing so much so quickly lately that I can’t keep up and would encourage you to follow his Twitter & Instagram feeds, but meanwhile here are some fun tastes:
Here’s an example of how they look (apologies for the visual degradation from the GIFfing; I’ll see whether I can embed the interactive original):
People seem to dig ’em:
Nissan saw an engagement rate that was 8X higher than rich media benchmarks for the automotive vertical.
For Adidas, Swirl ads drove a 4x higher engagement rate than rich media benchmarks and had an average viewable time of 11 seconds. The 3D creatives also drove a return on ad spend (ROAS) of ~2.8 for the Colombia market.
For Belvedere, Swirl ads drove 6.5x higher brand favorability and 4.9x higher purchase intent vs. category norms.
To get started creating a Swirl ad, you can upload 3D assets to Google Web Designer and use the new Swirl templates. Brands and agencies can also edit, configure, and publish models using Google’s 3D platform, Poly.
The Land Rover app for iOS & Android, leveraging Unity + ARKit & ARCore, goes beyond ye olde “spin car in space & maybe change wheels” approach we’ve long seen. Somehow I can’t embed the video, but it’s worth a look, and you can get a taste of the immersive environments it enables here:
I’m more than a little snowed under right now with preparations for next week’s announcement, but I wanted to share a few interesting finds:
Photographers are always asking for better ways to prevent misuse of their works, and TinEye promises to help by using image-recognition techniques to find images in the wild. Ars Technica’s got details.
I’m delighted to see that Cooliris, the very cool browser technology formerly known as PicLens (see previous raving), is available once again for Safari. I’d forgotten how much I missed it until it returned recently. The developers are also offering an embeddable Flash-powered version for use on your own sites.
Or rather, it’s not just about 3D. But let me back up a second.
Remember the Newton? My first week at Adobe, I attended an outside "how to be a product manager" seminar at which the Newton was held up as a cautionary tale. The speaker pointed out that the product’s one critical feature–the thing on which everything else depended–was a handwriting recognition system that sucked at recognizing handwriting. Among many other things, the Newton also featured a thermometer. Customers, according to the speaker, had a conniption: what the hell were the product designers thinking, getting distracted with stuff like a thermometer when they couldn’t get the foundation right?
The moral, obviously, is that if you’re going to branch into new territory, you’d better have made your core offering rock solid. And even if it is solid, some customers may perceive any new work as coming at their expense.
I worry a bit about Photoshop users seeing the app branch into 3D and thinking we’ve taken our eye off the ball. Earlier this week reader Jon Padilla commented, "Some of my disgruntled co-workers grumbled ‘oh great! a bunch of cool features we’ll never learn to use…’" No matter what Photoshop adds specifically for your needs, the presence of other features can make it easy to say, "That looks like a great product… for someone else."
Obviously we care about improving the way Photoshop gets used in 3D workflows, especially around compositing and texture painting. If that’s all we had in mind, however, I think we would be overdoing our investment in 3D features relative to others. As it happens, our roadmap is broad and ambitious, so let me try to give some perspective:
At root, Photoshop’s 3D engine is a mechanism that runs programs on a layer, non-destructively and in the context of the Photoshop layer stack. At the moment it’s geared towards manipulating geometry, shading surfaces, etc., but shader code can perform a wide range of imaging operations.
Features that work on 3D data–being able to create & adjust lights, adjust textures and reflectivity, paint on transformed surfaces, etc.–work on 2D data as well. (Wouldn’t it be nice to have Lighting Effects written in this century?)
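To illustrate the point that a shading engine is really a general per-pixel program runner, here’s a toy sketch; the “shader” interface is invented for illustration and isn’t Photoshop’s actual API.

```python
# The idea above sketched in miniature: a shading engine is a mechanism
# for running a small program at every pixel of a layer, and such programs
# can just as easily express 2D imaging operations as 3D surface shading.
import numpy as np

def run_shader(layer, shader):
    """Apply a per-pixel program (a 'shader') across a 2D layer."""
    h, w = layer.shape[:2]
    out = np.empty_like(layer, dtype=float)
    for y in range(h):
        for x in range(w):
            # u, v are normalized 0..1 coordinates, as in shading languages.
            out[y, x] = shader(layer[y, x], x / max(w - 1, 1), y / max(h - 1, 1))
    return out

# A purely 2D "imaging" shader: a left-to-right lighting falloff, the
# kind of effect a modernized Lighting Effects might express.
def falloff(pixel, u, v):
    return pixel * (1.0 - 0.5 * u)

layer = np.full((2, 4), 1.0)
lit = run_shader(layer, falloff)
print(lit[0])  # brightness falls smoothly from 1.0 to 0.5 across the row
```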
As photographers finally tire of chasing Yet More Megapixels, cameras will differentiate themselves in new ways, such as by adding depth-sensing technology that records 3D data about a scene. The same infrastructure needed for working with synthetic 3D objects (e.g. adjustable lighting, raytracing) can help composite together photographic data.
The field of photogrammetry–measuring objects using multiple 2D photos–is taking off, fueled by the ease with which we can now capture and analyze multiple images of a scene. The more Photoshop can learn about the three-dimensional structure of a scene, the more effectively it can manipulate image data.
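The geometric core of photogrammetry is triangulation: recovering a 3D point from its 2D projections in multiple photos. Here’s a minimal sketch using standard linear (DLT) triangulation; the cameras and point are made-up illustrative values, not tied to any Photoshop feature.

```python
# Minimal two-view triangulation: given two camera projection matrices and
# the same point observed in both photos, recover its 3D position via the
# standard linear (DLT) method. Cameras and point are illustrative values.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from two 2D observations.

    P1, P2: 3x4 camera projection matrices
    x1, x2: (u, v) observations of the same point in each view
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: u * P[2]·X = P[0]·X, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two simple cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 3.0])

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true))  # True
```

Real photogrammetry pipelines do this for thousands of matched features at once, plus estimating the cameras themselves, but the per-point math is exactly this.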
I know I’m not providing a lot of specifics, but the upshot is that we expect Photoshop’s 3D plumbing to be used for a whole lot more than spinning Coke cans and painting onto dinosaurs. Rather than being a thermometer on a Newton, it’s a core investment that should open a lot of new doors over many years ahead, and for a very wide range of customers.
The result is a high-quality, photo-realistic image, all from within the Photoshop Extended environment.
LightWave Rendition ships with sample projects and a library of 3D model art. The product also includes support for 3D models from a variety of sources, including LightWave 3D, Google™ SketchUp’s 3D Warehouse, and many readily available 3D formats. It includes:
Slider Controls for Render and Anti-Alias Quality, allowing for quick preview renders up to photo-quality images.
Material Presets, offering the option to apply a preset material or any selected Photoshop material to the surface of your 3D object for complete flexibility in design.
Light Environments, letting you use the default Photoshop Extended lighting environment, or add to the power of LightWave Rendition by using any 2D layer as a light map for complete control of the final light environment.
The product is $149 for Mac and Windows & is available for purchase and download from the NewTek site.
I’m pleased to see that NewTek, the folks behind the LightWave 3D modeling, animation, and rendering package, have announced a new product, LightWave Rendition for Photoshop. This plug-in technology builds on the 3D file format support in Photoshop CS3 Extended, adding on high-quality rendering and lighting manipulation. In this screenshot they show an image as displayed by Photoshop’s built-in renderer, then hit with the LightWave renderer & touched up in Photoshop. Here’s a second example.
According to their marketing docs, LightWave Rendition for Photoshop includes:
Slider Controls for Render and Anti-Alias Quality: Allows for quick preview renders up to photo-quality images.
Material Preset: You have the option to apply preset or selected Photoshop materials to the surface of your 3D object for complete flexibility in design.
Light Environment: Use the default Photoshop Extended lighting environment or add the power of LightWave Rendition for Adobe Photoshop by using any 2D layer as a light map for complete control of the final light environment.
Because the product is in beta form, you can buy it now for $99, discounted from the normal price of $149. The discount ends when the beta does.
Side note: I keep trying to tell developers that I think there’s an opportunity to knock together a very simple 3D extrusion/adjustment environment as a Photoshop plug-in, leveraging PS CS3 Extended’s ability to manipulate 3D layers. No one has yet seized the opportunity, but I’ll keep trying.
Heh–in the vein of sites like AwfulPlasticSurgery.com, now we’ve got the Photoshop Disasters blog–chock full of image manipulation mishaps. It’s good to indulge in a little schadenfreude now and then, and with phrases like “the culturally-ravaged, post-wardrobe-malfunction neo-fundamentalist, sexual dystopia we live in,” it has to be good. (Wasn’t that the Smucker’s slogan?) [Via Lori Grunin]
The Gough Map is said to be the oldest accurate map of Britain, dating from around 1360.
My little brother Ted let me ride along last month as he drove his garbage truck. This image I snapped in his cab (somewhat dodgy iPhone-cam quality, sorry) shows the truck really putting the “screw” back in “screw of Archimedes.”
When is a shopping site… something else? When it’s this viral site for Dutch chain Hema*. "It’s like an IKEA catalog was sliced up and fed to a Rube Goldberg machine," says Motionographer. "The magnifying glass bit is brilliant." [Via]
Who doesn’t like "secret interactive frivolity"? Design firm Baker and Hill lavishes attention on the details of their fun-to-navigate company site.
The Volvo XC70 site features a fully rotatable rendering of the car, festooned with interactive touch points. Stick around through the intro, then hit the arrows to continue. (Yes, we have kid-haulers on the brain, and I’ll always have a thing for Volvo wagons.)
ASLuv busts out the fairy dust with this little particle sprayer. (Don’t break the glowsticks ’til you feel the beats hit.) [Via]
Import, export and modify image maps and textures onto 3D models in Photoshop
Composite 2D and 3D content seamlessly
Access DAZ’s full library of quality 3D content [DAZ gives away the editing application & sells adjustable content]
As for the Strata news, "In a nutshell, the technology from Strata’s 3D[in] plug-ins for Photoshop CS3 Extended is now integrated into the Suite," says the crew on 3Dlayer.com. With it you can:
Send a 3D model to PS as a 3D layer
Send a finished rendering to PS as separate layers (shadow layer, reflection layer, color layer, etc)
Send a PS image to a 3D background for tracing or placement
Send a 3D model direct from PS to PDF or HTML and it embeds the 3D object (you read that correctly)
Link PS files as 3D textures – changes made are automatically updated in the 3D texture
Good stuff all around. We think that 3D in CS3 Extended is a big step forward, and of course we’re not planning to rest on those laurels. I love seeing great developers like Strata and DAZ jump on the opportunity to help enrich the story.
By the way, did you know that you can browse the Google 3D Warehouse right from within Photoshop CS3 Extended? Here’s more info. Also, Adobe’s Steve Whatley mentions that Adobe is on tour with Maxon, showing off 3D integration between the tools.
Several times now I’ve expressed my appreciation for PicLens, a beautiful (and free) little browser plug-in that enables full-screen, hardware-accelerated slideshows from Google Images, Flickr, MySpace, deviantART, and other sites. It’s changed my whole online photo viewing experience.
It features the all-new “3D Wall,” a magical virtual interface that can exhibit 100s, if not 1000s of images. There, you can drag, scroll, zoom, and, of course, jump into full-screen mode. You’ll have to try it out to really experience it. It brings the user one step closer to a fully immersive multimedia experience on the Web.
Once you download the 1MB plug-in (Mac or Win), go into a slideshow and try holding down an arrow key to cruise through the images. I’d take a screenshot, but it doesn’t seem to get along with Snapz Pro. [Update: Here’s one, though it doesn’t capture the motion.] Really nicely done, guys!
Here’s something pretty well guaranteed to put a smile on your face, I think: the Australian Centre for Visual Technologies has developed VideoTrace, "a system for interactively generating realistic 3D models of objects from video." A user sketches a few surfaces, after which the system works to generate 3D data. The short video demonstration is a little ho-hum until near the middle, which is where the uncontested smiling begins. 😉 [Via]
This demo makes me think of Strata’s Foto 3D, a tool for generating 3D models from within Photoshop, using just a series of photographs. By placing an object onto a specially printed piece of paper, then shooting it from a variety of angles, you give the software enough info to generate a 3D model that can then live as a 3D layer in Photoshop CS3 Extended.
It also reminds me of Extended’s ability to set 3D planes on a photograph using its Vanishing Point plug-in, then export the results as 3D data for use in After Effects and other tools. With it you can export an image like this as 3D data, then set camera movement in AE and create an animation like this.
ClaraCollins.com presents the fashion designer’s work in a novel way. Mouse over the little arrows that sit above the pages of each portfolio, and you’ll see the images whip by in little time lapses. You can also rotate each portfolio 180 degrees. [Via]
Reminding me why I could afford only 120 sq. ft. in Manhattan (hello, Brooklyn!), 5th on the Park offers 1,800 sq. ft. in Harlem–for a cool $1.6 mil. I mention it here because of the cool presentation of the building & its units. You can roll over each face of the structure, clicking any unit to see its floor plan & other details. [Via]
Art Is A Gift uses a Flash UI to let you style a little "Baby Qee" critter. Check out the gallery section, as well as the "About" link that shows kids painting the real thing as art therapy. [Via Jeff Tranberry]
Enfant Terrible sets off its shopping site with a cheerful, simple little animated illustration. [Via]
Adobe has created a 25th Anniversary Timeline of the company, on which you can see key developments in people, personnel, and the industry as a whole. I’m undecided as to how well the sort of "mystery meat" rollover approach works. There’s also a Flash-based 15-page overview document, complete with embedded video. (Weirdly I don’t see a downloadable PDF version.)
As I’m sure you know, we’re pretty excited to have 3D capabilities inside Photoshop CS3 Extended. That said, we know that what’s there today is really a first step into a pretty big realm.
Giving a glimpse into what the future might hold, the MIT Technology Review talks about Adobe’s research into real-time raytracing. In a nutshell, says principal scientist Gavin Miller, "Adobe’s research goal is to discover the algorithms that enhance ray-tracing performance and make it accessible to consumers in near real-time form."
These techniques scale particularly well on multi-core systems, which is why you tend to see rendering tests show up in high-end machines’ benchmarks. A brief slideshow accompanying the article demonstrates the differences between ray-traced images & those produced by the kind of interactive renderer used in Photoshop CS3. [Via Aravind Krishnaswamy, who works in Gavin’s group]
Adobe’s own Russell Brown took his 3D head-scanning show (see previous) on the road to Photoshop World in Las Vegas this month. Not only could attendees get their heads scanned & turned into 3D models for use in Photoshop CS3 Extended; they could get the resulting skin texture files printed onto fabric. Scott Kelby volunteered to make sure the apparatus was safe (video), only to have his head printed onto a football that was kicked into the audience. Here’s a quick gallery featuring some deeply disturbing imagery ;-).
Most Intimidating. Salad. Ever. Artist Till Nowak has rendered an Alien xenomorph out of vegetables. His behind-the-scenes PDF is worth a look as well, especially if you’re interested in 3D techniques. [Via] In a slightly related vein, the boys at JibJab have posted a collection of substantially friendlier sandwich art. (Not sure who created these; the photo set has been going around as one of those things that gets forwarded by one’s mom.) [Via]
Pepakura Designer is a tool for 3D papercraft. Pepakura is not a modelling app; rather, it converts 3D models into 2D designs that you can print out, then assemble into papercraft creations. The gallery features some impressive models, such as Cláudio Dias’s paper McLaren. [Via Florian Krüsch]
Plushie is "an interactive system that allows nonprofessional users to design their own original plush toys." Check out its novel interface for sculpting blobs–something even kids can use. [Via Nikolai Svakhin]
If you’re in the Bay Area and are interested in the technical details of some of Photoshop CS3’s advanced features (3D, auto-alignment, etc.), swing by the Apple campus (De Anza 3, specifically) tomorrow night for a meeting of the Silicon Valley SIGGRAPH chapter. Refreshments roll out at 7:30, and the talk begins at 8pm. It’s five bucks for non-members, free for students. Details below.
Ashley Still and Pete Falco of Adobe will give an overview of some of the new features in Photoshop CS3 Extended, including movie paint, 3D, and automatic alignment and blending of multiple images. In addition to demonstrating these new features, they will provide an overview of the Photoshop 3D Plug-in SDK that can be used to extend the current capabilities. There will be ample time for Q&A.
Pete Falco is currently Sr. Computer Scientist for Adobe Photoshop. Pete has been on the Photoshop team since 2005 and is focused on 3D and technology transfer for Photoshop. Prior to joining Adobe, Pete worked as an engineer on QuickTime VR at Apple, as the Director of Engineering at Live Picture and co-founded Zoomify. He holds a BS and ME from Rensselaer Polytechnic Institute.
Ashley Still is currently Sr. Product Manager for Adobe Photoshop. Ashley has been on the Photoshop team since 2004 and is focused on new markets and advanced technologies for Photoshop. Prior to joining Adobe, Ashley worked with an Entrepreneur in Residence at Sutter Hill Ventures developing and evaluating business plans and at eCircles.com, one of the first online sites offering photo-sharing and editing. She holds a BA from Yale University and an MBA from Stanford Graduate School of Business.
I’m pleased to report that Adobe has teamed up with Google on 3D, enabling Photoshop CS3 Extended to browse the Google 3D Warehouse, then download 3D models right into Photoshop. The upshot is that Photoshop users now have direct access to a large & growing repository of free, community-driven 3D content.
The plug-in and more info are available on Adobe Labs. (Note: The team is working to fix a bug found in the Windows version at the last minute. Therefore the Mac plug-in is up now, and the Windows version should be up tomorrow.)
Salty sea-dog Russell Brown has teamed up with friend & FX pro John McConnell to create a new Photoshop Films production, Software Pirates of the Caribbean. Despite Russell’s Malkovich-style multiplicity (playing a dozen characters, including the odd parrot), the credits swear that "No Russell Browns were harmed in the making of this feature." Heh–most excellent stuff, and I do believe I caught some ace usage of the Wilhelm Scream :-). A little advice to Russell: just don’t try to transport that mustache across state lines.
In related news, you can see the same 3D ship used in the movie get attacked by a sea monster, all inside Photoshop CS3 Extended, in Russell’s new tutorial. The good folks at Daz3D, creators of the sea monster, are making the file downloadable for free for use in Photoshop. And elsewhere Terri Stone shares some piratical photos from the just-wrapped ADIM Conference, where attendees had their heads scanned & turned into action figures. More on that soon!
Strata Design lets you drop 3D objects into 2D photos, matching perspective and lighting. The trick is that it leverages Vanishing Point’s new ability to export perspective planes as 3D objects. From there the plug-in can drop in models, move them around, and do a high-quality rendering pass to make the models fit the scene.
Strata Photo can transform a series of photos (taken of an object sitting on a specially printed piece of paper) into a 3D model for use in Photoshop.
Strata Live connects Photoshop with Acrobat, exporting 3D models for viewing inside PDFs. (Did you know that Acrobat does 3D?)
3Dconnexion (a part of Logitech) has announced that their SpaceNavigator now supports Photoshop. Here’s a video demonstration in which marketing mgr. Tad Shelby shows the device controlling a 3D model in Photoshop Extended. At less than a hundred bucks, it seems like a steal for any serious 3D user (and it works for 2D, too).
More good stuff is on the way as well:
NewTek has announced LightWave Rendition for Adobe Photoshop, bringing LightWave’s lighting and rendering tools to Photoshop (I’ll link to more details when they make ’em live);
I’m no expert on 3D modeling & rendering, and for all I know this kind of imaging may now be common. In any case I was blown away by the lifelike quality of Max Edwin Wahyudi’s rendering of South Korean actress Song Hye Kyo (make sure to check out the full-res version). [Via Mark Maguire] For a far less aesthetically pleasing 3D portrait, check out my Britney moment. Awful…!