My little brother is a trucker, and although I can’t imagine a solution like this working for the rural routes he drives, it’ll be interesting to see how it might work for long-haul highways. Check out the idea (not cheap, but potentially highly impactful):
10 years ago we put a totally gratuitous (but fun!) 3D view of the layers stack into Photoshop Touch. You couldn’t actually edit in that mode, but people loved seeing their 2D layers with 3D parallax.
More recently apps are endeavoring to turn 2D photos into 3D canvases via depth analysis (see recent Adobe research), object segmentation, etc. That is, of course, an extension of what we had in mind when adding 3D to Photoshop back in 2007 (!)—but depth capture & extrapolation weren’t widely available, and it proved too difficult to shoehorn everything into the PS editing model.
Now Mental Canvas promises to enable some truly deep expressivity:
I do wonder how many people could put it to good use. (Drawing well is hard; drawing well in 3D…?) I Want To Believe… It’ll be cool to see where this goes.
Semantic segmentation + tracing FTW!
By using machine learning to understand the scene, Project Make it Pop makes it easy to create and customize an illustration by distinguishing between the background and the foreground as well as recognizing connected shapes and structures.
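Adobe hasn't published how Make it Pop's scene understanding works under the hood, but the "recognizing connected shapes and structures" step can be illustrated with a classic building block: connected-component labeling over a foreground mask. A minimal sketch (pure Python, 4-connectivity; real products would use learned segmentation to produce the mask in the first place):

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected foreground regions in a binary mask.

    A toy stand-in for grouping pixels into distinct shapes:
    each foreground region gets its own integer label via BFS
    flood fill. `mask` is a list of rows of 0/1 values.
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                count += 1
                queue = deque([(y, x)])
                labels[y][x] = count
                while queue:
                    cy, cx = queue.popleft()
                    # Visit the four edge-adjacent neighbors.
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count

# Two separate blobs in a tiny mask:
mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
labels, n = connected_components(mask)
print(n)  # → 2
```

Once shapes are isolated like this, each region can be traced and stylized independently, which is the kind of per-object treatment the demo shows.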
And you’ve gotta stick around for the whole thing, or just jump to around 2:52 where I literally started saying “WTF…?”
What if Photoshop’s breakthrough Smart Portrait, which debuted at MAX last year, could work over time?
One might think this is an easy task: just apply Smart Portrait to every frame of the video. But not only is this tedious, the results are visually unappealing due to a lack of temporal consistency.
In Project Morpheus, we are building a powerful video face editing technology that can modify someone’s appearance in an automated manner, with smooth and consistent results.
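Adobe hasn't detailed Morpheus's actual method, but the temporal-consistency problem above is easy to see with a generic illustration: if each frame's edit parameters (say, "smile strength") are estimated independently, they jitter from frame to frame, and even simple temporal smoothing such as an exponential moving average tames it. A minimal sketch, not Adobe's technique:

```python
import numpy as np

def smooth_edit_params(per_frame_params, alpha=0.8):
    """Exponential moving average over per-frame edit parameters.

    per_frame_params: (num_frames, num_params) array of edits
    estimated independently per frame (hence jittery).
    alpha: smoothing factor; higher = smoother but laggier.
    """
    smoothed = np.empty_like(per_frame_params, dtype=float)
    smoothed[0] = per_frame_params[0]
    for t in range(1, len(per_frame_params)):
        smoothed[t] = alpha * smoothed[t - 1] + (1 - alpha) * per_frame_params[t]
    return smoothed

# Noisy per-frame estimates around a constant target value:
rng = np.random.default_rng(0)
noisy = 0.5 + 0.1 * rng.standard_normal((120, 1))
stable = smooth_edit_params(noisy)

# Frame-to-frame variation drops sharply after smoothing:
print(np.abs(np.diff(noisy, axis=0)).mean(),
      np.abs(np.diff(stable, axis=0)).mean())
```

A production system would smooth in a learned latent space and handle head motion and occlusion, but the core idea — enforcing coherence across frames rather than editing each one in isolation — is the same.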
Check it out:
I plan to highlight several of the individual technologies & try to add whatever interesting context I can. In the meantime, if you want the whole shebang, have at it!
I kinda can’t believe it, but the team has gotten the old gal (plus Illustrator) running right in Web browsers!
VP of design Eric Snowden writes,
Extending Illustrator and Photoshop to the web (beta) will help you share creative work from the Illustrator and Photoshop desktop and iPad apps for commenting. Your collaborators can open and view your work in the browser and provide feedback. You’ll also be able to make basic edits without having to download or launch the apps.
Creative Cloud Spaces (beta) is a shared place that brings content and context together, where everyone on your team can access and organize files, libraries, and links in a centralized location.
Creative Cloud Canvas (beta) is a new surface where you and your team can display and visualize creative work to review with collaborators and explore ideas together, all in real-time and in the browser.
From the FAQ:
Adobe is extending Photoshop to the web for sharing, reviewing, and light editing of Photoshop cloud documents (.psdc). Collaborators can open and view your work in the browser, provide feedback, and make basic edits without downloading the app.
Photoshop on the web beta features are now available for testing and feedback. For help, please visit the Adobe Photoshop beta community.
So, what do you think?