Yesterday’s discussion of Smart Filters made me think that it’s worth writing up some thoughts on Smart Objects & the future of compositing in Photoshop in general.
I have a hypothesis, at least as regards Photoshop: flexibility generally breeds indirectness. That is, when you step away from the familiar world of applying pixel tools directly to a plane of pixels, you introduce complexity. Whether or not that complexity is worth accepting depends on bang for the buck.
Photoshop has always had a great advantage over other Adobe apps: to start using it, you can simply open an image, then start mashing away with the brushes, eraser, filters, etc. It’s pretty immediate gratification. Compare that to a program like Illustrator, where the price of admission is that you must gain at least some understanding of vector art. ("It’s like pounding nails and wrapping rubber bands around them," people have said since the beginning.) And in Flash, you immediately have to learn about symbols & the nature of parent/child relationships.
Of course, the price of that simplicity & directness is a lack of power and flexibility, and with each version people have (rightly) demanded more freedom. Layers were the first big step in this direction, and subsequent releases added text layers, shape layers, and layer sets. Each of these has added its own particular method of editing & removed some of the directness that you’d have just editing a single layer.
And now we come to a real crossroads, where Smart Objects represent a kind of "magic box" into which you can place anything you’d like, and to the outside of which you can apply a lot of things non-destructively (filters, scaling, rotation, warps, blending). The greater the flexibility, the greater the number of ways in which you can edit the content. But by and large you can’t have the same kind of directness you had before.
When framed in this context, it makes sense that Smart Objects (and non-destructiveness in general) would generate a good deal of existential angst & require some hard work to get right. I mean, there’s simply more cognitive overhead attached to "This blob of pixels is actually comprised of 10 layers, treated as a stack, that have been scaled down, warped, and had three filters applied" than there is to "These pixels are these pixels, period."
We’ve done kind of a classically "Photoshop" thing here. That is, instead of doing something narrow–a specific solution to one specific problem–we’ve opted to build a foundation for doing a million and one interesting things. (Think, "The power of AND, not the tyranny of OR.") The trade-off is that the solution has some limitations, at least for now, that a more narrow solution might not.
Smart Objects feel weird to a fair number of Photoshop users because you can’t simply grab a brush, eraser, etc. and start editing them directly. Instead, you edit the contents in a separate window, then hit Save to update it in the parent document. Why is that? The answer is that it’s really tricky to enable directly editing things inside the magic box (or black box, if you prefer).
For example, let’s say we wanted to make it possible to scale pixel layers up and down without loss–and that’s it. I can imagine it being possible to modify the various painting tools so that they’d have a straight X/Y transformation applied (scaling up or down) as you painted. To enable precise pixel-editing, you’d have to find some creative way of respecting the native scale of a layer & letting people edit directly. For instance, maybe you’d require that the tips of brush tools scale up visually (as they do today when you zoom in) in order to match the scale of the source pixels. I don’t know, but it could be done.
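Purely as an illustration (a minimal sketch of the idea, not anything from Photoshop’s code; the function names and numbers are invented), the bookkeeping for that scale-only case might look roughly like this: every paint event gets mapped back through the inverse of the scale before it touches the source pixels, and the brush tip gets rescaled the same way.

```python
# Illustrative sketch only -- not Photoshop's actual code. Assumes a layer whose
# source pixels are displayed through a uniform scale factor and an offset, and
# shows how a paint event on the canvas could be mapped back to source space.

def canvas_to_source(x, y, scale, offset=(0.0, 0.0)):
    """Map a canvas-space paint location to the layer's native (source) pixels."""
    ox, oy = offset
    return ((x - ox) / scale, (y - oy) / scale)

def brush_in_source_space(canvas_radius, scale):
    """The brush tip must also be rescaled so a 12 px dab on screen
    touches the right number of native pixels."""
    return canvas_radius / scale

# A layer placed at 50% with its top-left corner at canvas (100, 40):
sx, sy = canvas_to_source(160, 90, scale=0.5, offset=(100, 40))
print(sx, sy)                          # -> (120.0, 100.0) in native pixels
print(brush_in_source_space(12, 0.5))  # -> 24.0 px brush in source space
```

Because a uniform scale is exactly invertible, that mapping is unambiguous, which is what makes the scale-only case seem tractable in the first place.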
But now, what if you wanted to let people warp non-destructively? The task of editing pixels directly gets a good deal harder. For example, let’s say you peeled back a corner of a layer and made it meet up with the opposite corner, then started painting a horizontal stroke. As you dragged from left to right, what would you expect the paint to do? Would you expect the horizontal stroke to remain horizontal as you painted, or should the cursor go off in some different direction, separate from your mousing?
What if you applied a filter to the layer, then started painting it? Using some filters, you could have a pretty reasonable experience (performance aside). That is, you could click on a certain location and be pretty confident that you were editing that location. But what if you ran Displace? Or Emboss, or Polar Coordinates? What would it mean to click on a certain spot? Are you painting the "before" spot or the "after" spot? So, it seems either we’d do something wacky in some cases, or Photoshop would prevent you from editing directly if certain filters were applied.
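To make the "before spot or after spot" question concrete, here is a toy sketch (again invented code, not Photoshop’s): a Polar Coordinates-style remap has a forward mapping and, in principle, an inverse you could paint "through," whereas neighborhood filters like Emboss or Displace don’t reduce to one source pixel per output pixel, so there is no single clean answer.

```python
# Toy illustration, not Photoshop code: a "polar coordinates"-style remap.
# For a pure coordinate remap you can (in principle) invert the mapping and
# paint "through" it; for neighborhood filters like Emboss or Displace there
# is no single source pixel behind an output pixel, so "which spot am I
# painting?" stops having one answer.
import math

def rect_to_polar(x, y, cx, cy):
    """Forward map: where does source pixel (x, y) land in the polar output?"""
    r = math.hypot(x - cx, y - cy)
    theta = math.atan2(y - cy, x - cx)
    return (r, theta)

def polar_to_rect(r, theta, cx, cy):
    """Inverse map: which source pixel produced output location (r, theta)?"""
    return (cx + r * math.cos(theta), cy + r * math.sin(theta))

# Painting at output location (r=50, theta=pi/4) would really be editing:
print(polar_to_rect(50, math.pi / 4, cx=100, cy=100))
# -> roughly (135.36, 135.36) in the *source* layer
```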
What if you want to manipulate Smart Object content that isn’t natively comprised of pixels (say, vectors from Illustrator, or raw camera data)? What would it mean to paint directly onto it? Not much, I’d argue, in which case we’d have to come up with a different editing mechanism, and we’d have one of those "yes-but" situations: yes, you can edit non-destructively transformed layers directly, but not if X, Y, Z, Q, R, or S is present… I think that would be lamer than having to edit the content in a separate window.
Is that making sense? We could make the very simple case (scale only) work, but only at the cost of either not implementing other capabilities, or doing them through some other system, or of making the system work sometimes and not others. The nice thing about the general, "classically Photoshop" solution is that there is a consistent set of things you can do to the outside of a Smart Object (scale, rotate, warp, filter, mask, adjust blending), and what you do to the inside is governed by whatever content is inside. The single system can work the same way for all kinds of content, regardless of its native form (pixels, vectors, raw, Flash, SVG, data feed from Flickr… whatever Adobe or plug-in developers could enable placing inside Photoshop).
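For the architecture-minded, the "consistent outside, content-defined inside" idea can be sketched like this (hypothetical names, nothing to do with Photoshop’s actual internals): a wrapper owns the non-destructive operations and doesn’t care what kind of content renders the source.

```python
# A minimal sketch of the "magic box" idea -- hypothetical names, not
# Photoshop's internals. The wrapper applies the same non-destructive
# operations no matter what kind of content renders the source pixels.
from abc import ABC, abstractmethod

def rasterize(paths):
    # Stand-in rasterizer so the sketch stays self-contained.
    return [[0]]

class SmartContent(ABC):
    """Anything that can live inside the box: pixels, vectors, raw, ..."""
    @abstractmethod
    def render(self):
        """Return a pixel buffer at the content's native size."""

class RasterContent(SmartContent):
    def __init__(self, pixels):
        self.pixels = pixels
    def render(self):
        return self.pixels

class VectorContent(SmartContent):
    def __init__(self, paths):
        self.paths = paths
    def render(self):
        return rasterize(self.paths)

class SmartObjectWrapper:
    """Owns the operations applied to the *outside* of the box."""
    def __init__(self, content):
        self.content = content
        self.transform = None   # scale / rotate / warp, applied to the output
        self.filters = []       # non-destructive filter stack
        self.mask = None

    def composite(self):
        buf = self.content.render()       # inside: the content decides
        if self.transform:
            buf = self.transform(buf)     # outside: always the same steps,
        for f in self.filters:            # whatever the content's native form
            buf = f(buf)
        if self.mask:
            buf = self.mask(buf)
        return buf
```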
So, in a nutshell, we’ve sacrificed ease/directness in one case in order to open a lot of other doors. It’s a trade-off, to be sure, but it seemed better than any alternatives we could devise.
Is the added complexity worth the cost? Well, for one thing it’s complexity you can ignore if you don’t need it, just as you can avoid making layers if they’re not valuable to you. We can also make the equation more favorable by adding more bang for the same buck. That is, we can improve what Smart Objects can do, and once you learn how they work, you can take advantage of more things without learning much more.
I should close by saying that I don’t intend this entry as a way of saying we can’t or won’t do better. We can, and we will. But I thought it would be interesting to raise some of the conceptual issues that make non-destructive editing a challenging–and rewarding–aspect of Photoshop.
It’s interesting to consider that people are starting to give Smart Objects a good run-through with Photoshop CS3 now that it offers non-destructive filters. It makes me wonder how many users totally overlooked them in CS2. But being a user experience designer, I can totally relate to finding that the simple design solution, while limiting at times, is usually the more successful approach. In fact, limiting possibilities to a certain scope tends to help tremendously in getting users to understand something well enough to actually use it.
One thing you alluded to in the previous post on Smart Filters is Smart Objects being able to link files as an option over just embedding them. I must admit that I originally thought this capability existed in CS2. I was about to get an entire design team to take advantage of it when I sadly discovered it wasn’t the case.
(The team was working on web design comps for a large company about to release a major new OS and the global nav had been approved. So I had the brilliant idea to use a Smart Object for the global nav and have the design team link to this in all their Photoshop source files. Then if the global nav had minor changes, all the varied design comps would automatically update. Dream a little dream, right?)
So if you can and wouldn’t mind, can you share what challenges exist in making my Smart Object linking dream a reality?
[I don’t think it’s so much a technical or conceptual challenge to make this work, George, as it is simply a matter of time. We’ll get there, I think, but it just hasn’t been a high enough priority–particularly when we have yet to do things like enable linking between SO’s & their masks (a drag, I know).
As it happens, I think you can more or less do what you want today (that is, update a Smart Object when a file on disk changes). The only hitch relative to linking in other apps like Illustrator is that in Photoshop the process is manual: select an SO, choose Layer->Smart Objects->Replace Contents, then browse to the file on disk and hit OK. It’s just that this process isn’t automatic, and PS isn’t checking to see whether the external file has changed. Hopefully that helps at least a bit. –J.]
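An aside on that manual step: since Photoshop isn’t watching the linked file, a team could at least get a nudge when the shared file changes with something as simple as a polling script. This is generic file watching, not a Photoshop API, and the filename below is made up.

```python
# Not a Photoshop feature -- just a hedged workaround sketch. Photoshop (as of
# this post) doesn't watch the linked file, so this polls the file's mtime and
# reminds you to run Layer > Smart Objects > Replace Contents by hand.
import os
import time

def watch(path, interval=5.0):
    last = os.path.getmtime(path)
    while True:
        time.sleep(interval)
        current = os.path.getmtime(path)
        if current != last:
            last = current
            print(f"{path} changed -- run Replace Contents on the Smart Objects "
                  "that reference it.")

# watch("global_nav.psd")   # e.g. the shared nav comp from the comment above
```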
By adding more choices for getting to the same destination, Photoshop became more complex faster than it became more powerful. It is the lack of guidance about which of the many approaches to use that presents such a steep hurdle. This is especially true as Photoshop is branching out to markets other than pixel artists.
[These are reasonable points, although Photoshop has always been used by a wide variety of customers/markets. But you’re right: there’s very little in the app itself that shows you the constellations among the stars, so to speak. That’s where authors, trainers, friends, trial & error, etc. come in. We have many ideas on making this better, and we continue to lay groundwork (customizability, Flash support, workspace improvements, etc.). But getting there is a multi-cycle journey. –J.]
As far as the indirectness of Smart Objects goes, I would like to see you explore “unrolling” objects for the purpose of directly editing the original. This would be almost like taking the original step out of the modifications pipeline and working on that. Otherwise you eliminate the all important direct feedback loop that you talk about in your first paragraph.
[There are certainly some good ways that we can improve context & feedback. –J.]
In fact this would go across all CS3 apps as needed if the original file came from another app. The pipeline should be able to cross CS3 apps as needed. The pipeline would also be visible, so that at any time the order or effect of treatments can be changed.
This of course would require a revamp of Adobe pro apps into re-combinable modules and a consistent cross-app UI that ties them all together. Based on past history I doubt Adobe can pull this off for real.
[Well, that’s where we part ways. :) –J.]
Great posts about SO. They made the concept clearer to me.
The only thing I was unable to get is why it is so hard to link the layer mask of the SO with it. Can we have a post about this? Or, as you say Adobe is high on this, can we have a little hope that it will make it into the final CS3?
[Good questions. I will consult with some guys after the holiday break and see what other info I can provide. Frankly, though, I’m not sure how useful it would be to go into the technical details. Customers probably should say, “Well, you’re a bunch of smart guys; figure it out.” –J.]
The workaround I use today is to have an SO inside a group and put a layer mask on the group, so the mask is applied only to the SO inside, but it’s not an intuitive process, and it takes more time than just linking the mask to the SO.
Oh, and by the way, keep on blogging, you have a very nice style of text!
[Thanks! –J.]
Hi, I’m really glad to see non-destructive filters in PS, and I wonder if you have any insight into the technical differences between Fireworks Live Effects and Smart Filters.
[Great question, and I’ll have to do some thinking/legwork to answer it. Offhand I see that FW does let you paint/erase directly onto a layer that has Live Effects applied to it, but it then wants to re-compute the effects after each brush stroke. Smart Objects in PS are actually quite similar to symbols in Flash and Fireworks. –J.]
I may be one of the few people who did, but I actually used some of the web features in ImageReady for quick website creation (I am not a web guy), since I was very familiar with the Photoshop interface. One of the drags was that it did not support SOs. Since it looks like Fireworks will replace ImageReady, is it likely to support directly opening a PSD with an SO?
Thanks, and keep up the blogging; it’s great info.
Hi John,
Earlier in the year I had a long spell working with some photographers in southwest London on what turned out to be three jewellery (jewelry for US readers!) catalogues for Goldsmiths.
I was mainly concerned with the watches, for which Smart Objects was absolutely the way to tackle the job.
A page might consist of three to eight different watches from the same stable. Each watch would have been photographed several times with differing lighting and/or exposure, but without moving the watch. Invariably the winder would have been pulled out to ensure the hands remained in the same place (all the hands would be set to give the best angle – eight past ten). They were supported on perspex cylinders, and shot against white, with reflections.
The various images would then be layered into a composite file.
Each watch composite would have a series of masks to cater for the overall outline, the main face, the bezel (often smooth and rounded, possibly inset with jewels, which were lit with sparkle), the strap, the winder, and sometimes the hands, and the watch would be ‘created’ and retouched. This would then be grouped as cutouts: ‘watch and reflection’.
A new canvas for the final group would be created at the final size and resolution, and each watch would be brought in with its reflection and immediately made into a Smart Object. Each SO would be named (or rather reference-numbered) and placed in position according to the layout; once the final position was approved by the photographer, the reflections would be 100% desaturated and foreshortened with a grad mask. If a watch had to be moved to look as if it overlapped, then additional copies of elements were added, so that a watch might seem to be behind the face of another, yet in front of its strap.
Any balancing of colour or alterations of contrast, all these could be carried out losslessly using the Smart Objects. A JPEG was submitted to the client for comments and alterations, and any changes were carried out with ease, before final flattening.
In this way every page of watches had each item at its best, without sacrifice. It was a great way to work. The only snag was that when an SO had a mask in the final comp, I had to move it using Free Transform and then carry out ‘Transform Again’ on the mask.
So thank you very much for Smart Objects!
[Cool–thanks for the details, Rod. –J.]
I have to take issue with the false dichotomy between simplicity and power. Einstein’s famous formula is incredibly simple but quite powerful at elucidating deep concepts. Google’s search UI is flat simple but shields a powerful algorithm and background complexity. Heck, a paper clip is alarmingly simple but a nifty, clever, and powerful little gadget. As are chopsticks.
So why not Photoshop? Through thoughtful reduction of core features, modular (community-oriented) components that extend more features, and elegant UI design hiding all the exposed complexity, PS could become very simple but still retain the power.
Just do less better. And allow for more to be done equally well.
After Effects is the model. Make it work like After Effects. Smartly cache objects when the pixels are directly edited like AE does with text layers that are converted to outlines.
[Ah, but does AE let you paint right onto transformed media (e.g. erase a hole in a solid that’s warped, rotated, etc.)? I don’t think so, though I admit my AE skills have grown rusty. Letting users directly edit pixels at the output end of the pipeline is a dicey prospect, and apps (like AE) generally require you to edit the untransformed pixels, then look at the transformed result.
So, the upshot is that I think PS already *does* act like AE. But AE users are already accustomed to ways of working (nesting & so forth) that have been unfamiliar to most PS users. That’s why their addition to PS may feel odd. –J.]
thanks, John, for the details and the serious take on non-destructive workflow in photoshop. it’s a hard and very important topic in my workflow :)
yes – lots of small nuances have to be polished to make the general experience even better than it currently is in photoshop.. i really like how adobe is improving photoshop when adding serious new functionality (like smart objects) as ‘another layer’ on top of previous functionality and the way it works – if you want, you can improve your workflow and start using additional features, or you can just ignore them and the new things won’t force you to change anything.. photoshop is too widely used an application to suddenly flip core principles, and all revolutionary add-ons have to be carefully integrated.
although sometimes i scratch my head over why _adding_ completely new features which wouldn’t collide with the older photoshop workflow ends up as a plugin/filter in the end. e.g. ‘vanishing point’, ‘lens correct’ from cs2, or even the older ‘extract’: i hate how you have to work in a new window with separate controls and feel, apply your changes, get results, and have no undo (either before or after the result), no ability to fix/change some steps, and no way to save the operations, at least to reapply them later or on another layer. imho, the ‘lens correct’ transform functionality should have been added to the edit>transform commands, similar to ‘free transform’, where you can change values in the option bar, run it on multiple layers, etc. ‘vanishing point’ – in my dreams – would be a _regular_ group of tools in the toolbar, adding planes as a ‘plane layer’ to the main layer palette, able to be saved into a normal psd/psb file, and as for usage – combining the new cs3 cloning improvements with an additional option to either clone normally or use some ‘plane layer’.
as i’m on the way to spill over what weighs the way i work, let me add:
‘lens correct’ behaves differently from any other filter, and i think it’s a bug: after applying it, you won’t find it under filter>last filter (nor does the last-filter shortcut work), nor does starting it with ‘alt’ pressed load the parameters from the last apply – i’d use this frequently and always trip over this bug.
‘free transform’ warp mode: that 3×3 grid limitation bothers me – couldn’t we get an option to freely choose at least from 2 to 9 divisions per axis? also, point/tangent translation works differently from the normal bezier curve implementation in illustrator or photoshop’s pen tool: it needs manual tweaking of each point by hand, since every point/tangent is treated individually – e.g. clicking/moving a point and having the tangents offset relative to it isn’t possible. make it an option/shortcut at least. the ability to convert a point into smooth/corner is also a must.. so in general, ‘envelope’ functionality as in illustrator is what we need as a real warp solution in photoshop.
another thing with the distort tools i’d find useful: there are plenty of times when ‘free transform’, with its initial bounding box as a square, isn’t convenient/flexible enough. e.g. you select one side of a box in perspective and want to precisely rotate it some more – once launched, free transform appears as a planar quad, while i’d be glad if i could have an _initial/undistorted_ square with corner points closer to the selection’s, in perspective.. this isn’t any kind of magic: just add the ability to hold some shortcut and _transform_ the ‘free transform’ corners _without distorting_. releasing the shortcut and later moving the points would distort the selection based on the new initial form. ..also, if the initial grid shape could be changed for warp mode too, or it were possible to use ‘plane layers’ from my previous dream as a start shape for free transforms/distortions of the current selection – i’d kill for such abilities :)
one more: the crop tool and its ‘perspective’ option, which applies an inverse perspective transform. it’s great, but.. how many times after seeing the result have i wanted to go back and just tweak some point a little bit, yet had to start over, since stepping back gives you the document before the crop operation and getting the crop shape back isn’t possible.. could we at least get a preview of how the result is going to look, if intelligent stepping-back is too complicated? e.g. just another checkmark like ‘shield’ – if enabled, photoshop would show how the result will look if applied.
now i’m sorry for such an off-topic beginning, but all of this was about photoshop improvements – making it even more powerful without changing core ideas, replacing the older heritage, or making it look any less simple..
closer to non-destructive issues:
another limitation/bug regarding smart objects that i find a pain (the first being the ability to link a mask): somehow ‘free transform’, while it allows you to scale/rotate or even warp a smart object, fails to provide perspective distortion. the free-distort-of-corners shortcut only skews. the partial solution is to use warp mode to move the corners freely, but then all that repositioning of the damn tangent points is required..
could we get linked masks and perspective distortion for smart objects in cs3? pleease :)
as for the future of smart objects and painting onto before-or-after distortions: personally, since i understand the general smart object concept, i don’t find it too limiting that i can’t paint directly or that edits are done in a new window (although being able to edit similarly to ‘edit in place’ in flash – even if only scale/rotate/transform operations were still applied while perspective distorts, filters and anything else got ignored/disabled – would be enough for 90% of situations).
to finish that “90% enough” version: if smart filters and adjustment layers could be applied as stacks to layers (which could be regular layers or smart objects) or to layer groups (photoshop could probably flatten all the layers inside internally before processing the adjustment/filter stack), we could for instance add blur -> adjust alpha/transparency levels -> add emboss non-destructively (and with separate masks for each layer/filter/adjustment.. i said _masks_ since one mask per layer is sometimes too little, and as a workaround i just group the layer and add another mask to the group. the ability to set a mask operation like in after effects – add/subtract/difference/etc – would also be great).
i wouldn’t mind if, when i choose the non-destructive way of working and add filters/adjustments in realtime, it took more resources and/or i sometimes had to wait to proceed further. e.g. for those working in the old way, nothing would be different: you apply a filter, it calculates and modifies the layer. however, if you add a filter to a stack (it doesn’t matter whether it’s a filter or a color adjustment), photoshop would calculate the result and internally store both the native, before-any-processing layer data and the final result with all the stack manipulations applied (it’s not necessary to store before/after states between each operation. there could even be an option in preferences: either you save the full data with your document, which stores all the original “before” layer data plus the final stack result of each layer – so you just load it and get everything – or you save only the original “before” data and get a smaller file, but once opened photoshop would “render” the stacked processes for all layers and create a cache for further modifications). if the user decides to paint, edit text, etc. – i.e. change the original layer data – the stacked filters/adjustments would get temporarily disabled and re-calculated again after the changes (the more operations you add, the longer you’d have to wait while photoshop processes all the operations from the updated “real” contents of the layer). changing a filter’s parameters in the middle of the stack would work similarly: photoshop would just calculate all the operations from the original layer data to show the result. there could be a new option, similar to the ‘preview’ checkmark that exists in every filter, controlling whether the user wants to see the final result of the complete stack or only up to the current operation. for example, with that blur>levels>emboss stack: while changing levels it’d sometimes be useful to see how the final emboss will look, or only the blur>levels result.
actually, all these ideas are very close to how after effects works with layers and filters (they should add the ability to rasterize the result to layer data, plus all those pixel-pushing photoshop tools, selections, real free transform, and cmyk mode, and they could call it the new decade’s “after photoshop v11” :)
so basically you just have layer data and (re)process it if/when needed. the only question is flow priority: masks>filters>transforms or transforms>filters>masks, which makes a huge difference, and such control is the main thing i miss in after effects compared to real node-based compositing applications.
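An illustrative aside on the stacking idea described above (a sketch only, not how Smart Filters are actually implemented): keep the untouched source plus an ordered op stack, cache the rendered result, and invalidate the cache whenever the source or the stack changes. The toy example at the end also shows why the flow priority matters: downsample-then-blur and blur-then-downsample give different results.

```python
# Sketch of the commenter's idea above -- not how Photoshop implements Smart
# Filters. The layer keeps its untouched source plus an op stack; the rendered
# result is cached and recomputed only when the source or the stack changes.
# The example also shows that operation order matters.

def box_blur(sig):
    # simple 1D box blur (width 3, clamped edges)
    n = len(sig)
    return [(sig[max(i - 1, 0)] + sig[i] + sig[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def downsample(sig):
    # keep every second sample
    return sig[::2]

class NonDestructiveLayer:
    def __init__(self, source):
        self.source = list(source)
        self.ops = []            # ordered stack of callables
        self._cache = None

    def add_op(self, op):
        self.ops.append(op)
        self._cache = None       # stack changed -> invalidate the cache

    def edit_source(self, new_source):
        self.source = list(new_source)
        self._cache = None       # "real" contents changed -> re-render later

    def render(self):
        if self._cache is None:
            buf = list(self.source)
            for op in self.ops:  # the order of the stack is significant
                buf = op(buf)
            self._cache = buf
        return self._cache

sig = [0, 0, 10, 0, 0, 10, 0, 0]
a = NonDestructiveLayer(sig); a.add_op(box_blur);   a.add_op(downsample)
b = NonDestructiveLayer(sig); b.add_op(downsample); b.add_op(box_blur)
print(a.render())   # blur, then downsample
print(b.render())   # downsample, then blur -- a different result
```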
there’s a short reply to yesterday’s comment:
Mr. Adobe: have a look at Shake, and learn on how to manipulate pixels and colors. [Does Shake let you paint directly onto filtered pixels? I don’t know of any apps that do, but I’d be happy to look at some examples. –J.]
in general – yes, you can add a ‘paint’ node and get the ability to paint, clone, erase, etc. on the result calculated upstream of the paint operation. John, if you’ve never seen a real node workflow in detail, grab a shake trial from apple.com and check it out yourself. the general concept of node trees is very easy to grasp – you don’t need to learn what each particular node/operation does or build complex trees to make hollywood magic, but you should feel the power and unrivaled flexibility of such a way of working. i’m not pushing for photoshop to become this (a better after effects would be :) but it’s a very strong example of pure non-destructive image manipulation..
wheew.. sorry for such a long post..
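For readers who haven’t seen the node-graph workflow mentioned in the comment above, here is a toy sketch of the idea (invented code, unrelated to Shake’s or Photoshop’s internals): a paint node stores its strokes as data and re-applies them to whatever its upstream node produces, which is how painting "on top of" filtered pixels stays non-destructive.

```python
# A toy node graph in the spirit of the Shake-style workflow described above --
# illustrative only, no relation to Shake's or Photoshop's actual code.

class Node:
    def __init__(self, *inputs):
        self.inputs = list(inputs)
    def evaluate(self):
        raise NotImplementedError

class Source(Node):
    def __init__(self, pixels):
        super().__init__()
        self.pixels = pixels
    def evaluate(self):
        return [row[:] for row in self.pixels]   # fresh copy each evaluation

class Brighten(Node):                            # stand-in for any filter node
    def __init__(self, upstream, amount):
        super().__init__(upstream)
        self.amount = amount
    def evaluate(self):
        img = self.inputs[0].evaluate()
        return [[v + self.amount for v in row] for row in img]

class Paint(Node):
    """Strokes are stored, not baked: re-applied whenever upstream changes."""
    def __init__(self, upstream):
        super().__init__(upstream)
        self.strokes = []                        # [(x, y, value), ...]
    def dab(self, x, y, value):
        self.strokes.append((x, y, value))
    def evaluate(self):
        img = self.inputs[0].evaluate()
        for x, y, value in self.strokes:
            img[y][x] = value
        return img

src = Source([[10, 10], [10, 10]])
graph = Paint(Brighten(src, amount=5))
graph.dab(0, 0, 0)                  # paint over the already-filtered result
print(graph.evaluate())             # [[0, 15], [15, 15]]
```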
The SO concept sounded amazing to me when first reading about CS2, but I haven’t used it much because of the transformation limitations (no perspective/distort). I was/still am hoping to see this in CS3.
[Ah, I wish I could hold out hope, but it’s not planned. Its absence is a drag, I know. I will mention it one more time, but the best dude to do the work has been ill. –J.]
Using smart filters with Lens Distortion we can kind of get this, but it’s not as intuitive when trying to align multiple images.
This leads to the “Auto-Align Layers…” feature. It’s good to see this addition (and that the SIFT algorithm has made its way into Photoshop). At the same time, as convenient and powerful as the auto functions are, we all know times when we need to control adjustments more tightly. To this end, I’m surprised that Free Transform hasn’t added a means to select the 4 adjustment points that define the homography for the transformation (i.e. instead of making us use the layer’s bounding box corners, which sometimes aren’t even over actual pixel content). This would be a killer feature. Better yet, add the ability to control lens distortion here as well. Free Transform is a special case already, so it doesn’t seem like it would do harm to move this out of a single-layer modal filter dialog.
All the PanoTools derivative programs work great if you’re rotating your camera around or close to the nodal point. This is also where Auto-align layers works best. But people don’t always take pictures this way and often for good reason; parallax can be used in many interesting ways despite being more often thought of as an error.
Great work on the SO and especially the new smart filters. The insight into the decision process for all this is helpful and interesting!
[Cool; thanks for checking it out. –J.]
I just posted a comment on Shake (as mentioned above) to yesterday’s post. The short line is: yes, it works, and yes, there are limitations.
However, it made me realise that all I really want is live sharpening (and possibly blurring). Realistically, a direct Adjustment Layer style Sharpening Layer is the only thing that would directly and practically improve my workflow, and even that might be limiting (since I tend to sharpen channels separately).
I appreciate the thought that’s gone into Smart Objects, but I’m not too comfortable with converting my entire, HUGE multi-layered composites to Smart Objects, just to gain live sharpening.
Related question: are Smart Objects really as robust as normal layered files? The extra Save step to commit changes has always given me chills…
[Ben, I don’t think you’ll have any problems with Smart Objects. They are PSB files that live inside your PSD or PSB file. You can always choose to save that file out separately.
Anyway, I hear you about the overhead tied to Smart Objects. We could offer the option not to store that data inside the saved file, but you’d have to be cool with re-rendering it (e.g. applying warping, filtering, whatever) on the fly when opening it, and you’d have to live with reduced compatibility with other apps, older versions of PS, and even copies of PS that don’t have the same filters installed. I do think it’s reasonable to offer the option to make that trade-off, but it’s not something we could undertake for this rev. –J.]
“After Effects is the model. Make it work like After Effects. Smartly cache objects when the pixels are directly edited like AE does with text layers that are converted to outlines”.
Actually, After Effects’ paint engine (the one area in which AE operates on the same playing field as PS) requires you to open the layer in a separate window from the main composition. The reason for that, of course, is that it could become unbearably slow if it worked in the main comp panel, in the context of a fairly complex composition. So Smart Objects are very AE-like in this regard. There are other functions in AE, like motion tracking, which also require opening the layer in a separate panel.
I don’t understand what you meant by caching text converted to outlines. When AE 7 converts text to outlines it simply creates a solid, with as many masks as needed to reproduce the original vector outlines for the type. No caching of any kind.
dd – you can already use Smart Objects with multiple layers (or other Smart Objects). Smart Objects don’t place any limit on the contents of the child document/file. If you have the layers in the parent document, just select the ones you want and Convert to Smart Object. Or you can place a file that you’ve already saved with all the layers.
Plugins: sometimes it’s much easier to test new code and new ideas in a plugin framework instead of the main application. And some of those operations require so many custom controls that having them run inside Photoshop would make a really dreadful UI. Thus, we isolate them in their own window. At that point, it almost doesn’t matter that they’re plugins….
Also, plugins are a whole lot easier to patch/dot release than the whole application. And plugins load/unload on demand, so their large chunks of code don’t impact the main application.
Warp: we just haven’t had time to increase the functionality. Getting it working right for the simple (single patch) case took longer than expected (lots and lots of edge cases).
Also, node/graph based editing confuses the heck out of photographers, printers, artists, illustrators, etc. The people who seem to grasp node based editing are the ones who have some idea of the operations going on “behind the scenes”. Yeah, we’ve tested the UI, and it didn’t fare well.
Ben – blurring and sharpening filters as adjustment layers lead to the performance problems that John already mentioned.
We’ve tried it, and it was too @#$%@#%$ painful to inflict on our customers (you should have seen the colorful comments from our alpha testers).
Adolfo – yes, it is quite a bit like AE.
We looked at a lot of existing approaches in different applications, plus a few new ideas, and weighed the advantages and problems of each.
(well, that plus lots of hair pulling, screaming in frustration, gnashing of teeth and long discussions in John’s office).
I am encouraged that you are discussing this, because I have been praying for a Photoshop replacement. Something with half the power but much simpler. Some of the choices in how tools work in Photoshop baffle me. The crop tool in Motion versus Photoshop highlights what could have been.
Illustrator is near unusable, but I use it. Reluctantly.
Once again, a great read.
I’ve been teaching Smart Objects in a number of venues since they came out… I get the resistance and confusion a lot, thus the book outlining a “foundation” workflow.
It’s funny: I usually start the class with, “While you’re in THIS class, you’re doing it MY way.” :)
Thanks again, John!
One of the really disappointing things with PSCS4 is that “Convert to Smart Object” still seems to be single-threaded. Is there a technical reason for this or is this something Adobe just hasn’t got around to updating? I’m often converting the luminosity component of large film scans to a SO for sharpening etc and it takes forever on my 8-way Mac Pro.
The conversion to smart objects can’t use threads for much – it’s just doing some in-memory file conversions. And the compositing/transform parts already use threads. Smart Filters also use threads, just like normal filters.