It’s nice to see these developments drawing a favorable response:
“By far my favorite thing announced at I/O today…” — John Gruber
“Google’s transition from a company that used to think about design the same way as it thought about human resources—as a cost of doing business—to a company that prioritizes design is remarkable, at least insofar as its products look and feel and work so much better today than they used to.” — Khoi Vinh
This transition will take plenty of work, and it will be worth it. [YouTube]
Photoshop has become a symbol of our society’s unobtainable standards for beauty. My project, Before & After, examines how these standards vary across cultures on a global level. […]
With a cost ranging from five to thirty dollars, and the hope that each designer will pull from their personal and cultural constructs of beauty to enhance my unaltered image, all I request is that they ‘make me beautiful’.
Below is a selection from the resulting images thus far. They are intriguing and insightful in their own right; each one is a reflection of both the personal and cultural concepts of beauty that pertain to their creator.
My team has built a fun little feature: Add a hashtag to your photo (e.g. #PaintUSA), share it on Google+, and we’ll automatically apply the country flag of your choice. My colleague Alex Powell (who recently joined us from Dreamworks) did this work and writes,
Take a photo of yourself and up to 5 friends who’d like their faces painted
Share it on Google+ with the hashtag #PaintUSA or any other country in the knockout round (list of hashtags)
After a few minutes check back in your stream. The faces will be painted with the country flag of your choice
If this doesn’t get you ready to take on the day… what could?
You’re about to face your greatest battle, be it your nemesis, space aliens, or just life in general. What do you do? You need to GEAR UP. Check out this awesome supercut of all the finest ‘gearing up’ scenes from movie land.
Google’s been talking about non-destructive, mobile/Web photography for a long time, but until now the benefit has been mostly theoretical: You could apply edits to your images, but you didn’t have an interface for adjusting edits or moving them among images.
The newly upgraded G+ editor (i.e. Snapseed for Mac/Windows in all but name) now shows the adjustments applied to your image, letting you tweak each one, delete it, and copy edits from image to image.
To use the new feature:
Open an image in the Google+ Web editor.
Apply one or more edits (for example, choose Black & White and then add a frame).
Click the “Edits” button in the lower-right corner.
Note that each step you’ve applied appears in the list.
To change the settings of any step, click its name in the list, then click the pencil icon.
To delete a step, click its name in the list, then click the X icon to the right of the name.
To copy the appearance you’ve applied on one image to another, click the “Copy” button underneath the list of edits, then use the arrow icons beneath the main image to move to another image, then click “Paste edits” (or “Paste” if edits already exist on the image).
The other interesting thing is that we’re starting to analyze images & then apply editable sets of adjustments. To start we’re detecting certain landscape & urban shots, then applying an interesting combination of blurs, HDR effects, and frames. I think that the combination of computer vision (being able to identify & classify image content) + application of style + editability is really promising.
Browser-compatible type rendering; live previews of cloud-hosted fonts; smarter Smart Objects & guides; slicker layer comps; enhanced automatic export through Generator… it’s all pretty damn cool. Check out this tight little demo from Paul Trani:
Having started my career as a Web designer before joining Adobe, I’m so pleased to see folks like Stephen Nielson, Tim Riot, Bradee Evans, and many others really championing the needs of designers. Onward!
So adios, Configurator, Mini Bridge, Kuler, and crew. It was real, it was fun—it wasn’t all real fun. Here’s hoping that developers take advantage of the new path to build what matters in the real world. (Cloud-backed Library panel for collaborating through linked Smart Objects, anyone?)
Homeless Fonts works with homeless people from the streets of Barcelona to translate the handwriting they use on their signs into typefaces. The hope is that advertising agencies and corporations will license the resulting works, with the proceeds going back into programs to help the homeless. The results are often distinctive and quite elegant.
…without losing any files or visual quality. 1.5GB of storage is now down to 500MB.
In Lightroom select some raw files.
Select Library->Convert to DNG.
Choose “lossy” compression.
Choose to delete the originals (scary sounding, but it shouldn’t be).
Honestly I’m thinking the misleading “lossy” option should be called “visually lossless,” because as I demonstrated the other day, there’s almost zero chance you’ll ever be able to perceive a difference between this & the lossless compression option. (You’d have to crank up a very dark photo by more than 4 exposure stops.)
Congrats to all my Adobe pals on what looks to be a killer release! (I’ve just arrived home after a week in Germany & am just catching up on videos & other materials. So much to go through—a nice problem to have!)
I’ve been fooled more than once before, but after 27 years AI has at last matched the shape-modifying chops of everything from Photoshop to PowerPoint. All good-natured ribbing aside, this should make interface designers seriously happy.
Rectangles now have quickly modifiable corners, including independent radius control. Corner attributes are retained if you scale and rotate your rectangle. Now Illustrator remembers your work — width, height, rotation, corner treatment — so you can return to your original shape.
Wow—there is literally no way this could end badly. PetaPixel writes,
Pic Nix is a free online service that allows you to subtly and anonymously call out your friends for committing the most heinous of Instagram crimes. Created by ad agency Allen & Gerritsen, using Pic Nix is simple: just enter the name of the offender, choose from their list of 16 offenses, select one of the pre-written captions and submit your request.
FiftyThree introduces Surface Pressure—an industry-first feature that uses Pencil’s uniquely-designed tip to vary the lines you create. Surface Pressure comes to Pencil with the release of iOS 8 this Fall.
When I took the original iPad to visit master digital artist Bert Monroy & artists at Pixar, I noticed how their eyes lit up (after an initially tepid response) when they used their fingers to smudge pixels. Something magical happens when tools feel true to themselves, when you’re using your fingers not as crappy fake pens but as fingers.
…and replace them with lossy DNG proxies? Would I ever see a visual difference?
A) Yes. B) No.
So, a little background:
Lightroom & the free DNG Converter added the ability to apply lossy compression when creating DNG files.
When you apply this compression, your raw data get mapped from a higher bit depth (10-14 bits per channel) down to 8 bits.
That sounds horrible (“what about my highlight & shadow data?!”), but the mapping (quantization) is done cleverly, before a perceptual curve is applied. (See nerdy footnote if interested.)
You retain the same white balance flexibility you always had.
You save a lot of disk space—between 40% & 70% in my experience. (You can also elect to save at a reduced resolution, in which case you’ll obviously save a lot more.)
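To see why quantizing in a perceptual space is so much gentler than it sounds, here’s a toy sketch (my own illustration of the general idea, not Adobe’s actual DNG pipeline; the 2.2 gamma is just a stand-in for whatever curve the real implementation applies):

```python
import numpy as np

# Simulated linear sensor values in [0, 1]: a dark ramp, i.e. shadow detail.
linear = np.linspace(0.001, 0.05, 1000)

# Naive approach: quantize the linear data straight to 8 bits.
naive = np.round(linear * 255) / 255

# Perceptual approach: apply a gamma curve first, quantize to 8 bits,
# then undo the curve. This spends more of the 256 codes on the shadows.
encoded = linear ** (1 / 2.2)
perceptual = (np.round(encoded * 255) / 255) ** 2.2

# Compare worst-case relative error in the shadows.
err_naive = np.max(np.abs(naive - linear) / linear)
err_perceptual = np.max(np.abs(perceptual - linear) / linear)
print(err_naive, err_perceptual)
```

The naive path crushes the darkest values to zero (100% error), while the gamma-encoded path keeps the relative error small across the whole ramp. That’s the intuition behind why lossy DNG can drop to 8 bits without visible shadow damage.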
What I’ve always wondered—but somehow never got around to testing—is whether I’d be able to see any visual differences between original & proxy images. In short, no.
Here’s how I tested:
I started with a typical photo taken by my wife—one with really under- and over-lit areas.
I imported the original file into Lightroom, then exported a copy as DNG with lossy compression, then imported the copy back into LR (so that the original & proxy would sit side-by-side).
Just to stress-test, I cranked up the Shadows to +100 and cranked down Highlights to -100.
Then to stress things further, I used a brush to open up the shadows by another full stop.
I copied & pasted settings from the original to the proxy.
Having failed to notice any visual differences, I finally opened the original & proxy versions as layers in Photoshop. I set the blending mode of the top layer to Difference in order to highlight any variation between the two versions.
Even then I couldn’t see any difference (the Difference result appeared to be pure black, indicating identical pixels), so I applied an Exposure adjustment layer and—just for yuks—cranked it up 14 stops.
I repeated the experiment with other images, including some with subtle gradients (e.g. a moonrise at sunset). The results were the same: unless I was being pretty pathological, I couldn’t detect any visual differences at all.
I did find one case where I could see a difference between the lossy & lossless versions: My colleague Ronald Wotzlaw shot a picture of the moon, and if I opened up the exposure by more than 4 stops, I could see a difference (screenshot). For +4 stops or less, I couldn’t see any difference. Here’s the original NEF & the DNG copies (lossless, lossy) if you’d like to try the experiment yourself.
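For the curious, the Difference-blend check described above is easy to approximate numerically. Here’s a sketch using hypothetical pixel arrays (stand-ins, not my actual test files):

```python
import numpy as np

# Stand-ins for the original raw render & the lossy-DNG proxy render,
# as 8-bit RGB pixel values (hypothetical data).
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(4, 4, 3), dtype=np.int16)
proxy = original.copy()  # identical pixels, as in my side-by-side test

# Photoshop's Difference blend mode is the per-channel absolute difference.
difference = np.abs(original - proxy).astype(np.uint8)

# Pure black means the two renders are pixel-identical...
print(difference.max())  # 0

# ...and even a +14-stop Exposure boost (multiply by 2**14, then clip)
# can't reveal a difference that isn't there.
boosted = np.clip(difference.astype(np.int64) * 2**14, 0, 255)
print(boosted.max())  # 0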
No doubt a lot of photographers will tune out these findings: “Raw is raw, lossless is lossless, the end.” Fine, though I’m bugged by some photogs’ fetishistic, gear-porn qualities (the kind of guys who insist on getting a giant lens & an offsetting full-frame camera) & old-wives’ mentalities (“You can’t reformat your memory card with your computer: this one time, in 2003, my buddy tried it and it made his house burn down…”).
So, to each his or her own. As for me, I’m really, really encouraged by these findings, and I plan to start batch-converting my DNGs to be “lossy” (a great misnomer, it seems).
Nerdy footnote: Zalman Stern spent many years building Camera Raw & now works with me on Google Photos. He’s added a bit more detail about how things work:
“Downsampling” is reducing the number of pixels; reducing the bit depth is “quantizing.” The quantization is done in a perceptual space, which results in less visible loss than doing it in a linear space. Raw sensor data is linear, whereas the data going into a JPEG has a perceptual curve applied. (“Gamma” and sRGB tone curves are examples of such perceptual curves.)
Dynamic range should be preserved and some small amount of quantization error is introduced. (Spatial compression artifacts, as in normal JPEG, are a different form of quantization error. That happens with proxies too.) Quantization error is interesting in that if it is done without patterning, it takes a very large amount of it to be visible.
The places you’d look for errors with lossy raw technology are things like noise in the shadows and patterning via color casts in highlights after a correction. That is, the quantization error gets magnified and somehow ends up happening differently for different colors.
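Zalman’s point about unpatterned error taking a lot to become visible can be illustrated with a toy example: coarsely quantizing a smooth gradient produces error that’s correlated with position (visible banding), while adding a little random dither first turns the same error budget into unstructured noise. A sketch of the general principle, not the actual DNG implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# A smooth gradient, quantized very coarsely (4 bits) to exaggerate the effect.
gradient = np.linspace(0.0, 1.0, 10000)
levels = 15

# Plain quantization: the error forms a sawtooth locked to the signal (banding).
plain = np.round(gradient * levels) / levels

# Dithered quantization: add one quantization step of random noise first,
# so the error loses its spatial pattern despite similar magnitude.
dither = (rng.random(gradient.size) - 0.5) / levels
dithered = np.round((gradient + dither) * levels) / levels

# Measure how "patterned" the error is via correlation of adjacent samples.
def adjacent_correlation(err):
    return np.corrcoef(err[:-1], err[1:])[0, 1]

plain_corr = adjacent_correlation(plain - gradient)
dithered_corr = adjacent_correlation(dithered - gradient)
print(plain_corr, dithered_corr)
```

The plain error is almost perfectly correlated from sample to sample (that structure is exactly what the eye picks out), while the dithered error is essentially uncorrelated noise, which the visual system tolerates in far larger amounts.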
From 14-year-old dropout to mountain adventurer to NatGeo cover photographer, Cory Richards gives a lightning tour of his life while meditating on how photography connects us. PopPhoto notes,
There are not many people who would survive a deadly avalanche, and then get up and keep climbing. There are fewer still who would keep taking photos. But that’s exactly what Richards did, and it’s part of what makes his photography so impressive.
Treat yourself to five minutes of gorgeous aerial photography from Thailand courtesy of Philip Bloom together with After Effects (lens correction), Colorista (color correction), a Phantom 2 drone (featuring prop guards while buzzing young children), and a GoPro. (More info is in Philip’s blog post.)
[T]he movie is the result of a two month-long contest that invited kids ages 5-18 to submit videos on KIDZBOP.com that showcase their action moves and healthy eating habits in one of nine different scripted scenes. […] More than 1,300 kids submitted videos and auditioned for the film, and over 5,000 kids cast their vote to select real kids from across the nation to star in G.A.M.E. alongside Ryan Ochoa.
Even as a fairly minor work in the director’s incredible canon, this video for Metronomy is fun. The Fox Is Black writes,
Shot in a single-take, the video sees the band stuck inside a painted set while a camera rotates around the outside. In typical Gondry fashion the video is fun, playful and filled with the director’s distinctive sense of whimsy.
For our 7th annual Doodle 4 Google competition, we asked kids, grades K-12, to draw an invention that would make the world a better place. Out of more than 100,000 submissions, 250 state finalists, 50 state winners, and 5 national age group winners, we are excited to present the 2014 Doodle 4 Google winner: 11-year-old Audrey Zhang of New York!
Audrey won a $30,000 college scholarship from Google, which also gave a $50,000 tech grant to her school, Island Trees Middle School in Levittown, New York. Google also donated $20,000 in Audrey’s name to provide clean water and bathrooms at 10 schools in Bangladesh.
a clever application that lets you change your facial features, add funny effects, or transform your face into a 3D character in real-time… [It] offers tools that let you give yourself eyes of a different color. Or you can make yourself look all-around better, with specialized filters that let you soften your skin, give yourself a thinner face, change your chin and neck shape, change your nose size, make your eyes bigger, and even remove your pimples.
Stuff like this is, in my long Photoshop experience, a cultural minefield. What do you think?
Thus I’m delighted that in iOS 8 Apple is adding the ability for apps to provide one another services. The news reminded me to re-read what I wrote three years ago when requesting just this:
Poor integration leads to bloated apps: if jumping among apps/modules is slow, customers gravitate towards all-in-one tools that offer more overall efficiency, even if the individual pieces are lacking. […]
Remember the promise of OpenDoc? Despite all its well documented faults, I still love the idea of assembling a dream team of little parts, each the best in its class for doing what I need. […]
Why did Photoshop 1.0 succeed? It offered excellent (and focused) core functionality, plus a simple extensibility system that enabled efficient flexibility (running a filter brought no need to save, navigate, re-open, etc.). The core app could remain relatively simple while aftermarket tuners tailored it to specific customer needs.
With this support coming to iOS (and already on Android & Windows), I think all our lives (as app users) and my life (as an app developer) are about to get a lot more efficient & interesting. We shall see.
Your Lightroom Catalogs can be stored anywhere. This means that, so long as you know you’ll have Internet, they could technically be stored on Dropbox, Google Drive, Creative Cloud, or the upcoming iCloud Drive to make for easy syncing across computers.
“Automatically write changes to XMP” preference makes sure that edits in Lightroom carry over to other Adobe programs such as Photoshop and Bridge.
“Optimize Catalog” command removes unused caches to speed up catalog performance.
Advice on how to make the most of Adobe’s impressive Smart Previews feature.
Move media across your drive directory using only Lightroom, rather than Windows Explorer or Finder, both of which can leave data behind.
Photographer Alexander Khokhlov and Makeup Artist Veronica Ershova worked on a project that transforms regular women into magical works of art… Despite some of the photos looking like drawings, they are all actual women with their makeup perfectly done to capture the vision of the artists.
Helping get this stuff into Photoshop remains one of my proudest accomplishments. It remains cryptic & obscure to more people than it should, however. Here’s a quick primer to get you working more flexibly:
I’m delighted to say that we’ve rewritten the Snapseed editing pipeline from the ground up, making it non-destructive & setting the stage for a really exciting future. Just yesterday it arrived on iOS inside the new Google+ app (which, by the way, offers to back up all your photos & videos for free). Engineer Todd Bogdan writes,
Easily perfect your photos with a powerful new editing suite in the Google+ app for iPhone and iPad. With these Snapseed-inspired tools you can crop, rotate, add filters and 1-tap enhancements like Drama, Retrolux, and HDR Scape, and more. Add a personal touch to your photos, then easily share them with friends and family. As an added bonus: you can start editing on one device, continue on another, and revert to your originals at any time!
The overall workflow is a work in progress (e.g. right now you don’t get an interface for re-editing your adjustments), but stay tuned: we’re starting to cook with gas.
For years at Adobe I’d joke, “If we’d come up with the idea for Instagram, we still wouldn’t have shipped it, because we’d still be debating, ‘Hmm—do you think we need 16 sliders per filter… or is it more like 32?’” The idea of no sliders at all makes people feel more confident, because you can’t feel too responsible for getting things really ‘right.’
If the new effects feel a bit buried in the editing flow, that’s the point. Systrom tells me “I believe that flexibility and simplicity are often at odds.” So instead of cramming the features into the main composition flow, they’re hidden behind the wrench so hardcore users can dig them out, but they don’t complicate things for casual users.
It’ll be interesting to see how people respond to these.