“People tend to overestimate what can be done in one year and to underestimate what can be done in five or ten years,” as the old saying goes. Similarly, it can be hard to notice one’s own kid’s progress until confronted with an example of that kid from a few years back.
My son Henry has recently taken a shine to photography & has been shooting with my iPhone 7 Plus. While passing through Albuquerque a few weeks back, we ended up shooting side by side—him with the 7, and me with an iPhone 12 Pro Max (four years newer). We share a camera roll, and as I scrolled through, I was really struck by the output of the two devices placed side by side.
I don’t hold up any of these photos (all unedited besides cropping) as art, but it’s fun to compare them & to appreciate just how far mobile photography has advanced in a few short years. See gallery for more.
It’s cool to see these mobile creativity apps Voltron-ing together via the new Adobe Design Mobile Bundle, which includes the company’s best design apps for the iPad at 50% off when purchased together. Per the site:
Photoshop: Edit, composite, and create beautiful images, graphics, and art.
Illustrator: Create beautiful vector art and illustrations.
Fresco: Draw and paint with thousands of natural brushes.
Spark Post: Make stunning social graphics — in seconds.
Creative Cloud: Mobile access to your Creative Cloud assets, livestreams, and learn content.
Then, there are live oil brushes in Fresco that you just don’t get in any other app. In Fresco, today, you can replicate the look of natural media like oils, watercolors and charcoal — soon you’ll be able to add motion as well! We showed a sneak peek at the workshop, and it blew people’s minds.
We recently launched Touch-to-fill for passwords on Android to prevent phishing attacks. To improve security on iOS too, we’re introducing a biometric authentication step before autofilling passwords. On iOS, you’ll now be able to authenticate using Face ID, Touch ID, or your phone passcode. Additionally, Chrome Password Manager allows you to autofill saved passwords into iOS apps or browsers if you enable Chrome autofill in Settings.
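For a sense of what such a gate looks like on the Android side, here’s a minimal Kotlin sketch (my own, not Chrome’s code) using the androidx.biometric library; fillPassword() is a hypothetical stand-in for whatever actually fills the credential into the form:

```kotlin
import androidx.biometric.BiometricManager.Authenticators
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import androidx.fragment.app.FragmentActivity

// Minimal sketch: release a saved password only after the user authenticates.
// fillPassword() is a hypothetical stand-in for the actual autofill step.
fun promptBeforeAutofill(activity: FragmentActivity, fillPassword: () -> Unit) {
    val prompt = BiometricPrompt(
        activity,
        ContextCompat.getMainExecutor(activity),
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                fillPassword() // only fill the credential once auth succeeds
            }
        }
    )

    val promptInfo = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Confirm it's you")
        .setSubtitle("Authenticate to fill your saved password")
        // Fall back to the device PIN/pattern/passcode if biometrics aren't set up.
        .setAllowedAuthenticators(Authenticators.BIOMETRIC_WEAK or Authenticators.DEVICE_CREDENTIAL)
        .build()

    prompt.authenticate(promptInfo)
}
```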
Shapr3D is an iPad app that lets you create 3D models without having to use a desktop computer or traditional CAD software. Designs created in this “pro-level” tool are compatible with major CAD file formats and support instant export for 3D printing.
This is kinda inside-baseball, but I’m really happy that friends from my previous team will now have their work distributed on hundreds of millions, if not billions, of devices:
[A] face contours model — which can detect over 100 points in and around a user’s face and overlay masks and beautification elements atop them — has been added to the list of APIs shipped through Google Play Services…
Lastly, two new APIs are now available as part of the ML Kit early access program: entity extraction and pose detection… Pose detection supports 33 skeletal points, including hand and foot tracking.
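For a taste of what that API looks like from the app side, here’s a minimal Kotlin sketch using today’s standalone ML Kit face detection SDK to pull contour points from a bitmap (the detectContours() wrapper and its logging are mine, not shipped code):

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceContour
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions

// Minimal sketch: detect face contour points in a Bitmap with ML Kit.
// `bitmap` is assumed to come from the camera or a decoded photo.
fun detectContours(bitmap: Bitmap) {
    val options = FaceDetectorOptions.Builder()
        .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL) // request the full contour set
        .build()
    val detector = FaceDetection.getClient(options)
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)

    detector.process(image)
        .addOnSuccessListener { faces ->
            for (face in faces) {
                // Each contour is a list of points you could overlay a mask along.
                val facePoints = face.getContour(FaceContour.FACE)?.points
                println("Face outline has ${facePoints?.size ?: 0} points")
            }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```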
Let’s see what rad stuff the world can build with these foundational components. Here’s an example of folks putting an earlier version to use, and you can find a ton more in my Body Tracking category:
TBH I’m a little underwhelmed by the specific effects shown here, but I remain intrigued by the idea of a highly accessible, results-oriented app that could also generate layered imagery for further tweaking in Photoshop and other more flexible tools.
The main goal of apps like this might simply be to introduce more people to the Adobe ecosystem. Adobe CTO Abhay Parasnis said as much in an interview with The Verge, in which he calls Photoshop Camera “the next one in that journey for us.” Photoshop Camera could act as the “gateway drug” to a Creative Cloud subscription for anybody who discovers a dormant love of photo editing.
I’ve long joked-not-joked that I want better parental controls on devices, not so that I can control my kids but so that I can help my parents. How great would it be to be able to configure something like this, then push it to the devices of those who need it (parents, kids, etc.)?
I’ve always been part of that weird little slice of the Adobe user population that gets really hyped about offbeat painting tools—from stretching vectors along splines & spraying out fish in Illustrator (yes, they’re both in your copy right now; no, you’ve never used them), to painting with slick features that got pulled from Photoshop before release & somehow have never returned. I still wish we’d been able to shoehorn GPU-powered watercolor into Photoshop’s, er, venerable compositing engine, but so it goes. (A 15-year-old demo still lives at one of my best URLs ever, jnack.com/BlowingYourMindClearOutYourAss )
[Please note: I don’t work on the Pixel team, and these opinions are just those of a guy with a couple of phones in hand, literally shooting in the dark.]
In Yosemite Valley on Friday night, I did some quick & unscientific but illuminating (oh jeez) tests shooting with a Pixel 4 & iPhone 11 Pro Max. I’d had fleeting notions of trying some proper astrophotography (side note: see these great tips from Pixel engineer & ILM vet Florian Kainz), but between the moon & the clouds, I couldn’t see a ton of stars. Therefore I mostly held up both phones, pressed the shutter button, and held my breath.
Check out the results in this album. You can see which camera produced which images by tapping each image, then tapping the little comment icon. I haven’t applied any adjustments.
Overall I’m amazed at what both devices can produce, but I preferred the Pixel’s interpretations. They were darker, but truer to what my eyes perceived, and very unlike the otherworldly, day-for-night iPhone renderings (which persisted despite a few attempts I made to set focus, then drag down the exposure before shooting).
Check out the results, judge for yourself, and let me know what you think.
Oh, and for a much more eye-popping Pixel 4 result, check out this post from Adobe’s Russell Brown:
Boy, what I wouldn’t have given to have had this tech in Photoshop Touch, where Scribble Selection was the hotness du jour. Pam Clark writes,
This feature on the iPad works exactly the same as on Photoshop on the desktop and produces the same results, vastly enhancing selection capabilities and speed available on the iPad. With cloud documents, you can make a selection on the desktop or the iPad and continue your work seamlessly using Photoshop on another device with no loss of fidelity; no imports or exports required.
We originally released Select Subject in Photoshop on the desktop in 2018. The 2019 version now runs on both the desktop and the iPad and produces cleaner selection edges on the mask and delivers massively faster performance (almost instantaneous), even on the iPad.
The feature is rolling out today; I was able to try it on my Pixel 4 without a hitch. It works across 44 languages, and is available on both Android and iOS. Google Assistant is built into Android phones and no separate app is required. For iOS, simply download the Google Assistant app to try it out.
Now, you can turn a photo into a portrait on Pixel by blurring the background post-snap. So whether you took the photo years ago, or you forgot to turn on portrait mode, you can easily give each picture an artistic look with Portrait Blur in Google Photos.
I’m also pleased to see that the realtime portrait-blurring tech my team built has now come to Google Duo for use during video calls:
My failure, year in & year out, to solve the problem at Adobe is part of what drove me to join Google in 2014. But even back then I wrote,
I remain in sad amazement that 4.5 years after the iPad made tablets mainstream, no one—not Apple, not Adobe, not Google—has, to the best of my knowledge, implemented a way to let photographers do what they spent years beating me over the head requesting:
Let me leave my computer at home & carry just my tablet & camera.
Let me import my raw files (ideally converted to vastly smaller DNGs), swipe through them to mark good/bad/meh, and non-destructively edit them, singly or in batches, with full raw quality.
When I get home, automatically sync all images + edits to/via the cloud and let me keep editing there or on my Mac/PC.
This remains a bizarre failure of our industry.
Of course this wasn’t lost on the Lightroom team, but for a whole bunch of reasons, it’s taken this long to smooth out the flow, and during that time capture & editing have moved heavily to phones. Tablets represent a single-digit percentage of Snapseed session time, and I’ve heard the same from the makers of other popular editing apps. As phones improve & dedicated-cam sales keep dropping, I wonder how many people will now care.
This new iOS & Android app (not yet available, though you can sign up for prerelease access) promises to analyze images, suggest effects, and keep the edits adjustable (though it’s not yet clear whether they’ll be editable as layers in “big” Photoshop).
I’m reminded of really promising Photoshop Elements mobile concepts from 2011 that went nowhere; of the Fabby app some of my teammates created before being acquired by Google; and of all I failed to enable in Google Photos. “Poo-tee-weet?” ¯\_(ツ)_/¯ Anyway, I’m eager to take it for a spin.
This looks so rad. Back in the day, I really wanted a solution that would record the “bizarre, freewheeling bedtime stories” my sons & I made up every night, then let us put them into an illustrated journal. The new Recorder app solves the most critical piece of that puzzle.
The new Recorder app on Pixel 4 brings the power of search and AI to audio recording. You can record meetings, lectures, jam sessions — anything you want to save and listen to later. Recorder automatically transcribes speech and tags sounds like music, applause, and more, so you can search your recordings to quickly find the part you’re looking for. All Recorder functionality happens on-device, so your audio never leaves your phone. We’re starting with English for transcription and search, with more languages coming soon.
My old pals Will & Bryan and their teams have been hard at work on the brushing-savvy iPad app Fresco (see previous thoughts). Gizmodo offers a quick look at its current state, and Bryan has shared some perspective on its development.
My teammates have been hard at work to enable not only unlocking your phone using your face, but also using hand gestures to “skip songs, snooze alarms, silence phone calls,” and more. Check out the blog post and the quick demo below:
It removes issues like halos and artifacts at the edges and horizon; lets you adjust depth of field, tone, exposure, and color after the new sky has been dropped in; correctly detects the horizon line and the orientation of the sky to replace; and intelligently “relights” the rest of your photo to match the new sky, “so they appear they were taken during the same conditions.”
Check out the article link to see some pretty compelling-looking examples.
People have been trying to combine the power of vector & raster drawing/editing for decades. (Anybody else remember Creature House Expression, published by Fractal & then acquired by Microsoft? Congrats on also being old! 🙃) It’s a tough line to walk, and the forthcoming Adobe Fresco app is far from Adobe’s first bite at the apple (I remember you, Fireworks).
Back in 2010, I transitioned off of Photoshop proper & laid out a plan by which different mobile apps/modules (painting, drawing, photo library) would come together to populate a shared, object-centric canvas. Rather than build the monolithic (and now forgotten) Photoshop Touch that we eventually shipped, I’d advocated for letting Adobe Ideas form the drawing module, Lightroom Mobile form the library, and a new Photoshop-derived painting/bitmap editor form the imaging module. We could do the whole thing on a new imaging stack optimized around mobile GPUs.
Obviously that went about as well as conceptually related ’90s-era attempts at OpenDoc et al.—not because it’s hard to combine disparate code modules (though it is!), but because it’s really hard to herd cats across teams, and I am not Steve Fucking Jobs.
Sadly, I’ve learned, org charts do matter, insofar as they represent alignment of incentives & rewards—or lack thereof. “If you want to walk fast, walk alone; if you want to walk far, walk together.” And everyone prefers “innovate” over “integrate,” and then for bonus points they can stay busy for years paying down the resulting technical debt. “…Profit!”
But who knows—maybe this time crossing the streams will work. Or, see you again in 5-10 years the next time I write this post. 😌
I’m intrigued by the wealth of enhancements arriving in Procreate for iPad, including new tapered strokes & “QuickShapes.” These remind me of shape-recognition tech in Adobe apps that dates back 20+ years to early Flash, but which is cleverly executed here (enabling quick movement & manipulation of what’s drawn):
This is a watershed moment for me: After 11+ years of shooting on iPhones & Canon DSLRs, this is the first time I’ve shot on an Android device that plainly outshines them both at something. Night Sight on Pixel 3 blows me away.
First, some important disclaimers:
I work at Google & get to collaborate with the folks responsible for this tech, but I can take no credit for it, and these are just my opinions & non-scientific findings.
I’m not here to rain on anybody’s parade. My iPhone X is great, and the 70D has been a loyal workhorse. I have no plans to ditch either.
The 70D came out in 2013, and it’s obviously possible to get both a newer DSLR & a lens faster than my 24-70mm f/2.8.
It’s likewise possible to know a lot more about manual exposure than I do. I went only as far as to choose aperture priority, open the aperture all the way, and set ISO to Auto.
Having said all that, I think my results reasonably represent what a normal-to-semi-savvy person would get from the various devices. Here’s what I saw:
Pixel 3 vs. 70D shots (set one, set two), all unedited. CR2 files from the 70D got converted to JPEG using default processing in Lightroom. In many cases the 70D struggled to focus (whereas the Pixel never did), so some of its shots are soft as well as dark.
Pixel 3 vs. iPhone X on a separate evening. With a few subjects (e.g. this one) I tried taking an iPhone shot with default (auto) exposure, then one with exposure manually cranked up, and finally one with Pixel 3 Night Sight. Here’s another triplet. Regrettably I didn’t think to try shooting raw on either phone.
Google has launched “Mini” stickers for iOS and Android, which use machine learning to craft personalized emoji from your photo. More precisely, the feature uses a combination of machine learning, neural networks and artist illustrations to conjure up the best representation of you, taking into account various characteristics like your skin tone, hair color and style, eye color, face shape and facial hair. Just access Mini from within Gboard and start the creation process by taking a selfie. It will then automatically create your avatar and generate packs of stickers you can use.
In order to give everyone the opportunity to experience just how natural AI-powered interactions can now be, we’re launching 猜画小歌 (“Guess My Sketch”) from Google AI, a fun, social WeChat Mini Program in which players team up with our AI to sketch everyday items in a race against the clock. In each round, players sketch the given word (like “dog”, “clock”, or “shoe”) for their AI teammate to guess correctly before time runs out.
When the AI successfully guesses your sketch, you’ll move on to the next round and increase your sketching streak. You can invite friends and family to compete for the longest streak, share interesting sketches with each other, and collect new words and drawings as you continue playing.
The one-man (I believe) band behind the Focus app for iOS continues to apply the awesome sauce—now adding the ability to create & modify light sources in portrait-mode images (which it treats as 3D). Check it out:
A dog (clear favorite), UFO, heart, basketball, and spider join the dinosaur, chicken, alien, gingerbread man, planet, and robot. The latter six stickers have been slightly rearranged, while the new ones are at the beginning of the carousel.
Enjoy! And let us know what else you’d like to see.
As I mentioned the other day, Moment is Kickstarting efforts to create an anamorphic lens for phones like Pixel & iPhone. In the quick vid below, they explain its charms—cool lens flares, oval bokeh, and more:
Well, that escalated quickly: For this new set of mobile filmmaking tools (lens, battery, gimbal), Moment hit their $50k funding goal in just over half an hour, and as of this writing they’ve easily cleared the $750k mark. Check ‘em out:
Now available on both iOS & Android, and offering a few neat tricks:
Lens works on photos of business cards, books, landmarks and buildings, paintings in a museum, plants or animals, and flyers and event billboards. When you use Lens on a photo that has phone numbers or an address, you can automatically save this information as a contact on your phone, while events will be added to your calendar.
Unity integration will also allow developers to customize maps with what appears to be a great deal of flexibility and control. Things like buildings and roads are turned into objects, which developers can then tweak in the game engine. During a demonstration, Google showed off real-world maps that were transformed into sci-fi landscapes and fantasy realms, complete with dragons and treasure chests.
Jacoby says that one of the goals of the project was to help developers build detailed worlds using Maps data as a base to paint over. Developers can do things like choose particular kinds of buildings or locations — say, all stores or restaurants — and transform each one. A fantasy realm could turn all hotels into restorative inns, for instance, or anything else.
Flutter apps don’t directly compile to native Android and iOS apps; they run on the Flutter rendering engine (written in C++) and Flutter Framework (written in Dart, just like Flutter apps), both of which get bundled up with every app, and then the SDK spits out a package that’s ready to go on each platform. You get your app, a new engine to run the Flutter code on, and enough native code to get the Flutter platform running on Android and iOS.
Also, I’m totally creating a band called Stateful Hot Reload. 🙂
The app, Soundscape, calls out roads and landmarks as they’re passed, and lets users set audio beacons at familiar destinations. If at any time you’re unsure of where you are, or which direction to head in, you can simply hold the phone flat in your hand and use the buttons on the bottom of the screen to locate nearby roads and familiar destinations.
This app (sadly unavailable in the US, it seems) looks really creative & fun:
“To achieve a seamless transition from the TV ad to Augmented Reality we use computer vision to detect the quattro coaster TV ad. Then, we sync and position the augmented content on the screen. What’s interesting is that the car remains in the room even after the ad has ended.” [more]
Hooray! My first real project to ship since joining my new team is here:
Today, we are excited to announce the new Augmented Reality (AR) mode in Motion Stills for Android. With the new AR mode, a user simply touches the viewfinder to place fun, virtual 3D objects on static or moving horizontal surfaces (e.g. tables, floors, or hands), allowing them to seamlessly interact with a dynamic real-world environment. You can also record and share the clips as GIFs and videos.
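I can’t share the app’s internals here, but for a feel of the general tap-to-place interaction, here’s a minimal Kotlin sketch using ARCore’s hit-testing API (a different stack than what Motion Stills actually uses under the hood; renderObjectAt() is a hypothetical rendering hook):

```kotlin
import android.view.MotionEvent
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane
import com.google.ar.core.TrackingState

// Rough sketch of the generic ARCore tap-to-place pattern (not Motion Stills'
// actual code). `frame` is the current ARCore Frame for the camera image the
// user tapped; renderObjectAt() is a hypothetical hook into your renderer.
fun placeObjectOnTap(frame: Frame, tap: MotionEvent, renderObjectAt: (Anchor) -> Unit) {
    for (hit in frame.hitTest(tap)) {
        val plane = hit.trackable as? Plane ?: continue
        // Accept only tracked, upward-facing horizontal planes (tables, floors),
        // and only taps that land inside the plane's detected boundary.
        if (plane.type == Plane.Type.HORIZONTAL_UPWARD_FACING &&
            plane.trackingState == TrackingState.TRACKING &&
            plane.isPoseInPolygon(hit.hitPose)
        ) {
            // An Anchor keeps the object pinned in place as ARCore's
            // understanding of the world improves.
            renderObjectAt(hit.createAnchor())
            break
        }
    }
}
```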