Category Archives: Mobile

Demo: Camera Raw is coming to Photoshop for iPad

Nine years ago, Google spent a tremendous amount of money buying Nik Software, in part to get a mobile raw converter—which, as they were repeatedly told, didn’t actually exist. (“Still, a man hears what he wants to hear and disregards the rest…”)

If all that hadn’t happened, I likely never would have gone there, and had the acquisition not been so ill-advised & ill-fitting, I probably wouldn’t have come back to Adobe. Ah, life’s rich pageant… ¯\_(ツ)_/¯

Anyway, back in 2021, take ‘er away, Ryan Dumlao:

What a difference four years makes in iPhone cameras

“People tend to overestimate what can be done in one year and to underestimate what can be done in five or ten years,” as the old saying goes. Similarly, it can be hard to notice one’s own kid’s progress until confronted with an example of that kid from a few years back.

My son Henry has recently taken a shine to photography & has been shooting with my iPhone 7 Plus. While passing through Albuquerque a few weeks back, we ended up shooting side by side—him with the 7, and me with an iPhone 12 Pro Max (four years newer). We share a camera roll, and as I scrolled through I was really struck seeing the output of the two devices placed side by side.

I don’t hold up any of these photos (all unedited besides cropping) as art, but it’s fun to compare them & to appreciate just how far mobile photography has advanced in a few short years. See gallery for more.

Adobe introduces the Design Mobile Bundle

It’s cool to see these mobile creativity apps Voltron-ing together via the new Adobe Design Mobile Bundle, which includes the company’s best design apps for the iPad at 50% off when purchased together. Per the site:

  • Photoshop: Edit, composite, and create beautiful images, graphics, and art.
  • Illustrator: Create beautiful vector art and illustrations.
  • Fresco: Draw and paint with thousands of natural brushes.
  • Spark Post: Make stunning social graphics — in seconds.
  • Creative Cloud: Mobile access to your Creative Cloud assets, livestreams, and learn content.

More good stuff is coming to Fresco soon, too:

Then, there are live oil brushes in Fresco that you just don’t get in any other app. In Fresco, today, you can replicate the look of natural media like oils, watercolors and charcoal — soon you’ll be able to add motion as well! We showed a sneak peek at the workshop, and it blew people’s minds.

New Google Photos widget puts memories onto your iPhone homescreen

YouTube, the Google app, and Photos now offer options to show widgets via the new iOS 14:

To install a Google widget, first make sure you have the Google Photos app, YouTube Music app, or Google app downloaded from the App Store. Then follow these steps:

  1. Press and hold on the home screen of your iPhone or iPad
  2. Tap the plus icon on the upper left corner to open the widget gallery
  3. Search for & tap on the Google app, YouTube Music or the Google Photos app
  4. Swipe right/left to select the widget size
  5. Tap “Add Widget”
  6. Place the widget and tap “Done” at the upper right corner

Chrome for iOS improves password autofill, Face ID integration

Teamwork makes the dream work, baby:

Improvements to password filling on iOS

We recently launched Touch-to-fill for passwords on Android to prevent phishing attacks. To improve security on iOS too, we’re introducing a biometric authentication step before autofilling passwords. On iOS, you’ll now be able to authenticate using Face ID, Touch ID, or your phone passcode. Additionally, Chrome Password Manager allows you to autofill saved passwords into iOS apps or browsers if you enable Chrome autofill in Settings.

ML Kit gets pose detection

This is kinda inside-baseball, but I’m really happy that friends from my previous team will now have their work distributed on hundreds of millions, if not billions, of devices:

[A] face contours model — which can detect over 100 points in and around a user’s face and overlay masks and beautification elements atop them — has been added to the list of APIs shipped through Google Play Services…

Lastly, two new APIs are now available as part of the ML Kit early access program: entity extraction and pose detection… Pose detection supports 33 skeletal points like hands and feet tracking.
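For the curious: apps typically consume those 33 skeletal points as plain (x, y) coordinates and derive higher-level signals from them. Here's a little sketch of one common derived signal — a joint angle, say at an elbow, from three tracked landmarks. To be clear, this is my own toy illustration, not ML Kit's actual API; the coordinate tuples are made up:

```python
import math

def joint_angle(a, b, c):
    """Angle at vertex b (in degrees) formed by points a-b-c,
    e.g. shoulder-elbow-wrist landmarks from a pose detector."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to guard against tiny floating-point overshoot
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))

# A straight arm (shoulder, elbow, wrist roughly collinear):
print(round(joint_angle((0, 0), (1, 0), (2, 0))))  # 180
# A right-angle bend:
print(round(joint_angle((0, 0), (1, 0), (1, 1))))  # 90
```

Feed it a stream of per-frame landmarks and you've got the raw material for rep counters, form checkers, and the like.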

Let’s see what rad stuff the world can build with these foundational components. Here’s an example of folks putting an earlier version to use, and you can find a ton more in my Body Tracking category:

[Via]

Adobe releases Photoshop Camera

After being announced last fall, it’s free to download for iOS & Android.

TBH I’m a little underwhelmed by the specific effects shown here, but I remain intrigued by the idea of a highly accessible, results-oriented app that could also generate layered imagery for further tweaking in Photoshop and other more flexible tools.

As for the rationale, PetaPixel notes,

The main goal of apps like this might simply be to introduce more people to the Adobe ecosystem. Adobe CTO Abhay Parasnis said as much in an interview with The Verge, in which he calls Photoshop Camera “the next one in that journey for us.” Photoshop Camera could act as the “gateway drug” to a Creative Cloud subscription for anybody who discovers a dormant love of photo editing.

Action Blocks make tasks more accessible for those with cognitive impairments

I’ve long joked-not-joked that I want better parental controls on devices, not so that I can control my kids but so that I can help my parents. How great would it be to be able to configure something like this, then push it to the devices of those who need it (parents, kids, etc.)?

Cool multicolor painting tools arrive in Adobe Fresco

I’ve always been part of that weird little slice of the Adobe user population that gets really hyped about offbeat painting tools—from stretching vectors along splines & spraying out fish in Illustrator (yes, they’re both in your copy right now; no, you’ve never used them), to painting with slick features that got pulled from Photoshop before release & somehow have never returned. I still wish we’d been able to shoehorn GPU-powered watercolor into Photoshop’s, er, venerable compositing engine, but so it goes. (A 15-year-old demo still lives at one of my best URLs ever, jnack.com/BlowingYourMindClearOutYourAss )

In any event, the Adobe Fresco team has just unveiled a raft of new features, including some trippy multicolor painting capabilities. Check it out:

Quick Comparison: Pixel 4 vs. iPhone 11 at Night

[Please note: I don’t work on the Pixel team, and these opinions are just those of a guy with a couple of phones in hand, literally shooting in the dark.]

In Yosemite Valley on Friday night, I did some quick & unscientific but illuminating (oh jeez) tests shooting with a Pixel 4 & iPhone 11 Pro Max. I’d had fleeting notions of trying some proper astrophotography (side note: see these great tips from Pixel engineer & ILM vet Florian Kainz), but between the moon & the clouds, I couldn’t see a ton of stars. Therefore I mostly held up both phones, pressed the shutter button, and held my breath.

Check out the results in this album. You can see which camera produced which images by tapping each image, then tapping the little comment icon. I haven’t applied any adjustments.

Overall I’m amazed at what both devices can produce, but I preferred the Pixel’s interpretations. They were darker, but truer to what my eyes perceived, and very unlike the otherworldly, day-for-night iPhone renderings (which persisted despite a few attempts I made to set focus, then drag down the exposure before shooting).

Check out the results, judge for yourself, and let me know what you think.


Oh, and for a much more eye-popping Pixel 4 result, check out this post from Adobe’s Russell Brown:

Select Subject comes to Photoshop for iPad

Boy, what I wouldn’t have given to have had this tech in Photoshop Touch, where Scribble Selection was the hotness du jour. Pam Clark writes,

This feature on the iPad works exactly the same as on Photoshop on the desktop and produces the same results, vastly enhancing selection capabilities and speed available on the iPad. With cloud documents, you can make a selection on the desktop or the iPad and continue your work seamlessly using Photoshop on another device with no loss of fidelity; no imports or exports required.

We originally released Select Subject in Photoshop on the desktop in 2018. The 2019 version now runs on both the desktop and the iPad and produces cleaner selection edges on the mask and delivers massively faster performance (almost instantaneous), even on the iPad.


Google Pixel introduces post-capture Portrait blur

🎉

Now, you can turn a photo into a portrait on Pixel by blurring the background post-snap. So whether you took the photo years ago, or you forgot to turn on portrait mode, you can easily give each picture an artistic look with Portrait Blur in Google Photos.
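Under the hood, post-capture portrait blur amounts to "segment the subject, blur everything else, composite through the mask." Here's a deliberately toy Python sketch of that idea — the hard 0/1 mask and naive box blur below are my stand-ins for the learned depth & segmentation Google actually uses, not their pipeline:

```python
def box_blur(img, r=1):
    """Naive box blur on a 2-D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average the (2r+1)x(2r+1) neighborhood, clipped at edges
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def portrait_blur(img, mask, r=1):
    """Keep pixels where mask == 1 sharp; blur everything else."""
    blurred = box_blur(img, r)
    return [[img[y][x] if mask[y][x] else blurred[y][x]
             for x in range(len(img[0]))] for y in range(len(img))]
```

The real magic is in how good the mask (and the lens-like bokeh kernel) is — the compositing step itself is almost trivially simple.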

I’m also pleased to see that the realtime portrait-blurring tech my team built has now come to Google Duo for use during video calls:

Bittersweet Symphony: Lightroom improves iPad import

“Hey, y’all got a water desalination plant? ’Cause I’m salty as hell.” 🙃

First, some good news: Lightroom is planning to improve the workflow of importing images from an SD card:

I know that this is something that photographers deeply wanted, starting in 2010. I just wonder whether—nearly 10 years since the launch of iPad—it matters anymore.

My failure, year in & year out, to solve the problem at Adobe is part of what drove me to join Google in 2014. But even back then I wrote,

I remain in sad amazement that 4.5 years after the iPad made tablets mainstream, no one—not Apple, not Adobe, not Google—has, to the best of my knowledge, implemented a way to let photographers do what they beat me over the head for years requesting:

  • Let me leave my computer at home & carry just my tablet & camera
  • Let me import my raw files (ideally converted to vastly smaller DNGs), swipe through them to mark good/bad/meh, and non-destructively edit them, singly or in batches, with full raw quality.
  • When I get home, automatically sync all images + edits to/via the cloud and let me keep editing there or on my Mac/PC.

This remains a bizarre failure of our industry.

Of course this wasn’t lost on the Lightroom team, but for a whole bunch of reasons, it’s taken this long to smooth out the flow, and during that time capture & editing have moved heavily to phones. Tablets represent a single-digit percentage of Snapseed session time, and I’ve heard the same from the makers of other popular editing apps. As phones improve & dedicated-cam sales keep dropping, I wonder how many people will now care.

On we go.

[YouTube]

Adobe announces Photoshop Camera

This new iOS & Android app (not yet available, though you can sign up for prerelease access) promises to analyze images, suggest effects, and keep the edits adjustable (though it’s not yet clear whether they’ll be editable as layers in “big” Photoshop).

I’m reminded of really promising Photoshop Elements mobile concepts from 2011 that went nowhere; of the Fabby app some of my teammates created before being acquired by Google; and of all I failed to enable in Google Photos. “Poo-tee-weet?” ¯\_(ツ)_/¯ Anyway, I’m eager to take it for a spin.


[YouTube]

Check out Google’s new AI-powered on-device transcription

This looks so rad. Back in the day, I really wanted a solution that would record the “bizarre, freewheeling bedtime stories” my sons & I made up every night, then let us put them into an illustrated journal. The new Recorder app solves the most critical piece of that puzzle.

The new Recorder app on Pixel 4 brings the power of search and AI to audio recording. You can record meetings, lectures, jam sessions — anything you want to save and listen to later. Recorder automatically transcribes speech and tags sounds like music, applause, and more, so you can search your recordings to quickly find the part you’re looking for. All Recorder functionality happens on-device, so your audio never leaves your phone. We’re starting with English for transcription and search, with more languages coming soon.

[YouTube]

Adobe & Apple partner for hands-on sneaks of Adobe Fresco

All through October. Looks like fun—and great to see how far the world has come from the “Thoughts On Flash”/“Sympathy for the Devil” days.

Begone, lame skies!

Does anyone else remember when Adobe demoed automatic sky-swapping ~3 years ago, but then never shipped it… because, big companies? (No, just me?)

Anyway, Xiaomi is now offering a similar feature. Here’s a quick peek:

And here’s a more in-depth demo:

Coincidentally, “Skylum Announces Luminar 4 with AI-Powered Automatic Sky Replacement”:

It removes issues like halos and artifacts at the edges and horizon, allows you to adjust depth of field, tone, exposure and color after the new sky has been dropped in, correctly detects the horizon line and the orientation of the sky to replace, and intelligently “relights” the rest of your photo to match the new sky you just dropped in “so they appear they were taken during the same conditions.”

Check out the article link to see some pretty compelling-looking examples.
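Conceptually, sky replacement is just mask-then-composite; the relighting and horizon detection Skylum describes are the hard parts layered on top. As a toy illustration, here's the basic composite step — the "bright & blue-dominant" threshold below is my crude stand-in for a learned sky mask, not anything Luminar actually does:

```python
def is_sky(pixel, threshold=180):
    """Crude stand-in for a learned sky mask: treat bright,
    blue-dominant pixels as sky."""
    r, g, b = pixel
    return b >= threshold and b >= r and b >= g

def replace_sky(image, new_sky):
    """Swap sky pixels in `image` (rows of RGB tuples) for the
    corresponding pixels of `new_sky` (same dimensions)."""
    return [[new_sky[y][x] if is_sky(px) else px
             for x, px in enumerate(row)]
            for y, row in enumerate(image)]
```

A hard threshold like this is exactly what produces the halos and edge artifacts the article says Luminar avoids — which is why the shipping products lean on ML segmentation plus edge-aware blending instead.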


[YouTube 1 & 2]

Check out Fresco, Adobe’s new tablet drawing app

People have been trying to combine the power of vector & raster drawing/editing for decades. (Anybody else remember Creature House Expression, published by Fractal & then acquired by Microsoft? Congrats on also being old! 🙃) It’s a tough line to walk, and the forthcoming Adobe Fresco app is far from Adobe’s first bite at the apple (I remember you, Fireworks).

Back in 2010, I transitioned off of Photoshop proper & laid out a plan by which different mobile apps/modules (painting, drawing, photo library) would come together to populate a shared, object-centric canvas. Rather than build the monolithic (and now forgotten) Photoshop Touch that we eventually shipped, I’d advocated for letting Adobe Ideas form the drawing module, Lightroom Mobile form the library, and a new Photoshop-derived painting/bitmap editor form the imaging module. We could do the whole thing on a new imaging stack optimized around mobile GPUs.

Obviously that went about as well as conceptually related 90’s-era attempts at OpenDoc et al.—not because it’s hard to combine disparate code modules (though it is!), but because it’s really hard to herd cats across teams, and I am not Steve Fucking Jobs.

Sadly, I’ve learned, org charts do matter, insofar as they represent alignment of incentives & rewards—or lack thereof. “If you want to walk fast, walk alone; if you want to walk far, walk together.” And everyone prefers “innovate” vs. “integrate,” and then for bonus points they can stay busy for years paying down the resulting technical debt. “…Profit!”

But who knows—maybe this time crossing the streams will work. Or, see you again in 5-10 years the next time I write this post. 😌

[YouTube]

Night Sight is outta sight!

This is a watershed moment for me: After 11+ years of shooting on iPhones & Canon DSLRs, this is the first time I’ve shot on an Android device that plainly outshines them both at something. Night Sight on Pixel 3 blows me away.


First, some important disclaimers:

  • I work at Google & get to collaborate with the folks responsible for this tech, but I can take no credit for it, and these are just my opinions & non-scientific findings.
  • I’m not here to rain on anybody’s parade. My iPhone X is great, and the 70D has been a loyal workhorse. I have no plans to ditch either.
  • The 70D came out in 2013, and it’s obviously possible to get both a newer DSLR & a lens faster than my 24-70mm f/2.8.
  • It’s likewise possible to know a lot more about manual exposure than I do. I went only as far as to choose aperture priority, crank the exposure wide open, and set ISO to Auto.

Having said all that, I think my results reasonably represent what a normal-to-semi-savvy person would get from the various devices. Here’s what I saw:

What do you think?

By the way, Happy New Year! Here’s an animation created last night by shooting a series of Night Sight images, then combining them in Google Photos & finally cropping the output in Photoshop.

PS—I love the Queen-powered “Flash!” ad showing Night Sight:

[YouTube]

Nice improvements to Google Lens for iOS

My team makes tech that Lens uses to do things like track text in the screenshot below, so I’m pleased that Google Lens is getting integrated into Google search on iOS and upgraded in Google Photos:

It’s easier than ever to do more with your photos with a new, redesigned Google Lens experience on Android and iOS–now available in English, Spanish, French, German, Italian, Portuguese, and Korean.

Enjoy!


Create mini stickers of yourself automatically via Gboard

Groovy:

Google has launched “Mini” stickers for iOS and Android, which use machine learning to craft personalized emoji from your photo. More precisely, the feature uses a combination of machine learning, neural networks and artist illustrations to conjure up the best representation of you, taking into account various characteristics like your skin tone, hair color and style, eye color, face shape and facial hair. Just access Mini from within Gboard and start the creation process by taking a selfie. It will then automatically create your avatar and generate packs of stickers you can use.


Google brings an AI game to WeChat

Fun stuff from the Shanghai office:

In order to give everyone the opportunity to experience just how natural AI-powered interactions can now be, we’re launching 猜画小歌 (“Guess My Sketch”) from Google AI, a fun, social WeChat Mini Program in which players team up with our AI to sketch everyday items in a race against the clock. In each round, players sketch the given word (like “dog”, “clock”, or “shoe”) for their AI teammate to guess correctly before time runs out.

When the AI successfully guesses your sketch, you’ll move on to the next round and increase your sketching streak. You can invite friends and family to compete for the longest streak, share interesting sketches with each other, and collect new words and drawings as you continue playing.


New AR stickers in Motion Stills

My team has just added some fun new characters to Motion Stills for Android. 9to5Google writes:

A dog (clear favorite), UFO, heart, basketball, and spider join the dinosaur, chicken, alien, gingerbread man, planet, and robot. The latter six stickers have been slightly rearranged, while the new ones are at the beginning of the carousel.

Enjoy! And let us know what else you’d like to see.


Try Google Lens inside Google Photos

Now available on both iOS & Android, and offering a few neat tricks:

Lens works on photos of business cards, books, landmarks and buildings, paintings in a museum, plants or animals, and flyers and event billboards. When you use Lens on a photo that has phone numbers or an address, you can automatically save this information as a contact on your phone, while events will be added to your calendar.


Google Maps + Unity FTW!

Get ready for a whole new wave of AR gaming:

Per The Verge,

Unity integration will also allow developers to customize maps with what appears to be a great deal of flexibility and control. Things like buildings and roads are turned into objects, which developers can then tweak in the game engine. During a demonstration, Google showed off real-world maps that were transformed into sci-fi landscapes and fantasy realms, complete with dragons and treasure chests.

Jacoby says that one of the goals of the project was to help developers build detailed worlds using Maps data as a base to paint over. Developers can do things like choose particular kinds of buildings or locations — say, all stores or restaurants — and transform each one. A fantasy realm could turn all hotels into restorative inns, for instance, or anything else.


[YouTube]

The secrets behind rock-solid microvideos on Pixel 2

Continuing our series of Research Blog posts (see realtime segmentation, motion tracking), my teammates have provided an inside look at the tech they’ve developed—this time covering how motion photos get stabilized on the fly:

By combining software-based visual tracking with the motion metadata from the hardware sensors, we built a new hybrid motion estimation for motion photos on the Pixel 2. 

Check out the blog post for details, or just enjoy lots of good before/after examples of stabilization in action.
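If you're wondering what "hybrid motion estimation" looks like in practice, the general shape is a complementary filter: trust the visual tracker when it has a lock, and lean on the gyro metadata when it doesn't. Here's a toy sketch of that idea — the weights and structure are my illustrative guesses, not the Pixel team's actual math:

```python
def fuse_motion(visual, gyro, alpha=0.7):
    """Complementary-filter-style blend of two per-frame rotation
    estimates (degrees). Visual tracking gets the larger weight
    (alpha); hardware gyro metadata fills in when vision fails."""
    if visual is None:  # e.g. tracking lost on a blurry frame
        return gyro
    return alpha * visual + (1 - alpha) * gyro

# Per-frame (visual, gyro) estimates; frame 2's tracker lost its lock
frames = [(1.0, 1.2), (None, 1.1), (0.9, 1.0)]
fused = [fuse_motion(v, g) for v, g in frames]
```

Feed the fused per-frame motion into a smoothing/stabilization pass and you get the rock-solid microvideos, minus the jitter of either source alone.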


Google introduces Flutter, a cross-platform UX library

Check it out:

As Ars Technica explains,

Flutter apps don’t directly compile to native Android and iOS apps; they run on the Flutter rendering engine (written in C++) and Flutter Framework (written in Dart, just like Flutter apps), both of which get bundled up with every app, and then the SDK spits out a package that’s ready to go on each platform. You get your app, a new engine to run the Flutter code on, and enough native code to get the Flutter platform running on Android and iOS.

Also, I’m totally creating a band called Stateful Hot Reload. 🙂


[YouTube]

Microsoft’s 3D Soundscape app helps blind users navigate the world

Augment all the humans! Check out this new perceptual enhancement:

The app, Soundscape, calls out roads and landmarks as they’re passed, and lets users set audio beacons at familiar destinations. If at any time you’re unsure of where you are, or which direction to head in, you can simply hold the phone flat in your hand and use the buttons on the bottom of the screen to locate nearby roads and familiar destinations.


[YouTube]

Audi’s AR app lets you draw a racetrack in your living room

This app (sadly unavailable in the US, it seems) looks really creative & fun:

“To achieve a seamless transition from the TV ad to Augmented Reality we use computer vision to detect the quattro coaster TV ad. Then, we sync and position the augmented content on the screen. What’s interesting is that the car remains in the room even after the ad has ended.” [more]

Here’s what it looks like in action:


[YouTube] [Vimeo]