I’m intrigued by the wealth of enhancements arriving in Procreate for iPad, including new tapered strokes & “QuickShapes.” These remind me of shape-recognition tech in Adobe apps that dates back 20+ years to early Flash, but which is cleverly executed here (enabling quick movement & manipulation of what’s drawn):
This is a watershed moment for me: After 11+ years of shooting on iPhones & Canon DSLRs, this is the first time I’ve shot on an Android device that plainly outshines them both at something. Night Sight on Pixel 3 blows me away.
First, some important disclaimers:
I work at Google & get to collaborate with the folks responsible for this tech, but I can take no credit for it, and these are just my opinions & non-scientific findings.
I’m not here to rain on anybody’s parade. My iPhone X is great, and the 70D has been a loyal workhorse. I have no plans to ditch either.
The 70D came out in 2013, and it’s obviously possible to get both a newer DSLR & a lens faster than my 24-70mm f/2.8.
It’s likewise possible to know a lot more about manual exposure than I do. I went only as far as to choose aperture priority, open the aperture all the way, and set ISO to Auto.
Having said all that, I think my results reasonably represent what a normal-to-semi-savvy person would get from the various devices. Here’s what I saw:
Pixel 3 vs. 70D shots (set one, set two), all unedited. CR2 files from the 70D got converted to JPEG using default processing in Lightroom. In many cases the 70D struggled to focus (whereas the Pixel never did), so some of its shots are soft as well as dark.
Pixel 3 vs. iPhone X on a separate evening. With a few subjects (e.g. this one) I tried taking an iPhone shot with default (auto) exposure, then one with exposure manually cranked up, and finally one with Pixel 3 Night Sight. Here’s another triplet. Regrettably I didn’t think to try shooting raw on either phone.
Google has launched “Mini” stickers for iOS and Android, which use machine learning to craft personalized emoji from your photo. More precisely, the feature uses a combination of machine learning, neural networks and artist illustrations to conjure up the best representation of you, taking into account various characteristics like your skin tone, hair color and style, eye color, face shape and facial hair. Just access Mini from within Gboard and start the creation process by taking a selfie. It will then automatically create your avatar and generate packs of stickers you can use.
In order to give everyone the opportunity to experience just how natural AI-powered interactions can now be, we’re launching 猜画小歌 (“Guess My Sketch”) from Google AI, a fun, social WeChat Mini Program in which players team up with our AI to sketch everyday items in a race against the clock. In each round, players sketch the given word (like “dog”, “clock”, or “shoe”) for their AI teammate to guess correctly before time runs out.
When the AI successfully guesses your sketch, you’ll move on to the next round and increase your sketching streak. You can invite friends and family to compete for the longest streak, share interesting sketches with each other, and collect new words and drawings as you continue playing.
The one-man (I believe) band behind the Focus app for iOS continues to apply the awesome sauce—now adding the ability to create & modify light sources in portrait-mode images (which it treats as 3D). Check it out: