Category Archives: Mobile

iPhone’s new “Portrait Lighting” looks compelling

I’m eager to try this out: 

When framing a subject, you’ll have a number of different lighting options to choose from for giving your portrait different looks — things like Contour Light, Natural Light, Studio Light, Stage Light, and Stage Light Mono.

These “aren’t filters,” Apple says. Instead, the phone studies your subject’s face and uses machine learning to calculate the look from the light actually present in the scene.

Check out PetaPixel or Apple’s site for larger sample images. 


Style transfer & computer vision as a service

The makers of the popular Prisma style-transfer app are branching into offering an SDK:

[U]nderstand and modify the content of an image by encapsulating powerful machine learning models in an easy-to-use REST API or SDK for iOS or Android apps.
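To make the idea concrete, here’s a minimal sketch of what a call to such an image-processing REST API might look like. The endpoint URL, field names, and style value are all hypothetical placeholders, not Prisma’s actual contract — you’d consult their SDK docs for the real thing:

```python
import base64
import json

# Hypothetical endpoint -- a stand-in, not Prisma's real API.
API_URL = "https://api.example.com/v1/stylize"

def build_stylize_request(image_bytes: bytes, style: str) -> dict:
    """Package raw image bytes and a style name into a JSON-friendly payload.

    Images are base64-encoded so they can travel inside a JSON body.
    """
    return {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "style": style,
    }

# Build (but don't send) a request body for a hypothetical "mosaic" style.
payload = build_stylize_request(b"\x89PNG...", "mosaic")
body = json.dumps(payload)
# A client would POST `body` to API_URL and decode the returned image.
```

The interesting part is how little ceremony is left for the app developer: the heavy lifting (the neural net itself) lives behind the endpoint.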

One example use is Sticky AI, a super simple app for creating selfie stickers & optionally styling/captioning them.

According to TechCrunch, Prisma shares at least one investor with AIMatter, maker of the Fabby tech/SDK that Google acquired last week. Meanwhile, there’s also YOLO: Real-Time Object Detection:


This mass proliferation of off-the-shelf computer vision makes me think of Mom & Pop at Web scale: It’s gonna enable craziness like when Instagram was launched by two (!) guys thanks to the existence of AWS, OAuth, etc. It’ll be interesting to see how, thanks to Fabby & other efforts, Google can play a bigger part in enabling mass experimentation.


Sh*t gets real: Google acquires AIMatter, maker of the Fabby computer vision app

This won’t seem like much right now, I’m sure—but I’m really excited. Per TechCrunch:

The search and Android giant has acquired AIMatter, a startup founded in Belarus that has built both a neural network-based AI platform and SDK to detect and process images quickly on mobile devices, and a photo and video editing app that has served as a proof-of-concept of the tech called Fabby.

In a lot of ways it’s the next generation of stuff we started developing when I joined Google Photos (anybody remember Halloweenify?). If you’ve ever hand-selected hair in Photoshop or (gulp) rotoscoped video, you’ll know how insane it is that these tasks can now be performed in realtime on a friggin’ telephone.

As to what happens next—stay tuned!



Google’s DeepMind Is Now Capable of Creating Images from Your Sentences

Another day, another “Whoa”: Futurism writes,

[T]he prompt of “A yellow bird with a black head, orange eyes, and an orange bill” returned a highly detailed image. The algorithm is able to pull from a collection of images and discern concepts like birds and human faces and create images that are significantly different than the images it “learned” from.

Check it out:


[YouTube] [Via Gabriel Doliner]

Guy builds his own Tinder clone to propose to his girlfriend

NERDS!! I love these crazy kids:

“Zane and I met on Tinder, and I wanted her to relive the experience of our first date, so I decided to mock-up my own version of Tinder,” said Lee… He created an immersive treasure hunt for Zane, mimicking the buttons and interactions of Tinder. The prototype led her from their home to the street corner where they first met, then on to the coffee shop where they had their first date.

Check out the whole story on the Adobe XD blog; Lee (a non-designer/coder) used XD itself to create his prototype.


Google & MIT unveil realtime image retouching on mobile devices

“Teaching Google Photoshop.” That’s the three-word mission statement I chose upon joining Photos. I meant it as shorthand for “getting computers to see & think like artists.” Now researchers are enabling that kind of human-savvy adjustment to run in realtime, even on handheld devices:

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Google are presenting a new system that can automatically retouch images in the style of a professional photographer. It’s so energy-efficient, however, that it can run on a cellphone, and it’s so fast that it can display retouched images in real-time, so that the photographer can see the final version of the image while still framing the shot.
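The real system learns local, content-aware edits with a neural net; as a toy illustration only, here’s the kind of cheap per-pixel math (a shadow-lifting gamma curve) that makes realtime, on-device retouching plausible:

```python
# Toy sketch, not the MIT/Google algorithm: apply a global tone curve
# to every pixel. Gamma < 1 lifts shadows, which is a crude stand-in
# for one "professional retouch" move.

def tone_curve(value: int, gamma: float = 0.8) -> int:
    """Map one 8-bit channel value through a gamma curve."""
    normalized = value / 255.0
    lifted = normalized ** gamma  # gamma < 1 brightens dark values
    return round(lifted * 255.0)

def retouch(pixels):
    """Apply the tone curve to every (r, g, b) pixel in a flat list."""
    return [tuple(tone_curve(c) for c in px) for px in pixels]

# A dim gray pixel gets noticeably brighter; pure white stays white.
image = [(100, 100, 100), (255, 255, 255)]
out = retouch(image)
```

The researchers’ trick is doing something far smarter than a global curve — predicting *local* adjustments from image content — while keeping the per-pixel work this trivial.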

And yes, it’s a small world: “The researchers trained their system on a data set created by Durand’s group and Adobe Systems;” co-author Jiawen Chen interned at Adobe; and then-Adobe researcher Aseem Agarwala collaborated with Frédo Durand before joining Google.