Monthly Archives: February 2020

Check out the gesture-sensing holographic Looking Glass

This little dude looks nifty as heck:

The Looking Glass is powered by our proprietary 45-element light field technology, generating 45 distinct and simultaneous perspectives of three-dimensional content of any sort.

This means multiple people around a Looking Glass are shown different perspectives of that three-dimensional content (whether that's a 3D animation, DICOM medical imaging data, or a Unity project) in super-stereoscopic 3D, in the real world without any VR or AR headgear.
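
Nerd aside: displays like this are typically fed a "quilt," i.e. a grid of views rendered from virtual cameras swept across a horizontal baseline. Here's a minimal Python sketch of the idea; the 5×9 layout, the resolutions, and the render_view() stub are my own illustrative assumptions, not Looking Glass's actual API:

```python
import numpy as np

# Illustrative sketch: tile 45 views (rendered from cameras swept across a
# horizontal baseline) into a single "quilt" texture. The 5x9 layout and the
# render_view() stub are assumptions for illustration, not Looking Glass's API.

COLS, ROWS = 5, 9            # 45 views total
VIEW_W, VIEW_H = 320, 240    # per-view resolution (arbitrary for the sketch)

def render_view(camera_x: float) -> np.ndarray:
    """Stand-in for a real renderer: returns an RGB frame for a camera
    offset horizontally by camera_x along the baseline."""
    frame = np.zeros((VIEW_H, VIEW_W, 3), dtype=np.uint8)
    frame[..., 0] = int(127 + 128 * camera_x)  # fake parallax cue
    return frame

def build_quilt() -> np.ndarray:
    quilt = np.zeros((ROWS * VIEW_H, COLS * VIEW_W, 3), dtype=np.uint8)
    for i in range(ROWS * COLS):
        # Sweep the camera from -0.5 to +0.5 across the baseline.
        x = i / (ROWS * COLS - 1) - 0.5
        r, c = divmod(i, COLS)
        quilt[r*VIEW_H:(r+1)*VIEW_H, c*VIEW_W:(c+1)*VIEW_W] = render_view(x)
    return quilt

quilt = build_quilt()
print(quilt.shape)  # (2160, 1600, 3)
```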

[Vimeo]

AR: The floor is lava

Back in 2014, Action Movie Dad posted a delightful vid of his niño evading the hot foot:

But now instead of needing an hour-long tutorial on how to create this effect, you can do it in realtime, with zero effort, on your friggin’ telephone. (Old Man Nack does wonder just how much this cheapens the VFX coin—but on charges progress.)

https://twitter.com/tomemrich/status/1230535407609057281

[YouTube]

Typography: The super-fraught history of Blackletter/Fraktur

Man, who knew just how much cultural identity could be wrapped up in a style of printing?

This excellent 99% Invisible episode covers the origins of blackletter printing (faster & more reliable for medieval scribes), the culture wars (from Luther to Napoleon) in which it battled Roman faces, its association with (and revilement by!) Nazis, and more.

Bonus: stick around for a discussion of revanchist, Trumpian mandates around government architecture, featuring that delightful term of art, CHUD. *chef’s kiss*

Happy 30th birthday, Photoshop!

Good lord… seems like I was just posting about the 25th anniversary (see fun links from then), and before that the 20th (do I even dare to listen to whatever presumable ear poison I recorded back then? (“A toast to Photoshop and 20 years of pain and pleasure! A toast to our guests and the spontaneous creation of a drinking game based on every time Bryan says “Scott Kelby” or John says “Configurator!””))… and yet here we are, old friends.

The team is marking the occasion by releasing new features for desktop & mobile versions of the app, including ML-powered object selection in both.

As for mobile, it sounds like things are going in the right direction. Per TechCrunch:

It’s no secret that the original iPad app wasn’t exactly a hit with users as it lacked a number of features Photoshop users wanted to see on mobile. Since then, the company made a few changes to the app and explained some of its decisions in greater detail. Today, Adobe notes, 50% of reviews give the app five stars and the app has been downloaded more than 1 million times since November.

Back in 2010 Russell Brown & friends got Photoshop 1.07 running on an iPhone; a decade later he’s showing the iPad version running machine learning:

[YouTube]

Cloaking device engaged: Going invisible via Google’s browser-based ML

Heh—here’s a super fun application of body tracking tech (see whole category here for previous news) that shows off how folks have been working to redefine what’s possible with realtime machine learning on the Web (!).
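
As I understand it, the trick behind demos like this is delightfully simple once a segmentation model hands you a per-pixel person mask: capture a clean plate of the empty scene, then paste background pixels wherever the model says "person." A rough numpy sketch of that compositing step, with the segmenter stubbed out as a placeholder:

```python
import numpy as np

# Illustrative compositing step for the "invisibility" effect: wherever a
# segmentation model marks a pixel as "person," substitute the pixel from a
# previously captured empty-scene background plate. segment_person() is a
# stand-in for a real model (e.g., a BodyPix-style segmenter), not a real API.

def segment_person(frame: np.ndarray) -> np.ndarray:
    """Stub: return a boolean HxW mask, True where a person is detected."""
    return np.zeros(frame.shape[:2], dtype=bool)  # no person in this stub

def cloak(frame: np.ndarray, background: np.ndarray) -> np.ndarray:
    mask = segment_person(frame)
    out = frame.copy()
    out[mask] = background[mask]  # replace person pixels with the clean plate
    return out

# Usage: capture `background` once while the scene is empty, then run cloak()
# on every live frame.
h, w = 480, 640
background = np.random.randint(0, 255, (h, w, 3), dtype=np.uint8)
frame = background.copy()
print(cloak(frame, background).shape)  # (480, 640, 3)
```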

“Améliorer!” is the new “Enhance!”

I’ve long heard that 19th-century audiences would faint or jump out of their seats upon seeing gripping, O.G. content like “Train Enters Station.” If that’s true, imagine the blown minds that would result from this upgraded footage. Colossal writes,

Shiryaev first used Topaz Labs’ Gigapixel AI to upgrade the film’s resolution to 4K, followed by Google’s DAIN, which he used to create and add frames to the original file, bringing it to 60 frames per second.
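
Shiryaev’s actual tools are Gigapixel AI and DAIN, but you can approximate the shape of the pipeline (upscale first, then synthesize in-between frames) with plain ffmpeg; in the sketch below, Lanczos scaling and the motion-compensated minterpolate filter are my stand-ins, and the file names are placeholders:

```python
import subprocess

# Rough stand-in for the two-stage pipeline described above. Shiryaev used
# Gigapixel AI (upscaling) and DAIN (frame synthesis); here ffmpeg's Lanczos
# scaler and motion-compensated minterpolate filter illustrate the same idea.
# Requires ffmpeg on PATH; "train.mp4" etc. are placeholder file names.

def upscale_to_4k(src: str, dst: str) -> None:
    subprocess.run([
        "ffmpeg", "-i", src,
        "-vf", "scale=3840:-2:flags=lanczos",  # 4K width, keep aspect ratio
        dst,
    ], check=True)

def interpolate_to_60fps(src: str, dst: str) -> None:
    subprocess.run([
        "ffmpeg", "-i", src,
        "-vf", "minterpolate=fps=60:mi_mode=mci",  # motion-compensated interpolation
        dst,
    ], check=True)

upscale_to_4k("train.mp4", "train_4k.mp4")
interpolate_to_60fps("train_4k.mp4", "train_4k_60fps.mp4")
```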

Check out the original…

…and the enhanced version:

Update: Conceptually related:

[YouTube]

Google releases AutoFlip, an open-source framework for intelligent video reframing

I’m pleased to say researchers have built on my team’s open-source MediaPipe framework to create AutoFlip, helping make video look good across a range of screens.

Taking a video (casually shot or professionally edited) and a target dimension (landscape, square, portrait, etc.) as inputs, AutoFlip analyzes the video content, develops optimal tracking and cropping strategies, and produces an output video with the same duration in the desired aspect ratio.

Check out the team’s post for more details. I don’t know how the results compare to Adobe’s Auto Reframe tech, but hopefully this release will help more people collaborate in delivering great tools.
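
I haven’t read AutoFlip’s code, but the description above (track the salient content, then crop smoothly to the target aspect ratio) reduces to an idea you can sketch in a few lines. The toy below is my own illustration of that concept, not AutoFlip’s implementation:

```python
import numpy as np

# Toy version of the reframing idea: given the horizontal center of the
# salient content in each frame (e.g., from a face/object tracker), smooth
# that trajectory and cut a fixed-aspect crop window around it.

def reframe_centers(centers: np.ndarray, crop_w: int, frame_w: int,
                    smooth: int = 15) -> np.ndarray:
    """Return the left edge of a crop_w-wide window for every frame."""
    kernel = np.ones(smooth) / smooth
    smoothed = np.convolve(centers, kernel, mode="same")  # damp tracking jitter
    left = smoothed - crop_w / 2
    return np.clip(left, 0, frame_w - crop_w).astype(int)

# Example: a 1920x1080 landscape source reframed to a portrait strip
# (607 px wide, roughly 9:16 against 1080 tall), subject drifting rightward.
frame_w, frame_h = 1920, 1080
crop_w = int(frame_h * 9 / 16)
centers = np.linspace(400, 1500, 300)  # tracked subject center per frame
lefts = reframe_centers(centers, crop_w, frame_w)
print(lefts[:5], lefts[-1])
```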

The charming story of “Red Monster”

I’m a longtime superfan of (now longtime!) Adobe designer Dave Werner. Just as the one good thing to come of my first two futile years at Adobe was meeting my wife, the clear good that came from my final project was hiring Dave. Our project mostly crashed & burned, but Dave hung in there & smartly made his way onto Character Animator. Here he provides a delightful inside look at how the app’s little red mascot came to be.

[YouTube]

AuraRing, a trippy ring + wristband combo gesture system

Hmm—AR glasses + smart watch (or Fitbit) + ring? 🧐

VentureBeat writes,

[A] finger could be used to write legibly in the air without a touch surface, as well as providing input taps, flick gestures, and potentially pinches that could control a screened device from afar. Thanks to the magnetic sensing implementation, researchers suggest that even a visually obscured finger could be used to send text messages, interact with device UIs, and play games. Moreover, AuraRing has been designed to work on multiple finger and hand sizes.
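
For the curious, here’s how the magnetic-sensing side of something like this can work in principle: treat the ring as a magnetic dipole source, measure its field at a few known sensor positions on the wristband, then solve the inverse problem for the source position. The toy below does exactly that on simulated readings; the geometry, dipole moment, and solver are my assumptions, not the AuraRing paper’s method:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy inverse problem in the spirit of magnetic tracking: given field readings
# at known sensor positions, recover the position of a magnetic dipole source.
# Illustrates the general technique only, not the AuraRing paper's solver.

MOMENT = np.array([0.0, 0.0, 1e-3])  # assumed known dipole moment (A*m^2)

def dipole_field(pos: np.ndarray, sensor: np.ndarray) -> np.ndarray:
    """Field of a point dipole at `pos`, measured at `sensor` (SI units)."""
    r = sensor - pos
    d = np.linalg.norm(r)
    rhat = r / d
    return 1e-7 * (3 * rhat * np.dot(MOMENT, rhat) - MOMENT) / d**3  # mu0/4pi = 1e-7

sensors = np.array([[0.0, 0, 0], [0.03, 0, 0], [0, 0.03, 0]])  # wristband-ish layout (m)
true_pos = np.array([0.02, 0.05, 0.01])                        # "finger" position (m)
readings = np.array([dipole_field(true_pos, s) for s in sensors])

def residuals(pos: np.ndarray) -> np.ndarray:
    return np.concatenate([dipole_field(pos, s) - b
                           for s, b in zip(sensors, readings)])

fit = least_squares(residuals, x0=np.array([0.0, 0.03, 0.0]))
print(fit.x)  # should land near true_pos
```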

[YouTube] [Via]