Category Archives: User Interface

Check out the gesture-sensing holographic Looking Glass

This little dude looks nifty as heck:

The Looking Glass is powered by our proprietary 45-element light field technology, generating 45 distinct and simultaneous perspectives of three-dimensional content of any sort.

This means multiple people around a Looking Glass are shown different perspectives of that three-dimensional content (whether that’s a 3D animation, DICOM medical imaging data, or a Unity project) in super-stereoscopic 3D, in the real world without any VR or AR headgear.
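For the curious, here’s a rough mental model of what “45 distinct perspectives” means in practice: render the same scene from 45 camera positions swept across a horizontal view cone, one view per position. A minimal Python sketch; the ~40° cone angle and camera distance are made-up parameters, not published Looking Glass specs:

```python
import math

# Toy illustration of a 45-view light field render loop: each view is the
# same scene rendered from a slightly different horizontal camera angle,
# so viewers at different positions see different perspectives.
# The ~40 degree view cone and camera distance are assumed values, not specs.

VIEW_COUNT = 45
VIEW_CONE_DEG = 40.0
CAMERA_DISTANCE = 1.0  # distance from camera to the focal plane

def view_camera_offsets(view_count=VIEW_COUNT, cone_deg=VIEW_CONE_DEG):
    """Horizontal camera x-offsets, one per rendered view."""
    offsets = []
    for i in range(view_count):
        # Sweep angles evenly from -cone/2 to +cone/2 across the views.
        frac = i / (view_count - 1)  # 0.0 .. 1.0
        angle = math.radians((frac - 0.5) * cone_deg)
        offsets.append(CAMERA_DISTANCE * math.tan(angle))
    return offsets

if __name__ == "__main__":
    xs = view_camera_offsets()
    print(f"{len(xs)} views, leftmost x={xs[0]:.3f}, rightmost x={xs[-1]:.3f}")
```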


[Vimeo]

AuraRing, a trippy ring + wristband combo gesture system

Hmm—AR glasses + smart watch (or FitBit) + ring? 🧐

VentureBeat writes,

[A] finger could be used to write legibly in the air without a touch surface, as well as providing input taps, flick gestures, and potentially pinches that could control a screened device from afar. Thanks to the magnetic sensing implementation, researchers suggest that even a visually obscured finger could be used to send text messages, interact with device UIs, and play games. Moreover, AuraRing has been designed to work on multiple finger and hand sizes.
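If you’re wondering how a magnetically sensed ring can report finger position at all, the rough idea is that the ring’s coil behaves like a small magnetic dipole, and the wristband’s sensors solve for where that dipole must be given the field they measure. Here’s a toy Python sketch of that inverse problem; the sensor layout, coil moment, and brute-force grid search are my assumptions for illustration, not AuraRing’s actual method:

```python
import numpy as np

# Toy sketch of magnetic position sensing in the spirit of AuraRing: treat
# the ring's coil as a point magnetic dipole and let fixed wristband
# sensors recover its position from the field they measure. The sensor
# layout, coil moment, and grid search below are all assumptions for
# illustration, not details of the actual AuraRing implementation.

MU0_4PI = 1e-7  # mu_0 / (4*pi), in T*m/A

def dipole_field(sensor_pos, dipole_pos, moment):
    """Field of a point magnetic dipole, measured at sensor_pos."""
    r = sensor_pos - dipole_pos
    dist = np.linalg.norm(r)
    if dist < 1e-6:  # avoid the singularity at the dipole itself
        return np.full(3, np.inf)
    r_hat = r / dist
    return MU0_4PI * (3 * r_hat * np.dot(moment, r_hat) - moment) / dist**3

# Three hypothetical wristband sensors (meters, wrist-centered frame).
SENSORS = [np.array(p) for p in [(0.02, 0, 0), (-0.02, 0, 0), (0, 0.02, 0)]]
MOMENT = np.array([0.0, 0.0, 1e-3])  # assumed ring coil moment (A*m^2)

def locate(readings, span=0.12, step=0.01):
    """Grid-search the dipole position that best explains the readings."""
    best, best_err = None, np.inf
    grid = np.arange(-span, span + step / 2, step)
    for x in grid:
        for y in grid:
            for z in grid:
                pos = np.array([x, y, z])
                err = sum(np.sum((dipole_field(s, pos, MOMENT) - b) ** 2)
                          for s, b in zip(SENSORS, readings))
                if err < best_err:
                    best, best_err = pos, err
    return best

if __name__ == "__main__":
    true_pos = np.array([0.05, 0.03, 0.0])  # pretend fingertip location
    readings = [dipole_field(s, true_pos, MOMENT) for s in SENSORS]
    print("estimated:", locate(readings), "true:", true_pos)
```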

[YouTube] [Via]

Google & Adobe team up on XD -> Flutter

I love seeing mom & dad getting along 😌, especially in a notoriously hard-to-solve area where I spent years trying to improve Photoshop & other tools:

Flutter is Google’s UI toolkit for developers to create native applications for mobile, web, and desktop, all from a single codebase. […]

XD to Flutter simplifies the designer-to-developer workflow for teams that build with Flutter; it removes guesswork and discrepancies between a user experience design and the final software product.

The plugin generates Dart code for design elements in XD that can be placed directly into your application’s codebase.
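To make the design-to-code idea concrete, here’s a toy Python sketch that turns a hypothetical XD rectangle element into a Dart widget string. The element schema and the emitted code are invented for illustration; the real plugin’s formats and output will differ:

```python
# Toy illustration of the general idea behind design-to-code plugins like
# XD to Flutter: walk a design element and emit equivalent widget code.
# The element schema and the emitted Dart are invented for this sketch.

def rect_to_flutter(element: dict) -> str:
    """Emit a Dart Container for a hypothetical XD rectangle element."""
    fill = element["fill"].lstrip("#").upper()
    return (
        f"Positioned(\n"
        f"  left: {element['x']}, top: {element['y']},\n"
        f"  child: Container(\n"
        f"    width: {element['width']}, height: {element['height']},\n"
        f"    color: const Color(0xFF{fill}),\n"
        f"  ),\n"
        f")"
    )

if __name__ == "__main__":
    button = {"x": 24, "y": 48, "width": 160, "height": 44, "fill": "#4285F4"}
    print(rect_to_flutter(button))
```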

You can sign up for early access to the plugin here.


AR: Adobe & MIT team up on body tracking to power presentations

Fun, funky idea:

Researchers from MIT Media Lab and Adobe Research recently introduced a real-time interactive augmented video system that enables presenters to use their bodies as storytelling tools by linking gestures to illustrative virtual graphic elements. […]

The speaker, positioned in front of an augmented reality mirror monitor, uses gestures to produce and manipulate the pre-programmed graphical elements.
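The core mechanic is easy to picture: a body-tracking model emits gesture labels, and each label is bound to a pre-authored graphic element. A toy Python sketch of that dispatch, with the gesture names and graphics calls as invented placeholders:

```python
# Toy sketch of the gesture-to-graphics idea: a presenter's recognized
# gestures trigger pre-authored graphic elements. The gesture labels and
# drawing actions below are invented placeholders; the MIT/Adobe system
# does real-time body tracking, which is stubbed out here.

from typing import Callable, Dict

def show_chart():
    print("drawing bar chart next to presenter")

def emphasize_point():
    print("flashing highlight at fingertip")

def clear_canvas():
    print("removing all graphic elements")

# Pre-programmed mapping from gesture label to graphic action.
GESTURE_BINDINGS: Dict[str, Callable[[], None]] = {
    "raise_right_hand": show_chart,
    "point": emphasize_point,
    "sweep_both_arms": clear_canvas,
}

def on_gesture(label: str) -> None:
    """Dispatch a recognized gesture to its bound graphic element."""
    action = GESTURE_BINDINGS.get(label)
    if action:
        action()

if __name__ == "__main__":
    # Stand-in for a stream of labels from a body-tracking model.
    for label in ["raise_right_hand", "point", "sweep_both_arms"]:
        on_gesture(label)
```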

Will presenters go for it? Will students find it valuable? I have no idea—but props to anyone willing to push some boundaries.

Don’t nag your family. Make Google do it.

I’ve gotta give this new capability a shot:

To assign a reminder, ask your Assistant, “Hey Google, remind Greg to take out the trash at 8pm.” Greg will get a notification on his Assistant-enabled Smart Display, speaker, and phone when the reminder is created, so that it’s on his radar. Greg will get notified again at the exact time you asked your Assistant to remind him. You can even quickly see which reminders you’ve assigned to Greg, simply by saying, “Hey Google, what are my reminders for Greg?”
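Under the hood this presumably boils down to reminders with an assignee, plus notification fan-out at creation and at the due time. A toy Python model of that shape, with the names and notification text invented for illustration (this is not Assistant’s actual API):

```python
# Toy model of assignable reminders: one household member creates a
# reminder for another, who is notified at creation and again when it
# comes due; the creator can query the reminders they've assigned.
# Names and the notification printout are invented for this sketch.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Reminder:
    assignee: str
    task: str
    due: datetime

@dataclass
class Household:
    reminders: List[Reminder] = field(default_factory=list)

    def assign(self, assignee: str, task: str, due: datetime) -> None:
        self.reminders.append(Reminder(assignee, task, due))
        print(f"notify {assignee} on all devices: new reminder '{task}'")

    def reminders_for(self, assignee: str) -> List[Reminder]:
        """Answers 'what are my reminders for Greg?'"""
        return [r for r in self.reminders if r.assignee == assignee]

if __name__ == "__main__":
    home = Household()
    home.assign("Greg", "take out the trash", datetime(2020, 2, 1, 20, 0))
    for r in home.reminders_for("Greg"):
        print(f"{r.assignee}: {r.task} at {r.due:%I:%M %p}")
```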