I’ve long joked-not-joked that I want better parental controls on devices, not so that I can control my kids but so that I can help my parents. How great would it be to be able to configure something like this, then push it to the devices of those who need it (parents, kids, etc.)?
This little dude looks nifty as heck:
The Looking Glass is powered by our proprietary 45-element light field technology, generating 45 distinct and simultaneous perspectives of three-dimensional content of any sort.
This means multiple people around a Looking Glass are shown different perspectives of that three-dimensional content—whether that’s a 3D animation, DICOM medical imaging data, or a Unity project—in super-stereoscopic 3D, in the real world, without any VR or AR headgear.
Crafty Rube Goldberg-ing for social good (making tech more accessible):
Control your Mac using head movements. Rotate your head to move the cursor and make facial expressions to click, drag, and scroll. Powered by your iPhone’s TrueDepth camera.
Hmm—AR glasses + smart watch (or FitBit) + ring? 🧐
[A] finger could be used to write legibly in the air without a touch surface, as well as providing input taps, flick gestures, and potentially pinches that could control a screened device from afar. Thanks to the magnetic sensing implementation, researchers suggest that even a visually obscured finger could be used to send text messages, interact with device UIs, and play games. Moreover, AuraRing has been designed to work on multiple finger and hand sizes.
I love seeing mom & dad getting along 😌, especially in a notoriously hard-to-solve area where I spent years trying to improve Photoshop & other tools:
Flutter is Google’s UI toolkit for developers to create native applications for mobile, web, and desktop, all from a single codebase. […]
XD to Flutter simplifies the designer-to-developer workflow for teams that build with Flutter; it removes guesswork and discrepancies between a user experience design and the final software product.
The plugin generates Dart code for design elements in XD that can be placed directly into your application’s codebase.
Mark Coleran is a mograph O.G. whose “Fantasy User Interface” (“FUI”) work for movies I used to write about a lot back at Adobe. It was fun listening to him & other designers share a peek into this unique genre of visual storytelling via Adobe’s great Wireframe podcast. I think you’ll enjoy it:
Researchers from MIT Media Lab and Adobe Research recently introduced a real-time interactive augmented video system that enables presenters to use their bodies as storytelling tools by linking gestures to illustrative virtual graphic elements. […]
The speaker, positioned in front of an augmented reality mirror monitor, uses gestures to produce and manipulate the pre-programmed graphical elements.
Will presenters go for it? Will students find it valuable? I have no idea—but props to anyone willing to push some boundaries.
I’ve gotta give this new capability a shot:
To assign a reminder, ask your Assistant, “Hey Google, remind Greg to take out the trash at 8pm.” Greg will get a notification on his Assistant-enabled Smart Display, speaker, and phone when the reminder is created, so that it’s on his radar. Greg will get notified again at the exact time you asked your Assistant to remind him. You can even quickly see which reminders you’ve assigned to Greg, simply by saying, “Hey Google, what are my reminders for Greg?”