AR & AI help blind users navigate space & perceive emotions

I love assistive superpowers like this work from Caltech:

VRScout has the details:

[T]he team used the Microsoft HoloLens’s capability to create a digital mesh over a “scene” of the real-world. Using unique software called Cognitive Augmented Reality Assistant (CARA), they were able to convert information into audio messages, giving each object a “voice” that you would hear while wearing the headset. […]

If the object is at the left, the voice will come from the left side of the AR headset, while any object on the right will speak out to you from the right side of the headset. The pitch of the voice will change depending on how far you are from the object.
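The mapping described above — pan from the object's direction, pitch from its distance — can be sketched in a few lines. This is a hypothetical illustration, not CARA's actual code; the coordinate convention, pitch curve, and parameter names are all assumptions.

```python
import math

def voice_params(obj_x, obj_z, base_pitch_hz=220.0, max_range_m=10.0):
    """Map an object's position (meters; listener at origin, +x = right,
    +z = forward) to a stereo pan and a voice pitch.

    Pan: -1.0 = fully left, +1.0 = fully right, taken from the azimuth.
    Pitch: rises as the object gets closer (an assumed mapping; the
    article only says pitch changes with distance).
    """
    azimuth = math.atan2(obj_x, obj_z)            # radians; 0 = straight ahead
    pan = max(-1.0, min(1.0, azimuth / (math.pi / 2)))
    dist = math.hypot(obj_x, obj_z)
    closeness = max(0.0, 1.0 - min(dist, max_range_m) / max_range_m)
    pitch = base_pitch_hz * (1.0 + closeness)     # up to an octave higher up close
    return pan, pitch
```

An object one meter ahead and to the left yields a negative pan (voice in the left ear) and a higher pitch than the same object nine meters away.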


Meanwhile, Huawei is using AI to help visually impaired users “hear” facial expressions:

Facing Emotions taps the Mate 20 Pro’s back cameras to scan the faces of conversation partners, identifying facial features like eyes, nose, brows, and mouth, and their positions in relation to each other. An offline, on-device machine learning algorithm interprets the detected emotions as sounds, which the app plays on the handset’s loudspeaker.
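The final stage of that pipeline — turning a detected emotion into a sound — amounts to a lookup from classifier labels to distinctive tones. A minimal sketch, assuming illustrative labels and frequencies (these are not Huawei's actual values or API):

```python
# Hypothetical emotion-to-tone table; the labels and frequencies are
# placeholders, not the sounds Facing Emotions actually plays.
EMOTION_TONES_HZ = {
    "happy": 880.0,
    "sad": 330.0,
    "angry": 220.0,
    "surprised": 660.0,
    "neutral": 440.0,
}

def tone_for_emotion(label: str) -> float:
    """Return the tone frequency for a detected emotion label,
    falling back to the neutral tone for unknown labels."""
    return EMOTION_TONES_HZ.get(label, EMOTION_TONES_HZ["neutral"])
```

Running everything offline on the handset, as the quote notes, means conversation partners' faces never leave the device.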


[YouTube] [Via Helen Papagiannis]
