I kicked off this blog by exploring the definitions of augmented reality. It turns out there is no single, universally accepted definition. While most definitions revolve around visual input, I make a point of recognizing one that encompasses other sensory input. Lara Jongedijk, a researcher in instructional design at the University of Calgary, states, “Augmented reality (AR) is an environment where a real life is enhanced by virtual elements in real time. The purpose of AR is to enhance the information we naturally receive through our five senses, by adding superimposed, constructed virtual elements to bring complementary information and meaning that may not be possible by natural means.” This came to mind recently when I read about a Kickstarter campaign currently underway for a product called Gest.

Gest is a gesture controller touted as “a new way of working with your computer” that “understands your hand and finger positions with a high degree of precision”. It is intended to replace many of the things we currently do with a mouse and keyboard. As we continue to evolve away from the PC as our primary means of digital interaction, it is clear that devices such as this will play an important role in that transformation. Gesture control is considered an integral component of both the augmented and virtual reality experience (which I wrote about in April of 2015), and Gest addresses these use cases explicitly in its campaign write-up.


Gest works by integrating 15 low-latency motion sensors on each hand that allow the software to build an accurate real-time model of the position and orientation of the hand and each finger (the thumb position is inferred). The software instantiates a statistical model for each user to learn how they perform common gestures like pointing, swiping, flicking, and grabbing, which hones the accuracy of recognition over time. The first use case they are enabling is Photoshop interaction, but their SDK will open up myriad possibilities for other integrations.
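
To make that recognition step concrete, here is a minimal sketch of how a per-user statistical gesture model might work, assuming each reading arrives as a fixed-length vector of values from the 15 sensors on one hand. The feature layout, gesture names, and nearest-template matching scheme are my own illustrative assumptions, not Gest's actual SDK.

```python
# A minimal sketch of per-user gesture classification from Gest-style sensor
# readings. Sensor layout, features, and gestures are illustrative assumptions.
from statistics import mean
from math import dist

class GestureModel:
    def __init__(self):
        self.templates = {}  # gesture name -> list of recorded feature vectors

    def record_example(self, gesture, reading):
        """Store a labelled reading so the model adapts to this user over time."""
        self.templates.setdefault(gesture, []).append(reading)

    def classify(self, reading):
        """Return the gesture whose average template is closest to the reading."""
        best_gesture, best_distance = "unknown", float("inf")
        for gesture, examples in self.templates.items():
            centroid = [mean(vals) for vals in zip(*examples)]
            d = dist(reading, centroid)
            if d < best_distance:
                best_gesture, best_distance = gesture, d
        return best_gesture

model = GestureModel()
model.record_example("swipe", [0.9, 0.1, 0.0] + [0.0] * 12)   # 15 sensor values
model.record_example("grab",  [0.1, 0.8, 0.7] + [0.0] * 12)
print(model.classify([0.85, 0.15, 0.05] + [0.0] * 12))        # -> "swipe"
```

A production model would use far richer features and a trained classifier, but the adapt-over-time idea is the same: every labelled example nudges the per-user templates.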

I envision a higher purpose for Gest. Wearing Gest on both hands could theoretically enable people who are nonverbal or verbally impaired (as deaf people often are) to use sign language to communicate with non-signers. Mike Fister, Gest’s CEO, confirmed to me that though this is not a space they are focused on developing, their SDK could be used by independent developers to write an app that converts sign language movements into words and sentences. After all, signing can be thought of as nothing more than a series of gestures. The app would then play back those words in real time using a voice synthesizer (think Stephen Hawking) through a wearable Bluetooth speaker on the wrist. Imagine the barriers this would eliminate for this population throughout the world.
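
As a thought experiment, the sign-to-speech pipeline could be as simple as the sketch below: a recognizer (such as one an independent developer might build on the Gest SDK) emits signs, a buffer groups them into phrases, and a synthesizer speaks them. The recognizer and speaker here are hypothetical stand-ins, not part of any shipping SDK.

```python
# A minimal sketch of a sign-to-speech pipeline. The sign stream, phrase
# boundary rule, and speaker are hypothetical placeholders.
import time

def recognize_signs():
    """Stand-in for a stream of signs decoded from hand gestures."""
    for sign in ["HELLO", "MY", "NAME", "IS", "ALEX"]:
        yield sign
        time.sleep(0.1)  # simulate real-time arrival

def speak(text):
    """Stand-in for a voice synthesizer routed to a Bluetooth wrist speaker."""
    print(f"[speaker] {text}")

def sign_to_speech(sign_stream, phrase_length=3):
    """Buffer recognized signs into a phrase, then speak it aloud."""
    phrase = []
    for sign in sign_stream:
        phrase.append(sign.lower())
        if len(phrase) >= phrase_length:  # crude phrase boundary
            speak(" ".join(phrase))
            phrase.clear()
    if phrase:
        speak(" ".join(phrase))

sign_to_speech(recognize_signs())
```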

This is surely not what people envision when they think about AR, but it does suit Jongedijk’s sensibility of enhancing the information we receive through our senses by adding virtual elements to bring meaning that may not be possible by natural means. This is not the first attempt to use technology to translate sign language. Researchers have been working on this problem for years using motion sensing and computer vision approaches. Based on the papers I have reviewed, that work is still in the lab and not yet ready to be released into the wild. Certainly it has not been approached in the context of augmented reality. But other assistive technologies have.

Babelfisk is a concept by Danish designer Mads Sukhdev Hindhede to aid the deaf. The idea is essentially smart glasses that present text in the user’s line of sight. Two stereoscopic microphones affixed to the sides of the frames capture speech, and voice recognition software displays it as text inside the lenses via waveguide projection. The visualization also lets users sense where sound is coming from, adding a layer of spatial orientation to the listening experience. Though Babelfisk has not been built, the technology exists today to do so.
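
A rough sketch of that flow appears below, with hypothetical placeholders for the speech recognizer and the in-lens display: speech is transcribed, the delay between the two microphones gives a coarse left/right cue, and both are rendered as a caption.

```python
# A minimal sketch of the Babelfisk concept as described: two microphones feed
# speech recognition, and the inter-mic delay gives a rough direction cue.
# The recognizer and display calls are hypothetical placeholders.

def transcribe(audio_chunk):
    """Stand-in for a speech-to-text engine."""
    return "the train on platform two is delayed"

def direction_from_delay(delay_ms):
    """Crude localization: positive delay means the left mic heard it first."""
    if abs(delay_ms) < 0.1:
        return "ahead"
    return "left" if delay_ms > 0 else "right"

def show_caption(text, direction):
    """Stand-in for waveguide projection inside the lenses."""
    print(f"[{direction:>5}] {text}")

# Simulated input: an audio chunk plus the measured inter-mic delay.
audio, delay_ms = b"...", 0.4
show_caption(transcribe(audio), direction_from_delay(delay_ms))
```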


The blind can also benefit from AR. Intel has integrated its RealSense 3D camera technology into customized clothing fitted with a computing module that connects wirelessly to eight thumb-sized vibrating sensors: three across the chest, three across the torso, and two near the ankles (one on each leg). The system is able to ‘see’ objects within a couple of yards of the user, and it can tell the user approximately where the object is located: high, low, left or right, and whether the object is getting closer or moving away. When the wearer is walking and approaches an object, such as a wall or another person, the sensor boxes vibrate, and the vibrations intensify the closer the wearer gets to the object.
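
The mapping from depth readings to vibration is straightforward to sketch. The range, sensor names, and coarse direction buckets below are my own illustrative assumptions, not Intel's implementation.

```python
# A minimal sketch of mapping depth readings to haptic feedback: nearer
# obstacles produce stronger vibration, and the obstacle's rough position
# picks which sensor fires. Ranges and sensor names are illustrative.

MAX_RANGE_M = 2.0  # roughly "a couple of yards"

def vibration_strength(distance_m):
    """0.0 (out of range) .. 1.0 (touching), increasing as the object nears."""
    if distance_m >= MAX_RANGE_M:
        return 0.0
    return 1.0 - (distance_m / MAX_RANGE_M)

def pick_sensor(horizontal, vertical):
    """Map a coarse direction to one of the wearable's vibration motors."""
    if vertical == "low":
        return f"ankle_{horizontal}"
    return f"{'chest' if vertical == 'high' else 'torso'}_{horizontal}"

# Example: a wall 0.5 m away, slightly to the left at chest height.
print(pick_sensor("left", "high"), round(vibration_strength(0.5), 2))  # chest_left 0.75
```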


Most people who are classified as legally blind actually retain some vision, but may not be able to pick out faces and obstacles, particularly in low light. Stephen Hicks, a neuroscience and visual prosthetics research fellow at the University of Oxford, believes that his device, called SmartSpecs, could make it easier for some people with sight impairment to explore their surroundings. He is the cofounder of VA-ST, a startup building glasses that use a depth sensor and software to highlight the outlines of nearby people and objects and simplify their features. The glasses have four modes that show the world around you in black, white, and gray with varying degrees of detail, as well as a regular color mode that can be used simply to zoom in on or pause objects.
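
The core idea, keep only what is near and render it with high contrast, can be sketched with a toy depth map. Real SmartSpecs use a depth camera and proper image processing; this only illustrates the thresholding concept, and the frame data and threshold are invented for the example.

```python
# A minimal sketch of the "highlight what is near" idea: pixels closer than a
# threshold become bright blocks, everything else is dimmed. A tiny grid of
# depths (in metres) stands in for a real depth frame.

NEAR_THRESHOLD_M = 1.5

depth_frame = [            # a person stands just right of centre
    [3.0, 3.0, 1.2, 1.1, 3.0],
    [3.0, 3.0, 1.2, 1.1, 3.0],
    [3.0, 3.0, 1.3, 1.2, 3.0],
]

def simplify(frame, threshold=NEAR_THRESHOLD_M):
    """Keep only nearby structure: near pixels become bright, the rest dark."""
    return [["#" if d < threshold else "." for d in row] for row in frame]

for row in simplify(depth_frame):
    print("".join(row))
# ..##.
# ..##.
# ..##.
```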


Prosopagnosia, also called face blindness, is a cognitive disorder of face perception in which the ability to recognize familiar faces is impaired while other aspects of visual processing and intellectual functioning remain intact. Alzheimer’s sufferers and some people who have undergone traumatic brain injury also struggle to recognize even the most familiar people in their lives. Facial recognition technology can be integrated into an augmented reality system to prompt users with the names and identities of those people, helping them better relate to the world around them. The technology exists today to build a personalized database of photos of the people in their lives, such as family, friends and caregivers, that contains information about the features of their faces. A camera built into the AR headset searches for faces it encounters and compares them to the database. When a match is found, the person’s name and relationship is displayed inside the lenses, allowing the wearer to greet and interact with that person accordingly.
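
A minimal sketch of that prompting flow appears below. The face encodings, the distance threshold, and the matching function are hypothetical placeholders for a real face recognition pipeline; only the database-lookup-and-caption structure is being illustrated.

```python
# A minimal sketch of the prompting flow: a small database of known faces, a
# nearest-match comparison, and a caption shown inside the lenses. Encodings
# and the threshold are invented placeholders.
from math import dist

KNOWN_FACES = {
    # name: (relationship, face encoding produced ahead of time from photos)
    "Maria": ("daughter",  [0.12, 0.80, 0.33]),
    "David": ("caregiver", [0.55, 0.10, 0.90]),
}

MATCH_THRESHOLD = 0.25  # maximum distance still considered the same person

def identify(encoding):
    """Return (name, relationship) of the closest known face, or None."""
    best = min(KNOWN_FACES.items(), key=lambda kv: dist(encoding, kv[1][1]))
    name, (relationship, known_encoding) = best
    if dist(encoding, known_encoding) <= MATCH_THRESHOLD:
        return name, relationship
    return None

def show_in_lenses(match):
    """Stand-in for the in-lens caption."""
    print(f"{match[0]} ({match[1]})" if match else "")

# Simulated encoding from the headset camera, close to Maria's stored encoding.
show_in_lenses(identify([0.14, 0.78, 0.35]))  # Maria (daughter)
```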

This small sampling of assistive AR technology has been directed at the sensory-challenged population, but there are also assistive AR technologies being developed for the general population that address limitations of human capability. One example is language translation. The new version of the Google Translate app can act as a real-time translator between two people speaking different languages. To use the real-time speech translation in a conversational setting, users need only open the app and press the microphone button. If speech in a foreign language is detected first, it is immediately translated into the user’s native language, spoken aloud and displayed on-screen. The user can then press the microphone button again to respond in their native tongue and have it immediately translated into the foreign language. From that point forward, the app recognizes both languages as they are spoken and translates each phrase as it is uttered, allowing for a seamless conversation with a smartphone or tablet acting as interpreter. The app also has a Word Lens function that lets users train their device’s camera on foreign text, such as a street sign, and get an instant translation on-screen. This technology could easily be adapted to use the camera built into smart glasses and have the translation appear inside the lenses (or be spoken through earphones) for a more natural eye-to-eye interaction.
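
The conversation loop is easy to picture in code. Google has not published the app's internals, so the detection, translation, and output functions below are stand-ins that only mirror the behavior described: detect the language of each utterance, translate it, then display and speak the result.

```python
# A minimal sketch of a two-way conversation loop in the spirit of the app
# described above. Detection, translation, and output are hypothetical stubs.

NATIVE, FOREIGN = "en", "es"

def detect_language(utterance):
    """Stand-in: guess the language of the spoken phrase."""
    return "es" if utterance.startswith("¿") or "á" in utterance else "en"

def translate(text, source, target):
    """Stand-in for a translation service."""
    samples = {("es", "en"): "Where is the train station?",
               ("en", "es"): "Dos calles más adelante."}
    return samples.get((source, target), text)

def converse(utterance):
    source = detect_language(utterance)
    target = NATIVE if source == FOREIGN else FOREIGN
    translated = translate(utterance, source, target)
    print(f"[screen]  {translated}")   # shown on-screen, or inside smart-glass lenses
    print(f"[speaker] {translated}")   # spoken aloud by the device

converse("¿Dónde está la estación de tren?")
converse("Two blocks ahead.")
```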

The future of assistive AR technologies is to make them disappear. By disappear, I mean that the sensors, displays, speakers and even the mobile devices that tie them together must be miniaturized and integrated into everyday clothing, eyeglasses and jewelry so that they are as unobtrusive as our own sensory organs. Gest, for example, must lose the cyborgian wires and straps and be reduced to a set of jewelry-like rings that are worn throughout the day and invoked when needed. A person who constantly depends on such a system to communicate with the outside world might go so far as to have tiny sensors implanted directly into their fingers and hands.

Augmented reality has the potential to change lives for the better. The technology is here. The hard part is weaving it all together then weaving it into our lives in such a way that it is seamless and adaptive to our needs. This future is coming and it’s coming soon.
