Augmented reality head-mounted displays (HMDs) leave your hands free to be productive, but they also pose a new challenge: how do you interact with a face-bound form factor? Voice control is one means of interaction well suited to navigating menus and entering commands or text, but it is inefficient for many of the tasks we are accustomed to accomplishing with a mouse or touch screen. There are also situations where speaking commands aloud is socially awkward or simply not feasible.
There is another means for humans to interface with computers that will be integral to HMD use: gestures. Most of us have experienced gesture control through video game consoles. The Nintendo Wii uses a wireless handheld controller (the Wiimote) with micro-electromechanical (MEMS) motion sensing. Its accelerometer (and, with the MotionPlus attachment, a gyroscope) lets the user interact with and manipulate items on screen by recognizing the motion patterns the hand makes while holding it. A different type of gesture control system, known as Kinect, works with the Microsoft Xbox and uses computer vision technology. Kinect is a small device positioned above or below the video display; it contains a depth camera (structured light in the original, time-of-flight in the Xbox One version) and drives on-screen actions through 3D body motion capture.
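To make the Wiimote-style approach concrete, here is a minimal sketch of recognizing one motion pattern (a "shake") from a window of 3-axis accelerometer samples. The thresholds, sample values, and function names are all invented for illustration; real controllers use far more sophisticated pattern matching.

```python
import math

GRAVITY = 9.81                    # m/s^2; a resting controller reads ~1 g
SHAKE_THRESHOLD = 2.5 * GRAVITY   # magnitude that counts as a jolt (assumed)
MIN_JOLTS = 3                     # jolts in the window required to call it a shake

def magnitude(sample):
    """Euclidean magnitude of a 3-axis accelerometer sample (x, y, z)."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def is_shake(samples):
    """Return True if enough high-magnitude jolts appear in the window."""
    jolts = sum(1 for s in samples if magnitude(s) > SHAKE_THRESHOLD)
    return jolts >= MIN_JOLTS

# A resting controller shows only gravity; a shake produces large spikes.
resting = [(0.0, 0.0, 9.8)] * 10
shaking = [(0.0, 0.0, 9.8), (30.0, 5.0, 9.8), (-28.0, -4.0, 9.8),
           (31.0, 2.0, 9.8), (0.0, 0.0, 9.8)]
```

Even this toy version shows the core idea: gestures become decisions made over short windows of sensor data rather than single readings.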
These two types of gesture recognition systems (sometimes referred to as “natural user interfaces”) can be adapted for controlling augmented reality experienced through HMDs. There are two classes of gestures to consider for interfacing with the HMD computer: the first is navigating menus, analogous to the point-and-click of a mouse; the second is manipulating on-screen content, such as selecting, highlighting, scaling, rotating, and dragging. In this post I will compare the two types of gesture controllers capable of performing these gesture classes, then review the options available today for implementation.
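The two gesture classes above can be sketched as a simple routing table that maps each recognized gesture to its class and an action. Every gesture name and action below is hypothetical, chosen only to illustrate the navigation/manipulation split.

```python
# Route recognized gestures into the two classes discussed above.
NAVIGATION = "navigation"
MANIPULATION = "manipulation"

# Gesture names and actions are invented for illustration.
GESTURE_TABLE = {
    "air-tap":         (NAVIGATION,   "select-menu-item"),
    "swipe-left":      (NAVIGATION,   "previous-menu"),
    "pinch-drag":      (MANIPULATION, "drag-object"),
    "two-hand-spread": (MANIPULATION, "scale-object"),
    "wrist-twist":     (MANIPULATION, "rotate-object"),
}

def route(gesture):
    """Return (gesture_class, action) for a known gesture, or None."""
    return GESTURE_TABLE.get(gesture)
```

A table like this keeps the recognizer (which turns sensor data into gesture names) cleanly separated from the application logic that reacts to them.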