Augmented reality head-mounted displays (HMDs) leave your hands free to be productive, but they also pose a new challenge: how do you interact with this face-bound form factor? Voice control is one means of interaction, well suited to navigating menus and entering commands or text, but it is inefficient for many of the tasks we are accustomed to accomplishing with mice or touch screens. There are also situations where voice commands are socially awkward or simply not feasible.

There is another means for humans to interface with computers that will be integral to HMD use, and that is through gestures. Most of us have experienced gesture control using video game consoles. The Nintendo Wii uses a wireless handheld controller (the Wiimote) with microelectromechanical (MEMS) motion sensing; its accelerometers and gyroscope let the user interact with and manipulate items on screen by recognizing the motion patterns the hand makes while holding it. A different type of gesture control system, known as Kinect, is built into the Microsoft Xbox gaming system and uses computer vision technology. Kinect is a small device positioned above or below the video display that contains a depth-sensing camera system and drives onscreen action through 3D body motion capture.

These two types of gesture recognition systems (sometimes referred to as “natural user interfaces”) can be adapted for controlling augmented reality experienced through HMDs. There are two classes of gestures to consider for interfacing with the HMD computer: the first is menu navigation, analogous to the point-and-click of a mouse; the second is manipulation of on-screen content, such as selecting, highlighting, scaling, rotating and dragging. In this post I compare the two types of gesture controllers capable of performing these gesture classes, then review the options available today for implementation.

Motion Sensing Systems
Motion sensing systems come in several wearable form factors, including bracelets, armbands, rings, gloves and wands. Most devices employ tiny three-axis accelerometers. An accelerometer is an electromechanical device that measures acceleration forces via Newton’s second law of motion: acceleration = force / mass. Accelerometers are common in smartphones, where they switch the display from portrait to landscape mode when the device is rotated or enable gameplay controlled by subtle movements of the device. A typical accelerometer works like this:

[Figure: cross-section of a capacitive accelerometer]

  1. Electrode moves up or down when device is tilted
  2. Cantilevered beam flexibly holds electrode in place
  3. Electrical connection to chip circuit
  4. Second electrode allows for capacitance change to be measured when distance changes between them
  5. Third electrode allows for capacitance change to be measured for movement in opposite direction
  6. Terminals allow accelerometer to be integrated into device circuitry
    [ExplainThatStuff]
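
To make the capacitance-to-motion step concrete, here is a minimal sketch, in Python with made-up sample values, of how device software might turn a three-axis accelerometer reading into the orientation and tilt decisions described above. The function names and return labels are illustrative assumptions, not any vendor's actual code:

```python
import math

def orientation_from_accel(ax, ay, az):
    """Classify device orientation from a 3-axis accelerometer reading.

    ax, ay, az are accelerations in g. At rest the reading is dominated
    by gravity, so comparing the x and y components tells us which edge
    of the device points down.
    """
    if abs(ax) > abs(ay):
        return "landscape-left" if ax > 0 else "landscape-right"
    return "portrait" if ay > 0 else "portrait-upside-down"

def tilt_angles(ax, ay, az):
    """Estimate pitch and roll (in degrees) from the gravity vector."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(orientation_from_accel(0.9, 0.1, 0.4))   # landscape-left
print(tilt_angles(0.0, 0.5, 0.87))             # ~0 deg pitch, ~30 deg roll
```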

Motion sensing systems also commonly employ gyroscopes to complete the array of positioning information sent to the processor. A gyroscope measures rotation. The classic mechanical design consists of a freely rotating disk called a rotor, mounted on a spinning axis inside a larger, more stable wheel; conservation of angular momentum keeps the rotor’s orientation fixed while the frame around it turns, revealing how the device has rotated. (The tiny gyroscopes in wearables are MEMS devices that instead sense rotation through the Coriolis effect on a vibrating structure, but the purpose is the same.) Motion sensor devices transmit the telemetry from both types of sensors via radio (e.g., Bluetooth) to the HMD, where software interprets the motions in relation to what is presented on the user’s view screen in order to effect gestural interaction with that information in three dimensions.
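
How might HMD-side software combine the two sensor streams? A common textbook approach is a complementary filter: the gyroscope’s rotation rate is smooth but drifts over time, while the accelerometer’s gravity reference is noisy but drift-free. The single-axis sketch below illustrates the idea with made-up constants; it is not any particular device’s algorithm:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse one axis of gyro and accelerometer data into a tilt estimate.

    angle       -- previous angle estimate (degrees)
    gyro_rate   -- angular velocity reported by the gyroscope (deg/s)
    accel_angle -- tilt derived from the accelerometer's gravity vector
    dt          -- time since the last sample (seconds)
    alpha       -- trust placed in the gyro; the remaining (1 - alpha)
                   share of the accelerometer slowly corrects gyro drift
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Wrist held at a steady 10 degree tilt; the gyro wrongly reports a
# 1 deg/s drift. Sampled at 100 Hz, the estimate still homes in on ~10.
angle = 0.0
for _ in range(300):
    angle = complementary_filter(angle, gyro_rate=1.0, accel_angle=10.0, dt=0.01)
print(round(angle, 1))  # ~10.5: close to truth despite drift and a cold start
```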

The primary advantage that motion sensing gesture recognition devices have over computer vision systems is a larger library of possible gestures, enabled by the precision sensor array; this can include “air handwriting” recognition and the subtle nuances of sign languages. Furthermore, the gesturing itself can be less conspicuous, because it can be done with smaller movements made closer to the body, calling less attention to the user.

The disadvantage is that motion sensing devices are separate pieces of hardware that must be carried along. They have the additional burdens of needing their own power sources and of establishing and maintaining a radio connection to the HMD. Today’s devices are mostly made by third parties, and compatibility may be limited to select HMDs.


Computer Vision Systems

Depth-sensing time-of-flight systems suitable for integration into compact HMDs all work in a similar fashion. They consist of two or more camera sensors and one or more infrared laser or LED light projectors. The cameras track infrared light, which is outside the visible spectrum, and work in stereo to judge depth through parallax, much as human eyes do. The angle of the camera lenses defines the sensor system’s field of view, which widens with distance from the lenses; the viewing range, however, is limited by the output of the light source. The light projectors are the most energy-intensive component and need reach only about 2 feet (60 cm) in front of the HMD (arm’s length). Readings are taken hundreds of times per second, and the sensor data is buffered in the device’s own local memory before being streamed to the tracking software.
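
The stereo principle reduces to triangulation: an object’s apparent shift (disparity) between the two camera views is inversely proportional to its depth. A small sketch with illustrative numbers (the focal length, baseline and disparity values are assumptions):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: nearby objects shift more between the two
    camera views, so depth is inversely proportional to disparity."""
    return focal_px * baseline_m / disparity_px

def depth_error(depth_m, focal_px, baseline_m, disparity_err_px=0.25):
    """Depth uncertainty grows with the square of distance, which is one
    reason close-range hand tracking can be so precise."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Assumed: 700 px focal length, 40 mm camera baseline, 50 px disparity.
z = depth_from_disparity(700, 0.04, 50)
print(round(z, 2), "m")                                  # 0.56 m: arm's length
print(round(depth_error(z, 700, 0.04) * 1000, 1), "mm")  # ~2.8 mm uncertainty
```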

The magic happens in the software, where the heavy mathematical lifting is performed. After compensating for background objects and ambient lighting, the images are analyzed to reconstruct a 3D representation of what the device sees. The software compares these features to known skeletal models of fingers, hands and joints to identify them, and tracking algorithms infer the positions of occluded features. Filtering techniques are applied to keep the data temporally coherent, and the results are expressed as a series of frames. All of this runs as a service, organized as classes in a library that apps consume as an API to turn recognized gestures into interaction.
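
As a rough illustration of the filtering step, here is a minimal per-joint exponential smoothing pass over incoming frames. Production trackers use more sophisticated, velocity-adaptive filters, and the joint names and data layout here are invented for the example:

```python
class JointSmoother:
    """Frame-by-frame exponential smoothing of noisy 3D joint positions."""

    def __init__(self, smoothing=0.6):
        self.smoothing = smoothing  # 0 = raw samples, 1 = frozen output
        self.state = {}             # joint name -> last filtered (x, y, z)

    def update(self, frame):
        """frame: dict of joint name -> raw (x, y, z) from the tracker."""
        out = {}
        for joint, raw in frame.items():
            prev = self.state.get(joint, raw)
            out[joint] = tuple(
                self.smoothing * p + (1 - self.smoothing) * r
                for p, r in zip(prev, raw)
            )
        self.state = out
        return out

smoother = JointSmoother()
print(smoother.update({"index_tip": (0.100, 0.020, 0.350)}))
print(smoother.update({"index_tip": (0.120, 0.010, 0.340)}))  # jitter damped
```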

[Figure: Intel RealSense gesture set]
Intel RealSense gestures include 8 static poses and 6 dynamic gestures. Static poses are things like a thumbs-up or peace sign, while dynamic gestures might be a wave or a circular motion.

Computer vision based systems have several advantages. Unlike motion sensing peripherals, the depth-sensing time-of-flight components can be integrated directly into the HMD, which means there is only one device to carry, one battery to charge and no pairing between devices. The gesture recognition software is also integrated into the HMD’s operating system, which should make for a smoother, more efficient user experience because there is no interoperability overhead.

There are also disadvantages, related to the physical realities of hand gesturing. The sensors have a relatively narrow field of sight, which means gestures must be performed directly in front of the user, just below the line of sight. Such gesturing may be socially awkward in certain public situations, and holding one’s arms elevated in front of the body leads to fatigue after a period of time (the “gorilla arm” effect); motion-sensed gestures, in comparison, can potentially be performed more subtly. As the technology matures, it is likely that the sensors’ angle of view will improve and that recognition will allow for subtler gesturing. Building the sensors into the HMD also adds the bulk and weight of the sensors, circuitry and chips to the unit, with implications for both comfort and aesthetics. Consumer acceptance of HMDs may rely on their being somewhat attractive and as similar to normal eyeglasses as possible, and an integrated computer vision system detracts from that objective.

[Figure: The Meta Pro augmented reality glasses, slated for future release, have a built-in depth-sensing camera]

Not all augmented reality computer vision gesture control use cases require a head-mounted display, but HMDs are certainly where the industry’s current focus lies. The following considerations should be taken into account when evaluating such a system’s viability for mobile AR:

  • Dimensions: Must be small enough to be integrated into or mounted onto a head mounted display
  • Weight: Must be light enough to keep the HMD from being awkwardly or uncomfortably weighted.
  • Field of View (FOV): Also known as angle of view, FOV can be thought of as an invisible cone-shaped field in front of the sensor within which gestures must be performed. It is measured in degrees at a specific distance. It should be wide enough that the user can gesticulate comfortably without having to hold their hands so high as to block their view of the scene (see the sketch after this list).
  • Minimum Vision Distance: This is effectively determined by the FOV angle of the camera sensors: hand gestures made too close to the sensor cannot fully fit into the camera’s frame, so a wider FOV makes for a more versatile sensing experience. Parallax comes into play as well in systems that use multiple cameras and/or light sources; with less parallax, depth is more difficult to sense and the readings are less accurate.
  • Maximum Vision Distance: This is determined by the strength of the infrared light source. A range much beyond arm’s length (about 1 meter) would only place undue stress on battery longevity.
  • Precision: The greater the precision of depth measurement, the more consistently the system can interpret gestures. A good sensor will be rated as low as 0.5 mm error at 40 cm distance (about 0.125%).
  • Resolution: This refers to the sensitivity of the camera sensors as measured in pixels. A fair minimum standard would be VGA (640 x 480).
  • Frame Rate: Refers to how often the camera sensors sample the reflected infrared light. The higher the sampling rate the better the system can interpret movement. 60 frames per second is a fair minimum standard.
  • Battery Life: For systems integrated directly into the HMD, the power demands of the computer vision subsystem must be considered when sizing the battery for the whole HMD; add-on sensors should have their own power supply. In either case, 3-4 hours of active sensing is a fair expectation.
  • Illumination Type: The infrared light projector may use LED or laser technology. LEDs are generally more compact and use less energy for this application, while lasers are better suited to stationary, mains-powered systems such as the Kinect.
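
The FOV and minimum-distance items above lend themselves to a couple of lines of trigonometry. This sketch (the half-meter hand span is an assumed value) computes how wide the viewing cone is at a given distance and how close the hands can get before they no longer fit in frame:

```python
import math

def view_width(distance_m, fov_deg):
    """Width of the sensor's viewing cone at a given distance."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

def min_vision_distance(target_width_m, fov_deg):
    """Closest distance at which something of the given width still fits
    entirely inside the field of view."""
    return target_width_m / (2 * math.tan(math.radians(fov_deg) / 2))

print(round(view_width(0.6, 120), 2), "m")  # ~2.08 m wide cone at 60 cm

# Assume two spread hands span roughly 0.5 m:
for fov in (60, 90, 120, 150):
    print(fov, "deg ->", round(min_vision_distance(0.5, fov), 3), "m")
# A wider FOV lets the hands work much closer to the sensor.
```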

Available Gesture Controllers
The following is an overview of gesture control technology that is either on the market or has sufficient momentum to reach it by the end of 2015, with solutions intended for, or adaptable to, AR.

Motion Sensing Systems 

Arcus by Arcus Motion
The Arcus ring is a motion tracking device that gives users information and insights about their movements during sports, while also providing a platform for hands-free control of smart devices using simple finger gestures. Arcus can detect even the slightest finger movements, enabling it to control Bluetooth enabled devices and to work seamlessly with a variety of mobile applications. It is waterproof and compatible with Android, iOS, Windows Phone and Windows. Its wireless charger powers it for up to 6.5 hours of active use. Arcus Motion’s attempt to fund development through Kickstarter failed to reach its goal, and as of the time of publication the company has not announced its plans to move forward.

The Fin by Fin Robotics
Fin is a thumb ring that turns your whole palm into a gesture interface by associating a touch on each segment of the fingers with a different command on connected devices. It can connect with up to three gadgets, such as HMDs, smartphones, smart TVs, automobiles and home automation devices, through low energy Bluetooth, and can also be used for security authentication. Originally crowdfunded on Indiegogo, these attractive devices are waterproof, rechargeable, have an LED indicator and are compatible with Windows, Mac, iOS and Android. Fin offers perhaps the subtlest movements of any available gesture controller, which may be favorable in many use cases.

Myo by Thalmic Labs
The Myo armband reads the electrical activity in your muscles and the motion of your arm to wirelessly control technology with hand gestures. It detects five distinct gestures out of the box and communicates with controlled devices over Bluetooth. Gestures can be mapped to keystrokes for customized control, or the open API and free SDK can be used to create scripts and integrations with devices such as AR HMDs. It contains eight electromyographic (EMG) sensors plus three-axis gyroscope, accelerometer and magnetometer sensors, and provides haptic feedback with variable length vibrations. It runs on an ARM Cortex-M4 processor and has a Li-ion battery, rechargeable over micro USB, that lasts a full day. The Myo is compatible with Windows, Mac, iOS and Android. It is available for $199 on the Thalmic site or Amazon.
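
As a sketch of the gesture-to-keystroke mapping idea, here is what a trivial integration layer might look like. The gesture names and the send_key stub are hypothetical; Thalmic’s actual scripting is done in Lua against the Myo SDK’s own API:

```python
# Hypothetical gesture names and a stand-in send_key(); a real
# integration would inject input through the OS or the vendor SDK.

def send_key(key):
    print("pressed:", key)

GESTURE_KEYMAP = {
    "fist":           "enter",   # confirm selection
    "wave_left":      "left",    # previous menu item
    "wave_right":     "right",   # next menu item
    "fingers_spread": "escape",  # back out of the menu
    "double_tap":     "space",   # play / pause
}

def on_gesture(name):
    """Callback fired by the armband SDK when a gesture is recognized."""
    key = GESTURE_KEYMAP.get(name)
    if key is not None:
        send_key(key)

on_gesture("wave_right")  # pressed: right
```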

Nod by Nod, Inc.
Nod is a ring device that transforms movements into commands. Using gestures, motion tracking and tactile input, Nod lets you engage with virtually any platform: precision skeletal tracking allows one to interact with VR environments, command drones, and control smart devices including AR HMDs. The OpenSpatial SDK facilitates building custom apps. Inside are a nine-axis motion sensor and two Cortex-M3 processors; on the outside are a touch bar and button controls. The ring is waterproof to 5 ATM and a charge lasts a day. The Nod comes in 12 sizes and is worn on the forefinger: point and move your finger, and a cursor moves wherever you aim, with 32,000 dpi accuracy. Nod can also unlock a phone or log you into a computer. As of this writing Nod is sold out; the device is priced at $149.

Ring Device by Fujitsu
The (yet to be named) ring-like device is being developed for industrial applications where it is impractical to operate a computing device directly or where the hands are gloved or prohibitively messy. It is best suited to repetitive, predictable tasks and relies on NFC tags to trigger operational modes: a contact sensor detects the touch of an object and activates the NFC tag reader, and the relevant data is then wirelessly transmitted to the user’s HMD. Hand and arm gestures are detected by a gyro sensor and accelerometer and can be used to work through the HMD’s menus. The ring also identifies the fingertip movements users make as they write in the air, recognizing the tracing as letterforms. It weighs less than 10 grams. Fujitsu Laboratories plans to bring the product to market in fiscal 2015.

Ring Zero by Logbar
Ring Zero lets the wearer use gestures to control just about anything, aiming to “shortcut everything” from typing a text to controlling home appliances. It is worn on the index finger, and the thumb touches a sensor while the finger gesture is made. The product is positioned particularly for home automation tasks, but the Open URI SDK will allow it to control devices such as AR HMDs as well. It is compatible with iOS, Android, Windows Phone, Google Glass and smartwatches such as the Pebble. Ring Zero includes a micro vibrator and a 6-axis motion sensor. It comes in four sizes and has a Bluetooth range of about five meters. It is not waterproof. The Li-ion battery promises one to three days of continuous use. It can be ordered from the Logbar website for $149.99 and is scheduled to ship at the end of April 2015; the original Ring was crowdfunded through Kickstarter in early 2014.


Computer Vision Systems
With so many consumer electronics products such as smartphones, cars, PCs and TVs coming to market with integrated gesture controllers, a number of companies produce OEM components to compete in this space. The competition has pushed the technology to become smaller, lighter, cheaper and more effective. Other companies produce stand-alone products that consumers can plug and play on their own; some of these are suitable for attaching to an AR or VR head-mounted display and integrating into the system through their SDKs. I have grouped the controllers accordingly below.

Stand Alone Solutions

Carmine by PrimeSense (company acquired by Apple in November 2013)
PrimeSense developed a system that uses an infrared projector, a camera and a special microchip to track the movement of objects and individuals in three dimensions. The system can interpret specific gestures, making completely hands-free control of electronic devices a reality. PrimeSense’s platform was an integral part of Microsoft’s original Kinect, which let users play video games without a controller; eight million of the devices sold in the 60 days after launch. Since being acquired by Apple, PrimeSense products have been taken off the market. It is unknown what Apple intends to do with this technology, but integration into an AR HMD is certainly a possibility.

Kinect for Windows by Microsoft
The Kinect is a bar device with depth sensing technology, a built-in color camera, an infrared emitter and a microphone array, enabling it to sense the location and movements of up to six people (25 body joints per person) as well as their voices. It senses depth at 512 x 424 resolution and 30 Hz, with a 70° x 60° field of view and a range of 0.5-4.5 meters. The Visual Gesture Builder in the Kinect Studio SDK lets developers define custom gestures using machine learning rather than hand-written detection code, increasing productivity and cost efficiency.
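
Skeleton data arrives per frame as 3D joint positions, so a simple custom pose can be just a geometric predicate over joints. The sketch below uses an invented frame layout rather than the Kinect SDK’s actual types; Visual Gesture Builder would instead learn such a detector from labeled recordings:

```python
# Toy skeleton frame: joint name -> (x, y, z) in meters, y pointing up.
frame = {
    "head":       (0.00, 1.65, 2.10),
    "hand_right": (0.25, 1.80, 2.00),
    "hand_left":  (-0.30, 1.00, 2.05),
}

def hand_raised(frame, hand="hand_right", margin_m=0.10):
    """True when the chosen hand is held above the head by margin_m.

    This is the kind of detector a tool like Visual Gesture Builder
    learns from labeled clips instead of hand-written predicates.
    """
    return frame[hand][1] > frame["head"][1] + margin_m

print(hand_raised(frame))                    # True
print(hand_raised(frame, hand="hand_left"))  # False
```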

Leap Motion
The Leap Motion Controller senses how you naturally move your hands, letting you control computers by pointing, waving, reaching and grabbing. It tracks all 10 fingers with up to 1/100th of a millimeter accuracy, sports a 150° field of view, and can track movements at a rate of over 200 frames per second. At a scant 13 mm high, 30 mm wide, 76 mm deep and 45 grams, it works with Windows or OS X systems, and a mount can be purchased for integration with VR goggles. It sells for $79.99.
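
Frame-based finger tracking makes some gestures almost trivial to derive. A pinch, for example, reduces to a fingertip-distance check, as in this sketch (the coordinates and the 2 cm threshold are made up for illustration; this is not Leap’s actual API):

```python
import math

def is_pinching(thumb_tip, index_tip, threshold_m=0.02):
    """Treat thumb and index fingertips closer than ~2 cm as a pinch."""
    return math.dist(thumb_tip, index_tip) < threshold_m  # Python 3.8+

print(is_pinching((0.010, 0.200, 0.100), (0.015, 0.210, 0.105)))  # True
print(is_pinching((0.010, 0.200, 0.100), (0.080, 0.250, 0.120)))  # False
```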

Nimble Sense by Nimble VR (company acquired by Oculus in December 2014)
Nimble Sense is a depth-sensing camera that captures your hands for VR input across a 110 degree field of view. Combined with robust skeletal hand tracking software, it delivers low-latency, accurate hand input for the simple experience of having both hands in VR. The Nimble Sense is well suited for mounting on an Oculus Rift or other head-mounted display, but can also sit above a monitor or on a desk. It uses time-of-flight technology, employing a laser to capture a detailed 3D point cloud and an infrared image of the world; the point cloud is then interpreted by skeletal hand tracking software to track the location, identity and joint angles of each finger.

Senz3D by Creative Interactive
This depth and gesture recognition camera detects hand gestures and head movements, opening new possibilities for interacting with MS Windows devices. The sensor bar runs at 30 fps with 720p (1280 x 720) HD video resolution. It sells for $129.

Touch+ by Reactiv
Touch+ is a small sensor bar that turns any surface into a multitouch surface. It contains two conventional cameras that track the 3D positions of your fingers, enabling multitouch gestures such as tap, swipe and zoom by detecting the height of your fingers above the surface. It also enables gesture shortcuts by detecting the hand gestures you make, and can replace touch or mouse input for any program. Touch+ succeeded Haptix, the original product created as a Kickstarter project in 2014.

Xtion Pro Live by ASUS
The Xtion PRO Live uses infrared sensors, adaptive depth detection, color image sensing and an audio stream to capture a user’s real-time image, movement and voice, turning hands into controllers. It has more than 8 predefined poses that translate into push, click, circle, wave and more, and its built-in microphones support voice control and other voice recognition applications. The field of view is 58° H, 45° V, 70° D; RGB/depth image resolution reaches SXGA (1280×1024) at up to 60 fps; it is compatible with Windows and Linux Ubuntu. Dimensions are 18 x 3.5 x 5 cm. It sells for $170.


OEM Solutions
The following companies provide components that enable gesture recognition on others’ devices.

AMD – Gesture Recognition Software
AMD provides software to owners of laptops with certain AMD hardware profiles that turns the webcam into a gesture control device.

ArcSoft – Gesture Recognition Software
ArcSoft partners with device manufacturers to provide gesture technology that recognizes natural human hand gestures with one or both hands, such as wave, grab and move, as well as face, eye and finger motions. The technology supports single-lens devices such as consumer webcams as well as stereoscopic devices, at distances up to 16 feet, even under low light or backlighting conditions.

Cognivue – Image Cognition Processors
CogniVue offers a comprehensive embedded solution for next-generation, vision-based applications in mobile and wearable devices. From dedicated hardware IP, code development tools and libraries to an algorithm development toolkit and application software demos, CogniVue enables its customers and partners to create or improve their embedded vision solutions.

Code Laboratories – Image Sensor
The DUO mini lx is an ultra-compact imaging sensor intended for use in research, industrial applications and integration for vision based applications. It provides configurable and precise stereo imaging for robotics, inspection, microscopy, human computer interaction and beyond. The solution includes stereo imaging, USB interface, 6DoF accelerometer/gyroscope, 3 LED programmable array and an SDK.

GestureTek – Gesture Recognition Software
GestureTek makes a fully gestural, touch-free user interface for any platform or environment. They have adapted their solution to be compatible with any device equipped with a depth sensing camera. Consumer electronics can be controlled with gestures such as hand waves, finger counting and pointing. APIs include full-body analysis and motion tracking, along with feature-specific tracking of faces, hands, colors, motion and objects.

Industrial Technology Research Institute – Smart Glasses
iAT (i-Touch-in-Air) is mounted on wearable smart glasses to allow human interaction based on the concept of “touch what you see.” It is an assembly of three technologies: a see-through wearable display, precise finger positioning and detection, and an air-touch interface. The user interacts with the 3D image in front of their eyes using hand and finger movements, effecting true augmented reality.

Intel – Depth Sensing Solution
Intel RealSense 3D camera enables interaction with devices using natural movements. The device contains conventional and infrared cameras and an infrared laser projector that infer depth by detecting infrared light that has bounced back from objects in front of it. This visual data, taken in combination with their motion-tracking software, creates a touch-free interface that responds to hand, arm, and head motions as well as facial expressions. 

Mantis Vision – Depth Sensing Solution
Mantis Vision aims to bring its MV4D solution to the masses by enabling device makers to push the limits of 3D. Its 3D range-imaging technology projects an IR light pattern onto physical objects and environments; the pattern is captured by a synced camera and processed by Mantis Vision’s algorithms into an accurate, detailed point-cloud depth map of the scene, which can be exported to common third-party applications.

Microchip – E-Field Based Gesture Control Chip
The MGC3030 is a 3D gesture controller that enables gesture-based user interfaces in a single 32-bit SoC. The chip uses an electric field for 3D gesture recognition based on Microchip’s GestIC technology, which enables command input with natural hand movements in free space. Through its configurable sensing states, power consumption as low as 150 microwatts can be achieved, suiting it to mobile devices.

Orbbec – Depth Sensing Solution
Orbbec is a scientific group specializing in 3D measurement and artificial intelligence based in Guangdong, China. They have developed their own 3D computational chip and 3D sensor which are integrated into a 3D camera, camera module and gesture recognition SDK. They seek to work with consumer electronics companies wishing to offer gesture control in their products.

Pebbles Interfaces – Depth Sensing Solution
Pebbles provides a natural interface with every object, real or virtual. Their technology extends human behavior, enabling simple and intuitive interaction with any consumer electronic device. Their advanced motion sensors display physical objects within the digital space, at any range or angle, with no latency. Pebbles’ minimal hardware is being embedded in partners’ smart products and devices.

PMD  – Depth Sensing Solution
Nimble UX is an integrated gesture control system: the CamBoard pico camera provides real 3D data, and pmd’s Nimble gesture middleware tracks the position and orientation of a user’s hands, down to the individual finger joints and fingertips. Developers can use predefined or custom gestures to add a touchless user experience to applications. PMD is a Google partner on Project Tango.

PointGrab – Gesture Recognition Software
PointGrab’s PointTouch software enables devices to be operated through a natural user interface using hand shapes and movements. The technology is currently integrated into over 25 million consumer devices from many of the world’s top brands. It reliably tracks fingers, one hand or two hands from 3 inches up to 17 feet, and works in a wide range of challenging lighting and background conditions.

Softkinetic – Depth Sensing Solution
Softkinetic’s DepthSense 3D time-of-flight CMOS sensors understand the most subtle everyday human gestures as well as the shape, size and behavior of objects. They measure how long infrared light takes to make the trip from the camera and back (the time of flight), giving the DepthSense camera the power to turn raw data into real-time 3D images. Softkinetic’s 3D vision middleware can take full advantage of any depth-sensing camera.
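
The time-of-flight arithmetic itself is tiny: distance is half of a round trip at the speed of light, which at arm’s length means timing differences of only a few nanoseconds; this is why ToF sensors need specialized high-speed pixels. A small illustrative sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_s):
    """Light travels out and back, so distance is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_s / 2

def tof_round_trip(distance_m):
    return 2 * distance_m / SPEED_OF_LIGHT

print(round(tof_distance(4e-9), 2), "m")          # ~0.6 m for a 4 ns round trip
print(round(tof_round_trip(0.6) * 1e9, 1), "ns")  # ~4 ns to arm's length
```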


Head Mounted Displays with Integrated Computer Vision Systems
At this time, only a few augmented reality head-mounted displays come with integrated depth sensing gesture control out of the box. That does not necessarily mean others cannot be controlled with gestures: SDKs can be leveraged to enable gesture control on many of the HMDs on (or near) the market by connecting a third-party depth sensor peripheral (e.g., the Leap Motion). Here are the HMDs with integrated depth sensing:

Meta
The Meta 1 development kit is now available through the Meta Pioneers program, intended to get developers building a library of content and creating buzz for the upstart company. The $667 Meta 1 currently gets its capabilities through a connection to a PC. The integrated 3D time-of-flight depth camera has a resolution of 320×240 (QVGA) pixels, a field of view of 74° x 58° x 87° (H x V x D), and a frame rate of 30 fps. The self-contained consumer release may not be available until 2016.

Microsoft
Not a great deal has been released about the specs of the HoloLens announced in January 2015, but it is known to have Kinect-like depth sensing integrated into it, purportedly with a 120 x 120 degree field of view. Its gesture control builds on capabilities already implemented in Kinect and has been demonstrated to interact effectively with the user interface.


Generic SDKs
While the above gesture control systems come with their own SDKs that allow developers to integrate them into custom solutions, there are several independent SDKs available that are generic or open in nature and are interoperable with other systems. 

Augumenta
The Augumenta Interaction Platform (AIP) SDK brings hand gesture control and virtual surfaces to the hands of enterprise users. These methods are a robust alternative to voice and touch control, and they enable rich data input in harsh field conditions. The SDK is used on various smart glasses, such as Google Glass, Epson BT-200, ODG R-6 and ChipSiP SiME, and happily coexists with AR and gaming toolkits that also require real-time camera access.

Elliptic Labs
Elliptic’s SDK for smartphones, tablets and laptops provides consumer-friendly touchless gesturing for device manufacturers. Ultrasound signals sent through the air from the speakers integrated in smartphones and tablets bounce off one’s hands and are recorded by the device’s microphones. The software recognizes the hand gestures and uses them to move objects on screen, much as bats use echolocation to navigate.

Gestoos
Gestoos SDK is a software development kit for gesture recognition and robust hand tracking. It is cross-platform, working on Linux, Windows and Mac OS X, and is 64-bit ready. Out of the box it works with the Orbbec depth sensor camera (see above).

HandGKET
This toolkit facilitates integrating hand gesture control into games and PC applications by recognizing a user’s hand gestures and generating keyboard or mouse events to control applications. HandGKET operates on 3D cameras that support the OpenNI framework. The tool is free to use and distribute for noncommercial purposes.

PointGrab
PointGrab’s proprietary Hybrid Action Recognition Technology enables a device to better anticipate, understand and analyze a user’s body language. The system can detect and analyze complex combinations of shapes and motion, tracking fingers, one hand or two hands from 3 inches up to 17 feet in a wide range of challenging lighting and background conditions. The technology is designed for rapid integration into new and existing products as a software-only solution for any consumer electronic device.

Qualcomm
FastCV is a mobile-optimized computer vision (CV) library that includes the most frequently used vision processing functions, for use across a wide array of mobile devices. Developers can use it to build the frameworks needed by computer vision apps, including gesture recognition. FastCV is designed to run efficiently on all ARM-based processors, but is tuned to take advantage of Qualcomm’s Snapdragon processors.

Rithmio
Rithmio’s gesture recognition platform integrates with motion-sensing devices at the sensor, OS or application-level to learn, track and analyze motion. Brands utilize Rithmio’s software to create accurate and personalized gesture-based products such as wearables, smartwatches or connected sporting equipment. Rithmio automatically learns the sensor’s characteristics, making it easy to integrate with any motion-sensing device.


The Gesture Control Market
According to DisplaySearch, nearly 330 million smart devices with gesture sensing will ship in 2015, a nearly 70 percent increase over the 2014 forecast and double the 2013 shipments. Smart devices are expected to drive gesture-sensing market growth in 2015 as 3D depth sensing becomes more accurate and affordable, not only on smartphones and tablets but also on smart TVs, all-in-one PCs and other large-screen devices. Gesture sensing is the next critical user interface trend, designed to improve the usefulness of, and user experience on, smart TVs and other large smart devices.

[Figure: Shipment forecast for gesture sensing solutions for smart devices. Source: NPD DisplaySearch Gesture Sensing Control for Smart Devices Report]

3 Thoughts on “17: Gesture Control”

  1. Nice review!

    Gestoos SDK works nicely with all OpenNI compatible cameras (Asus, Kinect, Structure, Orbbec, …)

    And yes! It is available for Windows, Linux and Mac. Give it a (free) try at http://www.gestoos.com and do magic with your hands!

  2. Hi,

    I am Kapil, Embedded Engineer in Exploride technologies Pvt. Ltd.

    Thanks for giving the response.

    We are making a product called Exploride.

    Exploride is a futuristic head up display for any car. Access music, maps, calls, texts and more all in one place. Reduce distractions and keep your focus on the road with its transparent display, gesture and hands-free voice controls.

    You can also go to our website (www.exploride.com) and see the details.

    I am searching for sensors which detect hand gestures(left, right, up, down, air wheeling, hold, etc.). Can you please suggest me some gesture sensors?

    • Ron Padzensky on August 16, 2016 at 3:28 pm said:

      I would take the time to request and review the solution specs of all of the 14 companies referenced in the OEM section of this post to see which 3 most closely align with your requirements. I would then reach out to those 3 to see if they are willing to partner with you.

      Your Exploride idea looks exciting but frankly the features that distract the driver scare me! Should we really be taking in stock and news updates while we drive?
