In my previous two posts I explored the definition of augmented reality and then discussed its place in the context of related technologies. In this post I will cover important technological concepts in computing that provide an even broader context for AR. These are technologies that must be in place for the true potential of AR to come to fruition. I will not be covering the specific hardware and software that goes into rendering AR here, but rather exploring the infrastructure that will co-evolve with AR in order to deliver extraordinary transformation to society. These technologies all exist today at various levels of maturity, but have yet to be woven together into a cohesive and seamless fabric. This accounting is by no means exhaustive, so feel free to leave comments about others that belong here.
Pervasive Computing

Pervasive computing, also known as “ubiquitous computing” and closely associated with the “Internet of Things,” refers to sensors and actuators embedded in almost any physical object (appliances, vehicles, clothing, pacemakers, homes, people, etc.) that are linked through networks or the Internet. The data from these networks flow to servers and databases for control or analysis. According to Michael Chui, et al. of McKinsey & Company, “When objects can both sense the environment and communicate, they become tools for understanding complexity and responding to it.” Vangie Beal states on Webopedia that “The goal of pervasive computing… is to create an environment where the connectivity of devices is embedded in such a way that is unobtrusive and always available.” And Margaret Rouse, editor of WhatIs.com, notes that “In a 1996 speech, Rick Belluzo, who was then executive VP and general manager of Hewlett-Packard, compared pervasive computing to electricity. He described it as being ‘the stage when we take computing for granted. We only notice its absence, rather than its presence.’”
Pervasive Computing is now becoming a reality enabled by advances in wireless networking technology, standardization of communications protocols, ever-smaller silicon chips with greater capabilities and lower costs, massive increases in storage and computing power, cloud computing and big data analytics.[Chui] “An example of a practical application of pervasive computing is the replacement of old electric meters with smart meters. In the past, electric meters had to be manually read by a company representative. Smart meters report usage in real-time over the Internet. They will also notify the power company when there is an outage, reset thermostats according to the homeowner’s directives, send messages to display units in the home and regulate the water heater.” [Rouse]
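To make the smart-meter example concrete, here is a minimal sketch of the kind of periodic report such a meter might send. The field names and meter ID are hypothetical, not any utility's actual API; the point is only that the device packages sensor readings for transmission over the Internet.

```python
import json
import time

# Hypothetical sketch of a smart meter's periodic report. The fields and
# identifiers are illustrative, not a real utility's message format.
def build_meter_report(meter_id, kwh_used, outage=False):
    return json.dumps({
        "meter_id": meter_id,
        "timestamp": int(time.time()),
        "kwh_used": kwh_used,
        "outage": outage,   # set when the meter detects loss of grid power
    })

report = build_meter_report("meter-0042", 1.37)
parsed = json.loads(report)
```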
Augmented Reality will evolve to interact with the Internet of Things. Mobile devices contain networking technologies such as Wi-Fi, Bluetooth and Near Field Communications (NFC) that can be used to identify objects, devices, persons and places to use as triggers for AR experiences or to provide contextual metadata to otherwise enhance the experience. As pervasive computing emerges in the coming years, AR will evolve in capability and ubiquity along with it.
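The trigger mechanism described above can be sketched very simply: identifiers detected over Wi-Fi, Bluetooth or NFC are looked up in a table of AR experiences. The IDs and overlay names below are made up for illustration; a real app would receive identifiers from the device's radio APIs.

```python
# Illustrative mapping of nearby wireless identifiers (e.g. Bluetooth beacon
# IDs or NFC tags) to AR experiences. All names here are hypothetical.
AR_TRIGGERS = {
    "beacon:museum-entrance": "overlay:welcome-tour",
    "nfc:exhibit-17":         "overlay:artifact-history",
}

def experiences_for(detected_ids):
    """Return the AR overlays to launch for the identifiers in range."""
    return [i_d for i_d in (AR_TRIGGERS.get(i) for i in detected_ids) if i_d]

experiences_for(["nfc:exhibit-17", "beacon:unknown"])
# -> ["overlay:artifact-history"]
```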
Context-Aware Computing
Context-aware computing refers to providing situational and environmental information about people, places and things in order to anticipate immediate needs and proactively offer enriched, situation-aware and usable content, functions and experiences.[Gartner] “Context-aware computing is at the nexus of social, mobile, cloud and information. Market growth is driving richer user experiences, stronger customer loyalty and better business processes.” [Pete Basiliere, Analyst at Gartner]
Context-aware computing is different from the simple sensor-based applications that have been available on smartphones for the past several years. For instance, consumers today go to an app like Yelp and search for restaurants nearby or by cuisine and price. A context-aware application takes this a step further: it knows what restaurants you have picked in the past and how you liked the food, and then suggests nearby restaurants based on those preferences.[Priya Ganapati, Wired magazine]
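The "step further" can be illustrated with a toy sketch: instead of a plain nearby search, candidates are weighted by how the user has rated that cuisine before. All names and ratings here are invented for illustration; a real service would draw them from the user's history.

```python
# Toy context-aware ranking: prefer cuisines the user has liked in the past,
# breaking ties by proximity. Data is made up for illustration.
past_ratings = {"thai": 4.5, "pizza": 2.0, "sushi": 5.0}

nearby = [
    {"name": "Lotus Thai", "cuisine": "thai",  "distance_km": 0.4},
    {"name": "Sushi Go",   "cuisine": "sushi", "distance_km": 1.1},
    {"name": "Slice City", "cuisine": "pizza", "distance_km": 0.2},
]

def suggest(restaurants, ratings):
    # Higher past rating first (hence the negation), then nearer first.
    return sorted(restaurants,
                  key=lambda r: (-ratings.get(r["cuisine"], 3.0),
                                 r["distance_km"]))

[r["name"] for r in suggest(nearby, past_ratings)]
# -> ["Sushi Go", "Lotus Thai", "Slice City"]
```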
An existing application called Google Now promises “The right information at just the right time.” If you let it learn about you and your habits, it can serve up information that it thinks you might be interested in. News, sports scores, weather and traffic information are provided in real time based on your previous movements and searches. It goes beyond this to remind you to leave early for the airport to catch your flight when there is heavy traffic, and when you arrive at your destination it provides a map to the hotel where you have reservations.
Context-aware computing has a relationship to pervasive computing: as mobile devices begin to interact with the Internet of Things to provide context, they in turn become contributing members of the Internet of Things. For instance, some Android devices like the Google Nexus phone contain a barometric sensor to assist in speedier geolocation. pressureNET is a global network of crowdsourced atmospheric pressure readings taken from devices that have the app installed. This data is displayed as markers on an embedded Google map, and users can view the data graphed over time. But the more interesting proposition is their primary goal to “improve weather forecasting for everyone on Earth by dramatically increasing the availability of live data that describes the atmosphere.” “…if the researchers [to whom this sensor data is streamed] can incorporate a large volume of pressure readings into climate models to define features associated with severe weather events, they can begin predicting when a severe storm will hit a specific part of a city up to six hours in advance.” [Nancy Gohring, Wired]
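One way such crowdsourced readings become useful to forecasters is by binning them into a coarse geographic grid and averaging per cell. This is only a sketch of the general idea, with an assumed half-degree grid and invented readings; it is not pressureNET's actual pipeline.

```python
from collections import defaultdict

# Sketch: bin crowdsourced barometer readings (lat, lon, hPa) into a coarse
# grid and average each cell. Grid size and data are illustrative only.
def grid_cell(lat, lon, degrees=0.5):
    return (round(lat / degrees) * degrees, round(lon / degrees) * degrees)

def average_by_cell(readings):
    cells = defaultdict(list)
    for lat, lon, hpa in readings:
        cells[grid_cell(lat, lon)].append(hpa)
    return {cell: sum(vals) / len(vals) for cell, vals in cells.items()}

readings = [(45.51, -73.56, 1012.3), (45.52, -73.57, 1011.9),
            (43.65, -79.38, 1008.4)]
averages = average_by_cell(readings)
```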
Once Augmented Reality headsets become pervasive, the door will open for mobile AR to become context aware. In similar ways to how Google Now serves relevant information to one’s mobile device screen, AR apps will passively engage when there is an opportunity to enhance one’s view with graphical overlays, info bubbles, facial recognition, commercial promotions, navigation, etc. based on one’s habits, preferences, plans, location, time of day, season and usual behaviors.
Big Data

Every day, 2.5 quintillion bytes of data are produced. “This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals to name a few.” [IBM] Petabytes of information are being crunched for such purposes as analyzing Tweets to divine product sentiment, digesting billions of meter readings to better predict power consumption, and time-sensitive processes such as catching fraud in millions of trade transactions. Big data may be structured or unstructured and includes text, sensor data, audio, video, click streams, log files and more. New insights are found when analyzing these data types together. Big data is answering questions that were previously beyond the reach of commercial technology.
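The tweet-sentiment example can be reduced to a toy sketch: count hits against small positive and negative word lists. Production systems use statistical models trained on far larger data; this only shows the shape of the computation, with word lists I have invented.

```python
# Toy word-list sentiment scoring, for illustration only. Real sentiment
# analysis at "big data" scale uses trained statistical models.
POSITIVE = {"love", "great", "awesome"}
NEGATIVE = {"hate", "broken", "awful"}

def sentiment(text):
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

sentiment("I love this phone and the camera is great")
# -> "positive"
```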
The IoT is increasingly responsible for generating much of this big data. Advanced statistical and analytical capabilities now play a crucial role in powering context-aware computing by consuming big data. It is these capabilities that augmented reality will increasingly rely upon to deliver meaningful and relevant experiences.
Cloud Computing

Gartner defines cloud computing as a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet technologies. The implication is that an individual or company of modest means can leverage enterprise-class server technology with little or no upfront infrastructure investment, and can continue to scale their offerings as demand materializes while paying only for the resources they use. Cloud computing democratizes the delivery of applications, data and services in a way that allows entrepreneurs to compete with the offerings of the largest companies.
Cloud computing has been identified as the most imminent growth driver for AR. “The cloud is a natural fit for AR developers, considering how big benefits cloud-based content libraries present for image recognition technologies,” says Aapo Markkanen, senior analyst for ABI Research. Qualcomm’s Vuforia and Metaio introduced cloud recognition capabilities in their SDKs in 2012 and cloud is also integral to HP’s Aurasma AR browser.
AR will become a significant enabler for the Internet of Things, specifically big-data analytics, an area where AR and data visualizations will have a close connection to the emergent wearable computing products. “In a world where a countless number of physical objects and structures will be connected by sensors, AR can serve as a visualization medium that will make the sensor data situational, bridged to the real-world surroundings,” predicts Dan Shey, ABI practice director.
Social Media

As of July 2013, Facebook claims 1.3 billion registered users, Flickr has 8 billion photos, Reddit has 4.8 billion monthly page views, YouTube gets 4 billion views per day and Twitter has more than 200 million active users.[DMR] Social media sites like these have become the hubs of our connection to people in the online world. Augmented reality is destined to leverage the social graph to make AR apps more engaging, and social apps themselves will become more interesting by integrating AR. Social media offers troves of crowdsourced information that may be unleashed in GPS-driven AR apps.
In a recent marketing campaign, Swedish clothing retailer H&M and a mobile application called GoldRun created a virtual photo scavenger hunt allowing users to take a picture, using GPS-synced smartphones, of virtual items that appeared in front of certain H&M stores in Manhattan. Doing so unlocked a 10 percent discount for an in-store purchase. Then, with photo integration, shoppers could see how they looked in those clothes and upload the altered images onto the social networking site Facebook to share with friends.[Forbes]
Geographic Information Systems

Geographic Information Systems (GIS) refers to “a special-purpose digital database in which a common spatial coordinate system is the primary means of reference.” [Kenneth E. Foote et al., The University of Colorado at Boulder] GIS interacts with the GPS sensors on mobile devices to tell you not only where you are, but also what is around you. GIS technology enables the imposition of layers of information from disparate sources about such things as roads, public transit routes, buildings and landmarks, population, and other GPS-enabled mobile devices. “The variety, utility and popularity of location-based services are growing rapidly. Location is now an important component of Augmented Reality (AR), location-based marketing and advertising, and social networking. In the future, location will be vital to …domains under the large umbrella of the “Internet of Things” (IoT).” [OGC] “These technologies and markets are creating new demands for location information. One new source of location information is users who volunteer crowdsourced content. Another source is the ever-increasing number of network-accessible sensors.” [OGC]
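The core "what is around you" query can be sketched with the standard haversine great-circle distance: given a GPS fix, return the stored features within a radius. The landmarks and coordinates below are illustrative stand-ins for a real GIS layer.

```python
from math import radians, sin, cos, asin, sqrt

# Haversine great-circle distance between two lat/lon points, in kilometers.
def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))   # mean Earth radius ~6371 km

# Illustrative stand-in for a GIS layer of landmarks.
landmarks = {"City Hall": (40.7128, -74.0060), "Airport": (40.6413, -73.7781)}

def nearby(lat, lon, radius_km):
    return [name for name, (la, lo) in landmarks.items()
            if haversine_km(lat, lon, la, lo) <= radius_km]

nearby(40.7130, -74.0050, 5)
# -> ["City Hall"]
```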
Wearable Computing

Wearable computers and their interfaces are designed to be worn on the body, such as a wrist-mounted screen or head-mounted display, to enable mobility and hands-free/eyes-free activities. Traditional uses are for mobile industrial inspection, maintenance and the military. Consumer uses include display peripherals, computer-ready clothing and smart fabrics.[Gartner]
There are many examples of wearable computers in use today. Devices such as Nike+ promote weight loss or physical fitness. The BodyMedia FIT Armband allows the user to get information about daily calorie burn or how well they have slept at night. Business users incorporate devices like a Pebble watch into their daily lives to receive notifications about text messages, emails, incoming calls, etc. on their wrist. Diabetics use glucose monitors to track levels on a continuous basis. A typical system consists of a subcutaneous disposable sensor which transmits results to a radio receiver, worn like a pager, that displays continuous updates and trends.[Wikipedia]
AR’s primary relationship to wearable computing comes in the form of visual display. The maturing of head-mounted display technology is expected to launch AR into the mainstream by freeing up the hands to perform AR-guided tasks or gestures to interact with the AR user interface. For example, an AR application that guides the user through an automotive maintenance task is of limited use if the user must hold a smartphone in one hand; with both hands free, the app can identify target components and overlay instructions in real time while the user performs the task.