November 2018
Text by Ralf Heindoerfer

Image: © metamorworks/

Ralf Heindoerfer is a senior development architect at SAP, working on the SAP Cloud Platform. With more than 25 years of experience in the enterprise software industry, he has held various technical and customer-facing roles along the entire product lifecycle. Since 2015, he has been focusing on Virtual and Augmented Reality applications for the enterprise, pushing new frontiers of human-computer interaction.


Twitter: @RalfHeindoerfer

User assistance 4.0 – How personalization, VR and AR change the way we communicate with our users

Virtual and Augmented Reality can teach us how to use, operate and navigate the physical world like no user instructions ever could. These new technologies are on their way to revolutionizing the way we consume information.

Since its earliest days, it has been the ultimate dream of the user assistance profession to provide a help system that allows users to learn how to use a product right at the time and place when they need it. 

An iconic scene in the 1999 sci-fi movie The Matrix extrapolates this vision when the two protagonists Neo and Trinity, on a chase, end up on a building’s rooftop with an abandoned helicopter as the only escape. Unable to fly this machine, Trinity instantly has the helicopter’s flight manual uploaded to her brain, learning to operate the aircraft and taking off just seconds later.


The evolution of user assistance

While this is an admittedly dystopian vision, user assistance for software products has come a long way towards providing users with techniques and systems to help them operate and navigate software "on the fly". Starting with printed manuals, user assistance evolved through help systems embedded in software and increasingly contextual and interactive support into an assistant-based mechanism that guides users from screen to screen to complete a task at hand. 

However, this evolution applied to software products only. Where did that leave user assistance for physical products such as cars, coffee machines, or industrial goods such as air compressors, cranes or jet engines? Here, we were often restricted to the classic printed user or operations manuals or, more recently, moved on to how-to videos and tutorials – often provided by end customers on online platforms such as YouTube.


From text and images to immersive technologies

So far, these types of user assistance have fallen short of helping users to effectively learn to use a product. Back in 1969, Edgar Dale described in his Cone of Experience how well different types of audio-visual media can support a person in understanding and learning a given topic or operation.

Figure 1: Effectiveness of user assistance approaches based on Edgar Dale's "Cone of Experience" from 1969


And here, reading a text or viewing pictures and video material – the classic media of user assistance today – rank at the lower end of the scale. At the upper end are interactive demonstrations and direct, hands-on experiences that let the user have a try.

But how do you get the direct experience of, say, disassembling and maintaining an aircraft engine? Access to both a real engine and an expert tutor or trainer might be difficult, expensive and hard to scale. But what if the user could learn how to use a product or maintain a machine instantly, at the exact time and place he needs to?

New immersive technologies such as Virtual Reality (VR) and Augmented Reality (AR) might come to the rescue here.

Virtual Reality (VR) is a technology that can enable direct experiences of things, situations and places even if the user is not physically present in the environment. Virtual Reality allows the user to completely immerse himself in a three-dimensional simulated world. With a tracked headset that covers the user’s field of view and hand controllers that allow the user to reach out into and interact with the virtual world, VR creates a fully computer-generated, spatial environment that the brain accepts as physically real and present. And similar to how trainee pilots learn to fly and operate an airplane in many hours of training in multimillion-dollar flight simulators, Virtual Reality can help users learn to use, operate and maintain industrial machines as well as consumer products – at just a fraction of the cost. And the virtual training is so immersive that it even builds up the user’s muscle memory when repeatedly exercising the interaction in the virtual environment.

Augmented Reality (AR) – sometimes also called "Mixed Reality" (MR) – on the other hand uses headsets with transparent displays (like the Microsoft HoloLens) or smartphones and tablets with pass-through cameras that allow the user to experience virtual objects placed and projected onto their physical environment, like the office or workspace they are currently located in. With this, AR targets a different set of scenarios than VR – even though it uses similar technological components. Seeing the real world around the user through those transparent displays, Augmented Reality can "decorate" and annotate the real world with digital information and additional virtual objects, thus giving the user an extended, "superhuman" vision of his physical environment.

Using a technique called SLAM (Simultaneous Localization and Mapping), AR constantly scans the environment and creates an invisible 3D model of the place the user is located in. With that, AR can place and anchor virtual objects in the physical environment and keep them physically stable in that place even if the user walks around them. This gives the user the perception that these virtual objects really exist in the physical space next to him.
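The anchoring math behind this can be illustrated with a minimal sketch: given the camera pose that the SLAM tracker estimates each frame, a point fixed in the world map is transformed into the camera frame and projected to pixel coordinates. This is a simplified pinhole-camera illustration, not an actual AR SDK API; the function name and the intrinsic values (`fx`, `fy`, `cx`, `cy`) are assumptions for the example.

```python
import numpy as np

def project_anchor(anchor_world, camera_pose,
                   fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Project a world-anchored 3D point into the current camera image.

    anchor_world: (3,) point fixed in the SLAM map's world frame.
    camera_pose:  4x4 world-to-camera transform estimated by the tracker.
    fx, fy, cx, cy: pinhole camera intrinsics (illustrative values only).
    Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    """
    p = camera_pose @ np.append(anchor_world, 1.0)  # world -> camera frame
    x, y, z = p[:3]
    if z <= 0:                                      # point behind the camera
        return None
    return (float(fx * x / z + cx), float(fy * y / z + cy))

# The anchor stays fixed in the world frame; only the camera pose changes.
anchor = np.array([0.0, 0.0, 2.0])   # anchored 2 m in front of the start pose
pose_start = np.eye(4)               # camera at the world origin
pose_moved = np.eye(4)
pose_moved[0, 3] = -0.1              # camera stepped 0.1 m to the right
print(project_anchor(anchor, pose_start))  # anchor appears at the image center
print(project_anchor(anchor, pose_moved))  # anchor shifts left in the image
```

As the camera moves, the projected pixel position shifts while the anchor's world coordinates never change – which is exactly what makes the virtual object appear fixed in the room.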


VR & AR and the digital twin for immersive user assistance

Digital twins allow us to bring physical assets into the virtual world and interact with them. A digital twin is a digital representation or replica of a physical asset that captures the properties, the state and the dynamic behavior of that asset, stored in an IT system such as a database or an SAP backend system. For example, any car or other industrial machine that is custom-built might have a digital twin that describes the exact configuration of the individual product, exactly as it was built and delivered. The digital twin also records which parts were maintained, replaced or extended, including the dates on which this happened.

Digital twins typically reflect the exact current state of their physical counterpart, for example via sensors across the machine that regularly update the current values for oil pressure, temperature, fuel level or power consumption in the IT system. Often, they also contain a 3D model of the physical asset for visualization, originating from the engineer’s CAD drawings. This makes digital twins the perfect basis for simulating and visualizing physical assets in a virtual world.
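Conceptually, a digital twin can be sketched as a simple data structure that carries the as-built configuration, the live sensor state and the maintenance history. The following Python sketch is purely illustrative – the class names, fields and example values are assumptions, not an SAP API:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MaintenanceRecord:
    part_id: str
    action: str           # e.g. "replaced", "inspected", "extended"
    performed_on: date

@dataclass
class DigitalTwin:
    asset_id: str
    configuration: dict                          # as-built configuration
    sensor_state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def update_sensor(self, name, value):
        """Mirror the latest reading reported by the physical asset's sensor."""
        self.sensor_state[name] = value

    def log_maintenance(self, part_id, action, when):
        """Record that a part was maintained, replaced or extended, and when."""
        self.history.append(MaintenanceRecord(part_id, action, when))

# Example: a custom-built crane mirrored as a digital twin (hypothetical data)
twin = DigitalTwin("crane-0042",
                   configuration={"model": "XT-300", "boom_length_m": 60})
twin.update_sensor("oil_pressure_bar", 4.2)
twin.update_sensor("engine_temp_c", 87.5)
twin.log_maintenance("hydraulic-pump", "replaced", date(2018, 5, 3))
```

A real system would also attach the 3D model from the CAD drawings, which is what a VR or AR application would load for visualization.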


Use cases for immersive user assistance

Virtual and Augmented Reality are still emerging technologies, with first-generation headsets available on the market today, including the HTC Vive and Oculus Rift for VR and the Microsoft HoloLens or the Magic Leap One for AR. And while these are still bulky devices with narrow use cases, we can expect the next generations of headsets to become cheaper and increasingly smaller, until at some point they might have the form factor of regular sunglasses that we can wear for extended periods of time. And with mass adoption expected in the consumer space, additional usage scenarios might be developed that could partly replace smartphones as our main information device.

Already today, VR and AR increasingly find adoption in the enterprise space in areas such as manufacturing, training, real estate, construction, healthcare and medical, and other industries. Many of these applications enable new scenarios that could not have been reasonably addressed before without Virtual or Augmented Reality. And a lot of these scenarios require the user to be provided with information about the product or the situation that he is currently dealing with to help him understand, operate, maintain, navigate or execute his daily business tasks at hand, right where he is. This is the field of immersive user assistance.

As the whole domain of Virtual and Augmented Reality applications is fairly new, there are no clearly established standards yet for what user experience and interaction in these immersive technologies should look like and what kind of user assistance works well in a given scenario. However, we can already see some recurring patterns for immersive user assistance emerging from this broad spectrum of applications. These include the following concepts:

Immersive annotations

Immersive annotations describe a scenario for physical objects using AR that most closely resembles what context-sensitive help or tooltips provide for software products. The car manufacturer Daimler implemented the "Ask Mercedes" smartphone app, which annotates all the buttons and controls in your Mercedes cockpit with interactive virtual beacons, allowing you to identify and get detailed information on the purpose and function of the various controls in front of you. You only need to point your smartphone’s camera at the part of the cockpit you want to explore, and the app uses a virtual 3D model of the car’s interior to map these interactive buttons exactly onto their physical counterparts in the smartphone’s video feed. From here, you get detailed descriptions or instructional videos on what these controls do and how they work.

Guided procedures

Thyssen Krupp, an elevator manufacturer, goes one step further in supporting its maintenance workers with Augmented Reality: The company not only marks up spare parts on the elevators but even provides guided procedures, i.e., step-by-step instructions on how to disassemble and replace parts, animating the digital twin’s 3D representation superimposed on the actual elevator’s engine. The app interactively shows which parts need to be unbolted and removed, and in what order, to successfully access and replace a spare part.

If a field technician is out on a maintenance job in a remote location, guided procedures enable him to fix even older, non-standard or custom machines, as he might have online access with his AR headset to a large database of those digital twins. This might even include machines from decades ago. And as all the information is displayed through AR goggles, the field technician can work hands-free without the need to hold another mobile device or handbook, allowing him to have all relevant product information right in sight.

Remote access to experts

However, there is always the risk that a specific problem or malfunction occurs that the field technician at the remote site can’t handle, which would normally require an expert to fly in and investigate. Augmented Reality allows "remote access to experts" without requiring this expert to be physically present at the remote site. Microsoft has shipped an application called "Remote Assist" for its HoloLens AR device.

With this, the remote field technician can initiate a Skype call via his HoloLens to an expert who might be back at the company’s headquarters. While the remote technician wears the headset, the expert joins the Skype call via a tablet or PC. The video feed from the camera built into the technician’s AR device is transmitted to the expert so he can see what the technician sees. The expert can then analyze the situation and mark up and highlight parts of the inspected machine by drawing lines or placing arrows on his screen, which will be visible to the remote technician as virtual 3D annotations right in front of him, anchored to the machine parts that the expert has highlighted. In this way, the expert can enable even less experienced workers to visit remote places and get the maintenance job done without the need for the expert to travel.

Guided tours and indoor navigation

In some scenarios, users do not require guidance on how to operate a product or machine, but rather on how to navigate a place or site such as a museum or a large retail store. Media Markt Saturn, one of Europe’s largest consumer electronics retailers, has implemented a guided tour through its retail stores using the Microsoft HoloLens AR headset. Customers are guided through the store’s aisles by a small virtual avatar named Paula, who leads the customer to selected product offerings across the store based on the customer’s interest. Once the customer arrives at the respective offer, Paula highlights and explains various key features of the product offered, just as a staff member would do. 

Other scenarios, some of them even life-saving, are enabled when combining computer-generated virtual images in an AR headset with images from cameras that capture information beyond the visible light spectrum. The San Francisco-based company Qwake-Tech has developed a firefighter helmet with integrated AR glasses named "C-Thru". These glasses combine thermal imaging camera technology with visual edge-detection algorithms in AR to let firefighters see in smoke-filled, zero-visibility environments, helping these first responders detect victims and fellow firefighters faster, thus saving crucial minutes and, ultimately, lives. Future versions of the helmet may also include the building’s floor plan to provide firefighters with indoor navigation, guiding them to the next door in the room and along the quickest escape path out of the hazardous situation.

Superhuman vision

A similar kind of "superhuman vision" enabled by Augmented Reality is also used in medical scenarios. Doctors use AR headsets that project 3D imaging from X-ray or MRI scans onto the patient’s body in real time during surgery, improving the spatial localization of tumors or critical blood vessels or displaying additional data from the life-monitoring system. And, most importantly, all of this information is available hands-free. With the help of Virtual Reality headsets, doctors can also use the body scan data of patients, and the resulting 3D imagery, to plan and practice an operation ahead of time. Similarly, it can be used to teach medical students, allowing them to receive guided assistance and training on medical procedures before practicing on a real body for the first time.

Virtual training

In contrast to Augmented Reality, which supports users by annotating the real world with contextual digital information and images, Virtual Reality creates lifelike experiences of physical assets, places or situations that may be too expensive, too dangerous or too hard to recreate and practice in real life. That’s why one of the main use cases for Virtual Reality today is in virtual training. Using digital twins to simulate the exact look, size, and behavior of a physical object, the trainee gets immersive user assistance that enables him not only to understand and learn the subject but to actually create muscle memory from it.

The movements and actions needed to perform a given task or procedure are internalized naturally and help the user apply them when working with the real thing later. However, this is not only valuable for training medical students or pilots. It can also enable maintenance workers to practice a repair procedure for a remote oil rig, or a crane operator to learn the delicate handling of heavyweight construction material at a hundred-foot altitude. The company ITI (Industrial Training International), for example, offers VR training simulators for many different crane types and brands.

These simulators offer the experience of operating a life-size crane from the safety of your office or home, recreating the crane’s control booth high up in the air as well as the many buttons of the crane’s dashboard and the actual levers used to control and move it. The simulator even provides a physical replica of the real crane’s controls to give the trainee proper haptic feedback when maneuvering the virtual crane.


Content creation for VR & AR user assistance

So how do you create content for immersive user assistance? What tools and workflows can you use? Virtual and Augmented Reality are new media that come with new possibilities but also new affordances that will require new content creation tools and workflows to leverage their potential. While you can embed classic content such as text and video material into VR or AR experiences, virtual worlds and objects are inherently three-dimensional and spatial, so content creation for these media often requires the additional aspect of spatial modeling of objects, places and info tags.

So far, there are only a few specialized 3D modeling tools available on the market, including CAD tools such as AutoCAD used in engineering, or 3D modeling tools such as 3ds Max or Maya from Autodesk used in industrial design, movie or game development. These are highly specialized expert tools that are very comprehensive but hard to master. Additionally, the content created with these tools must typically be integrated programmatically into VR and AR applications, posing a relatively high entry barrier for content creators in this new medium.

However, VR and AR are still emerging technologies, and tools and workflows for content creation are still evolving. An increasing number of tools, like Google Tilt Brush or Google Blocks, simplify the content creation process by allowing the creation of VR content directly in VR, or AR content directly in AR. Content creators can model or paint content in 3D space right inside the headset, without having to abstract from a 2D screen representation as in classical PC-based modeling tools.

This means that they can create content in the same medium in which it is consumed. And because the user can interact directly with his hands and fingers within the virtual world instead of using a mouse and keyboard to create 3D objects, the content creation process becomes intuitive, like working with clay or using a brush in real life. This will lower the entry barrier and democratize content creation for a broader user community that will no longer require expert engineering or programming skills.



VR and AR are still emerging technologies, but they already show the potential to change our perception of and interaction with reality: a reality enhanced with digital annotations and virtual objects mapped onto the things and places around us. With the increasing convergence of the real and the digital world, user assistance will open up entirely new fields of application in the real world and, with this, evolve from a side product into a core, ubiquitous capability of these new technologies that we will come to expect and rely on in our daily lives.