November 2016
By Ray Gallon and Andy McDonald


Ray Gallon is co-founder of the research and consulting company The Transformation Society, and owner of Culturecom, a company that provides business process improvement through communication. He has over 40 years of experience as a communicator, first as an award-winning radio producer and journalist, then in the technical content industries.



Andy McDonald has been designing and writing documentation for the oil industry since 1998 and is now Innovative Information Products Manager for Tech Advantage in the Paris area. Having seen methods, norms and formats come and go, his basic training leads him to concentrate on the people involved in the processes and the end user requirements.




Context sensing and Information 4.0

Context sensing provides us with the means to produce highly personalized information that is geared towards the user’s needs on a minute-by-minute basis. So are we ready to become creators of Information 4.0?

If you want to catch a glimpse of how people are going to access information in the future, download the Blippar app. Scan your hand and see what you get:

Figure 1: Four screenshots from Blippar after blipping a hand


The app recognizes the image and crawls the web for a transmedia sampling of information.

But it doesn’t stop there. Behind these results is an extensive ontology represented by spheres, which provide new results. Tap the "finger" sphere to get a new ontology in which, interestingly, one of the links leads to "amputation".

Blippar believes that visual recognition and Augmented Reality are their key innovations. And they are. However, we think the way they process and offer information to the user is even more significant.

What is important here is that none of this information is static. If the Wikipedia "Hand" entry is updated, or new relationships are found, they will be displayed immediately. This is constant information delivery, continuously updated and mashed up from a variety of sources.

Blippar gives us an idea of what challenges we have to solve in the next information revolution sparked by Industry 4.0.

  • When someone points Blippar or a similar app at your product or software screen, will your user assistance show up?
  • How can you guarantee this?
  • Is the information offered pertinent to the task or issue they’re interested in?
  • How can we identify this issue?
  • How can you be sure that what the user needs is the obvious choice?

Not just smart refrigerators

Industry 4.0 surely has a lot to do with the Internet of Things (IoT), but it goes well beyond objects communicating the fact that the milk in your refrigerator is going bad. It’s a complex network of networks, powered by artificial intelligence (AI), in which objects make autonomous decisions that affect us directly in a number of ways.

In Industry 4.0, information transparency is a requirement. Industry 4.0 information systems will create a virtual copy of the physical world by enriching digital plant models with sensor data. This requires the aggregation of raw sensor data to higher-value context information.

The way we see it, one of the most important products of Industry 4.0 will be information. Notice that we said "product", not "product-related" or "product instructions" or any other euphemism. We all need and want information. In a lot of ways, it will be tuned to profiles that are not defined by an individual.

Figure 2: The four industrial revolutions as defined by Wikipedia
Source: Christoph Roser at


In the context of Industry 4.0, we define Information 4.0 as having these characteristics:

  • Molecular – no documents, just information molecules
  • Dynamic – continuously updated
  • Offered rather than delivered
  • Ubiquitous, online, searchable and findable
  • Spontaneous – triggered by contexts
  • Profiled automatically

The key to this new informational environment is context-sensing technology. But this context sensing goes well beyond familiar concepts.


The context of context

Intel defines context sensing in terms of perishable context information:

Information that must be collected and used soon, because otherwise the information may not be valid anymore. This kind of context information is called "context state." A group of "context states" comprises a "snapshot" of the current user's context, such as location, user activity, and the user's surrounding environment. This snapshot is formally called the "state vector," which contains a collection of "context states" describing the user's current context.

This might appear to be a comprehensive definition, but Intel’s idea of a "state vector" may not have a wide enough horizon.
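Intel's notions of a perishable "context state" and a "state vector" snapshot can be sketched as a small data structure. This is our own minimal illustration; the field names, the time-to-live mechanism, and the expiry threshold are assumptions, not Intel's API.

```python
# A minimal sketch of a "context state" / "state vector".
# Field names and TTL values are illustrative assumptions.
from dataclasses import dataclass, field
from time import time

@dataclass
class ContextState:
    """One perishable observation about the user's context."""
    name: str            # e.g. "location", "activity"
    value: object
    captured_at: float = field(default_factory=time)
    ttl_seconds: float = 300.0   # context states lose validity quickly

    def is_valid(self, now: float) -> bool:
        return now - self.captured_at <= self.ttl_seconds

@dataclass
class StateVector:
    """A snapshot of the user's current context: valid states only."""
    states: list

    def snapshot(self) -> dict:
        now = time()
        return {s.name: s.value for s in self.states if s.is_valid(now)}

vector = StateVector([
    ContextState("location", "museum"),
    ContextState("activity", "walking"),
    # captured an hour ago with a 10-minute TTL: no longer valid
    ContextState("weather", "rain", captured_at=time() - 3600, ttl_seconds=600),
])
print(vector.snapshot())  # the stale "weather" state is dropped
```

The point of the sketch is the perishability: a state vector is not a profile but a snapshot that must be rebuilt continuously.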

Dr. Christian Glahn, director of the Blended Learning Center at the Chur University of Applied Sciences in Switzerland, refers to context sensing as a "discovery". Its central principles are:

  • Not always more of the same
    --> If I just ate, I don’t need more restaurants.
    --> After four hours in a museum, I need a café more than another museum.
  • Meaningful connections
    --> If I’m on a business trip, I’m not interested in finding a gym just before a meeting.
  • Follow rhythms
    --> If I always eat dinner around 6:00 PM, I might be interested in finding a restaurant at this time when I’m away from home.

Can you imagine applying these principles to user assistance and other technical information? This means not only context-sensitive in the static, traditional sense, but highly personalized and dynamically contextualized information on a minute-by-minute basis!
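Glahn's three principles can be read as filter rules applied to candidate suggestions against the current state vector. The sketch below is our interpretation; the rule logic, thresholds, and field names are assumptions, not part of any published system.

```python
# Glahn's principles as filter rules over suggestion candidates.
# Rule logic and thresholds are our own illustrative assumptions.
def not_more_of_the_same(candidate, ctx):
    # "If I just ate, I don't need more restaurants."
    return not (candidate["category"] == ctx.get("last_activity")
                and ctx.get("minutes_since_last", 0) < 90)

def meaningful_connection(candidate, ctx):
    # "On a business trip, no gym just before a meeting."
    return not (ctx.get("next_meeting_in_min", 999) < 60
                and candidate["category"] not in ("cafe", "taxi"))

def follow_rhythms(candidate, ctx):
    # "If I always eat dinner around 6:00 PM, suggest restaurants then."
    if candidate["category"] == "restaurant":
        return abs(ctx.get("hour", 0) - ctx.get("usual_dinner_hour", 18)) <= 1
    return True

RULES = [not_more_of_the_same, meaningful_connection, follow_rhythms]

def offer(candidates, ctx):
    """Offer only the candidates that pass every principle."""
    return [c for c in candidates if all(rule(c, ctx) for rule in RULES)]

ctx = {"last_activity": "restaurant", "minutes_since_last": 30,
       "hour": 18, "usual_dinner_hour": 18}
candidates = [{"name": "Chez Sam", "category": "restaurant"},
              {"name": "City Museum", "category": "museum"}]
print(offer(candidates, ctx))  # only the museum survives: "I just ate"
```

Note that the same candidate set produces different offers as the state vector changes, which is exactly the "discovery" behavior Glahn describes.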


Mobile evolution

Without noticing, our mobile devices have transmuted from phones to Internet terminals. And they’re about to mutate again from terminals to context sensors. We’ll still use them to phone home or catch up on gossip regarding our favorite pop singer, but a mobile’s real function will be the elaboration of constantly evolving, real-time state vectors.



Your terminal will soon know:

  • where it is (in 3D)
  • whether it is indoors or outdoors
  • whether it is moving or not
  • whether it is attached to you or not
  • your preferred communication channels
  • objects in view, especially faces
  • input type (verbal, haptic, optic)
  • ambient noise
  • ambient conditions (lighting, electromagnetic, temperature, barometric pressure)
  • elements in proximity
  • current time (local, season, day, date)
  • your age, gender, family situation
  • your behavior (themes that interest you for work and leisure, learning style)
  • your networks – social as well as technological
  • your history (previous states of your networks, applications, situation in space-time)
  • your emotions

Terminals already "know" some of these things – like the current time – because we, or our service provider, tell them. Tomorrow they will know these things all by themselves and make decisions based on this knowledge. But what will happen when your phone starts comparing you to statistics from Big Data and factors your surroundings into your state vector? Picture the following scenario:

You pass a shoe store (part of a national chain) in a shopping center – let’s call it Sam’s Shoes. Your terminal knows that you bought your running shoes six months ago and, based on your time spent running, it calculates that you are just about due for a new pair. Correlating with the store, it finds your brand and model on sale there and alerts you. If you are jogging, it will have the store send you an email instead, and the store may decide to include a voucher.

In this scenario, your mobile isn’t simply alerting you about a national shoe sale. It triggers THIS Sam’s Shoes to suggest you buy the SAME shoes, on sale NOW, because your phone deduced YOUR CURRENT SHOES ARE ABOUT TO WEAR OUT.
This level of personalization makes marketers salivate – but it will be a reality before we know it.
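The scenario boils down to a trigger that correlates personal context history with store data. A hedged sketch, in which the wear model, thresholds, and field names are all hypothetical (no real vendor API is implied):

```python
# A sketch of the Sam's Shoes trigger. The wear model and all
# thresholds are hypothetical assumptions for illustration.
def shoes_worn_out(purchase_age_days, km_run, wear_limit_km=800):
    """Crude wear model: shoes are due after ~800 km or one year."""
    return km_run >= wear_limit_km or purchase_age_days >= 365

def should_alert(user, store):
    """Alert only when need, availability, and proximity coincide."""
    return (shoes_worn_out(user["shoe_age_days"], user["km_run"])
            and user["shoe_model"] in store["on_sale_models"]
            and store["distance_m"] < 200)   # user is passing the store

user = {"shoe_age_days": 180, "km_run": 820, "shoe_model": "Roadrunner 3"}
store = {"on_sale_models": {"Roadrunner 3"}, "distance_m": 50}
print(should_alert(user, store))  # → True
```

The significant part is not any single condition but the conjunction: the alert fires only when the user's history, the store's stock, and the user's current position all line up.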


Context states and individual needs

This implies that the context state, as defined by Intel, is not just about location, activity, or surroundings. It’s also about interior states that the terminal has monitored and collected: heart rate, perspiration, respiration, general metabolic activity, brain waves, etc.

Coupling context state information with context history predicts mood, behavior and needs. The information offers that we receive can, and probably will, become ever more narrowly filtered and focused as they become more and more personalized. So how can we use all this collected data on context states to help clients and users?


Developing a coherent content strategy

Each individual’s needs form a loose matrix based on varying levels of interest or requirements across multiple domains. This matrix varies in time.

Mapping the needs potentially involves defining thousands of personas. A coherent content strategy is indispensable for defining the information made available to the matrix.

In user-centered design and marketing, personas are fictional characters created to represent the different user types that might use a site, brand, or product in a similar way... [From Wikipedia]

Although personas are primarily used in marketing, we take a more global view of them: fictional characters created to represent different user typologies having similar experiences and needs.

Let’s take an example from the energy industry, where various disciplines (processing, drilling, geology, geophysics, reservoir engineering, etc.) interact to obtain results. Individuals involved have varying levels of experience, competency, skills or know-how in each field, resulting in a complex requirements matrix as shown in Figure 3. 

Figure 3: Expected skills acquisition over time vs what actually happens


A company that uses software provided by a supplier has objectives concerning the evolution of competencies in various domains. The supplier can’t decide these objectives; they are defined by policy within the company.

The supplier can, however, map these evolutions to persona journeys, on a broader scale than any single end client.

A journey is a set of changes in state vectors (orange line, Figure 3). Information 4.0 requires that we respond with refined content candidates.

Personas will probably be tagged as belonging to a single discipline. In reality, the boundaries are not so clean: handling complex processes requires awareness of others’ skills.

Let’s look at the same situation through an individual user journey. While the company has objectives, the individual user will require or desire other, unplanned competencies, forget some, or want to move to domains outside their core domain (blue line, Figure 3).

Integrating this phenomenon into production requires metrics for tracking the real journey.

Success will be measured by:

  • satisfaction
  • company feedback on competency improvement.

At no point have we talked about typing or structuring, and definitely not delivery. We’ll come back to this.

Facilitating individual competency learning

Users of technological products or software are learners. Learning is part of a user journey and is designed in stages (changes in state vectors) that can be easily assimilated. As information developers, we won’t decide when users progress. They will.

Users graze from a variety of sources, and the learning process is no longer linear. Our job is to fill the gaps with pertinent information candidates. Users will cherry-pick for their needs (based on perception), and also learn what we don’t plan or expect. We will promote the journey, facilitate the stages, encourage engagement, and reward success. We are no longer just writers.


Evolving from support to user relationship

Bots and conversational AI (such as IBM Watson) will allow the first level of user support to be automated. It won’t replace the human support person entirely, but will cater to repetitive cases and issues and detect the unresolved. Humans will intervene more in the unresolved complex problems, where expertise, handholding, foreseeing issues, and improving existing practices are required.

This does not remove the requirement for information production. The objective will be to provide less costly, but more pertinent support based on profile, persona and history. The management systems for this type of support are still being designed.

Emotions embedded in sensing

Even if we can’t write for happiness, we need to write for success. Emotions will play a big part in how well our content serves our purposes and helps us curate. For example, on a cognitive level, joy can be leveraged; contempt and disgust cannot, but they still have to be taken into account. Resolving a user’s frustration requires alternative re-engagement strategies, including real integrated communities of stakeholders where the user’s problems are taken seriously and their suggestions can be integrated into the products.

Figure 4: Sample of emotions from the Affectiva Developer Portal


The Affdex SDK by Affectiva brings emotion sensing and analytics to software via facial expression recognition. We need to understand how to map content to emotion so that AI applications like this one can help us to provide truly responsive UX.
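One way to start mapping content to emotion is a simple lookup from a detected emotion label to a content strategy. The sketch below is entirely our assumption: the labels echo the kind of emotions shown in Figure 4, and the strategies are illustrative; nothing here reflects Affectiva's actual SDK output.

```python
# A hedged sketch of emotion-to-content-strategy mapping.
# Labels and strategies are illustrative assumptions, not Affdex output.
RESPONSE_STRATEGIES = {
    "joy": "leverage",           # reinforce and deepen engagement
    "surprise": "explain",       # offer clarifying information
    "frustration": "re-engage",  # alternative paths, community or human help
    "contempt": "acknowledge",   # cannot be leveraged, must be accounted for
    "disgust": "acknowledge",
}

def choose_strategy(emotion: str) -> str:
    """Fall back to a neutral strategy for unmapped emotions."""
    return RESPONSE_STRATEGIES.get(emotion, "neutral")

print(choose_strategy("frustration"))  # → re-engage
```

A real system would of course weigh confidence scores and context history rather than a single label, but even this crude mapping shows where emotion data plugs into content selection.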

Multi-threaded production

Our current information production is massively linear. Our next information production model has to consider:

  • Replacing waterfall production (documents) with constant delivery or real-time availability of molecular information
  • Context tagging as an imperative, emotional tagging in the future
  • Widely collaborative processes where stakeholders, including users, drive goal-oriented efforts

Constant delivery or real-time availability implies changing:

  • Decisions about the what, who and when of production and validation – integral to new content strategies
  • Feedback management – this is curation and animation, rather than moderation

Minimalist practice reduces information overload, but doesn’t solve everything; it’s a principle, not a design method. Designing for non-linear user journeys means making structures smaller than those we use today in systems like DITA – structures that are combined, updated, and recombined in real time.

Information 4.0 is lean, nimble, profiled and designed to be assembled spontaneously into an emotion-based response, tailored to a persona.
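The molecular model described above can be sketched as small content units carrying context tags, assembled on demand against the current state vector. The tag names and matching rule below are illustrative assumptions, not a proposed replacement for DITA or any real spec.

```python
# "Molecular" information: small units with context tags, assembled
# on demand. Tags and the matching rule are illustrative assumptions.
MOLECULES = [
    {"id": "m1", "text": "Tap Export to save your report.",
     "tags": {"task": "export", "level": "novice"}},
    {"id": "m2", "text": "Exports can be scheduled via the CLI.",
     "tags": {"task": "export", "level": "expert"}},
    {"id": "m3", "text": "Welcome! Start with the guided tour.",
     "tags": {"task": "onboarding", "level": "novice"}},
]

def assemble(context_tags):
    """Offer the molecules whose tags match the current state vector."""
    return [m["text"] for m in MOLECULES
            if all(m["tags"].get(k) == v for k, v in context_tags.items())]

print(assemble({"task": "export", "level": "novice"}))
# → ['Tap Export to save your report.']
```

There are no documents here, only molecules: the "deliverable" is whatever the current context assembles, and updating one molecule updates every future assembly that includes it.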

The Blippar example at the beginning draws on the broadest content set possible, mapped using ontological relationships. This is fine for unguided discovery – without a precise objective. But our ontologies have refined purposes. For products or software, for instance, information candidates and the relations between them are oriented towards onboarding, acceptance, familiarity, proficiency and, eventually, expertise.

Defining the future?

Industry 4.0 brings game-changing technology to the scene. It will impact our experiences and behaviors in social and economic spheres. These changes are happening now and, as practitioners, we need to harness them, identify where we add value, and inject that value into the content and experience process.

Information 4.0 includes design, production, curation, collaboration, animation, and governance. We may think we already do all of these things, but we need to break our usual patterns and learn to do them in a more agile, nimble fashion. Today, information changes in the time it takes to verify it using traditional methods, which jeopardizes its responsiveness for 4.0. Accuracy and validation need to be built into the real-time delivery process.

We need to learn to provide information almost instantly, include users as stakeholders and value their contributions, and become comfortable with continuous change and improvement.

The governance of these processes is critical; the criteria and mechanisms are yet to be written. Systems for managing Information 4.0 need to emerge or coalesce. We can let Industry 4.0 dominate the way our future is defined, or we can be the ones who define how it changes us.