March 2020
Text by Andy McDonald

Image: © Kheng Ho Toh/123rf.com

Andy McDonald is one of the co-founders of the Information 4.0 Consortium. Over the past three years, he has increasingly specialized in researching AI and exchanging ideas about it. His particular interest is the impact of AI on society.


https://twitter.com/AndyMcD_TECH




Artificial Intelligence 2030 – empowered or overpowered by technology?

Where will we be in ten years’ time with information and AI?

General AI might be eons away, and we have no idea what it might bring. But the technologies around Narrow AI are here and are gradually creeping into our daily lives and changing them, whether we like it or not.

To understand the difference between General AI and Narrow AI, let’s take a look back into the early days of computer engineering. In 1935, at Cambridge University, Alan M. Turing conceived the modern computer. He described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The actions of the scanner are dictated by a program of instructions that is also stored in the memory in the form of symbols. This is Turing's "stored-program concept", and implicit in it is the possibility of the machine operating on and thus modifying or improving its own program. Turing's computing machine of 1935 is now known simply as the universal Turing machine. All modern computers are in essence universal Turing machines.
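To make the stored-program concept concrete, here is a minimal sketch of such a machine in Python. The transition-table format and the bit-flipping example program are my own illustrations, not Turing's notation:

def run_turing_machine(program, tape, state="start", head=0, max_steps=1000):
    # program maps (state, symbol) -> (new_state, symbol_to_write, move),
    # where move is -1 (left) or +1 (right). The program itself is data
    # held in memory, which is the essence of the stored-program concept.
    cells = dict(enumerate(tape))  # a sparse tape approximates limitless memory
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")      # "_" stands for a blank cell
        state, write, move = program[(state, symbol)]
        cells[head] = write                # the scanner writes a symbol...
        head += move                       # ...and moves along the tape
    return "".join(cells[i] for i in sorted(cells))

# Example program: scan right, flipping 0 <-> 1, and halt at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(flip_bits, "10110"))  # prints 01001_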

 

General AI

An Artificial General Intelligence (AGI) is a hypothetical machine capable of understanding the world as well as any human, with the same capacity to learn how to carry out a huge range of tasks. It would be a machine that is fully perceptive and cognitive, able to reason and to build its own intelligence by learning from experience. There is no clear vision of the path that will lead to General AI.

Narrow AI

Narrow AI is a specific type of Artificial Intelligence in which a technology outperforms humans in a very narrowly defined task. Unlike General Artificial Intelligence, Narrow Artificial Intelligence focuses on a single subset of cognitive abilities and advances within that spectrum. Narrow AI comprises the set of technologies being developed today.

 

Turing was a precursor; Artificial Intelligence itself was founded as an academic discipline in 1956. In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computing power, the availability of large amounts of data, and theoretical understanding. AI technologies have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering, and operations research.

Recent major efforts in deep learning as well as in facial and voice recognition have resulted in big advances. Google, Facebook, and Microsoft, among others, are building their own AIs, but as yet we know very little about them. To get an idea of the changes over the past decade, take a look at "How AI came to rule our lives over the last decade" by Rachel Metz on CNN Business.

With so much change over the course of one decade, let’s imagine we are ten years down the road: What do we expect to see? What could be a positive outcome? What could be a negative outcome? What conditions are required to get there?

 

What do we forecast?

AI will know us

Our behavior will be observed, and algorithms we are not even aware of will deduce a great deal from it. In some countries, this will be done using cameras that recognize us. China is already making wide use of facial recognition to identify people and their behaviors.

AI will guide us

On the positive side, AI will deliver content in contexts that we have not already accessed or consumed. It will propose related subjects that it knows we like, and it will deliver them in the format we prefer.
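A minimal sketch of the kind of content-based filtering this implies, with invented topics and titles, might look like the following: score each unseen article by how many of its topics overlap with the reader's history.

user_history = [
    {"machine learning", "ethics"},
    {"machine learning", "chatbots"},
]
catalog = {
    "Bias in face recognition": {"ethics", "face recognition"},
    "5G rollout in Europe": {"networks", "5G"},
    "Chatbots for education": {"chatbots", "education"},
}

# Build a simple interest profile: every topic the reader has engaged with.
profile = set().union(*user_history)

# Rank unseen articles by topic overlap with that profile.
ranked = sorted(catalog, key=lambda title: len(catalog[title] & profile), reverse=True)
print(ranked[0])  # the article sharing the most topics with the profile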

AI will train us

AI will know our skills and will propose further learning or skill acquisition from there. In this respect, we will have gone from training AI to being trained by it.

AI will organize our day

Technologies based on AI will organize our schedule, remind us what we have to do, and maybe do it for us. This means we will either have less control over our time or, conversely, more free time. No one can predict which of the two will prevail. Similar predictions were made for personal computers; we have become far busier, yet remained in control. However, a PC can always be turned off. AI, on the other hand, will be omnipresent in our daily organization, and we may not individually control how it affects us.

AI will control our access invisibly

AI will control how we access or are fed information. We see this happening today and we can predict that this effect will be accentuated in the future.

AI will be largely invisible

AI will be everywhere, and it will be more and more difficult to determine where it is interacting with us.

 

How will we consume content?

Content will be consumed mainly on mobile devices. Today, these are our smartphones but, in the future, the interfaces may be completely virtual and possibly immersive, going beyond the simple projections that we have today. Our whole content experience will envelop us. Most will be voice-controlled, and content will be conversational. Virtual devices may be displayed on surfaces, glasses, etc.

Our laptops at home will be replaced by our intelligent televisions. But the word "television" won't be used anymore. It will be our hyper-media device, and it will have a name. We will have a relationship with it, and it will observe us and feed us content it "thinks" is appropriate for us. Content will become an emotional experience (see text box below).

 

Emotional Interfaces of the Future

People create bonds with the products they use. The emotions users feel (both positive and negative) stay with them even after they have stopped using a product. The peak-end rule states that people judge an experience largely based on how they felt at its peak [and at its end]; the effect occurs regardless of whether the experience is pleasant or unpleasant.

It’s evident that positive emotional stimuli can build better engagement with your users – people will forgive a product’s shortcomings if you reward them with positive emotions.

In order to influence emotions, designers should have a solid understanding of general factors that impact users such as:

  • Human cognition – the way people consume information, learn or make decisions.
  • Human psychology – the factors that affect emotions, including colors, sounds, music, etc.
  • Cultural references
  • Contextual factors – how the user feels at the time of using a particular product. For example, when a user wants to purchase a ticket at a ticket machine, they want to spend the least possible amount of time on this activity. The user interface (UI) of this machine needs to reflect users’ desire for speed.

By focusing on those areas, it’s possible to create an experience that can change the way people perceive the world.

With permission from Gleb Kuznetsov on webdesignerdepot.com
Image 1: © Gerd Altmann/Pixabay.

 

Education

One of the more promising innovations might greatly change the education sector: the idea of a personal AI tutor or assistant for each individual student. While a single teacher can't work with every student individually, AI tutors would allow students to get extra, one-on-one help in the areas where it is necessary. There are many new possibilities in what The New York Times has dubbed "The Great AI Awakening." One possibility suggested by Forbes is adaptive learning programs, which assess and react to a student's emotions and learning preferences (from Ankit Rathi on Medium).

In my opinion, by using voice-driven tools, AI may help provide education for people who do not know how to read. Most major companies are working today on the next generation of intelligent chatbots, and we can see a need for this in education. Mobile phones today are used by huge populations who don’t necessarily know how to read.

The OECD report Future of Education and Skills 2030: Conceptual Learning Framework, published in October 2018, states:

"Some experts believe there will be a wholesale revolution in the nature of our education systems. For example, Seldon & Abidoye (2018), in a carefully researched book, The Fourth Education Revolution, analyze the role of AI and the changes that they see as inevitable in our education system. They claim that with the advent of AI, ‘Barely a single facet of this education system will remain unchanged.’"

This will allow:

  • Diminishing gaps between different socioeconomic groups
  • Providing access to knowledge and information for disabled students and those with additional educational needs
  • Personalizing learning
  • Individualizing feedback
  • Freeing up human teachers to work with learners on other things

Healthcare

The main purpose of healthcare AI applications is to examine relationships between prevention or treatment techniques and patient outcomes. AI programs have been built and implemented for practices such as diagnostic processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. Many medical institutions have developed AI algorithms for their departments.

Large technology companies as well as start-ups have developed AI algorithms for healthcare. Additionally, hospitals are looking to AI solutions to support operational initiatives that increase cost savings, improve patient satisfaction, and satisfy their staffing and workforce needs. Companies are also developing predictive analytics solutions that help healthcare managers improve business operations by increasing utilization, decreasing patient boarding, reducing length of stay, and optimizing staffing levels (also from Ankit Rathi on Medium).

 

What do we need to do to get there?

Ethics

The fundamental philosophical question revolves around ethics. There is no general agreement yet on what the ethics around AI should be, although several entities are working on this subject independently.

Control

Underlying this is the question of who is in control. The real questions will be "Can we pull the plug?" and "Who will be empowered to control the machine?"

Content redesign

Our content needs to be redesigned. And those redesigning it will need to understand how it will be delivered and experienced. Content will no longer be linear but an on-the-fly contextual assembly of information. Our information experience will be an exploration.
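As a rough sketch of what "on-the-fly contextual assembly" could mean in practice, imagine small content units tagged with metadata and filtered at delivery time to fit the reader's context. The tags and rules below are invented for illustration:

content_units = [
    {"text": "Quick-start steps", "audience": "novice", "channel": "mobile"},
    {"text": "Full configuration reference", "audience": "expert", "channel": "desktop"},
    {"text": "Safety warning", "audience": "all", "channel": "all"},
]

def assemble(context):
    # Keep only the units whose metadata matches the reader's current context;
    # the "document" is built at delivery time rather than written linearly.
    return [
        unit["text"] for unit in content_units
        if unit["audience"] in (context["audience"], "all")
        and unit["channel"] in (context["channel"], "all")
    ]

print(assemble({"audience": "novice", "channel": "mobile"}))
# prints ['Quick-start steps', 'Safety warning']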

Content will be a multifaceted experience in which we will be immersed. It will be much more than just multimedia. We will live in a virtual world, somewhat detached from our daily context.

As I write this article, Pranav Mistry of Samsung is announcing Samsung Neon, to be unveiled at CES 2020 in Las Vegas. Mistry has made it clear that he thinks "digital humans" – human-like avatars – will be a major technology in the 2020s. Follow this on CNET.

Image 2: Samsung’s new Neon project

Technology will make it happen

One example of a technology currently being deployed that will accelerate change is 5G. The map in Figure 3 shows the countries already field-testing it. We can only speculate on the networks we will have in ten years’ time.


Figure 3: Countries field-testing 5G

 

What are the downsides?

Without a doubt, AI also has its disadvantages.

Cognitive bias in AI

One concern is that AI programs may end up biased against certain groups, such as women and minorities, because most of the developers who build and train them are wealthy Caucasian men. Recent research also shows that support for Artificial Intelligence is higher among men than among women.
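Bias does not have to be written in deliberately; it can emerge from skewed training data alone. Here is a toy sketch with entirely synthetic data: a trivial model that learns one global rule from an imbalanced sample serves the over-represented group well and the under-represented group badly.

from collections import Counter

# Synthetic training data: (group, correct_label). Group A is over-represented.
training = [("A", "approve")] * 90 + [("B", "reject")] * 10

# The model ignores group membership and learns a single majority rule.
majority_label = Counter(label for _, label in training).most_common(1)[0][0]

# Evaluate against each group's actual correct label.
truth = {"A": "approve", "B": "reject"}
for group in ("A", "B"):
    verdict = "correct" if majority_label == truth[group] else "wrong"
    print(f"group {group}: model predicts '{majority_label}' -> {verdict}")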

Effect on humanity

Some experts suggest that AI applications cannot, by definition, successfully simulate genuine human empathy and that the use of AI technology in fields such as customer service or psychotherapy is deeply misguided. A few experts are also bothered by the fact that AI researchers (and some philosophers) are willing to view the human mind as nothing more than a computer program (a position that is now known as computationalism), which implies that AI research devalues human life.

Social manipulation

Through its autonomous algorithms, social media is very effective at target marketing. Social media platforms know who we are and what we like, and they are incredibly good at surmising what we think. Investigations are still underway to determine the fault of Cambridge Analytica and others associated with the firm, who used data from 50 million Facebook users to try to sway the outcome of the 2016 U.S. presidential election and the U.K.'s Brexit referendum. If the accusations are correct, they illustrate AI's power for social manipulation. By spreading propaganda, AI can target individuals identified through algorithms and personal data, delivering them whatever information its operators choose – fact or fiction – in the format each person will find most convincing.

Fake news disseminated on social media is already a problem. Without ethics and control, this plague will only increase. As David Foster, author of Generative Deep Learning, puts it:

"Newborn concepts, such as bias in machine learning and deepfakes, also preoccupied society, casting a shadow of mistrust over deep learning. Perhaps in the spirit that one should 'clear your own mess', researchers quickly rolled up their sleeves to work on these problems. The AI community is intensively working on developing techniques for automatically recognizing deepfakes as can be seen by the competition recently organized by Facebook. However, as prominent researchers including Ian Goodfellow are explaining, developing machine learning models for battling other machine learning models is like fighting fire with fire. Many fears around deepfakes involve authentication and can be partially addressed using appropriate encryption techniques, as well as by raising the awareness of people, who should not confuse verisimilitude with reality."

 

What are the risks?

A more individualistic society

An AI information experience will be an individual one. The risk is that AI might make us even more individualistic. Yet we need a society that is inclusive, with a large dose of empathy. Also, the availability of this form of AI doesn’t necessarily mean that everyone has access to it. So, will there be a new social rift based on AI access? This is something we must avoid.

Dealing with potential dangers

Experts believe these two dangerous scenarios are the most likely:

  • AI is programmed to do something devastating: Autonomous weapons are Artificial Intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but grows as levels of AI and autonomy increase.
  • AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which could happen more easily than you might think. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a super-intelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat that it needs to overcome. (A toy sketch of this misalignment problem follows below.)
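The airport example can be reduced to a few lines of code. In this toy sketch (all routes and numbers are invented), an optimizer given only "minimize travel time" picks the reckless option, because nothing in its objective says otherwise:

routes = [
    # (description, minutes, recklessness_penalty) -- hypothetical values
    ("highway at legal speed", 30, 0),
    ("back streets", 45, 0),
    ("highway, reckless driving", 18, 100),
]

# What we asked for: "as fast as possible".
literal = min(routes, key=lambda r: r[1])
print("literal objective picks:", literal[0])   # reckless driving

# What we meant: fast, but not at any cost.
aligned = min(routes, key=lambda r: r[1] + r[2])
print("aligned objective picks:", aligned[0])   # highway at legal speed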

 

Is it still up to us? Do we have a choice?

We can hope that it is still up to us to decide on our AI future, but in the long run we have no choice: AI is here, and it is here to stay. If we sit back and let it happen, it may very well become too late to intervene, decide, and control.

We can’t, as individuals, influence what AI will do. We can only learn and stay informed. I think we even have an obligation to do so. And that means that we have to become involved. One example of people thinking about our future is the Future of Life Institute.

The Institute is currently focusing on keeping Artificial Intelligence beneficial and on exploring ways of reducing the risks posed by nuclear weapons and biotechnology. FLI is based in the Boston area and welcomes the participation of scientists, students, philanthropists, and others, both nearby and around the world.

 

Is there hope?

We have to hope there is. In our corner of the tech comm or content world, we may be affected more than we can effect change. But our role is nonetheless essential. We will provide content, curation and governance. We must learn to develop these skills in the presence of, and for, AI.

We as a community will contribute to hope by becoming vigilant, involved and outspoken. This is part of the Information 4.0 Consortium’s ambitions.

I will close with another quote from David Foster, on LinkedIn:

"Finally, when we talk about the present and future of AI, we should bear in mind that there is not a single, global form of it. AI is being developed in parallel by countries with different resources and cultures, and this leads to systems with different objectives and capabilities. Particularly prominent is the "race" between the U.S. and China, with one end laying importance on privacy and the human rights of people interacting with this technology, and the other end putting the collective good at the forefront of every decision. Rivalry on a national level is quite alarming, but this is what got us to the moon, right?"