D. Fundamental questions in AI research, and the most important research gaps (RFI questions 5 and 6)

Last updated July 28, 2016


As AI systems become ubiquitous in people’s lives, serving many purposes in both personal and professional tasks, there remain many things they cannot do, or should do much better. In order for AI systems to enhance humans’ quality of life, both personally and professionally, they must acquire broad and deep knowledge from multiple domains, learn continuously from interactions with people and environments, and support reasoned decisions. Significant research efforts should be devoted to addressing these deficiencies. In particular, unsupervised learning capabilities are needed to give AI systems common-sense reasoning; methods should be developed to avoid bias and over-specificity in data sets; AI algorithms should be transparent and interpretable, and able to interact with humans in natural ways; machine goals should be specified in ways that assure value alignment and ethical principles; and machines should possess significant social capabilities. None of this can be achieved by AI researchers alone; it will require interdisciplinary teams of experts drawn from many disciplines.

The AI field’s long-term progress depends on many advances, including the following:

Machine learning and reasoning:
Most current AI systems use supervised learning, training on massive amounts of labeled data. This is not how humans learn: we learn from very little data, observing the world and building in our minds a model of how it functions. This lets us form internal concepts and relationships among them, and provides us with common-sense reasoning, which in turn allows us to learn without much data. Learning should also be achieved through instruction; through interaction (discussing, debating, watching other people learn); by doing things (using motor skills); by generalizing from very little data; and by transferring skills across many tasks. If we want to scale and improve current AI learning systems, researchers need to find ways to make such systems learn from fewer labeled examples. Unsupervised learning approaches that mimic how humans learn should be explored in depth. Transfer learning and inverse reinforcement learning can also be useful in this respect, since they provide the capability to learn by transferring concepts learned in other domains or by observing the world. These and other approaches to providing machines with common-sense reasoning capabilities should be investigated. Further tests to measure progress should be designed and developed, beyond existing ones such as the Winograd Schema Challenge and standardized science tests.
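The transfer-learning idea above can be made concrete with a toy numpy sketch. Everything here is hypothetical and deliberately minimal: a feature extractor is learned on a data-rich source task, then frozen and reused on a target task that has only ten labeled examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared structure: both tasks depend on the same 5 latent features.
mix = rng.normal(size=(20, 5))

# Source task: plentiful labeled data, used to learn a feature extractor.
X_src = rng.normal(size=(1000, 20))
F_src = X_src @ mix  # latent features, assumed recoverable from source labels
W, *_ = np.linalg.lstsq(X_src, F_src, rcond=None)  # learned extractor

# Target task: only 10 labeled examples; freeze W and fit a tiny "head".
beta = rng.normal(size=5)  # the target task's true weights (unknown to the learner)
X_tgt = rng.normal(size=(10, 20))
y_tgt = X_tgt @ mix @ beta
head, *_ = np.linalg.lstsq(X_tgt @ W, y_tgt, rcond=None)

# The small head generalizes because the transferred features carry the structure.
X_new = rng.normal(size=(50, 20))
max_err = np.max(np.abs(X_new @ W @ head - X_new @ mix @ beta))
```

Ten labels suffice here only because the transferred features already encode the shared structure; fitting the full 20-dimensional mapping from scratch with ten examples would fail.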

Decision techniques:
For AI-based systems to succeed broadly, new techniques must be developed for modeling systemic risks, analyzing tradeoffs, detecting anomalies in context, analyzing data while preserving privacy, and making decisions under uncertainty.

Domain-specific AI systems:
Deeply understanding the domains of human expertise, such as medicine, engineering, law, and thousands more, poses particularly difficult issues of knowledge acquisition, representation, and reasoning. AI systems must ultimately perform professional-level tasks, such as managing contradictions, designing experiments, and negotiating. This is not just a matter of applying existing AI research results and tools to specific application fields; it also requires developing innovative, domain-tailored basic research in AI, in collaboration with other disciplines.

Data assurance and trust:
As already mentioned, an intrinsic feature of machine learning systems is their need for huge amounts of data. There is no shortage of data in the world: images, videos, text, and more. However, we must be careful about what data is given to AI systems, since their behavior depends crucially on its quality. The old computing adage “garbage in, garbage out” was never more relevant than with AI. Training and test data can be biased, incomplete, or maliciously compromised, giving the learning system an accuracy that is only apparent and does not hold in real-life scenarios beyond the test data. At best this is expensive and time-wasting; at worst, where AI is used in safety-critical systems, it is potentially very dangerous. Significant resources and incentives should be devoted to studying ways of checking the quality, completeness, and appropriateness of the data used to train and test a learning system, and to developing techniques for measuring the entropy of datasets and for making AI systems more objective, resilient, and accurate. Automating data handling and the tuning of learning systems should also be considered a significant step toward making AI systems less biased, more objective, and more general.
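Some of the data checks suggested above can be automated cheaply before any training happens. The sketch below is illustrative, not an established API: it audits a dataset for label skew, missing values, and train/test leakage, three common sources of the "apparent accuracy" problem.

```python
import numpy as np
from collections import Counter

def audit_dataset(X_train, y_train, X_test):
    """Illustrative pre-training audit: a few cheap data-quality checks."""
    report = {}
    # Label balance: a heavily skewed class distribution can bias the model.
    counts = Counter(y_train)
    report["label_balance"] = {k: v / len(y_train) for k, v in counts.items()}
    # Missing values: NaNs silently degrade many learners.
    report["train_missing"] = int(np.isnan(X_train).sum())
    # Train/test leakage: identical rows inflate apparent accuracy.
    train_rows = {tuple(row) for row in X_train}
    report["leaked_rows"] = sum(tuple(row) in train_rows for row in X_test)
    return report

# Tiny example: one NaN, a 3:1 label skew, and one test row copied from training.
X_train = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, np.nan], [6.0, 7.0]])
y_train = ["a", "a", "a", "b"]
X_test = np.array([[0.0, 1.0], [8.0, 9.0]])
report = audit_dataset(X_train, y_train, X_test)
```

Real audits would go much further (distribution shift, protected-attribute correlations, label noise), but even checks this simple catch problems that would otherwise surface only after deployment.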

Radically efficient computing infrastructure:
When deployed at scale, AI systems will need to handle unprecedented workloads, which will require the development of new computing architectures, such as neuromorphic and approximate computing, and new devices, such as quantum computers and new types of memory.

Interpretability and explanations:
AI systems will function and work together with humans, often making decisions or suggesting them. For humans to follow an AI’s suggestions, they need to trust the machine. People will trust AI systems when those systems know users’ intents and priorities, explain their reasoning, learn from mistakes, and can be independently certified. These capabilities would greatly help in many business domains, where professionals need support in their jobs, as well as in tutoring systems, where students need advice and guidance. Features that help in this respect are algorithmic interpretability and transparency, as well as explainability: AI systems should be able to explain why they are suggesting certain decisions and not others, and should be transparent enough to allow for interpretability and accountability. Machine learning approaches are rather opaque in this regard, particularly neural-network variants of AI, where the reasoning is often entirely invisible to the user. They should be made less opaque, possibly by combining them with logic-based components that cannot learn as well as deep learning approaches but can provide a rationale, provided researchers find a way to check that the rationale faithfully reflects the behavior of the learning system. Explanations should use human-understandable symbols, such as natural language, and should be tuned to the specific human working with the system (user, developer, researcher, etc.).
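One concrete, model-agnostic route toward the interpretability called for above is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. The sketch below uses a hypothetical toy model; the large drop on the first feature exposes which input the otherwise opaque predictor actually relies on.

```python
import numpy as np

def permutation_importance(predict, X, y, rng):
    """For each feature, report the accuracy drop when that feature is shuffled.
    Large drops flag the features the model actually relies on."""
    base = (predict(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])
        drops.append(base - (predict(X_perm) == y).mean())
    return np.array(drops)

# Toy black-box model that secretly uses only the first of three features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] > 0
predict = lambda data: data[:, 0] > 0

drops = permutation_importance(predict, X, y, rng)
# drops[1] and drops[2] are exactly zero, since the model ignores those features.
```

This kind of post-hoc probe explains *which* inputs mattered, not *why*; the logic-based hybrid approaches mentioned above aim at the latter, deeper form of explanation.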

Value alignment and ethics:
AI systems often behave in unintended and unexpected ways because we humans did not specify the right goal for them. The mistake we usually make is to omit many essential details that would not be needed when specifying a goal to a human: humans share common knowledge of how the world functions that machines do not have unless we provide it to them. It is thus essential to understand how to give machines this basic knowledge, or how to allow them to acquire it by observing the world. It is also important that machines be provided with the same moral and ethical values that humans follow. These may vary based on culture, society, task, and profession. For example, an AI system helping a doctor choose the best therapy for a patient should follow the same ethical rules that doctors are required to follow. Work on understanding how to learn, specify, and comply with specific ethical principles is needed to achieve value alignment, and thus to allow correct goal specification and avoid unintended and undesired consequences in an AI system’s behavior. In the domain of AI and ethics, multinational and multicultural efforts are needed.

Social AI:
AI systems will not work in isolation. They will be tightly connected to humans, in both their professional and personal lives, and so will need significant social capabilities, because their presence in our lives has a profound impact on our emotions and on our decision-making. For example, companion robots for elder care need to know how to relate to an elderly person in order to achieve the desired goals (such as getting them to take a pill) effectively. Sophisticated natural language capabilities will be needed for this purpose, to allow natural interaction and dialog between humans and machines.
