

A New Era of Computing: Cognitive Systems

Dr. Dario Gil, Director of Cognitive Computing, IBM Research

Cognitive systems are an emerging model of computing designed with people as an integral and central element of the process, explicitly aimed at enhancing human cognition. In this new era, cognitive systems will do much more than execute pre-determined programs and calculate difficult equations at great speed. These systems will dramatically alter how we think, learn, and interact with computers; how business leaders use Big Data to make decisions; how policy makers devise new approaches to governing; and how individuals use technology in their everyday lives. They will learn from both structured and unstructured data, find important correlations, create hypotheses to explain those correlations, and suggest and measure actions that enable better outcomes for users. Systems with these capabilities will transform our view of computers from "calculators" to "machines that learn." This shift will radically alter our expectations of what computing ought to do for us as humans and will equip us to navigate the increasing complexity of our globally interconnected world.

The Human Brain Project

Prof. Idan Segev, Hebrew University

This October (2013), the European Commission decided to fund two "flagship" projects at one billion euros each over ten years. One of these is the Human Brain Project (HBP), which promises to provide a new platform for neuroscience research, for medical informatics, and for brain-inspired future computing. I will discuss these facets of the HBP, emphasizing my own part in it: developing methods for biologically faithful simulations of brain circuits. I will demonstrate what we have learned so far from this "bottom-up" modeling approach.

Surrogate Bodies: Thought-controlled Virtual and Robotic Representations

Dr. Doron Friedman, The Interdisciplinary Center

Brain-computer interfaces (BCIs) allow people to interact with external devices using "thought" alone. Subjects' intentions are decoded from brain activity using methods from signal processing and machine learning, and the application's feedback is used by the subjects to improve their level of control in closed-loop fashion. Using such methods, we have enabled people to control virtual avatars and humanoid robots by imagining moving their own bodies. Beyond the scientific and algorithmic challenges of BCI research, we also investigate what it takes to give people the sense of having an alternate body, as an extreme case of human-machine confluence.
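As a toy illustration of the closed-loop idea described above (not the speaker's actual pipeline), the sketch below simulates a one-dimensional "band-power" feature for two imagined movements, calibrates a simple threshold decoder on labeled trials, and models the effect of feedback as a gradual increase in class separation across sessions. All signal parameters and helper names are invented for the example.

```python
import random

random.seed(0)

def simulate_trial(cls, separation):
    # Hypothetical scalar feature (e.g. a band-power value); the two
    # imagined movements differ only in the feature's mean.
    mean = separation if cls == 1 else -separation
    return mean + random.gauss(0, 1.0)

def calibrate(separation, n=200):
    # Fit a decision threshold halfway between the two class means
    # (equivalent to a 1-D linear discriminant with equal variances).
    xs0 = [simulate_trial(0, separation) for _ in range(n)]
    xs1 = [simulate_trial(1, separation) for _ in range(n)]
    return (sum(xs0) / n + sum(xs1) / n) / 2

def run_session(separation, threshold, n=500):
    # Decode n trials and report accuracy.
    correct = 0
    for _ in range(n):
        cls = random.randint(0, 1)
        pred = 1 if simulate_trial(cls, separation) > threshold else 0
        correct += (pred == cls)
    return correct / n

# Closed loop: feedback helps the subject sharpen the mental imagery,
# modeled here as growing class separation from session to session.
accs = []
for sep in (0.5, 1.0, 1.5):
    threshold = calibrate(sep)
    accs.append(run_session(sep, threshold))
```

The point of the sketch is the loop at the bottom: decoding accuracy rises across sessions not because the decoder changes, but because the subject's signal becomes more separable under feedback.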

Can a Blind Person Understand your World?

Dr. Chieko Asakawa, IBM Fellow, IBM Research - Tokyo

Computers have been changing the lives of people with disabilities. OCR (optical character recognition) technologies enabled them to access printed documents, and web access with synthesized voice enabled them to use online services. Now, cognitive computing technologies are reaching the point where computers can help sense, recognize, and understand our living world. One important approach is crowd-driven accessibility, which combines human and machine intelligence to enable real-world accessibility. Dr. Asakawa will try to foresee the future direction, beginning with a review of the technologies that have given people with disabilities, and especially blind people, increasingly better access to a world where vision is taken for granted. She will then review how cognitive assistance was envisioned in science fiction, leading to a discussion of a possible assistance application. Finally, she will answer the posed question: can a blind person understand your world?

Keynote: Neural Representations of a Dynamic World

Prof. Richard Zemel, University of Toronto

As animals interact with their environments, they must constantly update estimates about relevant states of the world. For example, a batter must rapidly re-estimate the velocity of a baseball as he decides whether and when to swing at a pitch. Probabilistic models provide an accurate description of such processing, and a range of evidence from perceptual studies suggests that the brain can be characterized as a probabilistic model. In this talk I will consider two basic questions. First, how can populations of neurons represent the uncertainty that underlies this probabilistic formulation? Second, how can these probabilistic models be learned from interaction with the environment? I will conclude by describing recent progress in learning probabilistic models that can approach human-like performance.
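A minimal sketch of the first question, under standard modeling assumptions rather than anything specific to this talk: a population of Poisson-spiking neurons with Gaussian tuning curves encodes a one-dimensional stimulus (say, a velocity), and a flat-prior posterior computed from the spike counts yields not just a point estimate but a full distribution, whose width quantifies the population's uncertainty. All parameters here (tuning width, gain, grid) are invented for illustration.

```python
import math
import random

random.seed(1)

# Hypothetical population: 21 neurons with Gaussian tuning curves
# tiling a 1-D stimulus axis, each spiking with Poisson statistics.
centers = [i * 0.5 for i in range(-10, 11)]  # preferred stimuli
width, gain = 1.0, 20.0

def rate(c, s):
    # Expected spike count of a neuron preferring c, for stimulus s.
    return gain * math.exp(-(s - c) ** 2 / (2 * width ** 2))

def poisson(lam):
    # Knuth's multiplicative method; adequate for modest rates.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

true_s = 0.7
counts = [poisson(rate(c, true_s)) for c in centers]

# Posterior over a stimulus grid (flat prior): the Poisson
# log-likelihood is sum_i [ r_i * log f_i(s) - f_i(s) ].
grid = [i * 0.01 for i in range(-300, 301)]
logpost = [sum(r * math.log(rate(c, s)) - rate(c, s)
               for r, c in zip(counts, centers))
           for s in grid]
m = max(logpost)
post = [math.exp(lp - m) for lp in logpost]
z = sum(post)
post = [p / z for p in post]

# Posterior mean and variance: the variance is the population's
# explicit representation of uncertainty about the stimulus.
mean_s = sum(s * p for s, p in zip(grid, post))
var_s = sum((s - mean_s) ** 2 * p for s, p in zip(grid, post))
```

Lowering the gain (fewer spikes) widens the posterior, which is one concrete sense in which a spiking population can carry uncertainty and not merely an estimate.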

On Elephants, Earthquakes, and Brain Research

Prof. Nathan Intrator, Tel Aviv University

Algorithms have been developed for applications ranging from brain modeling, brain scanning, and analysis to earthquake detection. All were inspired by animals that use infrasound and ultrasound. The talk will describe some of this exciting research, with an emphasis on exploration of the human brain.

Watson after Jeopardy! Question Answering in Healthcare and Beyond

Dr. John M. Prager, IBM Research - Yorktown

Watson is a question-answering system that competed against the two most successful former champions of the US TV quiz show Jeopardy!, and won. Since the televised exhibition match in February 2011, the Watson research team has been adapting the technology for other applications, in particular healthcare. This has entailed not only changes to lexicons, ontologies, and basic utilities, but also a wholesale change to the problem-solving process and the user interface. In this talk I will discuss the ways in which both the original and the "medically enhanced" Watson learn, understand, and explain themselves. I will also illustrate how human-Watson interaction is evolving into something much more like a partnership.

For more information on the events please contact:

IBM Research

Learn more about cognitive computing, and how people and machines will partner together.