Hardware for AI

Much of today’s computation remains tied to hardware built for spreadsheets and databases. When used for AI, it is power-hungry and inefficient. IBM is pushing the physics of AI to deliver radical improvement over the next decade, with innovation and co-development from algorithms to systems to devices.

Featured Projects

Applying co-optimized software and hardware

Deep learning compute throughput has progressed at a rate of around 5x per year since 2012, when deep learning was first shown to be successful at image recognition. As the gains of accelerator specialization begin to saturate and enterprises apply deep learning to many more types of data, the industry will require new software innovations, like PowerAI, to realize the potential of AI.
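That growth rate compounds quickly. A back-of-the-envelope check (the 5x/year figure is from the text above; the six-year span is an illustrative assumption):

```python
# Compound growth of deep learning compute throughput at roughly 5x per
# year (figure from the text); the 2012-2018 span is an illustrative choice.
rate = 5
years = 2018 - 2012
total_gain = rate ** years
print(total_gain)  # 15625, i.e. roughly 15,000x over six years
```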

A data-centric approach to systems

In partnership with industry innovators, like NVIDIA and Mellanox, through the OpenPOWER Foundation, IBM Research developed a data-centric system architecture designed to embed compute power everywhere data resides, which could drive new insights at incredible speeds. We’re deploying our next-generation IBM POWER Systems with NVIDIA Volta GPUs at Oak Ridge and Lawrence Livermore National Labs to help scientists tackle the world’s most pressing issues like climate change and the origins of the universe.

TrueNorth: Low-power inference engine

Scientists at IBM Research built the first gesture-recognition system on event-based hardware. Event-based computation is a biologically inspired paradigm for representing data. The system combines the IBM TrueNorth neurosynaptic processor with an iniLabs Dynamic Vision Sensor.

Digital AI cores embodying approximate computing

The human brain does not require the highest level of data accuracy to recognize an image. Inspired by this principle of approximate computing, IBM scientists are building a customized core architecture designed to increase the efficiency of AI systems, specifically for the training and inference processes of deep learning models.
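As an illustration of the principle, here is a minimal NumPy sketch (with made-up sizes) of what dropping precision looks like: weights quantized to 8-bit integers still reproduce a dot product closely, because the per-weight rounding error is bounded by half the quantization step. Real reduced-precision training is considerably more involved.

```python
import numpy as np

# Minimal sketch of approximate computing: quantize weights to 8-bit
# integers and check how closely a dot product is reproduced.
rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)   # "weights"
x = rng.standard_normal(1024).astype(np.float32)   # "activations"

scale = np.abs(w).max() / 127.0                    # symmetric 8-bit step
w_q = np.round(w / scale).astype(np.int8)          # quantized weights
w_deq = w_q.astype(np.float32) * scale             # approximate weights

# Rounding error per weight is at most half a quantization step.
max_err = np.abs(w - w_deq).max()
exact = float(w @ x)
approx = float(w_deq @ x)
print(max_err, exact, approx)
```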

Analog AI cores with mixed-precision in-memory compute

One of the biggest challenges in deriving intelligence and knowledge from huge volumes of data is the fundamental design of today’s computers, which shuttle data back and forth between memory and the computing unit—a slow and inefficient process. IBM Research is using the physics of AI to build a new architecture in which memory and processing coexist, a radical approach inspired by the human brain.
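One concrete form of this idea is the resistive crossbar, where a matrix-vector product is computed by physics rather than by shuttling operands: applying voltages to the rows of a conductance array produces column currents equal to the product (Ohm's law plus Kirchhoff's current law). A toy numerical model, with made-up sizes and noise level:

```python
import numpy as np

# Toy model of analog in-memory matrix-vector multiplication: voltages V
# applied to the rows of a conductance matrix G yield column currents
# I = G^T V, so the multiply happens where the weights are stored.
rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances encode the weights
V = rng.uniform(0.0, 1.0, size=4)        # input encoded as row voltages

I_ideal = G.T @ V                        # ideal analog result
noise = rng.normal(0.0, 0.01, size=3)    # crude model of device variability
I_read = I_ideal + noise                 # what the readout would actually see
print(I_ideal, I_read)
```

The point of the sketch is that the weights (conductances) never move: only the small input and output vectors cross the memory boundary.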

Spiking neural networks with non-volatile memory elements

IBM Research AI teams explore the state dynamics and multi-bit functionality of phase-change memory (PCM) cells to develop synapses and neurons for low-power, low-footprint non-von Neumann spiking neural network architectures. They also explore various learning algorithms on PCM arrays and investigate how the inherent randomness of phase-change neurons can be used for population coding.
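A common software abstraction for such spiking neurons is the leaky integrate-and-fire model. The sketch below uses illustrative parameters, not measured PCM device values, and omits the stochasticity the text mentions: the membrane potential integrates weighted input spikes, leaks over time, and fires when it crosses a threshold.

```python
# Minimal leaky integrate-and-fire (LIF) neuron; parameters are
# illustrative, not measured PCM values.
def lif(spike_train, weight=0.3, leak=0.9, threshold=1.0):
    v, out = 0.0, []
    for s in spike_train:
        v = leak * v + weight * s   # integrate input, apply leak
        if v >= threshold:          # fire and reset
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

spikes = lif([1, 1, 1, 1, 0, 0, 1, 1, 1, 1])
print(spikes)  # the neuron fires only after sustained input
```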

Analog AI cores using existing PCM materials

IBM scientists outlined the engineering tradeoffs in performing parallel reads and writes to large arrays of NVM devices for on-chip training of large-scale deep neural networks, using computation that is, at least locally, analog.

Analog AI cores engineered for AI training and inferencing

Resistive processing units (RPUs), first proposed by IBM scientists, could accelerate deep neural network computations by orders of magnitude while using much less power. RPU devices store and update weight values locally, minimizing data movement during computation and fully exploiting the locality and parallelism of the algorithms.
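The locality described above can be mimicked in a few lines: the SGD weight update is a rank-one outer product, and in an RPU array each crosspoint applies its own element of that product using only its row and column signals. A NumPy sketch with hypothetical sizes (the device physics is not modeled):

```python
import numpy as np

# Sketch of the RPU update rule: weights live in the array, and the SGD
# update is an outer product applied in place at each crosspoint, so
# neither weights nor updates travel to a central processor.
rng = np.random.default_rng(2)
W = rng.standard_normal((3, 4)) * 0.1   # weights stored in the array
W0 = W.copy()                           # kept only to verify the update
x = rng.standard_normal(4)              # forward input (column pulses)
delta = rng.standard_normal(3)          # backpropagated error (row pulses)
lr = 0.01                               # learning rate (illustrative)

# Cell (i, j) sees only delta[i] and x[j] -- a purely local update.
W += lr * np.outer(delta, x)
```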

Machines that mimic human intelligence

Machine intelligence is very different from machine learning. Machine intelligence involves the use of fast associative reasoning to mimic human intelligence. IBM Research is exploring machine intelligence by using the brain's neocortex as a model for developing flexible systems that learn continuously—and without human supervision. A recent paper discusses unique structures in the human brain called filopodia that may have a role in fast learning.

Quantum for AI

IBM Research is exploring the use of quantum computing to accelerate a selected set of AI primitive calculations. This early-stage research has shown promise, and further studies are needed to determine how quantum acceleration can benefit AI.

Enterprise AI on IBM Power Systems

Deep learning, machine learning and AI are now more accessible than ever. IBM Power Systems let you build powerful, innovative applications that put you on the fast track to AI.