
AI Hardware

IBM Research is developing new devices and hardware architectures that support the tremendous processing power and unprecedented speed that AI requires to realize its full potential.


About us

The field of artificial intelligence (AI) has witnessed tremendous growth in recent years with the advent of Deep Neural Networks (DNNs) that surpass humans in a variety of cognitive tasks. The algorithmic superiority of DNNs comes at extremely high computation and memory costs that pose significant challenges to the hardware platforms executing them. Currently, GPUs and specialized digital CMOS accelerators are the state of the art in DNN hardware. However, the ever-increasing complexity of DNNs and the data they process has led to a quest for the next quantum improvement in processing efficiency.

The AI hardware team is exploring new devices, architectures, and algorithms to improve processing efficiency and to enable the transition from Narrow AI to Broad AI. Approximate computing, in-memory computing, machine intelligence, and quantum computing are all among the computing approaches being explored for AI workloads.
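In-memory computing, one of the approaches mentioned above, performs matrix-vector multiplication directly inside a memory array: weights are stored as device conductances in a crossbar, input voltages are applied to the rows, and the currents summing along each column compute the dot products in a single step. The sketch below is a minimal, hypothetical NumPy simulation of that idea (the function name and the Gaussian noise model are illustrative assumptions, not IBM's actual design):

```python
import numpy as np

def crossbar_matvec(weights, x, noise_std=0.01, seed=0):
    """Simulate an analog in-memory matrix-vector multiply.

    Weights act as conductances in a crossbar; applying input voltages
    x produces column currents that sum per Ohm's and Kirchhoff's laws,
    computing weights @ x in one step. Device non-idealities are modeled
    here (as an illustrative assumption) by Gaussian conductance noise.
    """
    rng = np.random.default_rng(seed)
    noisy_weights = weights + rng.normal(0.0, noise_std, size=weights.shape)
    return noisy_weights @ x

W = np.array([[0.2, -0.1],
              [0.5,  0.3]])
x = np.array([1.0, 2.0])
exact = W @ x                      # digital reference result
analog = crossbar_matvec(W, x)     # noisy analog estimate
```

With `noise_std=0.0` the simulated crossbar reproduces the exact product; the nonzero default illustrates why algorithms for analog hardware must tolerate device variability.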

AI Hardware Center

IBM launched a global research collaboration hub to drive next-generation AI Hardware and expand joint research efforts in nanotechnology.


Focus areas

Today’s systems have achieved improved AI performance by infusing machine-learning capabilities with high-bandwidth CPUs and GPUs, specialized AI accelerators, and high-performance networking equipment. To maintain this trajectory, new thinking is needed to accelerate AI performance scaling to match ever-expanding AI workload complexities. IBM Research is tackling this challenge across device, architecture, packaging, system design, and algorithm design.


Digital AI Cores

Analog AI Cores

Heterogeneous Integration

Quantum Computing for ML

Machine Intelligence

AI Optimized Systems


Featured work

Analog AI Cores

A New Design Paradigm

At IBM Research we’re developing a new class of Analog AI hardware, purpose-built to help innovators realize the promise of the next stages of AI.

Analog AI Cores

AI Hardware for Hybrid Cloud Environments

IBM Teams with Industry Partners to Bring Energy-Efficient AI Hardware to Hybrid Cloud Environments.

Digital AI Cores

Ultra-Low-Precision Training of Deep Neural Networks

IBM researchers introduce accumulation bit-width scaling, addressing a critical need in ultra-low-precision hardware for training deep neural networks.
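Ultra-low-precision training reduces hardware cost by representing weights and activations with only a few bits. As a rough illustration of the underlying idea (not IBM's accumulation bit-width scaling method itself), the hypothetical sketch below applies generic symmetric uniform quantization to a tensor:

```python
import numpy as np

def quantize(x, bits=4):
    """Symmetric uniform quantization to `bits` bits.

    A generic textbook scheme, shown for illustration only: values are
    scaled so the largest magnitude maps to the top integer level,
    rounded to the nearest level, then dequantized back to floats.
    """
    levels = 2 ** (bits - 1) - 1              # e.g. 7 positive levels at 4 bits
    scale = np.max(np.abs(x)) / levels        # step size between levels
    q = np.clip(np.round(x / scale), -levels, levels)
    return q * scale                          # dequantized approximation

x = np.array([0.9, -0.35, 0.02, -0.7])
xq = quantize(x, bits=4)
```

Each quantized value differs from the original by at most half a step; training research such as the paper above studies how few bits (in weights, activations, and crucially the accumulators that sum partial products) can be used before accuracy degrades.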

Analog AI Cores

Novel Synaptic Architecture for Brain Inspired Computing

IBM scientists developed an artificial synaptic architecture, a significant step towards large-scale, energy-efficient neuromorphic computing technology.

Publications

The IBM Research AI Hardware team's research aims to advance the development of computing chips and systems that are specifically designed and optimized for AI workloads, pushing the boundaries of AI performance.

Please explore all of our AI hardware-related research papers.

All publications

TITLE | RESEARCH AREA | VENUE
Computational memory-based inference and training of deep neural networks | Analog AI Cores | Symposium on VLSI Technology (2019)
Confined PCM-based Analog Synaptic Devices offering Low Resistance-drift and 1000 Programmable States for Deep Learning | Analog AI Cores | Symposium on VLSI Technology (2019)
Inference of Long-Short Term Memory networks at software-equivalent accuracy using 2.5M analog Phase Change Memory devices | Analog AI Cores | Symposium on VLSI Technology (2019)
Weight Programming in DNN Analog Hardware Accelerators in the Presence of NVM Variability | Analog AI Cores | AELM (2019)
Analog-to-Digital Conversion with Reconfigurable Function Mapping for Neural Networks Activation Function Acceleration | Analog AI Cores | IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2019)
AI hardware acceleration with analog memory: micro-architectures for low energy at high speed | Analog AI Cores | IBM Journal of Research and Development (2019)
Algorithm for Training Neural Networks on Resistive Device Arrays | Analog AI Cores | arXiv (2019)
Accurate and Efficient 2-bit Quantized Neural Networks | Digital AI Cores | SysML (2019)
Parallel Prism: A topology for pipelined implementations of convolutional neural network using computational memory | Analog AI Cores | arXiv (2019)
Computational phase-change memory: Beyond von Neumann computing | Analog AI Cores | Journal of Physics D: Applied Physics (2019)
Multi-ReRAM synapses for artificial neural network training | Analog AI Cores | IEEE International Symposium on Circuits and Systems (2019)
Training neural networks using memristive devices with nonlinear accumulative behaviour | Analog AI Cores | International Memory Workshop (2019)
In-memory computing on a photonic platform | Analog AI Cores | Science Advances 5(2) (2019)
System Performance: From Enterprise to AI | Heterogeneous Integration | iTherm and Interpack (2019)
DeepTools: Compiler and Execution Runtime Extensions for RaPiD AI Accelerator | Digital AI Cores | IEEE Micro (2019)

Try our tech

At IBM Research, we create innovative tools and resources to help unleash the power of AI. See for yourself. Take our tech for a spin.

Demos

Come do your best work

We are looking for talented researchers who are as passionate as we are about artificial intelligence, advancing science, and inventing the next generation of intelligent machines.

Explore careers

Learn more about IBM Research AI

Publications

Demos of AI tech

Blog