Hardware and the Physics of AI
For decades, advances in computational technology have aimed to increase versatility so that hardware can be used for a wide variety of applications. The trade-off is that the hardware is not optimized for any single application. Hardware accelerators may be the answer to this dilemma: they are tailored to a specific class of applications to deliver the best compute efficiency. At IBM Research, we are pursuing algorithmic and hardware accelerators for deep learning rooted in conventional CMOS technology, while simultaneously taking a fresh look at how hardware can be designed for better compute efficiency. This includes analog devices for AI and quantum computing, approaches that exploit the close relationship between AI algorithms and the physics of the underlying system components.
Analog computing for deep learning uses arrays of nonvolatile memory to perform matrix operations, with the weights imprinted in the memory nodes. Because the weights do not move between memory and a compute unit, matrix operations can be done in parallel, in constant time. IBM Research has successfully trained deep learning networks with this new architecture, and we are now moving into a new era of chips designed specifically for deep learning training and inference.

Quantum computing aims to help us address problems we currently cannot even attempt. IBM Research is working to understand the interaction between AI algorithms and the underlying hardware in order to apply quantum computing to AI, as well as using AI to characterize and optimize quantum systems.
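The in-memory matrix operation described above can be illustrated with a toy simulation. In a crossbar array, each weight is stored as a conductance at a crosspoint; applying input voltages to the rows produces, by Ohm's law and Kirchhoff's current law, column currents that are exactly the matrix-vector product. The sketch below is a simplified, idealized model (no device noise or nonlinearity), and the function and variable names are illustrative, not from any IBM API:

```python
def crossbar_matvec(conductances, voltages):
    """Idealized analog crossbar: output currents = G^T * V.

    conductances: rows x cols matrix of stored weights (list of lists),
                  playing the role of nonvolatile memory conductances
    voltages: input voltages applied to the rows (list)

    Each crosspoint contributes I = G * V (Ohm's law), and each column
    wire sums its crosspoint currents (Kirchhoff's current law) -- the
    physical array performs every multiply-accumulate simultaneously,
    which is why the operation takes constant time in hardware.
    """
    rows = len(conductances)
    cols = len(conductances[0])
    return [sum(conductances[r][c] * voltages[r] for r in range(rows))
            for c in range(cols)]

# Example: a 2x3 weight matrix applied to a 2-element input vector.
G = [[0.5, 1.0, 0.0],
     [2.0, 0.0, 1.0]]
V = [1.0, 0.5]
print(crossbar_matvec(G, V))  # [1.5, 1.0, 0.5]
```

In software this loop is of course sequential; the point is that in the physical array every multiply-accumulate happens at once in the memory itself, so no weights are shuttled to a separate compute unit.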
Hardware for AI
Much of today’s computation remains tied to hardware built for spreadsheets and databases. When used for AI, such hardware is power-hungry and inefficient. IBM is pushing the physics of AI to deliver radical improvement over the next decade, with innovation and co-development spanning algorithms, systems, and devices.