Prasanth Chatarasi, Alex Gatea, et al.
CGO 2026
Analog Non-Volatile Memory-based accelerators offer high-throughput, energy-efficient Multiply-Accumulate operations for the large Fully-Connected layers that dominate Transformer-based Large Language Models. We describe architecture, wafer-scale testing, chip-demonstration, and hardware-aware training efforts towards such accelerators, and quantify the unique raw-throughput and latency benefits of Fully- (rather than Partially-) Weight-Stationary systems.
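The latency advantage of a Fully-Weight-Stationary design can be seen with a first-order per-token model: when every layer's weights stay resident in the analog arrays, the per-layer weight-reload cost disappears. Below is a minimal sketch of such a model; all names and numbers (`mac_time_s`, `reload_time_s`, the illustrative 100 ns / 10 µs values) are assumptions for illustration, not figures from the paper.

```python
# Hypothetical first-order latency model contrasting fully- vs partially-
# weight-stationary execution of the Fully-Connected (FC) layers of a
# Transformer. All parameter values are illustrative assumptions.

def token_latency_s(num_fc_layers: int,
                    mac_time_s: float,
                    reload_time_s: float,
                    fully_weight_stationary: bool) -> float:
    """Per-token FC-layer latency for one forward pass.

    mac_time_s    -- time for one in-memory matrix-vector multiply (per layer)
    reload_time_s -- time to (re)program a layer's weights into the array;
                     paid per layer only when weights are NOT resident
    """
    reload = 0.0 if fully_weight_stationary else reload_time_s
    return num_fc_layers * (mac_time_s + reload)

# Illustrative numbers: 100 ns analog MVM, 10 us weight reload, 96 FC layers.
layers, t_mac, t_reload = 96, 100e-9, 10e-6
fws = token_latency_s(layers, t_mac, t_reload, fully_weight_stationary=True)
pws = token_latency_s(layers, t_mac, t_reload, fully_weight_stationary=False)
print(f"fully-weight-stationary  : {fws * 1e6:8.2f} us/token")
print(f"partially-stationary     : {pws * 1e6:8.2f} us/token")
```

Under these assumed numbers, reload time dominates the partially-stationary case, which is the raw-throughput and latency gap the abstract refers to.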
Minhua Lu, Joyce Liu, et al.
ECTC 2025
Olivier Maher, N. Harnack, et al.
DRC 2023
Sufi Zafar, Thomas Picunko, et al.
IEDM 2023