As humans, we take for granted our ability to look at a person and instantly recognize whether they are waving at us or clapping their hands. The brain does this quickly, without overheating or running out of energy -- which is what would happen if you tried the same task on your laptop or smartphone. That’s partly because microprocessors are designed very differently from the brain.

To bridge this gap, IBM researchers developed the IBM TrueNorth neurosynaptic processor, which contains a million artificial neurons organized like the brain’s cerebral cortex. In new research, IBM scientists paired an iniLabs DVS128 event camera, modeled after the mammalian retina, with a TrueNorth processor running a neural network trained to recognize 10 different hand and arm gestures.

Unlike the conventional camera and chip in your phone, the system is event-based, meaning it only reacts when there is a change in what it’s seeing. This lets the system run on much less power -- under 200 mW -- and could enable AI applications efficient enough to run off the battery of a smartphone or a self-driving car, for example.
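To make the event-based idea concrete, here is a minimal sketch of how a DVS-style sensor differs from a frame-based camera: instead of reporting every pixel every frame, it emits an event only for pixels whose (log) brightness changed beyond a threshold. This is an illustration of the principle, not IBM's actual pipeline; the function name, threshold value, and event format are assumptions made for the example.

```python
import numpy as np

def dvs_events(prev_frame, frame, t, threshold=0.2):
    """Emit DVS-style events: one (x, y, polarity, t) tuple per pixel whose
    log intensity changed by more than `threshold` since the previous frame.
    Static pixels produce no output at all, which is the source of the
    power savings over a frame-based sensor."""
    # DVS pixels respond to relative brightness change, so compare in log space.
    diff = np.log1p(frame.astype(float)) - np.log1p(prev_frame.astype(float))
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    # Polarity +1 for brightening, -1 for darkening.
    return [(x, y, 1 if diff[y, x] > 0 else -1, t) for x, y in zip(xs, ys)]

# A static scene yields zero events; a change yields a sparse burst.
prev = np.zeros((128, 128), dtype=np.uint8)   # DVS128 is 128x128 pixels
curr = prev.copy()
curr[60:68, 60:68] = 255                      # an 8x8 bright patch appears
print(len(dvs_events(prev, prev, t=0)))       # prints 0: nothing changed
print(len(dvs_events(prev, curr, t=1)))       # prints 64: only the patch fires
```

The downstream gesture classifier then consumes only these sparse event streams rather than full video frames, which is what makes a sub-200 mW recognition budget plausible.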