
As humans, we take for granted our ability to look at a person and instantly recognize whether they are waving at us or clapping their hands. The brain does this quickly, without overheating or running out of energy, which is what would happen if you tried the same task on your laptop or smartphone. That's in part because microprocessors are designed very differently from the brain. To bridge this gap, IBM researchers developed the IBM TrueNorth neurosynaptic processor, which contains a million artificial neurons organized like the brain's cerebral cortex. In new research, IBM scientists paired an iniLabs DVS128 event camera, modeled after the mammalian retina, with a TrueNorth processor running a neural network trained to recognize 10 different hand and arm gestures. Unlike the conventional camera and chip in your phone, the system is event-based: it reacts only when there's a change in what it's seeing. That lets it run on much less power, under 200 mW, an efficiency that could enable AI applications powered by the battery of a smartphone or a self-driving car, for example.
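The event-based principle is easy to sketch in code. The snippet below is an illustrative sketch only, not IBM's pipeline: it mimics a DVS-style sensor by emitting an event (timestamp, x, y, polarity) whenever a pixel's log intensity changes by more than a contrast threshold, so a static background produces almost no data. The threshold value and the frame-to-event conversion are assumptions made for demonstration.

```python
import numpy as np

# Sketch of the event-camera idea (assumed parameters, not IBM's implementation):
# a pixel fires an event only when its log intensity changes by more than a
# contrast threshold, so unchanged regions of the scene generate no output.

CONTRAST_THRESHOLD = 0.15  # assumed contrast threshold for illustration


def frames_to_events(frames, timestamps, threshold=CONTRAST_THRESHOLD):
    """Convert a dense frame sequence into sparse (t, x, y, polarity) events.

    frames     : iterable of 2-D grayscale arrays with values in [0, 1]
    timestamps : one timestamp per frame
    polarity   : +1 for a brightness increase, -1 for a decrease
    """
    events = []
    log_ref = None
    for t, frame in zip(timestamps, frames):
        log_frame = np.log(frame + 1e-6)
        if log_ref is None:
            log_ref = log_frame.copy()  # first frame only sets the reference
            continue
        diff = log_frame - log_ref
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for x, y in zip(xs, ys):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
            log_ref[y, x] = log_frame[y, x]  # reset reference only where an event fired
    return events


if __name__ == "__main__":
    # A static scene with one small bright square sweeping downward:
    # only the pixels the square passes over emit events.
    rng = np.random.default_rng(0)
    base = rng.uniform(0.2, 0.4, size=(128, 128))
    frames, timestamps = [], []
    for i in range(10):
        f = base.copy()
        f[10 + i : 20 + i, 40:50] = 0.9
        frames.append(f)
        timestamps.append(i * 1000)  # arbitrary microsecond timestamps
    evs = frames_to_events(frames, timestamps)
    print(f"{len(evs)} events from {len(frames)} frames of {base.size} pixels each")
```

Because only the handful of changing pixels produce output, the downstream classifier has far less data to process than with full frames, which is where the power savings of an event-based system come from.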

IBM Research at CVPR 2017: Helping AI systems to 'see' with the latest computer vision innovations



Read featured papers at CVPR 2017

Featured

Dataset: DVS128 Gesture Dataset

Blog: Neural network trained to recognize hand and arm gestures

Blog: Deep learning inference possible in embedded systems thanks to TrueNorth

Partnership: IBM Research and MIT collaborate to advance frontiers of AI in audio-visual comprehension technologies
