Intel’s neuromorphic robots have photographic memory

Intel is powering neuromorphic robots — that is, robots that can learn about new objects after they’ve been deployed — in what’s a pretty interesting development in the field of artificial intelligence.

So, let’s quickly define what Intel calls neuromorphic computing and robotics: a robot endowed with neuromorphic capabilities can generate “microsaccades” (small, jerky eye movements) while observing an object with its eyes (a camera or dynamic vision sensor).

These events are collected and fed to Intel’s Loihi chip, where a neural network processes them, acting much like a brain storing data and information. If the object or view has not yet been registered in the network, its event representation is added as a new input (or an existing one is updated). If the object is already known, it is recognized and feedback is given to the user.
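To make that loop concrete, here is a minimal Python sketch of the learn-or-recognize flow described above. It is not Intel’s code and runs as ordinary software rather than on a spiking chip; the names (`encode_events`, `ObjectMemory`), the feature encoding, and the similarity threshold are all invented for illustration.

```python
# Conceptual sketch only -- not Intel's implementation.
import numpy as np


def encode_events(events, size=128):
    """Collapse a burst of camera/DVS events into a fixed-length feature vector."""
    vec = np.zeros(size)
    for x, y, polarity in events:          # each event: pixel coords + polarity
        vec[(x * 31 + y) % size] += polarity
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec


class ObjectMemory:
    """Stores one representation per learned object view."""

    def __init__(self, match_threshold=0.8):
        self.views = {}                     # label -> list of feature vectors
        self.match_threshold = match_threshold

    def observe(self, events, label=None):
        features = encode_events(events)
        # Compare against every stored view (cosine similarity, since vectors are normalized).
        best_label, best_score = None, 0.0
        for known_label, vectors in self.views.items():
            score = max(float(features @ v) for v in vectors)
            if score > best_score:
                best_label, best_score = known_label, score

        if best_score >= self.match_threshold:
            return f"recognized: {best_label}"                   # known object -> feedback to user
        if label is not None:
            self.views.setdefault(label, []).append(features)    # unknown view -> learn it
            return f"learned new view of: {label}"
        return "unknown object"
```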

It’s not a new concept, or a new practice, but Intel’s Loihi chip can power this kind of memory retention and addition with 175 times less power than conventional CPU-based methods. Working with the Italian Institute of Technology and the Technical University of Munich, Intel achieved this by implementing a spiking neural network architecture on the Loihi chip.

After witnessing an event (an object), Loihi localizes the learning process to a “single layer of plastic synapses,” recruiting new neurons as different views of the object are obtained. This allows the robot to keep learning while interacting with the user.
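Below is a rough sketch, in plain Python rather than on neuromorphic hardware, of what a single plastic layer that recruits one neuron per new view could look like in principle. The class name, the Hebbian-style update, and every parameter are assumptions for illustration, not details from the paper.

```python
# Illustrative sketch of "one plastic layer, recruit a neuron per new view" -- not from the paper.
import numpy as np


class PlasticLayer:
    def __init__(self, input_size=128, max_neurons=256, threshold=0.8, lr=0.5):
        self.weights = np.zeros((max_neurons, input_size))   # the plastic synapses
        self.labels = [None] * max_neurons                    # which object each neuron codes for
        self.used = 0                                         # neurons recruited so far
        self.threshold = threshold
        self.lr = lr

    def present(self, features, label=None):
        """Show one encoded view; recognize it or recruit a fresh neuron for it."""
        if self.used > 0:
            activations = self.weights[:self.used] @ features
            winner = int(np.argmax(activations))
            if activations[winner] >= self.threshold:
                # Known view: a small Hebbian-style update sharpens the winning neuron.
                self.weights[winner] += self.lr * (features - self.weights[winner])
                return self.labels[winner]
        if label is not None and self.used < len(self.labels):
            # Unknown view: recruit the next unused neuron and bind it to this label.
            self.weights[self.used] = features
            self.labels[self.used] = label
            self.used += 1
        return label
```

The point of the sketch is only that learning stays local: recognizing or adding a view touches one row of weights, which is the kind of locality that makes the approach cheap on a chip like Loihi.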

Ergo, robots can now create memories more efficiently. Heck yeah, our future robotic overlords are energy conscious.

“When a human learns a new object, they look at it, turn it around, ask what it is, and then they are able to recognize it again in all kinds of environments and conditions instantly,” said Yulia Sandamirskaya, robotics research lead in Intel’s Neuromorphic Computing Lab.

“Our goal is to apply similar capabilities to future robots that operate in interactive environments, allowing them to adapt to the unexpected and work more naturally alongside humans. Our results with Loihi reinforce the value of neuromorphic computing for the future of robotics.”

This research, Intel says, is crucial to improving what manufacturing or support robots might do in the future. Robots that can learn on the job sound great, frankly.

However, keep in mind that this research has so far been limited to visual events. While vision is important, there is plenty a robot can’t learn this way, including sounds and tactile sensations (even though the chip itself can process other sensory inputs), as well as critical thinking in complex scenarios.

Still, it’s a very impressive development.

The paper, “Interactive Continual Learning for Robots: A Neuromorphic Approach,” is available online and was named Best Paper at the International Conference on Neuromorphic Systems (ICONS). You can read the announcement on Intel’s website.

Michael E. Marquez