
A New Kind of Machine Learning Is Inspired by Neuroscience and Uses Time as a Component

There is a new kind of Artificial Intelligence emerging that is closer to the human brain. Today’s AI is typically built with Machine Learning that uses Deep Neural Networks (DNNs). The ‘Deep’ refers to the multiple layers of neurons in the network; these layers enable more advanced analysis of data than a basic Neural Network (NN) allows. This subset of AI is called Deep Learning (DL). But while DNNs come close to the functioning of brains (especially in pattern detection), they miss a vital element that humans use: time. DNNs don’t factor in time the way a human brain does, in the form of ‘spiking’ neurons. In our own brains, neurons communicate by reacting to signals from other neurons with spikes whose precise timing matters.


The Time Dimension

This new type of NN is called a Spiking Neural Network (SNN). By adding a time dimension, it models the way the brain works more closely. Time is important in the brain because a spike from a neuron lasts only a tiny moment: about a millisecond (a thousandth of a second). That single spike is then transmitted to thousands of other neurons, and each of those neurons decides whether to generate a spike of its own. These spikes can go to thousands more neurons, and so on, in a continuing cascade.
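To make that concrete, here is a minimal sketch in Python of a ‘leaky integrate-and-fire’ neuron, one of the simplest spiking neuron models used in SNN research. It is an illustration only; every constant is invented for the sketch, not taken from any particular chip or brain measurement. The neuron’s voltage leaks away over time, input nudges it upward, and crossing a threshold produces a spike.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, simulated in 1 ms steps.
# All constants are illustrative, not from any specific chip or brain.
DT = 1.0           # time step (ms)
TAU = 20.0         # membrane time constant (ms): how fast voltage leaks away
V_REST = 0.0       # resting voltage
V_THRESHOLD = 1.0  # voltage at which the neuron fires a spike
V_RESET = 0.0      # voltage just after a spike

def simulate(input_current):
    """Return the times (in ms) at which the neuron spiked."""
    v = V_REST
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Leak toward rest, plus whatever input arrived this millisecond.
        v += (-(v - V_REST) + i_in) * (DT / TAU)
        if v >= V_THRESHOLD:       # threshold crossed: emit a ~1 ms spike
            spike_times.append(t)
            v = V_RESET            # reset and start integrating again
    return spike_times

# A steady input charges the neuron up; it fires at regular intervals, so
# information is carried in *when* the spikes happen, not just how many.
print(simulate([1.5] * 200))
```

Because each output spike feeds into the inputs of thousands of downstream neurons like this one, the timing of spikes ripples through the whole network.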


Brain activity comes in ‘waves’

The combination of millions of neurons across the brain all spiking together causes ‘brain waves’. Brain activity is not continuous, but comes and goes in these waves. Unlike waves on the beach, these waves happen very quickly, typically dozens of times a second. They can be slower than one per second while you’re asleep, and up to 100 per second in some parts of the brain during vigorous thinking.

We don’t fully understand exactly what these brain waves are doing. They seem to be important for coordinating communication between different parts of the brain. This is what makes time so important. Brain activity is constantly changing, like waves across the ocean. Spikes from each neuron coordinate with the waves, being more likely on the crests and less likely in the troughs. We can detect these waves with electrodes placed on the head, and quite often we can tell a lot about what a person is thinking from the patterns the waves make as they surge across the brain.
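As a toy illustration of that coupling (not a model of any real brain rhythm), imagine a population of neurons whose chance of spiking each millisecond rides a shared 10 Hz wave. Binning the spikes in time shows them bunching on the crests and thinning out in the troughs; every number here is invented for the sketch:

```python
import math
import random

random.seed(0)
FREQ_HZ = 10.0    # one crest every 100 ms, roughly an alpha-band rhythm
BASE_RATE = 0.05  # average spike probability per neuron per millisecond
DEPTH = 0.9       # how strongly the wave modulates that probability

def population_spikes(n_neurons=100, duration_ms=300):
    """Return (time_ms, neuron_id) pairs for every spike in the population."""
    spikes = []
    for t in range(duration_ms):
        wave = math.sin(2 * math.pi * FREQ_HZ * t / 1000.0)  # -1 .. 1
        p = BASE_RATE * (1 + DEPTH * wave)  # high on crests, low in troughs
        for n in range(n_neurons):
            if random.random() < p:
                spikes.append((t, n))
    return spikes

# Count spikes in 10 ms bins: the counts rise and fall with the 10 Hz wave,
# the kind of rhythmic surge an electrode on the scalp picks up.
bins = [0] * 30
for t, _ in population_spikes():
    bins[t // 10] += 1
print(bins)
```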

If you take away the time component of brain activity, you take away the brain waves. And if you take away the waves, you stop the thinking!


We’re going to need new Hardware and new Algorithms

This is one of the biggest differences between our current state-of-the-art AI (DNNs) and the brain, which SNNs mimic: DNNs don’t use spikes and they don’t have waves. Building a true SNN requires dedicated hardware chips. The added time component brings larger information capacity with smaller power requirements, and the ability to handle more complexity. We also need new algorithms: the mathematics is complex, but the rewards are likely to be huge.

SNNs have been around for many decades, but mostly as an intellectual curiosity. Many would probably say that is still the case, since we still don’t fully understand how to make SNNs do everything that can be done today with Deep Learning. However, it has also been known for decades that SNNs have the potential to be more powerful than DL. Consequently, small groups of researchers around the world have dedicated their careers to discovering the algorithms (the methods) that will bring that full potential to life. Their efforts are starting to pay off, and some very big players in the business are taking notice. IBM created an SNN chip called TrueNorth, and Intel created an even more powerful one called Loihi. These companies understand the enormous potential of SNNs and have shared these chips with the research community to speed up SNN research.

This nascent field is arguably just beginning to boom, and is aiming to unlock the secrets of creating powerful new forms of AI with SNNs.


What will SNNs give us?

SNNs present the opportunity for some key advancements in the intelligence and efficiency of Machine Learning.


1. Intelligence that can always be on without consuming lots of power

Our brains are efficient machines that use small amounts of energy to do very complex computations. SNNs present the best opportunity yet to emulate this in our AI: hardware with intelligence built in, at low cost and high efficiency. This will be useful for a huge range of AI applications like speech and video recognition, health monitoring devices, security applications, self-driving vehicles, robotics, and autonomous process control. These are typically called ‘Edge’ applications, meaning applications that run in real-world devices rather than on big computers. They need to respond in real time; they can’t rely on communication over the internet with distant, enormous, power-hungry computers. Instead they need to respond quickly and reliably, often in rapidly changing and unpredictable environments, and they need to do all this while running on rechargeable batteries.


2. Intelligence that doesn’t require a lot of training

Just as humans can make a good attempt at learning something new from limited information, SNNs bring us closer to that skill. The Deep Learning we use today requires vast amounts of training data and computation to build models, and there is a limit to how far we can go with this approach. It also means the organizations best placed to create these huge AI models are the ones with the most data, putting the approach out of reach for everyone else. OpenAI’s newly released GPT-3 is impressive as a very sophisticated text predictor, yet the amount of processing required is enormous, and even after all that training it still lacks common sense.


3. Intelligence that doesn’t need expert supervision

There are three main ways to train AI:

– Most current AI uses a technique called ‘supervised learning’. People decide exactly what they want the AI to learn, and they must supply training data labelled with the specific result required from the AI each time. Since current AI needs so much training data, it’s often difficult to create enough labelled data for anything beyond fairly simple problems.

– Another option is ‘semi-supervised’ (sometimes called self-supervised) learning, where the AI tries to learn just enough to recreate the training data it was given. This is a better option, but because of the way current AI learns, it still requires a huge amount of training data and an enormous amount of computation.

– Most SNNs use a simpler approach called ‘self-organization’, where the AI discovers for itself the patterns inherent in the data. It is a way of doing ‘unsupervised’ learning, in which the AI learns by itself. This seems to be how the brain mostly works, and it is very different from how DNNs are trained; it is the other vital difference between SNNs and DNNs. A minimal sketch of one such learning rule follows below.
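To give a flavor of self-organization, here is a minimal sketch of spike-timing-dependent plasticity (STDP), one widely studied unsupervised learning rule for SNNs (the constants are illustrative, not from any specific study). A connection strengthens when the input neuron fires just before the output neuron, as if it helped cause the output, and weakens when the order is reversed:

```python
import math

A_PLUS = 0.01    # learning rate for strengthening a connection
A_MINUS = 0.012  # learning rate for weakening a connection
TAU_MS = 20.0    # how quickly the effect fades as the time gap grows

def stdp_update(weight, t_pre, t_post):
    """Update one connection weight from a pair of spike times (in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # input fired first: it may have caused the output; strengthen
        weight += A_PLUS * math.exp(-dt / TAU_MS)
    elif dt < 0:  # output fired first: the input was irrelevant; weaken
        weight -= A_MINUS * math.exp(dt / TAU_MS)
    return max(0.0, min(1.0, weight))  # keep the weight in a sensible range

w = 0.5
w = stdp_update(w, t_pre=10, t_post=15)  # input just before output: w rises
w = stdp_update(w, t_pre=30, t_post=22)  # input just after output: w falls
print(round(w, 4))
```

No person ever tells the network what the ‘right answer’ is; connections that reflect real timing patterns in the data strengthen on their own.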


It could be a Game Changer 

SNNs are emerging, but they have a long way to go in both the hardware and the algorithms designed for them.

To accelerate progress, we need more research and more funding. We need to understand more about how to compute like the brain, using spikes, waves, and time, and more about how to learn using self-organization. This presents new challenges, but it could also be a game changer: SNNs have the potential to change the design of AI, eventually leading us to new applications that bring us a step closer to human-level intelligence.


This post was written in collaboration with Pete Stratton – check out his bio below. You might also be interested in reading ‘What I discovered studying Neuroscience, having a background in Electronics’.


Peter Stratton is a scientist who wants to understand the computational principles that are implemented by nervous systems, and apply these to complex engineering problems in robotics and information processing. Career highlights include a journal paper published in Nature Neuroscience on Deep Brain Stimulation, a PLoS ONE paper on a digital wireless brain recording system, several papers on using artificial neural networks to control robots, and invited talks as a guest speaker to the Salk Institute (USA), the International Seizure Prediction Workshop (Germany), and Brain Corporation (USA). In 2018 he became a Research Affiliate at MIT, Cambridge, USA. He is the author of the website neuro-ai.info.
