We are Discovering Attention is Vital to Human Intelligence – Do Machines Need it Too?

Why do we say ‘pay’ attention? There is an implication that it costs something to do it. Using that word is closer to the truth than perhaps we realize. After all, it costs us time and effort to focus our attention solely on one thing while suppressing everything else in the way. According to neuroscience, attention is a behavior the brain relies on far more than we previously understood.


What we pay attention to is the difference

We probably always suspected that what we decide to spend our time on matters as much as the thing itself. Research in this area of human cognition has now filled in more detail on how the brain decides the ‘what’. It turns out that rather than our attention working like a spotlight on the important things, it suppresses the things that aren’t important. This kind of filtering, or noise suppression, helps us be alert when we need to be. It makes sense that our brains, complex as they are, have ways to process data efficiently.

It remains a mystery how we can learn so quickly with seemingly so little data to go on, considerably less than we need to train machine learning algorithms. Machine learning needs huge reams of data before it can do simple things like recognize an image or predict text, and even then it is prone to error, as we know. Yet babies demonstrate how even early-stage human brains manage to predict correctly without all that data.

Unlike machines, we know there are limits, or ‘optimum’ settings, for our human brains. For example, studies suggest we tend to remember only about 7 things at a time, we can cope reasonably with about 150 relationships (Dunbar’s Number), and close friendships max out at about 5 people. Computers can do more, and we are designing them to do even more. But could (and should) there be an optimal setting even for machines to learn?
The brain is always checking for what needs our attention
Going deeper into the research, the brain is constantly ‘looking’. In fact, it does this about 4 times a second. Something in our design is set to constantly scan and reassess the situation at a fixed rate. It seems the brain is wired to be a ‘wandering’ mind. I can’t tell you how many times a second robots do this currently, but you only need to watch the famous Boston Dynamics dog-shaped robots to see that they are checking their surroundings constantly. To operate that well, they must be paying a lot of attention. AI that doesn’t come in robot form varies a lot in how much attention it appears to be paying. Some AI-fueled programs seem to be paying very little. For example, does your intelligent assistant proactively give you good suggestions, or does it miss opportunities to help you out? There is still plenty to do to optimize the attention part of AI.
Attention opens the door to the other intelligence functions
Attention is one powerful thing we are realizing humans do that helps make us intelligent beings. There are others, like Reason, Common Sense, Creativity and Wisdom (the kind that comes from really doing something). I’ve written a few pieces on what human intelligence is and how we might build it into technology.
Attention itself feels like the front door to those other human attributes. Once we pay attention to something important – whether it’s an action, an event or a piece of information – we’re switched on to the rest. We become aware of how those important things might connect in the creative process, for example, and we won’t miss a key piece of knowledge that could help us reason better. When we pay less attention, like any cheap impulse purchase, it can cost us later. Attention is essential to learning in an optimum way. Machines don’t need to worry so much about missing information, because everything is recorded and filed away. But as we’re discovering, simply having lots of information and applying machine learning doesn’t produce real ‘general intelligence’. Without a general intelligence element built into machines, we will always be missing out on their full potential, or worse, making them smart yet dangerously stupid. This is best illustrated by the paperclip-making machine thought experiment.


Attention helps us filter out the noise

As humans, we have a lot of distractions getting in the way of attention these days. There is a lot of noise in social media, the news, and the general busyness of life. All this noise dampens our ability to be attentive; we burn out trying to pay attention to it all. The cost falls on our mental health, our ability to empathize, our personal relationships and our concentration at work. We can stop caring, because if everything is important, nothing is.
Machines can help us pay attention
Machines could help in two ways: we can use computers to help us pay attention to the right things, and we can design them to be attentive (to the right things) themselves. The challenge will be knowing what the ‘important’ things are, since they differ for each person. The first way requires a lot of personalization, so that the machine aids our filtering according to what we feel is important. Right now, people are reluctant to state their preferences explicitly, so most platforms estimate them from what you actually do. I don’t tell Netflix what I like, but it tries to work it out.
Machines can learn to be attentive (to the right things)
The second way may require machine learning to have some kind of ‘attention’ built in, and some of the most up-to-date AI research is considering just that. Today, neural networks (the machinery under the hood of machine learning) apply learned weightings. Given enough training data, unsupervised learning can work out these weightings, and it can produce astonishing results for things like facial recognition or deepfakes. In those cases we tell machines what to pay attention to, and the task is narrow. For more generalized intelligence, we get mixed results; the playing field is too broad. Machines can easily pay attention to everything, but how do they know what’s important?
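To make the ‘weightings’ idea concrete, here is a minimal sketch of the scaled dot-product attention used in modern neural networks (the mechanism behind Transformers). It is an illustrative toy with made-up data, not production code: each query scores every key, and a softmax turns those scores into weights that emphasize relevant items and suppress the rest – a mathematical echo of the noise-suppression behavior described above.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each query scores every key; softmax converts the scores into
    weights that sum to 1, emphasizing some values and suppressing others."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row is a probability distribution
    return weights @ V, weights          # weighted blend of the values

# Toy example: 2 queries attending over 3 key/value pairs (random data).
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(Q, K, V)
```

The key point is that the weights are learned, not hand-set: during training the network discovers for itself which inputs deserve high weight, which is as close as today’s systems get to deciding what is ‘important’.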

Just like every person has an individual sense of what’s important, machines may have to as well. We may need to create ‘flavors’ of machines that pay attention to specific things. I would happily trust an assistant that is attentive to the best money-saving deals. Sign me up!






