What’s the ‘I’ in AI really for?

We equate the ‘Intelligence’ in Artificial Intelligence (AI) with a form of learning where decisions are reached through logical steps and calculations. It makes sense, and it appears to be the way humans learn. In a world where everything is logical and humans follow a set pattern every time, this systematic method works beautifully. You can probably guess what’s coming next, though: in reality, decision making just doesn’t run that smoothly.

In hindsight we may reflect that we followed a reasoned, rational decision-making process, when in actual fact it could easily have been down to a gut reaction, a hunch, or simply a lucky guess. Often our brains make a decision and we don’t know the steps that got us there. Fortunately for us, these shorthand methods can guide us to the right decision. Maybe it’s fast thinking vs slow thinking working its magic, or the enigma of reason (two good books to check out).

So how do humans learn outside of our traditional definition of intelligence? There is a great deal of research into what humans actually do, even more so today, as new brain imaging tools like MRI let us measure brain activity and examine possible causes. This hasn’t given us all the answers, but it’s progress. One thing we can be sure of is that most of us humans are open to finding shortcuts to learning. The approach of AI today is fairly limited in the way it defines ‘intelligence’. Intelligence is more than ‘learning’ in a formulaic process. So here are some other ‘I’s that could take the place of Intelligence (they just happen to start with the letter I, but there are plenty more alternative ways to ponder).


Imitation

We know there is a very simple way that humans gain knowledge or a new skill: just watching and copying. Imitation happens all the time, and it might just be the most common and effective way to learn. What’s more, we seem to know exactly when to use this method and when to use the ‘real’ learning option instead; by ‘real’ I mean learning from hands-on action or doing. When innovation is required it’s not always sufficient to copy, but in other, non-innovative cases, imitation will do. We switch effortlessly from imitating to learning as the situation presents itself. We grow up being told copying is something less than learning, but it can work better sometimes and could even be essential. If we’re attempting a physical move like a yoga pose, for example, copying is the obvious way to learn. Either way, we learn quicker if we watch others, mimic, or reproduce.


Influence

Human-to-human learning, accepting another person’s advice, is a pretty useful way to reach a conclusion, and it saves a lot of effort. From tips on how to get a task done at work to which new shows to watch, people will save you from the burden of learning through experience; they’ll simply share the benefit of theirs. There is a risk with this approach, though: how do you know which humans to trust? Aside from checking formal qualifications or expertise, we’ve figured out another way: the power of the influencer. It could be the number of followers, contacts, status, likes, or some other kind of social media credibility. Today’s reliance on the influencer to guide us in a direction can trigger our lazy tendencies: if someone already worked this out, I’ll just do what they did!


Intuition

There is no easy answer to the question of how we learn. Aside from Imitation and Influence there is another, almost magical ‘skill’ that might be at play: Intuition. Sometimes it’s the memory of previous trial and error; other times it’s just a gut feeling we can’t explain. We have a natural tendency to strive for efficiency, so in a situation that demands a quick decision, intuition might be our best option.

Machine ‘Imitation’

You could argue machine learning copies too. Neural networks are the guts of ML, and they learn by consuming a lot of data until there is a high probability of reproducing the right answer. ML observes the inputs and outputs of previous decisions and ‘learns’ how a system gets from one to the other. For very narrow problem areas, neural networks are highly effective: think chess moves, image recognition, text prediction, and so on. Given a huge amount of data, ML can say with confidence that the answer is very likely to be X.
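As a loose, minimal sketch of that idea (illustrative only, with made-up data, not how a production neural network is built): a ‘learner’ watches input/output pairs and adjusts a single weight until its predictions reproduce them. This is one-variable linear regression trained by gradient descent, the simplest cousin of what neural networks do at scale.

```python
# Illustrative sketch: a "learner" observes input/output pairs and
# nudges a single weight until its predictions match the examples.
# (Linear regression via gradient descent; data is invented.)

def train(examples, lr=0.01, epochs=500):
    w = 0.0  # start knowing nothing
    for _ in range(epochs):
        for x, y in examples:
            error = w * x - y      # how far off the current guess is
            w -= lr * error * x    # nudge the weight toward the observed outputs
    return w

# Observed "decisions": the output happens to be twice the input.
data = [(1, 2), (2, 4), (3, 6)]
w = train(data)
print(round(w, 2))  # prints 2.0 — the pattern has been imitated
```

Given enough examples, the learner can then say with confidence what the output for a new input is very likely to be, which is exactly the narrow kind of competence described above.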

The issue is that machines don’t really understand (see the Chinese room argument). The evidence for this is that, faced with a slightly new problem, they often have to start from scratch. Think of robots trained for specific tasks: as soon as they encounter new environments, they often falter.

We are getting better and better at reducing the errors, but the need for huge amounts of data and processing remains. The challenge of maintaining accuracy while reducing the effort involved is ongoing.

Machine learning tries to learn afresh every time, and you can imagine how inefficient that is. Technologists are starting to get wise to this and are finding ways for ML to memorize or copy some tasks. For instance, Facebook recently created Expire-Span. As an example, it cites a typical ML program that needs to identify a yellow door. Usually that involves going through the same set of steps for every different color of door, repeating the same basic tasks each time. Using Expire-Span, the program can ‘remember’ the tasks in common and not waste energy relearning them. It seems obvious, but we’re only starting to consider the myriad ways the human brain learns.
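Expire-Span itself works by letting a model’s memories expire so only the relevant ones are kept. As a loose analogy only (the function names below are invented for illustration), the ‘remember the common sub-task’ idea resembles plain memoization: cache the expensive shared step so every color query reuses it instead of redoing it.

```python
# Loose analogy, NOT Expire-Span's actual mechanism: cache a shared
# sub-task (finding the door at all) so it runs once, not once per color.
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=None)
def detect_door(image_id):
    calls["count"] += 1          # the expensive common step
    return f"door@{image_id}"

def find_colored_door(image_id, color):
    door = detect_door(image_id)  # reused from cache, not relearned
    return f"{color} {door}"

find_colored_door("img1", "yellow")
find_colored_door("img1", "red")
print(calls["count"])  # prints 1 — the common step ran once, not per color
```

The design point is the same one the paragraph makes: don’t waste effort re-deriving what was already worked out.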

Machine ‘influencer’?

Working things out from scratch is less and less appealing. As you can tell from machines imitating or memorizing some tasks, the field is looking for other ways to learn. As pointed out earlier, we learn from other people, and this is growing into a lucrative method (for some humans). Machines are getting in on the act.

‘Virtual influencers’ are a real thing (minus the authenticity of real, human experience). You can read more about an example here, where some Chinese brands are using virtual influencers as well as real human ones.

There are more options

Machine Imitation and Machine Influencers could become a thing, but will we ever get to a place where machines display intuition? This is a very human trait, though it is probably apparent in other organisms too. Since not every intuition works out correctly, it’s rather risky to try to build it into a machine. It’s usually only after an intuition turns out to be true that we call it that; when it’s wrong, we call it a bad guess. The key to making machines widen their scope of intelligence is to understand ourselves better: for example, the memory that puts some tasks on ‘autopilot’, common sense, attention to prioritize, and reasoning (which is different from logic). I’ve written about some of these alternative ways here:

We’re learning more about learning. By recognizing when we successfully imitate, how we are influenced, and why we choose intuition over other options, we can design better machines.

While you’re here, please check out Recknsense recommendations. Get inspired and learn something new from someone new, or add your own wisdom based on your first-hand experience.
