
Can we Codify Reason? Design for a New ‘Thinking’ Machine

Reason, or the act of reasoning, is a hard thing to pin down. It seems similar to Logic, but it also feels close to understanding.

 

 

What is Reason?

The exact definition is arguable, yet we can be certain it is a type of thinking that humans frequently do. Whether it is unique to humans we can't say. Perhaps it takes place in animals, but we don't associate it with them and we don't have much evidence that it really occurs. Reason is one form of intelligence, one we think of as a special, higher level of human skill. Humans don't just follow our reflexes and passions; we stop and reason. At least, this is what we expect most of us are capable of. In reality, research in recent years has shown that we have 'systems of thinking' which are used in different situations. As described in Daniel Kahneman's bestselling book 'Thinking, Fast and Slow', System 1 is fast and intuitive, while System 2 is more deliberate and thoughtful. There's also a system based on Perception, which can occur before System 1. The System 1 and 2 processes are referred to as 'dual-system thinking'.

 

 

Reason is an Enigma

The subject of Reason is explored comprehensively in the excellent 'The Enigma of Reason' by Hugo Mercier and Dan Sperber. It progresses the story of how we think from dual-system thinking to focus on how humans perform this mysterious calculation of 'reasoning'. We learn about the history of reason and how it's often mistaken for Logic. The common misconception is that we always decide based on some form of logic or reason and then go on to make the most appropriate choice. It turns out this isn't what happens most of the time. We actually do or say things through some kind of automatic intuitive inference, and then think of the reason after. We tend to explain or justify why we took an action in hindsight rather than use reason to make the decision in the first place.

Another insight in the book is that reasoning is a modular process. Somewhere in our mind is a 'reason' module which takes an input and delivers an output. We don't always have a good understanding of how it does the calculation. We've traditionally thought of the brain as one big, complex calculator, but the modular approach lends itself to some interesting consequences. Like other processes we think of as modular, such as taste, there is automation involved, and it's not something we can necessarily stop and influence. We just do it. When we think of these high-level intelligence functions as modules, we open up some new possibilities. The reason module can be selectively used or maybe even avoided. Who hasn't done something that is beyond reason, only to explain it later through some rational thought process?

 

 

Process aside, what could drive Reason?

If reason is not logical (and these days we often see evidence supporting this), what does fuel our reasoning?

If we spend some time considering the reasons we do or say things, we can come up with a few ideas as to the cause. The list of reason 'inputs' could include: Common sense, Emotion, External variables (e.g. weather, location, time of day), Logic, Memories (we are biased toward the most recent), Empathy, Language, Perception, Mind reading (expectation of what others are thinking, or 'theory of mind'), Intuition, Self-interest, and Peer pressure. Below is a diagram of a reason module, followed by a sketch of the same idea in code. Although the module should be autonomous if the research is to be believed, generating its output through intuitive inferences, there are elements likely to influence that output. The orange boxes refer to human behaviors that wouldn't apply to or benefit machines; the pink boxes are more relevant for AI.

[Diagram: a reason module with its candidate inputs; orange boxes mark human-only behaviors, pink boxes mark inputs more relevant to AI]
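As a minimal sketch of how those inputs might be represented in code, the snippet below simply lists them with a flag for machine relevance. The diagram's exact orange/pink assignment isn't reproduced here, so the flags are illustrative guesses, and `ReasonInput` and `REASON_INPUTS` are invented names rather than any established API.

```python
from dataclasses import dataclass

@dataclass
class ReasonInput:
    name: str
    applies_to_ai: bool  # True ~ 'pink box' (relevant to AI); False ~ 'orange box'

# Candidate inputs to a reason module, taken from the list above.
# Which inputs sit in which group is an illustrative guess.
REASON_INPUTS = [
    ReasonInput("common sense", True),
    ReasonInput("emotion", False),
    ReasonInput("external variables", True),   # weather, location, time of day
    ReasonInput("logic", True),
    ReasonInput("memories", True),             # biased toward the most recent
    ReasonInput("empathy", False),
    ReasonInput("language", True),
    ReasonInput("perception", True),
    ReasonInput("mind reading", False),        # 'theory of mind'
    ReasonInput("intuition", False),
    ReasonInput("self-interest", False),
    ReasonInput("peer pressure", False),
]

ai_relevant = [i.name for i in REASON_INPUTS if i.applies_to_ai]
print("Inputs a machine reason module might use:", ai_relevant)
```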

There is something inherent in our reasoning. Babies display reasoning despite knowing very little; there is an innate skill of reason present in all of us, even if we don't always use it. These days we need to better understand the business of how we reason. The lack of reason, or the proactive decision not to use it (commonly also referred to as a lack of critical thinking), is taking its toll on the way we communicate and the actions we take. It will be a long time before we understand more about the enigma of reason, and we may never get there. But these clues to what appears to be happening give us the opportunity to dig deeper.

 

 

Modularity lends itself to coding

Would machines need to do it? Is there any advantage? We don't need massive amounts of data, as machines do, in order to come up with sensible reasons. Somehow we can use much less data, more efficiently, to come up with higher-quality answers. There is some explanation as to how this works and how it could be applied to computers. When doing a new action or forming a new opinion, we use the reason module (and probably logic and other thought modules); we are more careful in our consideration. Once we've mastered new skills such as driving or math, we need the reason module less and less. Over time, we switch to auto mode and we don't need to use that part of the brain as much.

There could be a way to translate this to machine thinking. A reason module (which could be made up of, or connected to, a number of other modules like perception or common sense) could work alongside machine learning. As feedback loops help the machine hone the reason module, it's needed less. The amount of data required to calculate reasoning could fall if the reason module can shortcut to answers. A toy sketch of this loop follows.
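To make that concrete, here is a minimal sketch in plain Python: a hypothetical `ReasonModule` records successful outcomes and, once a situation has been 'mastered', answers directly so the data-hungry ML path can be skipped. All names, thresholds, and the stand-in ML function are invented for illustration.

```python
class ReasonModule:
    def __init__(self, mastery_threshold: int = 3):
        self.mastery_threshold = mastery_threshold
        self.experience: dict[str, int] = {}      # situation -> successful uses
        self.learned_answers: dict[str, str] = {}

    def try_shortcut(self, situation: str):
        """Answer from experience once a situation is mastered; else None."""
        if self.experience.get(situation, 0) >= self.mastery_threshold:
            return self.learned_answers.get(situation)
        return None

    def record_feedback(self, situation: str, answer: str, success: bool):
        """Feedback loop: successful outcomes hone the module."""
        if success:
            self.experience[situation] = self.experience.get(situation, 0) + 1
            self.learned_answers[situation] = answer

def expensive_ml_inference(situation: str) -> str:
    # Stand-in for a large-data machine learning model.
    return f"ml answer for {situation}"

reasoner = ReasonModule()
for _ in range(4):
    answer = reasoner.try_shortcut("merge onto highway")
    if answer is None:                  # early on: take the full ML path
        answer = expensive_ml_inference("merge onto highway")
    reasoner.record_feedback("merge onto highway", answer, success=True)

# After repeated success, the shortcut fires and the ML path is skipped.
print(reasoner.try_shortcut("merge onto highway"))
```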

The benefit of the modular approach to high-level thought processes is that the overall 'intelligence' of an AI can be tuned depending on what's needed. We don't have to rely on one treatment for every problem. Some problems might need more reason, others more empathy. The dials of the intelligent machine can be wound up or down to better solve the task at hand.
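One way to picture those dials is as per-module weights, blended differently per task. The sketch below is only a guess at how that tuning might look; the modules, scores, and weights are made up.

```python
def blend(scores: dict[str, float], dials: dict[str, float]) -> float:
    """Combine per-module scores using task-specific dial settings."""
    total_weight = sum(dials.values())
    return sum(scores[m] * dials.get(m, 0.0) for m in scores) / total_weight

# Per-module judgments for some candidate action, on a 0..1 scale.
module_scores = {"reason": 0.8, "empathy": 0.9, "logic": 0.4}

# A support chatbot might dial empathy up; a theorem prover, logic.
support_dials = {"reason": 1.0, "empathy": 2.0, "logic": 0.5}
prover_dials  = {"reason": 1.0, "empathy": 0.1, "logic": 2.0}

print("support bot:", round(blend(module_scores, support_dials), 2))
print("prover:", round(blend(module_scores, prover_dials), 2))
```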

 

 

A Reason Module

Below is a possible design for a 'Reason Module' that could be combined with machine learning. Machine learning (ML) would provide the logical, pattern-matching response based on large volumes of data. The response from the ML would immediately go into a 'reason module' before the system provides its final output.

[Diagram: the machine learning response feeding into a reason module, which produces the final output]
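As a minimal sketch of that pipeline, assuming placeholder functions rather than any real library, the ML answer is produced first and the reason module gets the final say:

```python
def ml_response(query: str) -> dict:
    """Stand-in for the pattern-matching answer from a trained model."""
    return {"answer": "yes", "confidence": 0.72, "query": query}

def reason_module(candidate: dict) -> dict:
    """Refine or veto the ML answer using higher-level checks.

    Here the 'reasoning' is a single common-sense rule; a fuller design
    would consult several modules (perception, empathy, and so on)."""
    if candidate["confidence"] < 0.5:
        candidate["answer"] = "unsure"
        candidate["reason"] = "confidence too low to act on"
    else:
        candidate["reason"] = "pattern match agreed with common-sense check"
    return candidate

def thinking_machine(query: str) -> dict:
    return reason_module(ml_response(query))   # ML first, then reason

print(thinking_machine("Should I take the highway?"))
```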

Getting back to the central question: could we codify reason? The conclusion of The Enigma of Reason suggests that very possibility. In machine learning today we already talk about inferences as a means to understand what a user may be asking for. The idea of intuitive inferences is the tough part. If human reasoning is hard to figure out, intuition might be even harder. We speak about it like it's a mysterious new sense: 'My intuition tells me there's something odd going on…' followed by '…I don't know why'. We sometimes call it a gut feeling, which makes it seem even more as if the brain isn't involved, or at least we don't recognize it that way. So how can we possibly codify that?

The other issue is that reasoning for one person can look very different for another. We don't all arrive at the same reasons. Many different factors, including personal experience and context, lead a person down different paths as they use their reasoning. This is where I think the personalities of AI will have to emerge. This isn't about different accents, tones, or use of language, but about the subtle differences in reasoning among us. Some personalities will be more empathetic, some more logical, some more perceptive. Again, the intelligence module dials would help us design the right mix for the task.

Once we have decided on the personality of the AI, the right flavor of reasoning could be set. The intuitive inferences that lead to reasoning could be built using the appropriate blend of modules, e.g. common sense, empathy, emotion, etc.
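A personality preset could then be little more than a named dial configuration. The presets below are hypothetical examples of such blends, reusing the weighting idea from the earlier sketch:

```python
# Hypothetical personality presets: named blends of module weights.
PERSONALITIES = {
    "counselor": {"empathy": 2.0, "emotion": 1.5, "logic": 0.5, "common_sense": 1.0},
    "analyst":   {"empathy": 0.3, "emotion": 0.2, "logic": 2.0, "common_sense": 1.0},
    "companion": {"empathy": 1.5, "emotion": 1.0, "logic": 1.0, "common_sense": 1.5},
}

def configure_reasoner(personality: str) -> dict:
    """Pick the blend of modules behind the reason module's intuitive inferences."""
    return PERSONALITIES[personality]

print(configure_reasoner("counselor"))
```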

 

 

A Thinking Machine

If we try out an example question, we can see what effect a reason module might have. Many psychological experiments have placed unsuspecting test subjects in a position where they have the option to help another person, to see how they respond given the factors around them. The main discovery has been that the influence of other people makes an enormous difference to how a person responds when called upon to help. For example, in a group situation where the consensus is not to help, people tend to go along with it, justifying it afterward. This is sometimes called the bystander effect: people are less inclined to help if others are around, expecting those others to help. In another example, if even one other person shows no interest in helping a stranger, research shows you are also likely to show less interest, even if that stranger sounds like they are in distress.

We can apply this question to our new 'thinking machine' design by asking "Should I help this person?"

[Diagram: the question "Should I help this person?" passing through the machine learning and reason module pipeline]
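Here is how that question might flow through the sketch above. An ML layer trained on what people typically do would reproduce the bystander effect; a reason module could be given a rule that overrides it. The probabilities and the rule are invented for illustration.

```python
def ml_predict_help(bystanders: int) -> float:
    """Pattern-matched probability of helping, which (like people)
    drops as more bystanders are present."""
    return max(0.1, 0.9 - 0.2 * bystanders)

def reason_override(p_help: float, person_in_distress: bool) -> str:
    """A reason module that refuses to let peer pressure settle the question."""
    if person_in_distress:
        return "help"                        # distress outweighs the crowd
    return "help" if p_help > 0.5 else "observe"

for bystanders in (0, 4):
    p = ml_predict_help(bystanders)
    print(bystanders, "bystanders: ML says", round(p, 2),
          "->", reason_override(p, person_in_distress=True))
```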

These are just ideas of how it could work. The way machine learning could work alongside a 'reason module' could take many forms. One way (as shown in the diagram) could involve a machine learning process taking place first, followed by a reason module to enhance or refine the final result. It could work differently through a closer combination, perhaps with reason blended into the machine learning, or the reason module could itself be powered by machine learning. There is even the possibility that the reason module is not used at all in the decision making, but instead processes the data afterwards to help train the machine learning for the next time. This is a bit like the way the brain seems to work: we take the action and come up with the reason after. In this way, there is a feedback loop to help the machine make a better decision next time. Whichever design is selected, it's likely that some kind of additional reason module is required to augment machine learning on its own, as AI is designed right now. The last of these variants, reason as an after-the-fact trainer, is sketched below.
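A sketch of that after-the-fact wiring might look like this, with all names hypothetical: the reason module is not consulted at decision time, but reviews the outcome and logs a training signal for the next ML update.

```python
training_log = []

def decide(query: str) -> str:
    # Pure ML decision; no reason module in the loop at decision time.
    return "act"

def post_hoc_reason(query: str, action: str, outcome_good: bool) -> None:
    """Explain the action in hindsight and log it for the next training
    round, much as humans justify choices after making them."""
    explanation = ("outcome confirmed the action" if outcome_good
                   else "outcome suggests a different action next time")
    training_log.append({"query": query, "action": action,
                         "label": outcome_good, "reason": explanation})

action = decide("Should I help this person?")
post_hoc_reason("Should I help this person?", action, outcome_good=False)
print(training_log)   # becomes training data for the next ML update
```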

 

 

The Future of ‘Reason’ based AI

Today our AI models work mostly in one way. They are superb at specific tasks like recognition and generating human-like sentences and images. However, they severely lack 'general intelligence': that hard-to-pinpoint type of intelligence that nearly all of us possess but that doesn't neatly translate to one thing like logic, reasoning, or understanding. The likelihood is that intelligence is a complex mix of a number of 'modules' that will require tuning. Our task over the next few years will be to continue researching different human thinking modules and processes. From this we will need to actively map them to artificial intelligence techniques, and maybe some day we can connect the dots.
