As humans we feel duty-bound to point out how much we know to others, especially if we think someone else is incorrect. We do it often – sometimes in real life, frequently online. Reddit, Twitter, Facebook, especially Quora, any forum on any topic, and anywhere there's a comments section beneath a click-worthy article, we are there to opine till we turn (a web-friendly shade of) blue in the face. This infamous meme sums it up pretty well.
We can’t help but share our knowledge – either helpfully or by pouncing eagerly on the gaps in someone else’s. Sometimes the latter leads to an argument that might impart some wisdom, but it’s more likely to be willfully ignored than gratefully received. Either way, a natural human reaction is to defend our corner, double down and begin an argument.
This is an innate human trait: showing off what we know, even when it seemingly does us no favors.
It turns out this way of sharing knowledge is distinctly human. In his book ‘How We Learn: Why Brains Learn Better Than Any Machine…for Now’, Stanislas Dehaene, Professor of Experimental Cognitive Psychology, lays out the pillars of effective learning based on the latest research: attention, active engagement, error feedback and consolidation. Alongside these pillars, Dehaene discusses ‘social learning’ – our powerful capability to learn quickly from other humans.
This has been remarkably useful. We learn faster and more efficiently from others, benefiting from their experience. Great inventions may be credited to a single person, but behind the scenes there were often people who helped shape the end result – whether it was a chance remark that led to that light-bulb moment, or ‘standing on the shoulders of giants’, a humble acknowledgment that the work of those who came before was essential to the process.
In the Steve Jobs ‘lost interviews’ (from 1995), he talks about the product development process being like a rock tumbler, where rough rocks knock against each other – just like a group of talented team members ‘bumping up against each other, having arguments, having fights sometimes…until they polish ideas, and what comes up are these beautiful stones’. It’s a perfect metaphor for the necessary friction required to really crack the hard problems in building something new and innovative.
It may be that arguing and challenging each other is an essential part of growing our intellect quickly and efficiently – a behavior that AI may need to engage in too.
AI is not like humans
Despite a recent article stating (prematurely, I suspect) that ‘the game is over’ because we now have a generalized AI capability in Google’s Gato, AI is not close to matching human intelligence. No, we humans still have the edge in putting forth our opinions or carrying out our important knowledge-correction duties when no one asked us. If we didn’t, what innovations would be missing today? There are probably tons of examples of inventions that came about via a loud, bolshy team member (or maybe a disgruntled non-team member) letting anyone who would listen know exactly what they thought – even if it led to an argument. In some cases, because it led to an argument.
An AI politely waits for you to ask a question, or pipes up when it thinks it has something useful to tell you – always based on a prediction: you did this before, so maybe this is what you need now? That isn’t really the human trait we’ve been talking about. AIs don’t usually display argumentative, contrary or confrontational tendencies. I haven’t yet had a proper argument with an AI, but maybe that’s what needs to happen. In the Steve Jobs example, I imagine there were plenty of arguments in the team at Apple on the way from rough rock to smooth stone – necessary arguments that pushed the team to innovate.
Maybe you could say AI doesn’t need to argue – why should it? It knows so much more than us; it can be smugly confident it’s technically correct and simply, calmly, without any human emotion, keep repeating that. I really hope I don’t get into an argument with an AI one day. I can’t think of anything more frustrating than arguing with someone or something that just keeps telling you it’s right (AI or human). Ok, for a maths calculation I’m inclined to believe the AI – I won’t put any money on being right on that one. But for a situation that calls for a not-so-black-and-white answer, the possibility is there: I could get into an argument with an AI, and maybe there’s some benefit in that.
We could speculate a bit more while we’re here. Imagine a world in the not-too-distant future where we’ve outsourced a task entirely to an AI, or to a team of AIs that each have specialties. How would they deal with a disagreement? If they don’t argue, could they really learn and grow as efficiently as humans? Possibly they wouldn’t need to argue in the archaic way we do, with voices growing louder and faces redder. They might have a perfectly unemotional interaction in which one AI simply concedes as soon as the correct data is presented, based on a sophisticated probability calculation. Boring, but civilized at least.
Inspiration from humans – the brain and how we socialize – continues to fuel research and development in AI. Today it is limited to what we already know about how the brain learns, and so much of that is still unknown. We are beginning to understand (with the help of MRI imaging) how attention and feedback loops are key to how the brain grows its knowledge. Somewhere in there is the topic of arguments: our attention is captured more quickly when we spot a mistake, and correcting mistakes helps us make and modify connections between our neurons. Argumentative behavior could be the higher-level activity that paves the way for the formal learning mechanisms Dehaene describes. There is plenty of opportunity to investigate our many and varied human behaviors to understand how they may contribute to intelligence and effective knowledge sharing.
Until then, our arguments with AI will probably continue to be us shouting at Siri, Google Assistant, Alexa or a generic customer-service chatbot when it can’t figure out the most basic request for the 100th time – knowing it probably won’t answer back. At least we won the argument, though.