
AI needs empathy but it doesn’t need to be emotional

Artificial intelligence, whether built on machine learning or something else, needs to incorporate empathy in order to perform better for humans. Research and analysis increasingly lead us to the conclusion that empathy promotes cooperation among humans.

The way you perceive another person's situation goes a long way toward helping you decide whether to respond.

I previously wrote a piece about the role empathy plays, versus reputation, in the likelihood of cooperation on social networks. Following this emergent thinking around empathy and human cooperation, it should follow that empathy also has a big part to play in the interaction between humans and machines. Just as we consider not only a person's reputation but also their reason for asking (our empathy towards them), we may also need to consider our empathy towards machines, and vice versa: why should we care that a machine is trying to cooperate with us, and why should a machine pay attention if we try to cooperate with it?

The first case is the most common scenario: a machine or piece of software is trying to engage us or obtain our cooperation. Just as in human-to-human interaction, we are more likely to cooperate if we trust the reputation of the machine or software. But adding more empathy into the mix could produce a much higher rate of success. We tend to connect empathy with emotions, imagining that machines need to become more emotional in some way, but empathy can be present without mirroring the same feelings.

Empathy can be emotional or cognitive

Empathy is often sparked by how we physically feel alongside someone else (the emotional kind), but there is also a cognitive form: knowing how another person is feeling without feeling it ourselves, i.e. perspective taking. As humans, we work out cognitive empathy by picking up signals from gestures, tone of voice, and facial expressions; online, we read between the lines in addition to the words themselves, or have feelings spelled out for us with emojis. Machine learning attempts to work out this type of empathy using maths and probabilities, but it is a difficult calculation. There are many factors in working out how someone is feeling, much of it unspoken.
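To make "maths and probabilities" concrete, here is a deliberately naive sketch of that kind of calculation: a few weak textual cues combined into a rough probability that the sender is frustrated. The cue list, weights, and score are all invented for illustration; real emotion models are far more involved.

```python
# Naive illustration: estimate a probability that a message signals
# frustration from a few weak textual cues. Cues and weights are
# invented for this sketch, not taken from any real model.

FRUSTRATION_CUES = {
    "again": 0.2,   # repetition often hints at annoyance
    "still": 0.15,
    "!!": 0.25,
    "??": 0.2,
    ":(": 0.3,      # emoticons spell the feeling out explicitly
}

def frustration_probability(message: str) -> float:
    """Sum the weights of any cues present, capped at 1.0."""
    text = message.lower()
    score = sum(w for cue, w in FRUSTRATION_CUES.items() if cue in text)
    return min(score, 1.0)

print(frustration_probability("Why is this STILL broken?? :("))  # 0.65
```

Even this toy version shows the difficulty: most of what signals a feeling never appears in the text at all.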

Can a machine ever expect to get it right, and do we even want it to be that personal?

There are some things that are easier to deduce and don't stray too far into invasion-of-privacy territory. My location, situation, time of day, and history of responses may be enough to understand the basics of what I am likely to be feeling, without getting into the complications of what is really going on emotionally. It takes a lot of information, much of it unavailable to machines, for us humans to work out true feelings, but some assumptions can be made from the information that is easy to find.

For example, take a scenario: it's midday and I'm still working at the office, without a break since early morning. The empathetic deduction is that I'm likely to be in deep concentration and don't want to be disturbed unless it's urgent. If the machine were a mobile intelligent assistant, it could at this point refrain from showing non-essential notifications like email newsletters, but continue with personal emails (so I don't miss anything) until I unlock my phone again, signalling that I'm ready to see everything. I wouldn't need to set 'do not disturb', and I would still have selective access. Just as in real life a person would see you head down in concentration, working through lunch, and avoid disturbing you, an intelligent machine could reach similar conclusions without seeing your every move or knowing your true emotion.

There would, of course, be room for error. The last thing you would want in that scenario is to miss something important. But with the right information, and without veering too deep into assumptions, some obvious conclusions can be drawn. If the intelligence can extend to awareness of basic, obvious situations, it can be channelled in a way that is convenient and conducive to our productivity.
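A minimal sketch of that scenario as code might look like the following. The context fields, notification categories, and the "working through lunch" rule are assumptions made up for this example, not any real assistant's API:

```python
# Sketch of the rule described above: hold non-essential notifications
# while the user appears to be in deep concentration. All fields and
# thresholds here are hypothetical.

from dataclasses import dataclass
from datetime import time

@dataclass
class Context:
    now: time              # current time of day
    at_work_since: time    # when the work session started
    had_break: bool        # any break so far today?
    phone_unlocked: bool   # unlocking signals "show me everything"

def should_deliver(category: str, urgent: bool, ctx: Context) -> bool:
    """Never hold urgent notifications; once the phone is unlocked,
    deliver everything. Otherwise, during apparent deep focus, let
    only personal email through."""
    if urgent or ctx.phone_unlocked:
        return True
    deep_focus = (
        ctx.now >= time(12, 0)               # it's midday or later...
        and ctx.at_work_since <= time(9, 0)  # ...working since early morning
        and not ctx.had_break                # ...with no break yet
    )
    if deep_focus:
        return category == "personal_email"
    return True

ctx = Context(now=time(12, 30), at_work_since=time(8, 0),
              had_break=False, phone_unlocked=False)
print(should_deliver("newsletter", urgent=False, ctx=ctx))      # False: held back
print(should_deliver("personal_email", urgent=False, ctx=ctx))  # True: delivered
```

Note that nothing here detects emotion; the rules only act on the same easy-to-find context a considerate colleague would use.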

Conclusion

This was one example, but the ability to use more context to feed the cognitive empathy of machines will, in general, be required for AI to become more intelligent. That may come via machine learning, or via simple rules that machines, just as humans do when deciding in the moment, can use to connect the available dots. I can see this type of empathy being factored in in some places, but while we try to create algorithms, it may serve us better to step back from complex emotion detection by machines, or from learning based on the probability of emotions occurring, and instead install carefully chosen, simple rules using the same data we humans use in everyday life.
