Top Reasons Why a Techie Should Study Philosophy Now More than Ever

Technology (or Science) and Philosophy used to be practically indistinguishable. Historically, the philosophers of the past were also the mathematicians, scientists and inventors of their day. How much impact this overlap had we don’t know, but we can be sure that many breakthrough discoveries came from the scientist-philosopher.

Things are different now – we tend to keep technology separate from subjects like Philosophy.

As a technologist, any philosophy knowledge I gained was through my own initiative. Partly this was curiosity, but it was also driven by a need to better understand issues like bias and ethics that have surfaced in Artificial Intelligence.

At first glance, Philosophy seems unrelated to the business of developing new technology. Yet the subject has had an enormous impact on the way our systems are designed, the rules we follow and the actions we take every day. The politics of our governments, the laws around property and land ownership, our justice systems, the scientific method: so much stems from philosophy. For those reasons it can seem odd that it’s studied by only a small population today.

One option to start with is ‘A History of Western Philosophy’ by Bertrand Russell; the comprehensive history and analysis are what make it a classic. It’s a long read but worth adding to your bookshelf as a reference. I continued reading at every opportunity (check out New Philosopher), but until now I’d never taken a course or done any formal study.

As someone from a techie and product development background, I wanted to find out more.

1. Think like a Philosopher

For reference, I signed up for the Philosophy and Critical Thinking online course on EdX.

The modules cover terms and concepts like arguments, premises, truth, cogency, skepticism, conditionals, and identity. The general approach involves dissecting statements and claims into parts, which will appeal to a coder or scientist, e.g. if x and y are true then that means z. The hard part is when you can’t rely on x and y, because you can’t be sure that any knowledge is really true. It quickly gets exhausting having to question everything, and you might find yourself getting frustrated. But this is one of the key questions that has had philosophers debating throughout history: how do we really know what is true? As scientists we apply this thinking in our experiments, and in technology we look to the hard data, asking ‘what does the data tell us?’. The processes identified in philosophy are similar to the ones we use in science, e.g. using inductive reasoning to work something out.
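
To make the ‘if x and y then z’ style of dissection concrete, here is a minimal sketch in Python (my own illustration, not part of the course) that checks whether a simple propositional argument is valid by brute-forcing a truth table: the argument is valid if the conclusion holds in every case where all the premises do.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Brute-force truth-table check of a propositional argument."""
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        # A counterexample is a world where every premise is true
        # but the conclusion is false.
        if all(p(world) for p in premises) and not conclusion(world):
            return False
    return True

# Example argument: premises "x" and "x implies z"; conclusion "z".
premises = [
    lambda w: w["x"],
    lambda w: (not w["x"]) or w["z"],   # material conditional: x -> z
]
conclusion = lambda w: w["z"]

print(is_valid(premises, conclusion, ["x", "z"]))  # True: the argument is valid
```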

The payoff in discovering truth in science and tech is that you find a solution or build a product in the end. Philosophy, however, doesn’t lead you to a neat solution. The best you can hope for is that, when faced with a choice, you have some justification for the decision you make. And this is where the real connections between technology and philosophy start becoming apparent in a practical way.

The Trolley Problem and the Allegory of the Cave

The famous ‘trolley problem’ is used to demonstrate the kind of ethical dilemma we could be faced with. A runaway trolley is on a train line headed for disaster, and you have the choice to direct it one way or the other. The first way would save one person and the second would save four (there are variations on this example but the overall dilemma is the same). What do you do – take the option to save more people by sacrificing the one? This is the same choice an autonomous car may have to make if faced with an impending accident. There isn’t one correct answer, but there are ways to think about the options.
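
To illustrate how the choice of philosophy changes the outcome, here is a hypothetical sketch: two toy ‘policies’ for the trolley scenario above, one utilitarian (maximize people saved) and one rule-based (avoid actively intervening). The Option class and both policies are assumptions made up for illustration, not anyone’s actual autonomous-driving logic.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    people_saved: int
    requires_intervention: bool

def utilitarian_choice(options):
    """Pick whichever outcome saves the most people."""
    return max(options, key=lambda o: o.people_saved)

def non_intervention_choice(options):
    """Prefer an option that doesn't require actively diverting the trolley."""
    passive = [o for o in options if not o.requires_intervention]
    return passive[0] if passive else options[0]

options = [
    Option("stay on course", people_saved=1, requires_intervention=False),
    Option("divert", people_saved=4, requires_intervention=True),
]

print(utilitarian_choice(options).name)       # "divert"
print(non_intervention_choice(options).name)  # "stay on course"
```

Same scenario, two defensible answers; the framework you feed the system decides which one it gives.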

Another important analogy is Plato’s ‘Allegory of the Cave’, which addresses human perception. It puts forward the claim that our senses alone are not enough to fully understand reality; we need philosophical reasoning in addition. The analogy describes a situation where prisoners have been tied up in a cave for their entire lives and are only able to see shadows of the people outside. The shadows are perceived as reality by them, but if they stepped outside they would see there were real people. In Plato’s much longer story, one of the prisoners escapes to tell the others what he saw, but they don’t believe him because they can’t comprehend it and think he’s lying.

This story can represent a number of things, but it could be applied to human perception today. Could it be compared to the perception we are creating on social media and the risk it poses to understanding the truth? Perhaps the shadows are like fake news or trending tweets. There is no time to reason and explore what could really be happening, so we’re reduced to the predicament of the cave prisoners, trying to make sense of the little we can see. The answer for Plato, as he sets out in The Republic, was more education, specifically more philosophers in charge who could reason and educate. Whether that’s the answer here we don’t know, but further discussion could be merited.

2. Be mindful of the techniques we apply – Can we really rely on prediction in AI?

The other interesting concept in the topic of knowledge is the idea that nature is not uniform, and therefore we cannot safely guarantee that what happens next will be the same as what has happened before. In this case, knowing when doubt is acceptable is an important element in analyzing knowledge. We can relate this to our tendency in AI to use algorithms that make predictions for information or actions. These are all based on the past behavior of ourselves and our world (nature). While this proves right some of the time, if we took a philosophical approach maybe we would be more hesitant to rely so heavily on prediction. Prediction can lead to bias, and the philosophical lesson is that we shouldn’t assume things will always behave as we expect just because they have done so before. There is a lot more to this topic that could question our reliance on predictive algorithms as a form of intelligence.
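
As a rough illustration of the problem, here is a toy predictor (an invented example, not any real AI system) that forecasts the future purely from the frequency of past observations. It works exactly as long as nature stays uniform, and it has no way to represent doubt when the pattern breaks.

```python
from collections import Counter

class FrequencyPredictor:
    """Predicts whatever it has seen most often in the past."""

    def __init__(self):
        self.counts = Counter()

    def observe(self, outcome):
        self.counts[outcome] += 1

    def predict(self):
        # Most frequently observed past outcome wins.
        return self.counts.most_common(1)[0][0]

model = FrequencyPredictor()

# Past data: the pattern has held every single time so far.
for _ in range(10_000):
    model.observe("pattern holds")

print(model.predict())  # "pattern holds" - based purely on past uniformity

# Nothing in those 10,000 observations guarantees the next case: if the world
# changes ("pattern breaks"), the prediction fails, and the model carries no
# notion of doubt about its own assumption of uniformity.
```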

3. Question the big things – Do computers really know anything?

Finally, onto another intersection between technology and philosophy that is worth mentioning – do computers really have knowledge? Having knowledge before experience or sensing is referred to as ‘a priori’ in Philosophy. As humans we sometimes acquire knowledge independently, without having actually experienced something. When it comes to computers, though, if they don’t have experience of something, can we say with confidence that they really know it? We could ask if it even matters for our purposes.

The Chinese Room thought experiment by Searle questions the idea of computers having real knowledge. In the imaginary experiment there is a room with a person inside whose job it is to take in Chinese symbols and respond to them. The person in the room knows nothing of Chinese, but they can look the symbols up and give an appropriate response back. The person outside assumes there is a Chinese speaker inside the room. The ‘argument’ is that the person in the room, much like a computer, does not really understand, and therefore the famous Turing test is not adequate.
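
A tiny sketch makes the point vivid: the ‘room’ below is just a lookup table of symbol pairs (the phrases and replies are my own invented examples). It can return an appropriate-looking response without anything that resembles understanding.

```python
# The rule book maps incoming symbols to canned responses.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我叫小明。",      # "What's your name?" -> "I'm Xiao Ming."
}

def chinese_room(symbols: str) -> str:
    # Follow the rule book mechanically; meaning is never consulted.
    return RULE_BOOK.get(symbols, "对不起, 我不明白。")  # default: "Sorry, I don't understand."

print(chinese_room("你好吗?"))  # Looks like a fluent reply...
# ...but the "room" has no idea what either sentence means, which is Searle's point.
```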

If computers don’t really know, should we put them in charge and trust them to do the right thing in every situation? We can’t be sure humans would always do what’s right, but with computers maybe we need to tell them about morals and ethics. One of the lessons in the course was about the social contract made between members of a society, which sets out agreements like the freedoms and benefits of the state. Could that include intelligent, AI-controlled computers?

Whether we like it or not, AI will be making decisions on our behalf. If we can influence how it should respond (e.g. to the trolley problem) based on the philosophy we choose, we could at least be aware of what might happen. So does a computer need to know things like logical fallacies or moral codes? Possibly not, but feeding computers information on how humans think might still be useful.

Everyone should study Philosophy, especially if you’re in AI

There’s much more in the course that I haven’t mentioned, but my aim was to pick out some things relevant to technology today. Everyone should study philosophy if they can, even if you skip the history and only focus on the skill of critical reasoning; it’s become easier to access than ever before. This is especially true if you’re in science and tech, but it’s absolutely essential for AI. In fact, there’s an argument to say you probably shouldn’t be allowed near it without some training. You can read more about the intersection of Science and Philosophy here and specifically some reasons for software engineers to dive deeper.
