Recknsense

What could be done to prevent bias in AI?


This question has no easy answer. It requires diverse opinions and the development of ideas.

The technologies used for Artificial Intelligence (AI) today, i.e. machine learning and deep learning, are prone to bias. These techniques are designed for pattern recognition, so it is inevitable that the patterns they learn mimic the behaviors we display as humans. Unfortunately, these patterns are sometimes based on real data in which negative bias occurs, and that bias is reflected in the AI too. This means that if you are profiled by AI and the probabilities show you are more likely to do something based on previous data, you could be on the receiving end of that bias, especially if you belong to a minority. Machine bias may also have more far-reaching impacts than human bias: because it makes decisions at a large, complex scale, it can escape the oversight of a human moderator and thus express bias more widely and without detection. Facial recognition, recruitment, recommendations, and access and approval decisions across a myriad of systems where AI is employed may all be affected. So what can we do?
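The mechanism described above, a pattern recognizer faithfully reproducing the skew in its training data, can be sketched with a deliberately tiny toy example. Everything here is hypothetical (the groups, the outcomes, and the "model", which is just per-group approval rates), but it shows how learned probabilities simply mirror historical bias.

```python
# Toy illustration with hypothetical data: a "model" trained on
# historically skewed decisions reproduces that skew for new cases.
from collections import defaultdict

# Hypothetical past loan decisions, skewed against group "B".
history = [
    ("A", "approved"), ("A", "approved"), ("A", "denied"),
    ("B", "denied"), ("B", "denied"), ("B", "approved"),
]

# "Training" here is the simplest possible pattern recognition:
# count approvals per group.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, outcome in history:
    counts[group][1] += 1
    if outcome == "approved":
        counts[group][0] += 1

def approval_probability(group):
    """Learned probability of approval for a member of `group`."""
    approved, total = counts[group]
    return approved / total

# The learned pattern mirrors the historical skew exactly.
print(approval_probability("A"))  # 0.666...
print(approval_probability("B"))  # 0.333...
```

A real system would use a far more complex model, but the failure mode is the same: if the data encodes a bias, a pattern recognizer has no way of knowing that pattern is one it should not learn.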

3 thoughts on “What could be done to prevent bias in AI?”

  1. I’m inclined to agree with this timely article (short, worth a read) that does a good job of explaining why ethics guidelines/standards and companies policing themselves will not suffice. The concept of “ethics guidelines” is often just a way for companies to avoid regulation, e.g. Google, Facebook, et al. suddenly touting their internal “AI fairness” programs over the past few years. When viewing these issues through a human rights lens, perhaps the only meaningful solution is for governments and international orgs to get involved and institute laws/regulations. https://www.aimyths.org/ethics-guidelines-will-save-us

  2. This could be a legal, technical, ethical, philosophical or moral problem, so the framing of the question matters. The “debate” often blurs these lines. For example, some have advocated for ML engineers to take an ethics course. But there is no property of ML that makes its users specifically prone to ethical issues, so this is more of an argument for making engineers more “ethically minded”. This could be a philosophical stance, as in a desire for a “fairer world” or a desire to make a brand more “woke”, etc., but it’s not a technical issue. Regarding the issue of hidden biases, as might be buried in a non-linear system like a deep neural network, the use of counterfactuals seems promising. The irony here is that AI itself is probably the best tool we’ll have to spot such hidden biases in a massively high-dimensional state-space.

  3. We should create an ethics committee that operates independently of tech companies. It should have the power to remove AI that proves to be unethical.
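The counterfactual approach mentioned in the second comment can be sketched very simply: change only the protected attribute of an input, hold everything else fixed, and check whether the model's output moves. The model below is a hypothetical stand-in with a deliberately visible penalty; in a real deep network the dependence would be hidden, which is exactly why the probe is useful.

```python
# Minimal sketch of a counterfactual bias check (hypothetical model).

def model(applicant):
    # Stand-in for a trained model. The bias is explicit here for
    # illustration; in a deep network it would be buried in weights.
    score = applicant["income"] / 1000
    if applicant["group"] == "B":
        score -= 5  # hidden penalty the check should expose
    return score

def counterfactual_gap(applicant, attribute, alternative):
    """Change in model output when only `attribute` is altered."""
    twin = dict(applicant, **{attribute: alternative})
    return model(applicant) - model(twin)

applicant = {"group": "B", "income": 40000}
gap = counterfactual_gap(applicant, "group", "A")
# A nonzero gap flags that the output depends on the protected
# attribute itself, not just on legitimate features.
print(gap)
```

The same probe scales to real models: generate counterfactual "twins" of inputs and flag any systematic output gap, which is one way AI tooling can audit AI, as the comment suggests.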
