One of the biggest challenges in AI is figuring out how to prevent bias.
This is a hard problem to solve: the algorithms being developed are 'trained' on data produced by real humans, and because humans are prone to bias, the AI systems inherit that bias. Just as every security system has its weaknesses, every AI system could too. The difference is the subtlety with which AI may present that bias (it is difficult to detect) and its impact on everyday lives. If the predictions are right, AI will be applied in almost every industry, from health to finance to recruitment, and bias in those systems could have long-lasting and irreversible effects, especially on the people who come out on the losing side of it.
If we can’t solve it easily, can we make it safer?
We know that security is difficult to perfect but can be improved, and maybe that's the same target to aim for with AI. Security relies on third-party certificates and keys to reassure us, and AI could be directed down a similar path, e.g. certificates or an 'ethics check' that measures a system's level of bias.
The onus would be on consumers and businesses to look for recognizably ‘safe’ AI that has been evaluated in this way.
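To make the idea concrete, here is a minimal sketch of one number such an 'ethics check' might report: the demographic parity difference, i.e. the gap in positive-outcome rates between two groups. The function names and the toy data are invented for illustration; a real evaluation would use many metrics and real audit data.

```python
def positive_rate(predictions, groups, group):
    """Fraction of members of `group` that received a positive prediction (1)."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between the two groups.

    0.0 means both groups are treated identically on this metric;
    larger values indicate a bigger disparity.
    """
    labels = sorted(set(groups))
    rates = [positive_rate(predictions, groups, g) for g in labels]
    return abs(rates[0] - rates[1])

# Toy data: group "a" is approved 3 times out of 4, group "b" only once.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A certification scheme could publish a threshold for scores like this one, giving consumers and businesses a simple signal to look for, much like a security certificate.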