This question has no easy answer; it calls for diverse perspectives and the continued development of ideas.
The technologies used for artificial intelligence today, namely machine learning and deep learning, are prone to bias. These techniques are designed for pattern recognition, so it is inevitable that the patterns they learn mimic the behaviors we display as humans. Unfortunately, these patterns are often learned from real data, and any negative bias present in that data is reflected in the AI too. This means that if you are profiled by AI and the probabilities suggest you are more likely to do something based on previous data, you could be on the receiving end of that bias, especially if you belong to a minority.

Machine bias may also have more far-reaching impacts than human bias. Because an AI system makes decisions at a large, complex scale, it can escape the oversight of a human moderator and thus express bias more broadly and without detection. Facial recognition, recruitment, recommendations, and the myriad access and approval systems where AI is employed may all be affected. So what can we do?
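To make the point concrete, here is a minimal sketch of how bias in historical data can become bias in a model. The scenario and numbers are entirely hypothetical: a naive "model" that simply predicts the majority outcome seen for each group will faithfully reproduce whatever skew the historical decisions contained.

```python
from collections import Counter

# Hypothetical historical loan decisions: (group, approved).
# Group "B" was approved less often in the past -- a human bias.
history = ([("A", True)] * 80 + [("A", False)] * 20
         + [("B", True)] * 40 + [("B", False)] * 60)

def train(records):
    """A naive model: predict the majority outcome seen per group."""
    counts = {}
    for group, approved in records:
        counts.setdefault(group, Counter())[approved] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # {'A': True, 'B': False}: past bias becomes future policy
```

Real machine-learning models are far more sophisticated than this majority vote, but the underlying mechanism is the same: if the training data encodes a disparity, an accurate pattern-matcher will learn and perpetuate it.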