There is an open desire to make AI ‘more human’. Recent announcements, such as the launch of the Stanford Institute for Human-Centered AI, show that industry and researchers know they need to focus on putting humans at the center of AI development.
So what does ‘more human’ mean?
Some of the things that come to mind include more awareness of ethical and moral choices, more kindness, more creativity, less biased decision making, more inclusivity and more benefit for humans. All of these would be better for humankind in the long run. They are great aspirations, but how do we achieve them in reality?
We could create a checklist, or a governing body overseeing all AI products and providing a stamp of approval to any that pass the ‘human enough’ test. We could produce guidelines for designing the most human-friendly AI. Maybe some of these things are already happening; if you know of any, please share details.
If, instead of asking for human-centered AI, we changed the question to ‘What can AI do to help humans?’, we might come up with more practical ideas that by their nature require a more human approach to implement. Here are just a few:
- Encouraging us to spend less time on screens or social media
- Saving humans time and money without compromising choice
- Helping humans make healthier choices
- Finding opportunities that humans wouldn’t normally get access to because of their economic or social status
- Finding solutions to climate change
- Giving humans access to more intelligence for decision making in a transparent way
- Encouraging more quality social and family time
- Making humans happy by providing new entertainment and experiences
Some of these are actively being worked on, but the key point is asking for specifics: without compromising choice, requesting transparency, opportunities regardless of status.
Can you think of other requirements for AI that would bring out its human side?