And there is no doubt: we will all come to trust AI. Everything is pointing in that direction.
And this is where things become interesting and important.
We Need More “Platforms” for Building “Trust”
Can we really trust AI?
Should we just let things run their course and accept the consequences?
Most people I talk to tend to agree: “We shouldn’t just trust AI.”
We should try to understand how AI is already affecting our lives. After all, its impact will be far more significant than that of the introduction of the Internet.
To build trust in AI, we need detailed discussions at all levels (and not only among AI specialists). We need to focus on what AI means, how it is already affecting our lives, and how it will affect them in the future.
Recently, I have been thinking about this more and more. Whenever I speak at or attend a conference, at home or abroad, and AI is not on the agenda, I believe we are missing an opportunity.
After all, conferences are an excellent opportunity to start discussing the impact of AI on fields outside of “technology”.
We have to ask questions about AI: how AI systems are trained, where their training data comes from, and so on.
In particular, we need to think about the values or “ethics” that structure how AI operates.
For example, how do we want an autonomous car to react when confronted with an unavoidable accident? Should it minimize the overall loss of life, even if that means sacrificing the occupants of the car, or should it prioritize the lives of the occupants at any cost? Or should the choice be a random one?
Transparent, open, and inclusive dialogue seems to be the best way to build real trust in the systems that will structure our lives in the future.