The Ethics of Artificial Intelligence
The way we develop the next generation of learning software will have a tremendous impact on our lives. Prof. Joanna Bryson (@j2bryson) from the University of Bath dissected this fascinating topic with Azeem Azhar (@azeem), founder of The Exponential View, in this incredible podcast.
Intelligence, ethics, digital transformation: they covered it all. Here are our main lessons learned:
1. AI is the pen and paper of the 21st century. Artificial intelligence and machine learning are the "cognitive prosthetics that enable humans to offload information to the cloud". Through voice recognition, photos, and smart devices we are able to save a big chunk of our lives on the internet. AI and machine learning give us the power not only to record all that data but also to access it!
2. Bias is one of the most burning ethical problems with AI, and it is being heavily researched at the moment. If that research is successful, a weakness can be turned into a strength: AI and machine learning models can be used to detect, measure, and ultimately remove bias from human-centered processes!
3. Explainability is a very serious and largely unresolved issue with deep learning systems. Understanding why a certain action was taken or suggested by a machine learning algorithm is not only important for checking whether a moral or ethical guideline was respected; it has also been imposed by recent regulations such as the GDPR.
4. Artificial intelligence can be put to good use when applied as a tool that enhances human intelligence.
5. We need to maintain AI and deep learning models over time. As we do with other machines (e.g. cars), we need to check the status of a model regularly to ensure bad data is not inadvertently introduced. Since these systems continue to learn, we need to make sure new biases don't creep into our models.
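The maintenance point above can be sketched as a minimal drift check: compare incoming data against the training data and raise an alert when a feature strays too far. The feature values and the threshold below are made-up numbers for illustration only.

```python
from statistics import mean, stdev

def drift_alert(train, incoming, threshold=2.0):
    """Flag drift when the incoming mean strays more than `threshold`
    training standard deviations from the training mean."""
    shift = abs(mean(incoming) - mean(train)) / stdev(train)
    return shift > threshold

# Training distribution of one (hypothetical) feature.
train = [1.0, 1.1, 0.9, 1.0, 1.05]

# Incoming data that looks like training: no alert.
print(drift_alert(train, [1.0, 0.95, 1.05]))  # False

# Incoming data whose mean has doubled: alert.
print(drift_alert(train, [2.0, 2.1, 1.9]))    # True
```

Real monitoring pipelines track many features and use proper statistical tests, but the idea is the same: a deployed model is only as good as the data still flowing into it.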
Our suggestions for further reading
understand the ethical issues we are facing and the emergence of super-intelligence: https://nickbostrom.com/ethics/ai.html
read one of Prof. Bryson's seminal articles: "Semantics derived automatically from language corpora contain human-like biases" (April 14, 2017)
get tense by reading the paperclip example, which shows how an apparently innocuous AI can pose a very serious existential threat