The Risks of Artificial Intelligence and the Marginalisation of the Machine in Relation to Humans
The concept of Artificial Intelligence appeared in the 1950s, so it is not new. We are now in 2020, and the rise of big data has made AI applications far more concrete. Artificial Intelligence Systems (AIS) are appearing everywhere to improve our lives: healthcare, autonomous transportation, energy management, the fight against climate change, the anticipation of cybersecurity threats, facial, fingerprint and voice recognition, and many others.
But whether they stem from accidents, errors or malevolence, the risks associated with artificial intelligence are real. Imagine a “caretaker” robot that misdiagnoses a problem, traffic lights that cause an accident, or cleaning robots that smuggle explosives into restaurants. Authoritarian states could also use AI to spy on their citizens and carry out attacks.
If we do not quickly and rigorously frame the design and deployment of AIS at the source, we will sink straight into an Orwellian 1984, precipitated by egocentric Type A personalities’ claims to power and fortune, and by the inaction of governments and GAFA.
The good news is that our researchers are active on the subject! It will soon be a year since the European Economic and Social Committee tabled its own-initiative opinion on artificial intelligence: anticipating its impact on jobs to ensure a fair transition. This opinion has the merit of laying the first ethical foundations needed to avoid drift, but it remains too cautious on the concept of algorithmic transparency and marginalises the machine in relation to the human being: behind algorithms lie the opinions of their designers, as Cathy O’Neil, Harvard PhD, puts it so well. AI per se is not a danger. The people behind it, and the use they would like to make of it, could be one; so could the lack of anticipation and transition. We should therefore not forget this precept, and place as much emphasis on the composition of the AIS project team as on the system itself.
That being said, various research communities in Montreal (home of the World Observatory on the Societal Impacts of AI and Digital Technology - let’s not forget it) have drafted the excellent Montreal Declaration for the Responsible Development of Artificial Intelligence, which I invite you to sign. The principles of the Montreal Declaration “rest on the common belief that human beings seek to grow as social beings endowed with sensations, thoughts and feelings, and strive to fulfill their potential by freely exercising their emotional, moral and intellectual capacities.”
It outlines 10 ethical principles of AI:
1. The development and use of artificial intelligence systems (AIS) must permit the growth of the well-being of all sentient beings.
2. AIS must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings.
3. Privacy and intimacy must be protected from AIS intrusion and data acquisition and archiving systems.
4. The development of AIS must be compatible with maintaining the bonds of solidarity among people and generations.
5. AIS must meet intelligibility and accessibility criteria, and must be subject to democratic scrutiny, debate and control.
6. The development and use of AIS must contribute to the creation of a just and equitable society.
7. The development and use of AIS must be compatible with maintaining social and cultural diversity and must not restrict the scope of lifestyle choices or personal experiences.
8. Every person involved in AI development must exercise caution by anticipating, as far as possible, the adverse consequences of AIS use and by taking the appropriate measures to avoid them.
9. The development and use of AIS must not contribute to lessening the responsibility of human beings when decisions must be made.
10. The development and use of AIS must be carried out so as to ensure a strong environmental sustainability of the planet.
Finally, some articles resonate particularly with the audit, cybersecurity and digital transformation consulting boutique that we are:
· Prin 5, Art 3: The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.
· Prin 5, Art 4: The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.
· Prin 5, Art 7: We must at all times be able to verify that AIS are doing what they were programmed to do and what they are used for.
· Prin 8, Art 3: Before being placed on the market, and whether they are offered for a fee or free of charge, AIS must meet strict reliability, security, and integrity requirements and be subjected to tests that do not put people’s lives in danger, harm their quality of life, or negatively impact their reputation or psychological integrity. These tests must be open to the relevant public authorities and stakeholders.
In a future post, we will return in more depth to the AI acceptance criteria and discuss possible solutions for ensuring trustworthy AI.
Without a doubt, AI will replace jobs through greater automation. The social impacts are significant in an economy where robot density is growing while taxes on labour remain the main source of tax revenue. It is therefore necessary to anticipate AI’s impact on jobs and to prepare training and transition funds for the collateral victims who will not immediately find their place in the digital transformation of their activity. But it is also a great opportunity to empower people and let them shine at the tasks that make them profoundly human - creative, sensitive and gifted with emotions - and to invent the roles of tomorrow, roles in which they will be able to flourish even better.