In presenting its next steps for building trust in artificial intelligence through taking forward the work of its High-Level Expert Group, the Commission is now inviting industry, research institutes and public authorities to test the detailed assessment list drafted by the High-Level Expert Group, which complements the guidelines.
The EU executive says the plans are deliverable under the EU AI strategy, which aims at increasing public and private investments to at least €20 billion annually over the next decade, making more data available, fostering talent and ensuring trust.
The ethical dimension of AI is not a luxury feature or an add-on, said EC Vice-President for the Digital Single Market Andrus Ansip: "It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust."
Commissioner for Digital Economy and Society Mariya Gabriel added: "Today, we are taking an important step towards ethical and secure AI in the EU. We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI."
Artificial Intelligence (AI) is seen as benefiting a wide range of sectors, such as healthcare, energy consumption, car safety, farming, climate change and financial risk management. AI can also help to detect fraud and cybersecurity threats, and enables law enforcement authorities to fight crime more efficiently. However, AI also brings new challenges for the future of work, and raises legal and ethical questions.
The Commission is taking a three-step approach: setting out the key requirements for trustworthy AI, launching a large-scale pilot phase for feedback from stakeholders, and working on international consensus-building for human-centric AI.
1. Seven essentials for achieving trustworthy AI
Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:
- Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
- Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
- Privacy and data governance: Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.
- Transparency: The traceability of AI systems should be ensured.
- Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
- Societal and environmental well-being: AI systems should be used to enhance positive social change and ensure sustainability and ecological responsibility.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
2. Large-scale pilot with partners
In summer 2019, the Commission will launch a pilot phase involving a wide range of stakeholders. Companies, public administrations and organisations can already sign up to the European AI Alliance and receive a notification when the pilot starts. In addition, members of the AI high-level expert group will help present and explain the guidelines to relevant stakeholders in Member States.
3. Building international consensus for human-centric AI
The Commission wants to bring this approach to AI ethics to the global stage because technologies, data and algorithms know no borders. To this end, the Commission will strengthen cooperation with like-minded partners such as Japan, Canada or Singapore and continue to play an active role in international discussions and initiatives including the G7 and G20. The pilot phase will also involve companies from other countries and international organisations.
The Commission adds that it intends to ensure the ethical development of AI with new initiatives in the autumn of 2019: launch a set of networks of AI research excellence centres; begin setting up networks of digital innovation hubs; and together with Member States and stakeholders, start discussions to develop and implement a model for data sharing and making best use of common data spaces.