1 - 2 December 2020 | Online
Opening Session

1 December 2020 | 09:00 – 10:00 CET

Welcome and Presentation
AI4People’s 7 AI Global Frameworks

  • Raja Chatila, Chairman, Healthcare Committee, AI4People; Professor and Director Emeritus of the Institute of Intelligent Systems and Robotics (ISIR), Sorbonne Université

Speakers:

  • Eva Kaili, Member of the European Parliament
  • Dragoş Tudorache, Chair of the Special Committee on Artificial Intelligence in a Digital Age, European Parliament
  • Ravi Gurumurthy, CEO, Nesta (Innovation Foundation)
  • Lucilla Sioli, Director of Artificial Intelligence and Digital Industry, DG Connect
  • Luciano Floridi, Professor of Philosophy and Ethics of Information at the University of Oxford; Director of the Digital Ethics Lab of the Oxford Internet Institute

Chair:

  • Robert Madelin, Member of the AI4People Scientific Committee; former Director-General DG Connect

Working Session 1 | Media & Technology

1 December 2020 | 10:30 – 12:30 CET

The draft of the Good AI Global Framework for the Media & Technology industry will be the starting point of this session’s debate. The Media & Technology Committee’s draft paper has mapped out the impact that the 7 Key Requirements for a Trustworthy AI will have on this sector and suggested some concrete and practical steps that businesses operating within this sector must take to become and remain compliant with the 7 Requirements.

The Media & Tech Committee defined the scope of the draft paper according to four main application and use themes in the Media & Tech sector: Automating data capture, automating content generation, automating mediation, and automating communication. Focusing on concrete examples of AI application areas, the Media & Tech Committee discussed the 7 Key Requirements for Trustworthy AI in each individual theme. This, furthermore, stimulated the discussion of possible tensions among the 7 Key Requirements. Within the proposed scope of the Media & Tech sector, the Media & Tech Committee developed guidelines for the implementation of AI systems.

Speakers:

  • José van Dijck, Professor of Media and Digital Society, Utrecht University
  • Natalie Helberger, Professor of Law and Digital Technology, University of Amsterdam
  • Philip Michael Napoli, Professor of Public Policy, Sanford School of Public Policy, Duke University
  • Stephen Cave, Executive Director, Leverhulme Centre for the Future of Intelligence
  • Robert Madelin, Member of the AI4People Scientific Committee; former Director-General DG Connect
  • Cornelia Kutterer, Senior Director, Rule of Law & Responsible Tech, European Government Affairs, Microsoft
  • Elizabeth Crossick, Head of Gov Relations, RELX
  • Janne Elvelid, Policy Manager EU Affairs, Facebook
  • Paula Boddington, Senior Research Fellow, New College of the Humanities
  • Aphra Kerr, Professor of Sociology; Maynooth lead of the ADAPT Centre for Digital Media Technology, Maynooth University

Chair:

  • Jo Pierson, Professor of Media, Innovation and Technology, Vrije Universiteit Brussel

Working Session 2 | Banking & Finance

1 December 2020 | 13:30 – 15:30 CET

The draft of the Good AI Global Framework for the Banking & Finance industry will be the starting point of this session’s debate. The Banking & Finance Committee’s draft paper has mapped out the impact that the 7 Key Requirements for a Trustworthy AI will have on this sector and suggested some concrete and practical steps that businesses operating within this sector must take to become and remain compliant with the 7 Requirements.

Banking and finance is an area where AI technology is likely to have a huge impact. Already, most credit checks, KYC and anti-money-laundering decisions are made by algorithms. Credit scores, and therefore creditworthiness tests, are also far faster and more accurate when carried out by algorithms trained on large data sets of past decisions and outcomes. Similarly, investing and trading continue to be disrupted by AI: in some markets, more trades result from orders placed by algorithms than by humans. The so-called "fintech revolution", which sees most parts of finance and banking disrupted and rethought, is strongly assisted by AI technologies. Yet finance is also the most regulated area of business there is, and much of the so-called "AI bias" is already covered by existing regulation. The Banking & Finance Committee will make recommendations on which areas of existing regulation need to be strengthened and how, and on where new skills and mathematical tools can be used to ensure AI has a positive impact on this important sector.
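
For illustration only (not part of the Committee's draft paper): a minimal sketch of the kind of credit-scoring model the paragraph alludes to, i.e. a classifier trained on past decisions and outcomes. All data, features and coefficients below are synthetic and hypothetical.

```python
# Toy credit-scoring sketch: train a classifier on synthetic "past outcomes".
# Illustrative only; feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.normal(3.5, 1.0, n),        # income (tens of thousands, hypothetical)
    rng.uniform(0.0, 1.0, n),       # debt-to-income ratio
    rng.integers(0, 30, n) / 10.0,  # years of credit history (scaled)
])
# Synthetic "past outcome": default more likely with high debt, low income
logits = 3.0 * X[:, 1] - 1.0 * X[:, 0] - 0.5 * X[:, 2] + 1.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print("AUC on held-out data:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The session's questions about bias and existing regulation apply directly to models of this kind, since their decisions are only as sound as the past outcomes they are trained on.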

Speakers:

  • Paul Jorion, Associate Professor of Ethics, Université Catholique de Lille
  • Giulia Del Gamba, Digital and Innovation Policy Advisor, Intesa Sanpaolo
  • Aisha Naseer, AI Ethics Research Manager, Fujitsu Laboratories of Europe
  • John Cooke, Chairman, Liberalisation of Trade in Services Committee, TheCityUK
  • Mark Nitzberg, Executive Director, Center for Human-Compatible AI

Chair:

  • Nir Vulkan, Associate Professor of Business Economics, Saïd Business School, University of Oxford

Working Session 3 | Legal Services Industry

1 December 2020 | 16:00 – 18:00 CET

The draft of the Good AI Global Framework for the Legal Services industry will be the starting point of this session’s debate. The Legal Services Industry Committee’s draft paper has mapped out the impact that the 7 Key Requirements for a Trustworthy AI will have on this sector and suggested some concrete and practical steps that businesses operating within this sector must take to become and remain compliant with the 7 Requirements.

The use of artificial intelligence in the justice system creates significant opportunities to address known shortcomings and failings in the administration of justice, but also poses unique and serious dangers, not only for individual citizens but also for the rule of law ideal more generally. On the one hand, legal systems across the world struggle with the high costs of litigation, with the efficient enforcement of rights, and with often unconscionable delays in the administration of justice, so that justice delayed does indeed often become justice denied. Here technology can be a force for good. On the other hand, law often deals with citizens who are in particularly vulnerable conditions and subject to power and information imbalances, and the risks that the use of AI in the legal sector creates are not always reducible to risks for individual clients, citizens, victims of crime or suspects. Some of these risks manifest themselves as threats to the conceptual integrity of the legal system, to the rule of law ideal, or to our understanding of the relation between citizen and state in a democratic system committed to human rights and the overarching principle of human dignity. Protecting the ideals of open and public adjudication, equality before the law, and contestability and transparency of legal decision-making is among the challenges that the Legal Services working group explored.

Speakers:

  • Jacob Slosser, Carlsberg Foundation Postdoctoral Fellow, University of Copenhagen
  • Sophia Adams Bhatti, Head of Strategy and Policy, Wavelength
  • Dame Wendy Hall, Regius Professor of Computer Science; Executive Director of the Web Science Institute, University of Southampton
  • Gry Hasselbalch, Co-Founder, DataEthics

Chair:

  • Burkhard Schafer, Professor of Computational Legal Theory; Director, SCRIPT Centre for IT and IP Law, University of Edinburgh


Working Session 4 | Healthcare

2 December 2020 | 09:00 – 10:45 CET

The draft of the Good AI Global Framework for the Healthcare industry will be the starting point of this session’s debate. The Healthcare Committee’s draft paper has mapped out the impact that the 7 Key Requirements for a Trustworthy AI will have on this sector and suggested some concrete and practical steps that businesses operating within this sector must take to become and remain compliant with the 7 Requirements.

The risk-based approach introduced in the HLEG Ethics Guidelines and outlined in the EU Commission’s Whitepaper of February 2020 is examined with regard to the 7 Requirements, and more specifically to the Assessment List for Trustworthy AI related to these requirements. Different exemplar use cases in Healthcare, raising different issues, are analyzed to provide a solid foundation for practical risk assessment and mitigation of AI systems operating in Healthcare, in order to ensure their compliance with the 7 Requirements.

Speakers:

  • C. Donald Combs, Vice President and Dean, School of Health Professions, Eastern Virginia Medical School
  • Eugenio Guglielmelli, Full Professor of Bioengineering and Prorector for Research, Campus Bio-Medico, University of Rome (UCBM)
  • Danny Van Roijen, Digital Health Director, COCIR
  • Wendy Yared, Director, European Cancer Leagues
  • Ben MacArthur, Deputy Programme Director for Health and Medical Sciences, The Alan Turing Institute
  • Robert Madelin, Member of the AI4People Scientific Committee; former Director-General DG Connect

Chair:

  • Raja Chatila, Professor and Director Emeritus of the Institute of Intelligent Systems and Robotics (ISIR), Sorbonne Université

Working Session 5 | Insurance

2 December 2020 | 11:00 – 13:00 CET

The draft of the Good AI Global Framework for the Insurance industry will be the starting point of this session’s debate. The Insurance Committee’s draft paper has attempted to map out the impact that the 7 Key Requirements for a Trustworthy AI are likely to have on this sector and suggested some concrete and practical steps that businesses operating within this sector must take to become and remain compliant with the 7 Requirements.

The session explores the issue of standards related to AI ethics and trustworthiness, with the aim of preparing the ground for an AI Global Mark of Compliance.

Speakers:

  • Paul Jorion, Associate Professor of Ethics, Université Catholique de Lille
  • Maria-Manuel Leitão-Marques, Member of the European Parliament
  • Alex Towers, Director of Policy and Public Affairs, BT Group
  • Rui Ferreira, Chief Data Governance Officer, Zurich Insurance Group (ZIG)

Chair:

  • Frank McGroarty, Professor of Computational Finance and Investment Analytics; Director of Centre for Digital Finance, Southampton Business School

Working Session 6 | Automotive

2 December 2020 | 14:00 – 16:00 CET

The draft of the Good AI Global Framework for the Automotive industry will be the starting point of this session’s debate. The Automotive Committee’s draft paper has mapped out the impact that the 7 Key Requirements for a Trustworthy AI will have on this sector and suggested some concrete and practical steps that businesses operating within this sector must take to become and remain compliant with the 7 Requirements. Furthermore, the draft could serve as a basis for developing a certification of ethics in the automotive sector.

Three general emphases that are central to this paper need to be highlighted. First, a responsible offsetting or balancing of risk or potential harm, in line with consequentialism, should be permitted for autonomous vehicles. Second, as a radical implementation of fully autonomous vehicles (level 4 and higher) seems rather unrealistic in the short run, companies and policy-makers should consider a more incremental, step-by-step approach. Finally, policy-makers are challenged: a clear regulatory framework needs to be developed as soon as possible.

Speakers:

  • Aisha Naseer, AI Ethics Research Manager, Fujitsu Laboratories of Europe
  • Aida Joaquin Acosta, Head of the International Relations Department, reporting directly to the Minister of Transport, Mobility and Urban Agenda, Spain; Affiliate at the Berkman Klein Center for Internet and Society, Harvard University
  • David Danks, L.L. Thurstone Professor of Philosophy and Psychology; Chief Ethicist, Block Center for Technology and Society, Carnegie Mellon University

Chair:

  • Christoph Lütge, Director, TUM Institute for Ethics in Artificial Intelligence, Technical University of Munich

Working Session 7 | Energy

2 December 2020 | 16:30 – 18:30 CET

The first draft of the Good AI Global Framework for the Energy industry will be the starting point of this session’s debate. The Energy Committee’s draft paper has mapped out the impact that the 7 Key Requirements for a Trustworthy AI will have on this sector and suggested some concrete and practical steps that businesses operating within this sector must take to become and remain compliant with the 7 Requirements.

The aim of the Energy Committee’s report is to provide comprehensive guidance, with practical recommendations and obligations grounded in the fundamental rights and ethical principles behind the 7 Key Requirements, on how AI will impact the energy industry sector. Based on these fundamental rights and ethical principles, the Committee suggested some concrete steps that the energy industry must take in order to be trustworthy, for instance regarding data protection and data security.

The digitalization of the energy sector and smart grid technology supporting the integration of Renewable Energy Sources (RES) make the energy industry more efficient, reliable and secure. The Energy Committee also highlighted some case studies in light of the ethical principles, standards and EC guidelines: for instance, AI- and machine-learning-based predictive maintenance for early fault detection, preventive and corrective maintenance, and prediction of equipment failure.
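
For illustration only (not part of the Committee's report): a minimal sketch of the early-fault-detection use case mentioned above, in which an anomaly detector trained on normal sensor readings flags drifting measurements that may precede equipment failure. Sensor names, values and thresholds are hypothetical.

```python
# Toy predictive-maintenance sketch: flag anomalous sensor readings.
# Illustrative only; all sensor data here is synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Normal operation of a hypothetical asset: temperature (°C), vibration (mm/s)
normal = np.column_stack([rng.normal(60, 2, 2_000), rng.normal(1.0, 0.1, 2_000)])
# A few drifting readings that might foreshadow a fault
drifting = np.column_stack([rng.normal(75, 3, 20), rng.normal(2.5, 0.3, 20)])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(np.vstack([normal[:5], drifting[:5]]))  # 1 = normal, -1 = anomalous
print(flags)  # expect mostly 1 for the normal rows, -1 for the drifting ones
```

In practice such a detector would feed maintenance scheduling, which is where the requirements on transparency, human oversight and robustness discussed in the session come into play.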

Speakers:

  • Sergio Saponara, Full Professor of Electronic Engineering, University of Pisa
  • Afzal S. Siddiqui, Professor, Department of Computer and Systems Sciences, Stockholm University; Adjunct Professor, Department of Mathematics and Systems Analysis, Aalto University
  • Rónán Kennedy, Lecturer in Law, School of Law, National University of Ireland Galway
  • Robert Madelin, Member of the AI4People Scientific Committee; former Director-General DG Connect

Chair:

  • Lucian Mihet, Professor in Energy Technology, Faculty of Engineering, Oestfold University College

