In response to the EU stakeholder consultation, CoESS welcomes the European Commission’s White Paper on Artificial Intelligence. Focusing on the use of Artificial Intelligence (AI) systems in security, CoESS recommends that a future legislative framework for AI be based on the key requirements developed in the European Commission’s “Ethics Guidelines for Trustworthy AI”, while facilitating the uptake of AI technologies. What is needed is legal certainty for so-called “high-risk” AI applications, as well as a skills and licensing framework for developers, producers and users.
In a position paper, published today as part of the ongoing EU stakeholder consultation on the European Commission’s White Paper on Artificial Intelligence, CoESS highlights that trustworthiness is a prerequisite for the uptake of the technology. To build trust and guarantee compliance with fundamental rights, Artificial Intelligence (AI) must be lawful, ethical, and robust throughout the system’s entire lifecycle. To live up to that goal, CoESS believes that the key requirements developed in the “Ethics Guidelines for Trustworthy AI”, published by the European Commission’s High-Level Expert Group on Artificial Intelligence, should be the guiding principles for legal and non-legal actions.
In that sense, CoESS supports the idea that the introduction of legal and non-legal measures for “high-risk” AI applications, as outlined in the White Paper, can be a suitable way to address these principles. CoESS recalls, however, that a clear definition of these applications is necessary to guarantee harmonised implementation in the Member States and legal certainty for developers, manufacturers, and users. CoESS further opposes any moratorium on remote biometric identification systems, such as facial recognition. Such technologies can add considerable value to public security and law enforcement, and their use should be allowed in public spaces based on a risk and impact assessment and under adequate human oversight.
CoESS particularly stresses that human autonomy and oversight are key to the overall goal of human-centric, lawful, ethical, and robust AI. To reach that goal, every stakeholder in the chain – developers, manufacturers, users, testers, procurers – needs to be empowered to preserve human autonomy by means of curricula and qualifications. CoESS believes that for specific “high-risk” use-cases, the Commission should consider making certain qualifications and licensing mandatory for developers, users and testers.
At the same time, it is important to keep the financial and administrative burden on users as low as possible in order to ensure the uptake of AI. This applies to liability frameworks as much as to conformity assessments and voluntary labelling. Particular support should be considered for SMEs.
For more details, you can find the position paper here.