The integration of Artificial Intelligence (AI) solutions into security services is advancing steadily. CoESS therefore publishes today two position papers in response to the European Commission's proposal for an EU AI Act and the ongoing consultation on adapting liability rules to AI. In both papers, CoESS stresses that legal proposals on AI must provide legal certainty for users and take into account the practical implications of provisions for security and AI-enabled services. CoESS accordingly proposes concrete amendments to the proposal for an EU AI Act and lays out its key principles on liability rules. The papers have been developed under the guidance of leading experts from European security services in the recently established CoESS Expert Group on AI.
In cooperation with law enforcement, the security industry will be at the forefront of integrating AI solutions into human-led services at airports, as well as into remote monitoring and access control in both public and private spaces. In CoESS' view, the integration of AI into security solutions could, in certain use-cases, significantly increase the performance of security processes, translating into better protection of European citizens, Critical Infrastructures and the economy against increasingly complex threats to public security.
CoESS therefore welcomes efforts at EU level to set a legal framework for the deployment of Artificial Intelligence (AI) and corresponding liability rules, and supports rules that guarantee an ethical and human-centric use of AI as well as an efficient uptake of AI solutions, notably by the services industry.
The two position papers published today, adopted by the CoESS Expert Group on AI, aim to support the European Institutions in developing a legal framework on AI that provides both legal certainty and full respect of fundamental rights in a human-centric approach, while ensuring an efficient uptake in the security services that meets today's requirements to effectively counter criminal offences and threats to public security. In CoESS' view, the legal proposal for an EU AI Act does not fully live up to that objective.
CoESS therefore recommends, in a dedicated position paper on the EU AI Act, concrete amendments to the European Commission's legal proposal.
User liability is closely tied to the technology's uptake in the security services, and hence to the future EU AI Act. CoESS therefore welcomes the ongoing consultation on a possible adaptation of liability rules in the face of AI, and publishes today a second position paper setting out the key principles to keep in mind for a possible revision of the EU liability framework.
CoESS believes that the integration of AI will only be successful if liability rules (1) provide clear responsibilities and provisions for user liability, (2) mirror the complex liability chain in AI, and (3) do not introduce an unrealistic burden of proof on companies. The position paper lays down key principles in this regard and accompanies CoESS' contribution to the ongoing EU stakeholder consultation.
Both position papers are available here.