Ahead of the AI Act vote in the European Parliament, civil society calls on Members of the European Parliament (MEPs) to ensure that the EU Artificial Intelligence Act (AI Act) prioritises fundamental rights and protects people affected by artificial intelligence (AI) systems.
Increasingly, AI systems are deployed to monitor and identify us in public spaces, predict how likely we are to commit crimes, re-direct policing and immigration control towards already over-surveilled areas, facilitate violations of the right to claim asylum and of the presumption of innocence, predict our emotions and categorise us on the basis of discriminatory inferences, and make crucial decisions that determine our access to welfare, education and employment.
Without proper regulation, AI systems will exacerbate existing societal harms: mass surveillance, structural discrimination, the centralised power of large technology companies, unaccountable public decision-making and environmental extraction. The complexity of these systems, the lack of accountability and public transparency, and the few available processes for redress make it difficult for people to enforce their rights when harmed by AI systems. These barriers pose a particular risk to the most marginalised in society.
The EU’s AI Act can, and should, address these issues, ensuring that AI development and use operates within a framework of accountability, transparency and appropriate limitations based on fundamental rights. We are calling on MEPs to ensure the following in the AI Act vote:
Empower people affected by AI systems
This includes ensuring horizontal and established accessibility criteria for all artificial intelligence systems, the right to file complaints when human rights are violated by an artificial intelligence system, the right to representation, and the right to effective remedies.
Ensure accountability and transparency for the use of AI
This can be achieved by requiring users of high-risk AI systems to carry out and publish a fundamental rights impact assessment before deployment, and by requiring that all uses of high-risk AI systems, and all uses of AI systems in the public sphere, be registered before deployment, ensuring that the law contains no loophole allowing providers of high-risk AI systems to bypass legal oversight.
Prohibit AI systems that pose an unacceptable risk for fundamental rights
There must be a complete ban on all types of remote biometric identification; predictive and surveillance systems in policing; emotion recognition systems; biometric categorisation systems that use sensitive characteristics or are used in public spaces; and individual risk assessment and predictive analytics systems used in the context of limiting and preventing migration.
We call on MEPs to vote to include these protections in the AI Act and to ensure the Regulation is a vehicle for the promotion of fundamental rights and social justice.
For a detailed outline of how the AI Act can better protect fundamental rights, see this statement signed by 123 civil society organisations. More information on amendments proposed by civil society can be found here.