The Council of the European Union adopted the AI Act, taking a major step towards fostering trust in new technologies, as well as ensuring their transparency and accountability to European citizens.
Specifically, the AI Act aims to encourage the development and uptake of safe and trustworthy AI systems across the EU, while safeguarding the fundamental rights of all its citizens. It does so through a tiered approach to regulating AI technology, in which the stringency of regulation increases with the level of risk the technology poses to society and the EU.
More specifically, the Act distinguishes between four risk categories of artificial intelligence. The riskiest are systems that socially score people and classify them into categories based on characteristics such as gender, race, citizenship or sexual orientation. Artificial intelligence systems that pose an unacceptable level of risk are prohibited outright; examples include systems that recognise emotions in public spaces, systems that collect and store people's facial images, and social scoring systems.
In the coming days, the AI Act will be published in the Official Journal of the EU and will enter into force 20 days after its publication. It will become fully applicable two years after its entry into force, and a number of bodies will be established in the EU to oversee its implementation. Some parts will apply sooner, however: bans on prohibited practices after six months, codes of practice after nine months, rules on general-purpose AI (including governance) after 12 months, and obligations for high-risk systems after 36 months. Notably, the Act excludes artificial intelligence systems used for military and defence purposes, as well as research activities, from its scope.
Although the Act is the first law of its kind in the world and could become a global standard for the further regulation of artificial intelligence, it will be important to monitor the quality of its application.
Gong, together with European civil society organisations, criticised the proposed text of the AI Act for introducing exemptions for the use of artificial intelligence in matters of national security. Together with 45 organisations, we had already supported and published an open letter calling for the Act to apply equally to the public and private sectors, and for the rejection of the exemption for national security and defence matters. Gong will therefore continue to take part in monitoring the practical aspects of the Act, including its implementation.