Law students from the European Union are reporting for JURIST on law-related events in and affecting the European Union and its member states. Ciara Dinneny is JURIST’s European Bureau Chief and a trainee solicitor with the Law Society of Ireland. She files this dispatch from Dublin.
Last week, leaders of the G7 – the intergovernmental group that includes Canada, France, Germany, Italy, Japan, the United Kingdom and the United States, with the European Union as a "non-enumerated member" – reached agreement on international guiding principles on artificial intelligence (AI) and a voluntary code of conduct for AI developers. The European Commission has welcomed the development.
While some forms of AI have existed for more than 50 years, advances in computing power, the availability of enormous quantities of data and new algorithms have led to major AI breakthroughs in recent years.
AI refers to the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity. AI enables technical systems to perceive their environment, deal with what they perceive, solve problems and act to achieve a specific goal: a computer receives data, prepared in advance or gathered through its own sensors, then processes it and responds.
Examples include Siri, Alexa and customer service chatbots. Last year saw the release of ChatGPT, which uses natural language processing to hold human-like conversations. ChatGPT has been used to write articles, solve maths problems, summarise presentations, create titles and even answer trivia questions. The rapid development of AI in recent years has driven the push for further regulation in the area.
The G7 process on artificial intelligence, known as the "Hiroshima AI Process", was established on May 19, 2023. The initiative was created to analyse the risks and opportunities of generative AI, to develop international guiding principles for AI stakeholders and organisations developing advanced AI, and to promote cooperation to support the development of responsible AI tools and best practices. In a joint statement, the G7 leaders expressed their hope that the guiding principles will "foster an open and enabling environment where safe, secure, and trustworthy AI systems are designed, developed, deployed" and will "maximize the benefits of the technology while mitigating its risks, for the common good". The principles include identifying and mitigating risks, reporting publicly, investing in robust security controls, deploying reliable content authentication and working towards responsible information sharing.
The Hiroshima AI Process complements the EU Artificial Intelligence Act ("AI Act"), which is currently being finalised. The AI Act is set to be the world's first comprehensive AI law. The European Commission first published its proposal in April 2021, and in June this year members of the European Parliament adopted the Parliament's negotiating position on the AI Act. Talks have now begun with EU countries in the Council on the final form of the law.
Under the AI Act, obligations are placed on users and providers depending on the level of risk posed by the AI system. Systems posing an unacceptable risk, that is, those considered a threat to people, will be banned; this includes cognitive behavioural manipulation, social scoring and facial recognition in public places. High-risk AI systems will need to be assessed before being placed on the market and throughout their life cycle. Generative AI such as ChatGPT would have to comply with transparency requirements. The main purpose of the Act is to regulate AI by ensuring better conditions for the development and use of this innovative technology.
The EU Artificial Intelligence Act is expected to come into force near the end of 2023, and once it does, providers of AI systems will be expected to comply within 24 or 36 months.