The more humanity develops, the more problems it creates. As if overpopulation, environmental pollution, and climate change weren’t enough, we have now introduced an uncontrollable alternative to our own intellects into the fray. In recent months, amid the proliferation of ChatGPT, Google Bard, and other consumer-accessible AI chatbots, media outlets have reported that AI “could replace the equivalent of 300 million full-time jobs”, “…has a big carbon footprint”, “…makes non-invasive mind-reading possible by turning thoughts into text” and, ultimately, that it could destroy all of humankind.
In light of this digital revolution, people seem to forget that artificial intelligence is, in itself, just a product on the market, developed and tested by tech companies and start-ups. It was in use for quite some time before ChatGPT came out. And, as with any other product, AI should be regulated to minimise the potential harm to users and maximise its effectiveness. With that in mind, the European Union is currently preparing the AI Act, the world’s first comprehensive AI law.
To shed light on the landmark law, examine how it aims to protect EU consumers, and explore what risks it may pose to AI developers, JURIST spoke with Axel Voss, a member of the European Parliament and coordinator of the European People’s Party group in the Committee on Legal Affairs. Beyond his political career, Mr. Voss is a German lawyer with upwards of 15 years of professional experience.
This is the first of a two-part interview series. The second part of the interview can be found here.
JURIST: The European Union is the first to regulate AI comprehensively. But if I recall correctly, some Member States have already tried to regulate it at some point. Why did the EU decide to step in and create a uniform law?
Axel Voss: Fragmenting our efforts within the European Union and its single market does not make any sense. When it comes to AI, I would say there should be a kind of global legislation in place, but [geopolitical interests] are probably so different that we are not coming forward with this idea. That’s why we need a European approach. I cannot really recommend the national approach with respect to global development or global technology; we should not try to fragment everything more than necessary.
JURIST: At what stage of development would you say the new act is right now? I suspect the proposal was already underway when ChatGPT came out, and that probably called for some amendments. Where do things stand now?
Voss: As of now, we have reached the trilogue stage. On July 18 we had the second trilogue. I think the atmosphere is such that we will be motivated to come to an end, probably in November. This is the mood I’m experiencing. We hope to have a vote in December or January. Generally, I believe we in the Parliament took too much time distributing the file to our committees, because everyone was interested in getting it on board. It took seven months, and I think this was something of a waste of time, as we were not able to work on it [during this period]. And now we are under pressure because of the end of the mandate. That’s why I would say we will wrap up the AI Act at the end of this year.
JURIST: Do the current EU talks include a discussion of potential threats? Ultimately, what are you trying to prevent with respect to the development and proliferation of AI?
Voss: First of all, it is essential to understand that this is a kind of product regulation; we are considering AI and its algorithms to be products. And here, for the first time, we are trying to integrate fundamental rights. We are creating regulation over something that has, so to say, a life of its own. The outcome is something you can’t really predict. That’s why we are focusing on risks, thinking about what might be too intrusive for the consumer.
And here, of course, we have a lot of use cases, and we are addressing them with one general clause.
One thing we do not want is a social scoring AI system, the likes of which we are seeing in China. This is, obviously, too intrusive [to comply with rights related to] privacy and personality and so on.
Also, there are political discussions going on regarding [the potential roles of] facial recognition, real-time use of biometric data, and so on. Here we have different political approaches to these technologies. My group, for instance, is saying that we need to give our law enforcement authorities the possibility of using modern technology [while other groups argue that these technologies are incompatible with fundamental rights]. But we do have a consensus that there should be an exemption in place for health.
AI systems [that we have categorized in the act as “high-risk”] might affect fundamental rights; some algorithms might undermine the principles of democracy by creating misinformation, disinformation, fake news, and so on. In other words, some algorithms might disturb the democratic foundation itself. So to safeguard all these individual rights and democratic principles, [we must] be careful. [To this end, with these technologies, we are advocating] safeguards for the developer and also for the system itself. With AI algorithms you can’t really guarantee the result at the end; therefore, you need to observe what your AI system is doing. There’s a thin line between good and bad. That’s why we need to focus and work on safeguards in the upcoming years and try to be as flexible as possible.
We try to restrict some AI systems, but on the other hand, we should not burden companies too much. Generally, we need to observe, see what is working well and what is going wrong, and then correct it. That is also why we need a flexible legislator.
JURIST: What would you say about fears that AI will cost jobs, and ultimately do more harm than good?
Voss: We have to tell everyone this is [an irreversible] development. Everyone will use [AI] in the end. People should not be afraid to use it, but before they do, we need to create the correct framework for it.