In an era where digital technologies permeate every aspect of our lives, concerns about data protection and privacy have taken center stage. The European Union has long been at the forefront of addressing these concerns through comprehensive data protection laws. The most prominent of these laws is the General Data Protection Regulation (GDPR), which came into effect in May 2018 and has significantly transformed the landscape of data protection within the EU.
The EU’s approach to data protection is characterized by a harmonized legal framework that sets common standards for the collection, processing, and transfer of personal data. The GDPR serves as the cornerstone of this framework, creating a consistent set of rules across all member states. This harmonization is crucial because it enables individuals and businesses to navigate data protection regulations seamlessly when conducting cross-border activities.
While the GDPR provides a unified foundation, there can be subtle variations in the way it is implemented and enforced among member states. National Data Protection Authorities (DPAs) in each EU country play a role in overseeing compliance and enforcement. This decentralized structure allows member states to adapt certain aspects of the GDPR to their specific legal systems and administrative processes. As a result, while the core principles remain consistent, there might be slight differences in how data protection is interpreted and applied across the EU.
The rise of Artificial Intelligence technologies has brought about new challenges in the realm of data protection. One of the primary concerns is the potential for AI systems to process massive amounts of personal data without adequate consent or understanding from individuals. Machine learning algorithms can uncover intricate patterns and insights from data, raising questions about the extent to which individuals can control their own information once it enters the digital ecosystem.
Another concern revolves around automated decision-making. As AI algorithms make more critical decisions, such as loan approvals or hiring, worries arise about the fairness and accountability of these systems. If AI systems are fed biased data, they may perpetuate and even amplify societal prejudices, leading to discrimination.
Furthermore, the sheer volume of data collected and processed by AI systems raises concerns about security breaches and unauthorized access. As AI applications become more sophisticated, safeguarding the integrity and confidentiality of sensitive information becomes paramount.
To shed light on this landmark legislative initiative, examine how it aims to protect EU consumers, and explore the risks it may pose to AI developers, I spoke with Axel Voss, a member of the European Parliament and coordinator of the European People’s Party group in the Committee on Legal Affairs. This is the second of a two-part interview series. The first part of the interview can be found here.
JURIST: How will AI regulation affect data protection policies in the EU?
Voss: There is a conflict between AI and data protection regulation. … When individuals consent to the processing of their personal data, they are not consenting to the training of algorithms. But if we do not enable this, we are totally incapable of getting enough data to train algorithms. So that’s why we need some kind of solution here. Article 6, paragraph 4 of the General Data Protection Regulation says you can use personal data for different purposes, but you have to explain and inform and so on. But if you go through that whole process, you lose the ability to be flexible, to be fast and so on.
But all the rights and everything we have integrated into the General Data Protection Regulation are still valid. The problem is that we are creating a kind of environment where you can test and train your AI systems only under constant observation and with official authorization; only then can you go to market and do your business.
Regarding the data situation, there is a real problem: with the GDPR we created a kind of mentality of not processing personal data, of not sharing personal data. In a data-driven world, this is not the most intelligent way forward, and Europe is lagging behind. … We are in a kind of third-world state when it comes to digital reality; we need to catch up with it somehow. But that is more difficult if you already have something in place that is not really paving the way. Consumer protection is still in place. The GDPR is also in place, and all the rights that protect you are in place. But now we have to face something new, see how it develops, and if something goes wrong, we have to be there and face the new problem. Quickly. I hope this does not take four or five years, so that we might be a little faster in correcting these things.
JURIST: When it comes to users contesting the use of their personal data, it will be up to the firms that develop AI to guarantee that this data is not used for training. Do you leave that fully to the firms, or will Member States need to provide such guarantees themselves?
Voss: I would say that for training and testing the algorithms, we should be a bit more open. But if they are using AI systems such as large language models with personal data, companies should of course respect the GDPR. The regulation directs all of this to the administration, especially when it comes to a first model of generative AI and so on. It can’t be perfect. A car these days is different from what we saw decades ago. That’s why we have to accept that there is a development and there is a first try. Developers are improving; they are training their algorithms better. And we understand that you need quality data to get good outcomes. Therefore, I think we need to be a little patient. This is a kind of living product, and we have to see what the outcome might be. And if it is going in the wrong direction, please retrain these algorithms and so on.
JURIST: AI systems for law interpretation and law enforcement are classified as high risk, and they even need to be registered in a separate database. Has AI already been used in these areas, or is this a preventative measure?
Voss: I’m pretty sure it has already been used somewhere. But of course, what we were talking about here was predictive policing. We need to be careful with law enforcement institutions predicting, on the basis of AI expectations, whether someone who has come out of jail might return. On the other hand, if you are open-minded enough, you can create a situation where you say: yes, use it, but we are recommending and regulating the access to the data and the purpose of its use. We can be very strict, and then nobody has to fear that something will go wrong. But still, it is very complicated when it comes to law enforcement.
JURIST: Since you also have a background as a lawyer, do you believe this regulation will influence the use of AI in the commercial law sphere? In law firms? For instance, in the US the new CoCounsel AI can already do the work of a first-year associate. Do you believe governments will try to regulate AI in the legal sphere? What do you think?
Voss: Well, at a certain point it might even replace the legislator. If we just tell ChatGPT, please create the AI Act 2.0, the outcome might be better than what we are doing right now. I would still say: please do not avoid using AI, but keep control over it. Question the outcome. You have probably seen or heard of the case of two lawyers in New York who asked AI to find supporting cases, and the AI invented its own. It is, of course, interesting to see that AI is intelligent enough to create something new. But in practical terms, for the lawyer, that is not really helpful. Still, I would say that in the future we shouldn’t be too afraid of it; rather, we should control it more.
And yes, it might lead to a situation where even lawyers become redundant. Simply ignoring this is not helpful. We should rather say: we know this is happening, and we can shape or regulate it in such a way that it does not lead to such a situation.
I would still say we will need lawyers for the courts, and we will not be replaced by a robot in the next 20 years or so. But one day AI systems might be faster at solving problems or might give you better advice. It will happen anyway, so it’s better to be prepared for it.
People in my generation and older generations find this spooky, so they say: no, we don’t want to live in a world like this. As a lawyer, I probably also don’t want to live and work like this. But I can’t see how we can ignore it if it helps deliver faster proceedings and immediate results. I think in Estonia this already exists for a certain category of disputes; they have such a proceeding in place.
And this is what I would recommend, because we have too many cases in the courts, so it can help reduce the workload. But it depends on how open-minded you are. I would always recommend that if there is a possibility, we should try. On a political level, I sometimes promote the idea of building a fully digital court, just to try it and gain experience. We should try and experience all of this and then work to integrate it. It doesn’t have to fully replace lawyers, but for small claims in some cases, or for advice on existing laws, it might be helpful.
JURIST: One positive aspect of the use of AI in the legal sphere might be free legal help, much like the legal clinics where students and other volunteers try to help. But AI might do even better, because more people would be able to access it quickly and know what to do if something happens. So, in the end, do you believe that AI, a huge system that is developing minute by minute, not even day by day, can be regulated within a legal framework? Do you believe the AI Act is going to succeed?
Voss: Let me begin with the example of the GDPR. We decided on it here in Parliament in 2016, and it became applicable to everyone in 2018. By 2019 or 2020, after just two years, you already had the feeling it was old and needed an update. So far, the legislators at the European level, the Commission, the Parliament, and the Council, are somehow living in an old world. Normally, if there is a problem, politicians have to solve it. But such a solution, for instance fully regulating ChatGPT, would take four or five years. And that is not what everyone expects when it comes to regulating something that has already evolved or developed further. That’s why I think we need to come up with a solution within our democratic system. It doesn’t need to be a full regulation, but it should give a frame in which everyone knows how to move forward or develop their algorithms.
JURIST: Speaking of such fast-developing algorithms and keeping up to date with them, I read in the Proposal that Member States need to establish special authorities, such as notifying authorities and emergency authorities. So, if an AI system steps outside its frame, should the company notify the authorities? How is that going to work? In legal terms it reads a little differently from how I imagine it works in reality. Maybe you can explain.
Voss: What we have discussed so far is that we are proposing, so to say, sandboxes, which are going to be organised by an official administration. And we are saying that when you come out of the sandbox, you get an exit report, and this indicates the legitimate use of the algorithm. With that, you have to notify the respective body, and the authority then has to take your exit report very seriously. To me, the exit report seems a bit more bureaucratic than necessary, but to some experts it seems necessary for making sure that the mechanism is working properly.
Of course, the first step is a self-assessment by the company. We have a practical problem here: we do not have experts for all the algorithms. Therefore, we need a self-assessment by the developer itself as to whether a system is high risk or not. Then, at a certain point, the expert group around the Commission should also examine it and say whether it is high risk or not.
If a problematic situation arises with an algorithm, it must be entered into a register under Annex III of the AI Act. I hope this will also work well. But we still need to see from experience whether this is a good way forward. Once again, if it is not, we should not sit there and say: oh, this is our gold standard, we have to wait, we can’t change anything now. No, we should try to change it immediately.