The European Commission’s High-Level Expert Group on AI (AI HLEG) on Wednesday released a second report on artificial intelligence (AI) development and deployment, aimed specifically at “maximising benefits whilst minimising and preventing risks.”
The report includes 33 recommendations for the EU, focusing on four domains: how AI can serve society at large, the private sector, the public sector, and academia.
Building on the recently coined concept of “Trustworthy AI,” the EU is pursuing goals that include a single European market. Achieving such a goal, though, requires complementary legislation among its Member States to enable “lawful, ethical, and robust AI-enabled goods and services.”
In addition to its loftier recommendations, the report describes specific protections for people and society. Despite potential government interest in a “secure society” that relies heavily on AI, the report urges caution regarding the “mass surveillance of individuals,” insisting that individual privacy and freedoms be maintained.
As the technology continues to improve, the report also recommends introducing “mandatory self-identification,” under which deployers of AI systems would have to “disclose that in reality the system is non-human.”
Transparency will continue to be a concern as AI becomes more entrenched in the daily activities of the modern world. The AI HLEG, though, is confident that early implementation of ethical guidelines will help prevent abuses.