Human rights organizations issue complaint against risk-scoring algorithmic system in France

Amnesty International, along with 14 other organizations, issued a complaint on Wednesday demanding the French Social Security Agency’s National Family Allowance Fund (CNAF) stop using a risk-scoring algorithmic system.

The CNAF uses the algorithm to flag overpayments and errors in benefit payments. It assigns each beneficiary a risk score and selects individuals for further investigation based on set criteria. According to La Quadrature du Net, one of the leading organizations behind the complaint, "individuals in situations of vulnerability find themselves over-monitored compared to the rest of the population." According to Amnesty International, this discriminatory effect targets people with "disabilities, lone parents who are mostly women, and those living in poverty." The organizations filed the complaint before the Council of State, France's highest administrative court.

Amnesty International claims that the use of this algorithmic system is discriminatory, subjecting individuals to different standards under the law. Moreover, CNAF is allegedly violating the right to privacy of millions of French citizens. As Agnès Callamard, Secretary General of Amnesty International, stated: "These systems flatten the realities of people's lives. They work as extensive data-mining tools that stigmatize marginalized groups, and invade their privacy."

The complaint demands that CNAF stop using this scoring system and requests a fine of 1,024 euros per day of delay from the delivery of the court's decision. It also proposes preliminary questions for referral to the Court of Justice of the European Union. Amnesty International notes that "EU lawmakers have been vague in explicitly defining social scoring within the AI Act." The EU's AI Act, Regulation 2024/1689, entered into force in August of this year and prohibits the use of AI systems that provide social scoring of natural persons by public actors where this leads to discriminatory outcomes. In Recital 31, the EU legislators argue that such systems "violate the right to dignity and non-discrimination and the values of equality and justice."

Amnesty International believes the AI Act is not specific enough to classify the system employed by CNAF as a social scoring system as defined by the regulation. Still, the 15 organizations aim to pursue legal action at the national level. The European Commission is expected to issue guidelines on the high-risk classification before February 2025, which may offer individuals further protection at the European level against the risks of artificial intelligence.