Algorithms in the welfare state and justice system

The adoption of Artificial Intelligence (AI) systems is a highly topical issue, one that is having a profound impact on our society and our lives. As reported in the New York Times article "Freedom that Hangs on Algorithms", the use of AI tools in the areas of justice and public security is becoming ever more widespread among European and non-European states. In this regard, I asked a few questions via email to Nicolas Kayser-Bril, reporter at AlgorithmWatch – a Berlin-based non-profit organisation – in order to better investigate these applications of AI, their future and the challenges ahead.

In general terms, AI systems can be defined as programs capable of simulating human abilities, reasoning and behaviour. Your organization has identified 16 European states that use algorithms for these purposes. What countries are they, and how did their respective governments decide to introduce those algorithms?

The “16 countries” statement referred to our upcoming publication, “Automating Society: Taking Stock of Automated Decision-Making in the EU”. In it, we researched examples in France, Italy, Germany, Estonia, Belgium, the Netherlands, Slovenia, Greece, Portugal, Spain, Sweden, Finland, Poland, the UK, Denmark and Switzerland. Most importantly, we found examples of Automated Decision-Making Systems (ADMS) in every country we researched. There is no reason to believe that other European countries do not use ADMS.

The decisions to introduce ADMS vary greatly. Sometimes it is a municipality that automates a welfare disbursement system, as in Trelleborg in Sweden; sometimes it is a government that goes through parliament to push for machine learning to spot tax fraud, as in France. The previous version of the report, published last year, already contains dozens of examples from 12 countries.

Artificial intelligence was also an important subject for the British physicist Stephen Hawking. In discussing the future of humanity at the Leverhulme Centre for the Future of Intelligence (CFI) in October 2016, Hawking highlighted, alongside the benefits, the serious risks that humans could face if they abuse or underestimate the impact of artificial intelligence. He said: “The development of full artificial intelligence could mean the end of the human race”. What do you think the future of these tools will be, and do you agree with Hawking’s view?

When or whether “full artificial intelligence” will come is a matter of debate. Some scholars, such as Mireille Hildebrandt, a professor at Vrije Universiteit Brussel, consider that even talking of artificial intelligence is counter-productive. Instead, she argues, AI is really about “automated inferences”: code-driven inferences provide autonomy but do not allow for meaning, a point that highlights the fundamental difference between human and machine agency (see: The Artificial Intelligence of European Union Law, German Law Journal, 2020, vol. 21, no. 1, pp. 74-79).

On the 4th and 5th of December 2018, the European Commission for the Efficiency of Justice (CEPEJ) adopted the European Ethical Charter on the use of artificial intelligence in judicial systems and their environment. The Charter includes the principles of respect for fundamental rights, non-discrimination, quality and security, transparency, impartiality and fairness, and the so-called “under user control” principle. Do you believe that the Ombudsman can be an ally in the protection of human rights in the field of AI?

Any well-founded effort to foster a responsible use of technology should be welcomed as a way to preserve the fundamental rights of EU citizens and residents. However, such efforts must be linked to enforcement mechanisms if they are to be fruitful. While there is widespread support for curbing the power of algorithms, governments and international institutions have yet to provide the means to enforce their guidelines and charters.

In his Report of October 2019 on extreme poverty and human rights, UN Special Rapporteur Philip Alston warned against the introduction of the “digital welfare state”. He wrote that “in such a world, citizens become ever more visible to their governments, but not the other way around”. The GCRL has been working for four years at the UN in Geneva and at the Parliamentary Assembly of the Council of Europe in Strasbourg for the recognition of the “right to know” as a fundamental human and civil right. What do you think of this initiative?

As stated in the answer to question 3, such initiatives are very laudable, as they address the concrete issue of preserving fundamental rights in a changing social and technical environment. However, without proper enforcement, they risk causing confusion as to their actual goal. In this regard, the GDPR is exemplary. While its provisions were in large part applauded by civil rights organizations, its enforcement is alarmingly defective, leading to a “regulatory standstill”, as the head of the German DPA said earlier this month (see: German regulator says Irish data protection commission is being ‘overwhelmed’, in The Irish Times).

Federica Donati