Research
10.04.2024

When can a decision be considered 'automated'?

In a timely new article published in the German Law Journal, Postdoctoral Researcher Francesca Palmiotto addresses some of the most pressing questions arising at the nexus of public administration and automated decision-making.

Automated systems are already being deployed in numerous ways by governments and state authorities to support, and in some cases replace, decisions that until recently were made entirely by humans. Attending to the legal implications of these new uses of AI and automated systems within public administration, Francesca Palmiotto takes up the important question of when, and under what circumstances, a decision can be considered 'automated'.

This paper explores the consequences of the increased outsourcing of decision-making to machines and AI systems by public bodies. It sets out to investigate when a decision can be regarded as automated from a legal perspective, and how automated decision-making can and should be used in public administration without infringing upon the rights of citizens and applicants. As such, the paper addresses the uncertainties surrounding the future of AI governance, paying close attention to the legal protections currently available under the GDPR as well as the recently enacted AI Act in the context of automated decision-making. In response to the developing reality of AI in public administration, the author calls for a fundamental rights-based approach to AI governance in order to safeguard rights in the future.

Read the paper here.

Francesca Palmiotto is a Postdoctoral Researcher at the Centre for Fundamental Rights working on the Algorithmic Fairness for Asylum Seekers and Refugees (AFAR) Project. Funded by the Volkswagen Stiftung, the AFAR Project is a 4-year collaborative research project investigating the use of new technologies in migration and asylum governance. 