Law, artificial intelligence and automated decisions on liberty and migration
By: Jorge Leyva
The increasing use of artificial intelligence in migration control, security screening and risk assessment has introduced a structural shift in how decisions affecting liberty are produced. What were once individual administrative acts, attributable to identifiable authorities and open to direct scrutiny, are now frequently generated by automated or semi-automated systems whose internal logic remains opaque both to the affected individual and to the reviewing authority.
These systems are not legally neutral tools. They operationalise predefined criteria, weighting mechanisms and data sets that translate political and administrative priorities into decision-making outputs. When deployed in migration and security contexts, they influence outcomes such as visa refusals, entry bans, enhanced surveillance or prioritisation for enforcement. The legal relevance lies not in the technological sophistication of these systems, but in the intensity and irreversibility of the effects they produce.
Automated decision-making alters the traditional structure of accountability. Responsibility is fragmented among designers, operators and the authorities that formally endorse the outcome without having materially assessed its basis. This fragmentation weakens the individual’s ability to challenge the decision, as the grounds for refusal or restriction are often expressed in generic terms or reduced to risk scores that cannot be meaningfully contested. The appearance of objectivity masks a substantive deficit of justification.
From a legal standpoint, the central issue is the displacement of discretion without a corresponding displacement of responsibility. When an automated system determines outcomes that restrict liberty or mobility, the obligation to provide reasons does not dissolve. It intensifies. Decisions that cannot be explained in intelligible legal terms cannot satisfy minimum standards of due process, regardless of their statistical performance or administrative efficiency.
The use of artificial intelligence also affects evidentiary standards. Data-driven systems operate on correlations rather than causal determinations of individual conduct. When such correlations are treated as sufficient grounds for restrictive measures, the presumption of individual assessment is replaced by probabilistic suspicion. In migration contexts, this substitution carries a heightened risk of structural discrimination, as historical data often reflects prior biases and enforcement asymmetries.
Legal control cannot be deferred to ex post technical audits or abstract assurances of compliance. Where automated systems are used to produce effects comparable to those of coercive state action, the legal order requires ex ante transparency, traceability and the possibility of effective challenge. Absent these conditions, the decision-making process ceases to be legally reviewable in any meaningful sense.
The legal consequence is direct. Automated systems may assist administrative decision-making, but they cannot replace legally accountable judgment in matters affecting liberty and migration status. When the state relies on artificial intelligence to justify restrictive measures without providing intelligible reasons and effective avenues of challenge, it does not modernise governance. It displaces constitutional guarantees behind a technical interface, rendering legal protection formally intact but materially inaccessible.