Artificial Intelligence and human migrations: to be or not to be?
Migration in any form, the movement of individuals or populations from one geographical area to another, is a perpetual process.
Artificial Intelligence (AI) is one of the most discussed topics globally, so it would be impossible to ignore the nexus between AI and human migration.
Merely raising the possibility of applying AI to the migration process meets fierce resistance from those who claim that AI has no place in it because it poses, per se, a potential threat to human rights.
Many practitioners and scholars allege that the processing of migrants’ data is opaque, that the rights and freedoms of migrants are at significant risk, and that personal data is used without migrants’ consent.
No one can confirm that migrants are aware of where, and by whom, their collected data is used.
Much progress has been made, and supervisory and notified competent authorities do their best to ensure respect for the law in accordance with fundamental rights. However, we will not discuss all these aspects here; we will only offer some insights into the practical difficulties AI may encounter when dealing with migrants.
We will use neither legal definitions nor technical terminology, and we will strive to stay as clear as possible.
So, without further ado, let’s put the question bluntly.
What predictable output are we looking for when we envisage applying AI to human migration?
We know perfectly well that, owing to AI, it is possible to communicate successfully using natural language processing, answer questions based on learned and stored information, process data, and detect and predict patterns.
Knowledge distillation, for instance, has emerged as a promising and effective way to meet fast-evolving demands while preserving prediction accuracy.
We also know that machine learning, a subset of AI, lets us process existing data millions of times faster than humans can. The output depends on many variables, cross-correlations, and patterns.
Patterns and insights derived from data exploration enable us to build a model with which we can predict a potential outcome efficiently and with high accuracy.
Some proponents of AI argue that a model’s accuracy can be improved further with more data, or with techniques such as parameter tuning and cross-validation, as in the sketch below.
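As a minimal sketch of what such tuning looks like in practice, here is a cross-validated parameter search on purely synthetic data (a scikit-learn workflow is assumed; nothing here involves real migration data):

```python
# Minimal sketch: cross-validated parameter tuning on a synthetic
# binary classification task (illustrative only, not migration data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in data: 1,000 cases, 20 features, binary label.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Tune the regularization strength C with 5-fold cross-validation.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
    scoring="accuracy",
)
search.fit(X, y)
print("best C:", search.best_params_["C"])
print("cross-validated accuracy:", round(search.best_score_, 3))
```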
All of this is true; no one disputes the advantages or usefulness of AI, and we might be strongly tempted to let AI make decisions, but things are not as simple as we believe.
In the area of Asylum and Migration, many decisions depend on circumstances, especially on the interpretation of the facts and the application of the law to a concrete set of facts.
In some cases, the final output is a categorical variable, because the task is a classification problem: to grant a right or not.
A “bad” model could overfit or underfit the patterns in the data, so we need to be cautious and slow down; the sketch below shows how overfitting reveals itself.
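To illustrate what “overfitting” means here, consider the following hedged sketch, again on synthetic data: a model that memorizes its training cases scores almost perfectly on them while generalizing noticeably worse to new ones.

```python
# Sketch: detecting overfitting by comparing training accuracy with
# accuracy on held-out data (synthetic, illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained decision tree can memorize the training set:
# near-perfect training accuracy, noticeably worse on unseen cases.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", tree.score(X_train, y_train))  # typically ~1.0
print("test accuracy: ", tree.score(X_test, y_test))    # typically lower
```

A large gap between the two scores is the classic symptom of overfitting; comparable but low scores would suggest underfitting.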
Fundamental human rights, albeit general, are exercised individually.
What does this mean? In layman’s terms, it means that we must take the legal norm and the facts and apply them to the specific case. We cannot simply reason by analogy, because migration stories are as different as human fingerprints.
To picture the situation accurately, I suggest simulating one case among thousands: an asylum seeker applies for international protection, in other words, claims refugee status.
What would happen?
The competent authority will examine the case thoroughly, consider the facts and the legislation in force, and decide either to grant the right or to turn down the application.
Up to this point, it is easy to select the facts, match them against the legal provisions, and roll out the algorithm.
If the decision-making process were handled entirely by AI, the facts and legal provisions would be matched, the conditions found met or not, and the machine would print one of two outputs: “YOUR RIGHT IS GRANTED” or “YOUR APPLICATION IS TURNED DOWN.” A caricature of such a pipeline is sketched below.
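Here is a deliberately crude caricature of such a fully automated decision (every criterion and field name below is invented for illustration; no real eligibility rule is implied):

```python
# Caricature of fully automated decision-making: hard-coded rules
# matched against structured "facts" (all criteria are invented).
def decide(facts: dict) -> str:
    conditions_met = (
        facts.get("persecution_evidence", False)
        and facts.get("identity_verified", False)
        and not facts.get("safe_third_country", False)
    )
    if conditions_met:
        return "YOUR RIGHT IS GRANTED"
    return "YOUR APPLICATION IS TURNED DOWN"

print(decide({
    "persecution_evidence": True,
    "identity_verified": True,
    "safe_third_country": False,
}))  # -> YOUR RIGHT IS GRANTED
```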
However, such a mechanical verdict would be an “imitation game”: it would never reflect reality or amount to a legitimate, ethically correct, and just decision.
A simple example will make things even more apparent.
Suppose an asylum seeker provides proof of persecution, say, a medical certificate stating that they were hospitalized after an assault and injury.
How would a machine assess whether the hospital stamp is authentic and whether the doctor who issued the document exists in real life?
Needless to say, only a human investigation of the elements of proof can determine whether the evidence was forged.
Take another example: an irregular migrant wishes to return to their country of origin with state assistance. Voluntary return programs of this kind have eligibility criteria, and to become a beneficiary, one must satisfy their conditions.
Let’s imagine that the migrant’s length of stay in the host country falls short of the required minimum. If the decision-making process were handled entirely by an AI system, the migrant could not benefit from this assistance.
In real life, things could be completely different: the competent authority, taking into account the extreme vulnerability of the migrant’s personal situation, could organize the voluntary return on humanitarian grounds.
Here too, we see that human interpretation and assessment of the facts prevail over algorithmic problem-solving.
For those who may not know it, we are still at the stage of artificial narrow (weak) intelligence, which means that AI can be applied to a particular task, resolving a narrowly defined issue, but it does not think.
We know that AI, while mimicking human behaviors such as “problem-solving” and “learning from experience,” can be very impressive; however, it cannot make a reasoned decision in the absence of human thinking, and its output may be nothing more than the result of a sequence of actions.
We acknowledge that AI can learn facts and behaviors without significant human oversight, but it cannot interpret the facts, weigh them against the law, assess the situation, and grant rights.
Should we then definitively exclude AI from human migration?
Of course not!
We know perfectly that AI can help improve and automate decision-making.
However, AI should not automate the whole decision-making process, because AI “decides” with “human-like” perception and cognition but without the social and emotional behavior essential to humans. The key words here are “human-like.”
Ask yourself the question:
Do we, human beings, deserve to have our personal case handled, and decided for us, by something merely human-like?
Once again, no. However, we need to understand what can be done so that AI benefits migrants, public service providers, and society as a whole, with full respect for fundamental rights, freedoms, and existing legal rules.
For this reason, the human-centric, responsible, risk-based European approach to AI regulation does not prohibit the use of AI systems in the Migration and Asylum area but classifies it as HIGH RISK.
What does it mean concretely?
It means that in this specifically and explicitly named sector, significant risks can be expected to occur when AI systems are used.
In other words, not every use of AI in the Migration and Asylum area entails significant risks.
However, according to the EU regulations, any use of AI systems in Migration and Asylum must, for all parties concerned, comply with strict and mandatory obligations and requirements concerning data, documentation, traceability, provision of information and transparency, human oversight, trustworthiness, conformity, accuracy, etc.
Would these legal provisions rule out any misuse of AI systems or other violations?
For some, “reasonably foreseeable misuse” remains possible; for others, provisions such as “ensuring the potential abuse of such technologies is sufficiently mitigated” will suffice.
To be followed up as further developments unfold…
Disclaimer: The views and opinions expressed in this presentation are those of the authors and do not necessarily reflect the official policy or position of the AF4SD and its partners.
By Andranik VAN
March 26, 2022