AI assistants like ChatGPT have an emerging role in human rights advocacy: these tools have the potential to bridge knowledge gaps, empower communities, and raise awareness. But what do they really know about statelessness, what are their current and potential future applications, and how can collaboration between AI developers and human rights organisations drive positive change and safeguard against government uses of AI that risk harming the lives of communities and individuals?

What AI assistants know about statelessness (and what they don’t)
AI assistants can define statelessness, outline its causes and consequences, and cite global statistics. They have a baseline awareness of key legal instruments like the 1954 Statelessness Convention and can describe their significance in establishing international frameworks for the protection of stateless people. However, these systems lack a nuanced understanding of evolving policies and on-the-ground realities, leaving them with some significant limitations. There are now several virtual assistants available, but the examples below use ChatGPT to test some of the possibilities and constraints of AI in the realm of statelessness advocacy.
Data cutoffs: AI assistants only know what they have been explicitly trained on. They cannot access new developments, legal or policy changes, or case studies that emerge after their training data cutoff date. For example, ChatGPT has been trained on information available up to September 2021, so if asked about statelessness in the Netherlands, it will omit information and associated concerns about the new bills adopted in June 2023.
Potential inaccuracies or omissions: For civil society organisations, one of the most significant barriers to adequately addressing statelessness and designing policy solutions is the lack of accurate data about stateless populations in each country. This is partly due to the absence of mechanisms to identify, determine, and adequately record statelessness, and to limited awareness among authorities, including screening officials, decision-makers, frontline refugee responders, and civil registries. Until that gap is filled, any available data on statelessness will itself be biased, and AI systems may not adequately convey this. AI assistants also risk reproducing the errors in any biased, outdated, or incorrect information included in their training data. Without balanced training data, for example, an AI may downplay the impact of phenomena like gender discrimination in nationality law.

Linguistic and regional biases: Training datasets tend to overrepresent English-language sources focused on North America and Europe. This results in knowledge gaps on how statelessness plays out in other parts of the world.
Risk of misinformation and stereotyping: Without oversight, AI systems risk perpetuating harmful biases or stereotypes and may gloss over nuances when discussing how statelessness manifests in certain contexts. They may rely on flawed training data that overlooks minority groups affected by statelessness (such as Roma people or indigenous populations) and may perpetuate harmful stereotypes associated with such groups (for example, an AI could reinforce the perception that Roma communities have no attachment to place, when in fact many Roma people settle in communities long-term). Moreover, statelessness is sometimes created or exacerbated by discriminatory state policies or practices against particular groups, and an AI might not capture these nuances, especially when they are linked to controversial issues in international politics.
Use of problematic language: AI assistants lack contextual awareness and cannot comprehend the implications of their words or how meanings vary across contexts and communities. They are also sensitive to user input, which means they may replicate problematic language present in either a user’s query or their training data.

Response variability: The way a query is phrased can significantly influence an AI’s output, which can range from nuanced explanations to outright misinformation. This can inadvertently reinforce knowledge gaps, particularly among those who may not be aware of the complexities of the issue.

Lack of human perspectives: Nobody understands the impact of living without a nationality better than stateless people themselves, but their voices may be underrepresented in or missing from AI training data, resulting in responses that lack empathy and emotional depth and fail to account for the broad spectrum of lived realities of statelessness.

AI doesn’t possess the capability to learn or adapt beyond its training data and cannot independently verify information. Crucially, AI doesn’t know what it doesn’t know, so human oversight and expertise are essential when using AI to support advocacy and awareness-raising initiatives.
What can civil society do to address AI’s blind spots on statelessness?
Both oversight and collaboration can help to address the gaps and potential biases in AI’s knowledge of statelessness. As civil society, we have a role to play in safeguarding against misinformation and providing information that captures the diverse contexts and nuances of the issue.
Platforms like AI for Good and AI Commons connect NGOs and grassroots advocates with AI companies, researchers, and developers that share their values and interests. This is an opportunity to work side by side with developers to flag blind spots and biases in existing systems and to help AI developers understand the impact of knowledge gaps and inaccuracies. Impactful partnerships can also be forged between civil society advocates and academic AI researchers. Initiatives like the Alan Turing Institute's AI for Human Rights project exemplify this approach, bringing together civil society partners like Amnesty International with academic teams to co-design AI applications focused on upholding justice and human rights.
Practically speaking, we can maximise the impact of our resources by publishing them under open licences and allowing their integration into AI training datasets. Translating materials into multiple languages and defining technical terms also boosts accessibility, enhances global comprehension of statelessness, enables more effective AI applications, and promotes inclusive solutions. We can help bridge knowledge gaps by identifying and sharing impactful datasets, such as the ENS Statelessness Case Law Database and the StatelessnessINDEX. Equally important, of course, are resources that convey the human dimension of statelessness and offer emotional insights that data alone cannot capture. By fostering collaborative, equal-footed partnerships with stateless communities and their representatives, we can collect and share first-hand accounts through storytelling media such as interviews, documentaries, and testimonies. These resources can enrich and humanise narratives surrounding statelessness (AI-generated or otherwise), providing a deeper understanding of the personal experiences and challenges faced by stateless individuals. They also serve as valuable reference material for AI developers and researchers, enhancing AI's capacity to provide empathetic and accurate responses to queries about statelessness.
Harnessing the power of AI for statelessness advocacy
Despite AI's limitations, these tools offer valuable capabilities for many organisations whose work includes statelessness advocacy. Their potential to help with tasks like content creation, compiling information, translating resources, and editing and proofreading makes them a particularly useful asset for small or under-resourced groups and initiatives looking to maximise their impact. Of course, it's crucial to combine AI-driven content with human expertise to ensure accuracy and relevance.
AI assistants have already proven their worth in statelessness advocacy initiatives. One exciting example is Statefree's ‘Anny Faice’, a social AI activist that uses AI technology to shed light on the experiences of stateless individuals and the crucial work being done in this field. Another promising application is the use of speech recognition and voice interfaces like Google Cloud Text-to-Speech to make resources more accessible to communities that may face barriers related to literacy or language provision, helping to enhance communication with groups that may otherwise be marginalised or left behind in advocacy efforts.
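As a rough illustration, the sketch below shows what a minimal voice pipeline could look like using Google's Python client library for Cloud Text-to-Speech. The sample text, voice settings, and output filename are placeholders for illustration only, not part of any existing resource.

```python
# Minimal sketch: converting a short advocacy text into audio with the
# google-cloud-texttospeech library (pip install google-cloud-texttospeech).
# Assumes Google Cloud credentials are configured in the environment.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

# Placeholder text; a real pipeline would pull from an existing resource
# and split longer documents into chunks.
synthesis_input = texttospeech.SynthesisInput(
    text="Statelessness means that no country recognises you as a national."
)

# Swapping language_code is one way to reach different language communities.
voice = texttospeech.VoiceSelectionParams(
    language_code="en-GB",
    ssml_gender=texttospeech.SsmlVoiceGender.NEUTRAL,
)
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

with open("statelessness_explainer.mp3", "wb") as out:
    out.write(response.audio_content)
```

The same few lines, run over a translated version of a resource, would produce audio in that language, which is what makes this approach attractive for reaching communities facing literacy or language barriers.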
Looking ahead, there is remarkable potential to enhance existing resources by integrating them with AI capabilities. Digital tools like our StatelessnessINDEX, for example, could be equipped with a chatbot interface that allows users to inquire about statelessness, access data insights, and obtain information on statistics and case studies. Chatbot functionality powered by the likes of ChatGPT could also be applied to databases such as our Case Law Database, serving as a virtual guide that helps users navigate cases while explaining complex legal and policy concepts.
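To make this concrete, here is a minimal sketch of how such a chatbot could be grounded in database content rather than in the model's training data, using the OpenAI Python SDK. The `search_case_law` helper is a hypothetical stand-in for a real database query, and the model name and prompts are illustrative assumptions, not a description of any existing interface.

```python
# Minimal sketch: a retrieval-grounded Q&A helper over a case-law database,
# using the OpenAI Python SDK (pip install openai). OPENAI_API_KEY is read
# from the environment. `search_case_law` is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI()

def search_case_law(query: str) -> list[str]:
    """Hypothetical stand-in for a keyword or vector search over case summaries."""
    return [
        "ECtHR, Hoti v. Croatia (2018): the prolonged failure to provide a "
        "stateless applicant with a means to regularise his residence "
        "violated Article 8 ECHR."
    ]

def answer(question: str) -> str:
    # Ground the model in retrieved summaries so its answers point to real
    # cases instead of relying on possibly outdated training data.
    context = "\n".join(search_case_law(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a guide to a statelessness case-law database. "
                    "Answer only from the provided case summaries, explain "
                    "legal concepts in plain language, and say so if the "
                    "summaries do not cover the question."
                ),
            },
            {
                "role": "user",
                "content": f"Case summaries:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content

print(answer("Has the European Court of Human Rights ruled on residence rights for stateless people?"))
```

Grounding the model in retrieved summaries, rather than letting it answer from memory, reflects the kind of human-curated safeguard discussed above: the database, not the model, remains the source of truth.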
Our shared responsibility: navigating the pitfalls of AI
The emergence of AI technologies carries profound implications, not only for statelessness advocacy but for society as a whole, as explored in this Chatham House research paper. Already, AI is being deployed by governments to assist in making decisions that hold significant consequences for individual citizens and entire societies, including in healthcare, criminal justice, and asylum processes. A future where governments use AI to help determine the credibility of someone’s application for statelessness status is almost certainly on the horizon. As we consider this future, it’s clear that AI’s transformative power is a double-edged sword: such a development comes replete with risks, from a loss of human oversight and contextual understanding to the replication of biases and discrimination present in AI training data.
The stakes are high, and it is crucial to acknowledge that AI-driven solutions have the power to shape lives for better or worse. Whether we embrace this transformation or not, the reality is that AI is advancing rapidly and ignoring these developments is not an option: we must actively engage in shaping the responsible use of AI in statelessness advocacy and beyond. As we navigate this evolving landscape, our responsibility as a sector is to understand the limitations and risks of AI, ensure human oversight, advocate for robust regulations, and, at all times, uphold the rights and dignity of stateless individuals. With thoughtful human partnership, AI’s potential for social good can be realised. But fulfilling that promise begins with centring ethics, amplifying voices from the margins, and embracing the expertise of those with lived experience.
The transformative potential of AI in this field is wide-reaching, and this editorial only scratches the surface of the multifaceted challenges and ethical considerations that demand our attention. If you or your organisation are actively engaged in addressing these issues or have valuable insights to contribute, please get in touch!