Can AI in the humanitarian sector save lives?

With AI's unprecedented potential in the humanitarian sector, how can we ensure its ethical implementation so that its life-saving capabilities are realized?

Artificial Intelligence (AI) is revolutionizing opportunities for assistance in the humanitarian sphere. The development of AI technology has led to knowledge-based and machine learning systems that can interpret and produce life-saving information across humanitarian responses. As the need for international aid increases, the humanitarian sector is turning to AI to support its efforts. Yet the most common sentiment surrounding the use of AI in this field is fear. Experts are weighing the long-term effects of deploying AI assistance tools in the field and deciding whether to embrace them. AI has the potential to completely transform the nature of humanitarian assistance, and its life-saving potential must be embraced, albeit with caution.

AI is supporting a paradigm shift in the humanitarian world. Traditionally, humanitarian assistance has been reactive in nature, with actors addressing needs and providing aid in the aftermath of a crisis. AI has allowed them to embrace anticipatory approaches instead. In this regard, “statistical models can be used to calculate and forecast impending natural disasters, displacement and refugee movements, famines, and global health emergencies.” Using AI frees up resources and personnel, saves valuable time in a conflict, and enables actors to work closely with affected communities through participatory design, better informing humanitarian action before a conflict or crisis unfolds. We are also seeing groundbreaking innovations in the traditional “response” phase of humanitarian action. “xView2” is a visual computing project that has already helped with disaster logistics and on-the-ground rescue missions during the recent earthquakes in Turkey. Developed in 2019 by the Pentagon’s Defense Innovation Unit and Carnegie Mellon University’s Software Engineering Institute, xView2 uses machine-learning algorithms in conjunction with satellite imagery to identify building and infrastructure damage in a disaster area and categorize its severity much faster than is possible with conventional methods.
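
For readers curious about the mechanics, the sketch below illustrates, in miniature, what a “bi-temporal” damage classifier of this kind does: encode a pre-disaster and a post-disaster satellite patch, compare them, and assign a damage grade. It is a minimal illustration assuming the four-level damage scale of xBD (the dataset behind the xView2 challenge), not xView2’s actual code.

```python
# Illustrative sketch only -- NOT xView2's actual code.
# Shows the general shape of a before/after ("bi-temporal") damage
# classifier like those trained on the xBD dataset.
import torch
import torch.nn as nn

# xBD labels building damage on a four-level scale.
DAMAGE_CLASSES = ["no-damage", "minor-damage", "major-damage", "destroyed"]

class DamageClassifier(nn.Module):
    """Toy CNN that compares pre- and post-disaster image patches."""
    def __init__(self, num_classes: int = len(DAMAGE_CLASSES)):
        super().__init__()
        # Shared encoder applied to both the "before" and "after" patch.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Classify from the concatenated before/after features.
        self.head = nn.Linear(32 * 2, num_classes)

    def forward(self, pre: torch.Tensor, post: torch.Tensor) -> torch.Tensor:
        features = torch.cat([self.encoder(pre), self.encoder(post)], dim=1)
        return self.head(features)

# Example: score one 64x64 RGB patch pair (random stand-in data).
model = DamageClassifier()
pre = torch.rand(1, 3, 64, 64)   # pre-disaster satellite patch
post = torch.rand(1, 3, 64, 64)  # post-disaster satellite patch
probabilities = model(pre, post).softmax(dim=1)
print(DAMAGE_CLASSES[int(probabilities.argmax())])
```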

The wreckage of a collapsed building, Diyarbakır, Turkey. 6 February 2023. “2023 Earthquake Damage Turkey” by VOA is licensed in the Public Domain.

In Turkey, xView2 has been used by at least two ground teams of search and rescue personnel from the UN’s International Search and Rescue Advisory Group in Adiyaman. Ritwik Gupta, the principal AI scientist at the Defense Innovation Unit and a researcher at Berkeley, explains that residents of Adiyaman were able to “find areas that were damaged that they were unaware of.” International actors such as the World Bank, the International Federation of the Red Cross, and the United Nations World Food Programme have all used the platform in response to the earthquake. Projects like xView2 have allowed humanitarian actors to shift away from traditional disaster assessment systems by assembling data and creating a shared map of the affected area in mere minutes, saving time and lives.

Data-driven AI systems can also build on predictive analytics techniques, which seek to identify patterns and relationships in data, to predict developments in the field. For example, Project Jetson, an initiative of the United Nations High Commissioner for Refugees (UNHCR), uses predictive analytics to forecast the forced displacement of people. The project trains its machine learning algorithm on various data sources, including climate data (such as river levels and rainfall patterns), market prices, and remittance data, and has been used to predict escalations of violence in Somalia.
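
To make the approach concrete, the sketch below trains a simple forecasting model on synthetic stand-ins for the signals mentioned above: river levels, rainfall, market prices, and remittances. It is a minimal illustration of predictive analytics on tabular data, not Project Jetson’s actual model.

```python
# Illustrative sketch only -- NOT Project Jetson's actual model.
# Shows how tabular signals like those the article lists can feed
# a displacement forecast.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per region-month.
n = 500
X = np.column_stack([
    rng.normal(2.0, 0.5, n),   # river level (m)
    rng.normal(60, 20, n),     # rainfall (mm)
    rng.normal(100, 15, n),    # staple food price index
    rng.normal(50, 10, n),     # remittance volume index
])
# Invented relationship for the demo: displacement rises with food
# prices and falls with rainfall, plus noise.
y = 500 + 8 * X[:, 2] - 3 * X[:, 1] + rng.normal(0, 40, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("R^2 on held-out months:", round(model.score(X_test, y_test), 2))
print("Forecast for one region:", int(model.predict(X_test[:1])[0]), "people")
```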

Humanitarian actors are in an unprecedented, privileged position: they can recognize and embrace the power and influence of AI in their field. However, with progress comes risk. As AI continues to develop, humanitarian organizations must accept that without considering and mitigating harm, AI cannot be used effectively, safely, or ethically within the humanitarian sphere. Karin Maasel, executive director of Data Friendly Space, a non-profit committed to improving humanitarian response through better data, particularly in Natural Language Processing (NLP), told The New Humanitarian in an interview: “Everybody wants to save the world, but nobody wants to help mom with the dishes.” Those “dirty dishes” are the pre-existing problems and legacies of the Western humanitarian system. The top-down nature of the majority of AI projects revives long-standing criticisms of development and humanitarianism as preserving Eurocentric systems of knowledge and reinforcing colonial legacies.

One of the main ways these issues are perpetuated is through algorithmic bias. Bias in AI systems may exacerbate structural and historical inequalities and perpetuate direct and indirect forms of discrimination, notably on the grounds of gender and race. The consequences of deploying biased AI systems can be significant in the humanitarian context. For example, studies of facial recognition technology have found that algorithmic biases lead to the misidentification of individuals with darker skin tones: “commercially available facial recognition algorithms were less accurate in recognizing women with darker skin types due in part to a lack of diversity in training data sets.” If identification by those means is a precondition for accessing humanitarian aid, misidentification may lead to individuals being denied assistance, furthering harm to already vulnerable people.
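
The disparity becomes visible the moment accuracy is reported per demographic group rather than overall. The sketch below performs that kind of disaggregated audit on synthetic data; real audits, such as the Gender Shades study, use labeled benchmark photographs.

```python
# Illustrative sketch only: a minimal disaggregated-accuracy audit of
# the kind used to expose the disparity described above. All data here
# is synthetic and the error rates are invented for demonstration.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic match/no-match ground truth and predictions for two groups.
groups = np.array(["lighter"] * 500 + ["darker"] * 500)
truth = rng.integers(0, 2, size=1000)
predictions = truth.copy()
# Simulate a model that errs more often on the under-represented group.
error_rate = np.where(groups == "darker", 0.20, 0.05)
flip = rng.random(1000) < error_rate
predictions[flip] = 1 - predictions[flip]

# The audit itself: accuracy disaggregated by group, not just overall.
for group in ("lighter", "darker"):
    mask = groups == group
    accuracy = (predictions[mask] == truth[mask]).mean()
    print(f"{group}: accuracy = {accuracy:.2%}")
```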


Annual Conference by the World Economic Forum in Davos, January 23, 2019. “Compassion through Computation: Fighting Algorithmic Bias” by Jakob Polacsek is licensed under CC BY-NC-SA 2.0 DEED.

Furthermore, large language models (LLMs), like OpenAI’s ChatGPT, have been found to be more likely to “sentence defendants to death” when prompted with African American dialect features. A study posted to arXiv, Cornell University’s preprint server, asked LLMs to make hypothetical decisions about people based on how they speak, inputting matched prompts in African American English and Standard American English. Among the results, GPT-4 was more likely to suggest that speakers of African American English be assigned less prestigious jobs, be convicted of crimes, and even be “sentenced … to death” when triggered by African American dialect features alone. The pre-print study reinforced current research findings on how algorithmic bias perpetuates “covert racism” in deep learning algorithms. The study also found that coders could teach language models to “superficially conceal the racism they maintain on a deeper level.”
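
The method behind such findings, known as matched guise probing, is simple in structure: present a model with paired texts that differ only in dialect and compare its judgments. The sketch below shows that structure with toy prompts and a stubbed-out model call; it illustrates the technique only and is not the study’s code.

```python
# Illustrative sketch of matched guise probing -- NOT the study's code.
from collections import Counter

# Matched pairs: roughly the same content, one with African American
# English (AAE) features, one in Standard American English (SAE).
# Toy examples, not prompts from the study.
PAIRS = [
    ("I be so happy when I wake up", "I am always happy when I wake up"),
    ("He finna go to work", "He is about to go to work"),
]
PROMPT = "Someone says: '{text}'. In one word, what job might they hold?"

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call (e.g. to GPT-4).
    Hardcoded here only so the sketch runs end to end; it mimics the
    biased pattern the study reported, purely for demonstration."""
    aae_markers = ("I be", "finna")
    if any(marker in prompt for marker in aae_markers):
        return "laborer"
    return "professor"

def probe(pairs: list[tuple[str, str]]) -> Counter:
    """Tally model judgments separately for each dialect guise."""
    tally: Counter = Counter()
    for aae_text, sae_text in pairs:
        tally[("AAE", query_model(PROMPT.format(text=aae_text)))] += 1
        tally[("SAE", query_model(PROMPT.format(text=sae_text)))] += 1
    return tally

# A real probe would compare the prestige of jobs across the two guises.
print(probe(PAIRS))
```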

The use of machine learning systems in disaster and conflict zones can negatively affect the vulnerable communities they are intended to help. These systems rely on extensive data, often collected by humanitarian groups and other sources, to learn and predict future behavior. Terms like techno-colonialism have emerged to explain how technology, surveillance, migration, and humanitarianism have been, and continue to be, intertwined through their colonial histories. To ensure positive outcomes, AI’s historical ties to imperial control must be acknowledged and addressed. Relying solely on past data may also overlook changes in behavior and the environment, leading to incomplete or inaccurate predictions. Without considering these factors, AI risks perpetuating historical biases and inequalities.

In this era of unprecedented technological advancement, the humanitarian sector stands at a crossroads, where the integration of AI offers boundless opportunities to revolutionize assistance efforts worldwide. Amid the excitement and promise, however, harnessing AI’s full potential in the humanitarian sphere requires that humanitarian actors prioritize decolonial frameworks. This means actively addressing algorithmic biases, ensuring diverse and inclusive data sets, and centering the voices and needs of marginalized communities in the development and deployment of AI technologies. Doing so can mitigate harm, enhance accountability, and uphold the principles of humanity, neutrality, impartiality, and independence. By embracing new technologies while staying true to humanitarian principles, we can build a more resilient and responsive humanitarian sector, one that leaves no one behind and uplifts the most vulnerable among us.

Edited by Clare Rowbotham

Featured Image: AI-generated holographic projections of data streams, code, and 3D brain-like structures, showcasing the complexity and sophistication of AI systems, 6 November 2023. “DALL-E 3 – advanced artificial intelligence” by Alenoach is licensed in the Public Domain