Poor communication had a harmful impact during the humanitarian failure to intervene in the Rwandan genocide, McGill Professor Rachel Kiddell-Monroe told me as she described working in the area with Médecins Sans Frontières. “I had to do one report a month, and that was sent by fax. If they got it, they got it, and if their fax didn’t work, they didn’t get it. This was the pre-digital world. It had all sorts of great characteristics, but it also had massive faults. And those massive faults were that we didn’t know what was going on.”
She said that as the Rwandan genocide was unfolding “there was a lot of stuff going on, that we kind of had inklings of it, but there was nothing that we could really do about it. Unless you investigated, the information was lacking, and your impacts became very localized. And I think a really good example is, I was in Goma [on the border with Rwanda], and while a genocide was being fomented in Rwanda, I didn’t know. Right, I only had word of mouth from people.”
Kiddell-Monroe argues that we can harness artificial intelligence to protect human rights in conflict and disaster zones. “AI can analyze information for us; it can bring together essential data that would take us weeks or months to gather. To do so is great and gives us a starting point, but that in itself is not going to change anything. It’s what we do with that information.”
Because of its ability to predict trends and interpret data, AI holds lifesaving potential. In conflict and disaster zones, using AI can free up resources, personnel, and time. These tools are now being used by humanitarian actors before and after crises unfold. AI can help assess damage after natural disasters, efficiently allocate resources, and monitor conflicts by detecting troop movements and infrastructure destruction.
Human bias in the technology
However, AI can also perpetuate bias and discrimination, enable harmful surveillance, and violate data protection laws. Even if its benefits outweigh its harms, I wanted to learn more about the ways AI can undermine human rights, including in unintended ways. After speaking with Jennifer Addison, I realized that the starting point of this conversation is with the coders.
Jennifer Addison is a project manager for the Montreal-based AI4Good Lab. She runs a coding mentorship program designed to support female and gender-diverse youth as they diversify the tech space. Addison spoke to me about why labs like this exist, and the importance of diversity, equity, and inclusion in the tech world. “I think DEI is important because it can present an opportunity to try to correct some of the inequities that persist today. Certainly, those cannot be corrected without an actual reckoning or acknowledgement of why those inequities exist or why the systems were built and designed intentionally to be inequitable.”
Addison believes that to produce good technology, we need to focus on the teams responsible for the coding. Producing responsible and equitable technology requires diverse perspectives in the AI development stage. She is advocating for a cultural shift in the tech world.
She described being constantly reminded of why spaces like the AI4Good Lab exist and said, “I was talking to one of our trainees recently and she said to me, you know, I am the only girl in my computer science classes, and I find it so difficult. I don’t feel comfortable asking questions. I’m the only one in the space, or the only one that has made it in this space, and now I am carrying the weight of responsibility, representing everyone that looks like me or is like me, and that is an awful feeling.”
Ensuring that the teams developing AI are diverse would acknowledge historical inequities and help limit human bias being embedded in technology. “When we’re talking about tech or AI, let’s not forget that there is a human behind these things that we are developing,” Addison said. “We are constantly being fed information that is also rooted in bias and stereotypes and tropes, et cetera. To think that’s not then being coded into whatever we are working on, or that that’s not touching the development of whatever projects we’re working on, would be wrong. We’d be mistaken to think that.”
AI-related human rights harms are often linked to algorithmic bias. In simple terms, algorithmic bias arises when natural human bias becomes embedded in AI systems. These biases can worsen existing inequalities and lead to discrimination against marginalized groups. Artificial intelligence cannot be neutral, as it reproduces the values and biases of its coders. The human rights implications of deploying biased AI systems can be severe. When trained on flawed data sets, AI can ignore key variables or prioritize certain values. An example would be a system that misidentifies individuals with darker skin tones because of a lack of diversity in its training data.
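To make that mechanism concrete, here is a minimal sketch, not drawn from any system discussed by my interviewees, of how an imbalanced training set can produce unequal error rates. The groups, features, sample sizes, and numbers are invented purely for illustration.

```python
# Illustrative sketch (hypothetical data): when one group is heavily
# under-represented in training data, a model tends to make more errors
# on that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate toy two-feature data whose true decision boundary depends on `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Training set dominated by group A: 950 samples versus only 50 for group B.
X_a, y_a = make_group(950, shift=0.0)
X_b, y_b = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluating on balanced held-out data exposes the accuracy gap between groups.
X_a_test, y_a_test = make_group(1000, shift=0.0)
X_b_test, y_b_test = make_group(1000, shift=1.5)
print("accuracy on well-represented group A:", round(model.score(X_a_test, y_a_test), 3))
print("accuracy on under-represented group B:", round(model.score(X_b_test, y_b_test), 3))
```

Because the model fits the patterns of the majority group, its errors concentrate on the group it barely saw during training, which is the gap Rodrigues describes below.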
To learn more about this, I spoke with Dr. Rowena Rodrigues, who has a background in consultancy, law, ethics, and the impacts of emerging technology. She is co-lead of Innovation and Research at Trilateral Research, a consulting company that provides ethical AI solutions. She talked to me about how failures to address gaps in data may lead to more harm than good.
“An AI system is only as good as the data it is trained on. And I think if there are gaps in this, if this was not done well, and if it is focused, for example, on some parts of the population and not on others, it can have disproportionate or wrong impacts on the wrong groups.”
For example, in healthcare, AI can improve medical diagnostics and treatment but also poses risks if biased or inaccurate. “The use of AI in healthcare can be beneficial, which is also a very good thing,” Rodrigues said, “but an AI system that was trained on a certain cohort of the population, and then was taken and implemented on other individuals, could result in the wrong types of treatments. This affects the right to life. It might be the difference between life and death sometimes.”
Potential for malicious outcomes
AI technologies have also been used to breach data privacy and security regulations. In disaster, migration, or conflict zones, there is great potential for satellite imagery data to be exploited. Malicious actors can access sensitive information from detailed surveillance systems and even profit from its unauthorized sale.
Professor Kiddell-Monroe is worried about the unethical use of satellite imagery. She gave the example of tracking movement. “You can monitor movements of refugees and migrants. For humanitarian use, it’s pure intention, it’s ethical. But if that information gets into the warring factions’ hands, it becomes extremely dangerous.”
AI in the community
Professor Kiddell-Monroe is advocating for AI to be a supportive tool. She talked to me about how AI can work alongside Indigenous knowledge in health crisis management. To her, the answers do not necessarily lie in the data. “Ideally, if I could, I would apply a community first approach to it. The community of Pond Inlet, in Nunavut, has a huge TB outbreak right now. AI can help us understand how TB spreads in the community. But, how about we put first of all the Indigenous knowledge, the community’s knowledge of TB, and its impact, the knowledge of how their community functions, where the elders are, where the youth are, and do a human mapping of the whole thing.”
By working alongside communities, as opposed to replacing their systems and localized knowledge, AI can be used as a supporting actor. “We are all talking a lot about traditional knowledge and ways of doing things. These are often not digitized, nor are they based in the Western framework. What matters for me is how it can support and improve our collaboration with communities. How can it support and improve communities’ capacity to build their resilience in times of severe crisis? I think there’s a massive role for AI in this.”
To ensure that the interests and needs of the communities working with AI are honored, Addison calls for constant reflection and meaningful collaboration between all parties involved. “Sometimes it’s not possible to consult with the impacted community in the way that you want. That could be time constraints, there could just be access, or the vulnerability of the population and just trying to protect their safety or anonymity, things like that. I think having a multidisciplinary team and leveraging the expertise of several individuals can really help with filling in some of those gaps.”
Accountability mechanisms
One of the ways to prioritize human rights promotion is by applying an ethical lens to AI development, Rodrigues says, by “looking at something from more than a compliance point of view and thinking about what are the consequences of what we are developing. AI has been around for a very, very long time, it’s been on the policy agenda in recent times. It’s not something completely new, but we need to make those advances as we go along; you don’t stop. So I think ethics is a continued renewed ability to look at it.”
Another aspect of accountability is ensuring that regulatory frameworks are implemented and continuously updated. While Canada has no formal AI regulatory framework, the EU has recently passed the Artificial Intelligence Act, which seeks to ensure that AI is safe and protects fundamental rights. Dr. Rodrigues says that, without increased checks and balances, AI will not be able to support communities and protect human rights.
“I think AI systems must function in a robust, secure and safe way because of the potential impact that they can have on human populations. They can, if they’re not robust, if they’re not secure, and if they’re not safe, make us more vulnerable.

“It’s not a ‘should we regulate,’ it’s ‘we need to regulate for more responsible AI’. It’s not a one versus the other approach. They need to work together. If you ask industry, they’ll say, ‘the regulators don’t get us because they don’t understand the technology’, but you can only get that relationship when you talk to one another. So, I love the fact that now we have those fora where people sit down, where there’s more technical engagement with the legal sector, and there’s more legal engagement with the technical sector.”
All three of my interviewees agreed that mechanisms must be in place to hold entities accountable for AI-related harms. “We all are responsible for what we put out there,” Rodrigues said. “I think you have to look at it from the AI lifecycle point of view. So, at the design stage, who is accountable? At the use stage, who is accountable? So, I think looking at it from those diverse lenses helps.”
To make sure AI helps protect human rights, it’s crucial to have strong data security, clear regulations, ethical guidelines, and transparency. These steps are key to preventing misuse and safeguarding individual rights. For AI to be effective in this area, the tech industry needs to keep diversifying its workforce, stay mindful of its motives, and work closely with regulators and with those affected by its technology.
“Despite the complete chaos we’re creating, there is something about the essential beingness of humanity,” Kiddell-Monroe said. “We need to pull that out and say, this is a red line, and AI cannot cross that red line.”
Image credit: “Alenoach,” “Image generated by DALL-E 3, symbolizing advanced artificial intelligence.” Creative Commons CC0 1.0 Universal Public Domain Dedication.