Poor communication had a harmful impact during the humanitarian failure to intervene in the Rwandan genocide, McGill Professor Rachel Kiddell-Monroe told me as she described working in the region with Médecins Sans Frontières. “I had to do one report a month, and that was sent by fax. If they got it, they got it, and if their fax didn’t work, they didn’t get it. This was the pre-digital world. It had all sorts of great characteristics, but it also had massive faults. And those massive faults were that we didn’t know what was going on.”
She said that as the Rwandan genocide was unfolding “there was a lot of stuff going on, that we kind of had inklings of it, but there was nothing that we could really do about it. Unless you investigated, the information was lacking, and your impacts became very localized. And I think a really good example is, I was in Goma [on the border with Rwanda], and while a genocide was being fomented in Rwanda, I didn’t know. Right, I only had word of mouth from people.”
Kiddell-Monroe argues that we can harness artificial intelligence to protect human rights in conflict and disaster zones. “AI can analyze information for us; it can bring together essential data that would take us weeks or months to gather. To do so is great and gives us a starting point, but that in itself is not going to change anything. It’s what we do with that information.”
Because of its ability to predict trends and interpret data, AI holds lifesaving potential. In conflict and disaster zones, using AI can free up resources, personnel, and time. These tools are now being used by humanitarian actors before and after crises unfold. AI can help assess damage after natural disasters, efficiently allocate resources, and monitor conflicts by detecting troop movements and infrastructure destruction.
Human bias in the technology
However, AI can also perpetuate bias and discrimination, enhance harmful surveillance, and violate data protection laws. Even if its benefits outweigh its harms, I wanted to learn more about the ways AI can harm human rights, including unintentionally. After speaking with Jennifer Addison, I realized that this conversation starts with the coders.
Jennifer Addison is a project manager for the Montreal-based AI4Good Lab. She runs a coding mentorship program designed to support female and gender-diverse youth as they diversify the tech space. Addison spoke to me about why labs like this exist, and the importance of diversity, equity, and inclusion in the tech world. “I think DEI is important because it can present an opportunity to try to correct some of the inequities that persist today. Certainly, those cannot be corrected without an actual reckoning or acknowledgement of why those inequities exist or why the systems were built and designed intentionally to be inequitable.”
Addison believes that to produce good technology, we need to focus on the teams responsible for the coding. Producing responsible and equitable technology requires diverse perspectives in the AI development stage. She is advocating for a cultural shift in the tech world.
She described being constantly reminded of why spaces like the AI4Good Lab exist and said, “I was talking to one of our trainees recently and she said to me, you know, I am the only girl in my computer science classes, and I find it so difficult. I don’t feel comfortable asking questions. I’m the only one in the space, or the only one that has made it in this space, and now I am carrying the weight of responsibility, representing everyone that looks like me or is like me, and that is an awful feeling.”
Ensuring that the teams developing AI are diverse would acknowledge historical inequities and help limit human bias being embedded in technology. “When we’re talking about tech or AI, let’s not forget that there is a human behind these things that we are developing,” Addison said. “We are constantly being fed information that is also rooted in bias and stereotypes and tropes, et cetera. To think that’s not then being coded into whatever we are working on, or that that’s not touching the development of whatever projects we’re working on, would be wrong. We’d be mistaken to think that.”
AI-related human rights harms are often linked to algorithmic bias. In simple terms, algorithmic bias arises when natural human bias becomes embedded in AI systems. These biases can worsen existing inequalities and lead to discrimination against marginalized groups. Artificial intelligence cannot be neutral, as it reproduces the values and biases of its coders. The human rights implications of deploying biased AI systems can be severe: when trained on flawed data sets, AI can ignore key variables or prioritize certain values. One example is a system that misidentifies individuals with darker skin tones because its training data lacked diversity.
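To make that mechanism concrete, here is a minimal sketch using purely synthetic data and hypothetical group labels; it is not drawn from any system mentioned in this article. It shows how a classifier trained on a data set that underrepresents one group can end up with a markedly higher error rate for that group.

```python
# Minimal synthetic sketch of algorithmic bias from unrepresentative training
# data. All group names and numbers are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

def make_group(n, shift):
    """Simulate one demographic group whose features relate to the label
    slightly differently (controlled by `shift`)."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
X_a, y_a = make_group(5000, shift=0.0)
X_b, y_b = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on fresh, equally sized samples from each group: the
# underrepresented group typically shows a much higher error rate.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name}: error rate = {1.0 - model.score(X_test, y_test):.1%}")
```

The specific numbers do not matter; the pattern does. Whichever group is missing from the training data bears the cost once the system is deployed.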
To learn more about this, I spoke with Dr. Rowena Rodrigues, who has a background in law, ethics, and the impacts of emerging technology. She is Head of Innovation and Research at Trilateral Research, a consulting company that provides ethical AI solutions to address problems such as child exploitation, community safeguarding, and air quality monitoring. She also works on the RAI-UK-funded RAISE project, which develops guidance for SMEs on using generative AI responsibly. She talked to me about how failures to address gaps in data can lead to more harm than good.
“An AI system is only as good as the data it is trained on. If there are gaps in the data, if it is focused, for example, on some parts of the population and not on others, it can have disproportionate or adverse impacts on different groups.”
For example, in healthcare, AI can improve medical diagnostics and treatment but also poses extreme risks if biased or inaccurate. “The use of AI in healthcare can be beneficial, which is a very good thing. But, an AI system trained on a certain cohort of the population, and then taken and implemented on other individuals, could result in the wrong types of treatments and adverse impacts affecting the right to life.”
Potential for malicious outcomes
AI technologies have also been used to breach data privacy and security regulations. In disaster, migration, and conflict settings, there is great potential for satellite imagery data to be exploited. Malicious actors can access sensitive information from detailed surveillance systems and even profit from its unauthorized sale.
Professor Kiddell-Monroe is worried about the unethical use of satellite imagery. She gave the example of tracking movement. “You can monitor movements of refugees and migrants. For humanitarian use, it’s pure intention, it’s ethical. But if that information gets into the warring factions’ hands, it becomes extremely dangerous.”
AI in the community
Professor Kiddell-Monroe is advocating for AI to be a supportive tool. She talked to me about how AI can work alongside Indigenous knowledge in health crisis management. To her, the answers do not necessarily lie in the data. “Ideally, if I could, I would apply a community first approach to it. The community of Pond Inlet, in Nunavut, has a huge TB outbreak right now. AI can help us understand how TB spreads in the community. But, how about we put first of all the Indigenous knowledge, the community’s knowledge of TB, and its impact, the knowledge of how their community functions, where the elders are, where the youth are, and do a human mapping of the whole thing.”
By working alongside communities, as opposed to replacing their systems and localized knowledge, AI can be used as a supporting actor. “We are all talking a lot about traditional knowledge and ways of doing things. These are often not digitized, nor are they based in the Western framework. What matters for me is how it can support and improve our collaboration with communities. How can it support and improve communities’ capacity to build their resilience in times of severe crisis? I think there’s a massive role for AI in this.”
To ensure that the interests and needs of the communities working with AI are honored, Addison calls for constant reflection and meaningful collaboration between all parties involved. “Sometimes it’s not possible to consult with the impacted community in the way that you want. That could be time constraints, there could just be access, or the vulnerability of the population and just trying to protect their safety or anonymity, things like that. I think having a multidisciplinary team and leveraging the expertise of several individuals can really help with filling in some of those gaps.”
Accountability mechanisms
Industry must play a role in ensuring that technology is used for social good. One of the ways to prioritize social welfare and human rights is by applying an ethical lens to AI development. To Dr. Rodrigues, applying an ethical lens is “looking at something from more than a compliance point of view and thinking about what we are developing. Ethics gives us the continued and renewed ability to look at new and changing impacts and consequences to develop solutions responsibly.”
Another key piece of the puzzle is ensuring that regulatory frameworks are implemented and continuously updated. While Canada has no formal AI regulatory framework, the EU recently passed the Artificial Intelligence Act, which seeks to ensure that AI systems are safe and respect fundamental rights. Dr. Rodrigues explained that without the right checks and balances, AI will not be able to support communities and protect human rights.
“I think AI systems must function in a robust, secure and safe way because of the potential impact that they can have on human populations. If they are not robust, secure, safe, they make us more vulnerable. It’s not a ‘should we regulate,’ it’s a ‘we need to regulate for more responsible AI’. It is not a one versus the other approach – this does not work. I think different sectors and actors need to work together. Because if you ask industry, they’ll say, ‘the regulators don’t get us because they don’t understand the technology’, but we can only have that good relationship when we talk to one another. So, it’s good that now we have those fora for discussion, and there’s more technical engagement with the policy/legal sector, and vice versa.”
All three of my interviewees agreed that mechanisms must be in place to hold entities accountable for AI-related harms. Dr. Rodrigues said, “We all are responsible for what we put out there. I think you have to look at it from the AI lifecycle point of view. So, at the design stage, who is accountable? At the use stage, who is accountable? Looking at it from those diverse lenses helps.”
To make sure AI helps protect human rights, it’s crucial to have strong data security, clear regulations, ethical guidelines, and transparency. These steps are key to preventing misuse and safeguarding individual rights. For AI to be effective in this area, the tech industry needs to keep diversifying its workforce, stay mindful of its motives, and work closely with regulators and those affected by their technology.
“Despite the complete chaos we’re creating, there is something about the essential beingness of humanity,” Kiddell-Monroe said. “We need to pull that out and say, this is a red line, and AI cannot cross that red line.”
Image by “Alenoach”: “Image generated by DALL-E 3, symbolizing advanced artificial intelligence.” Licensed under the Creative Commons CC0 1.0 Universal Public Domain Dedication.