AI Border Surveillance Tech Profits Soar, Human Rights Across the Board Sink - Petra Molnar Pt. 1/2


U.S. President-elect Donald Trump’s promise to deport millions of people draws attention to longstanding practices of tracking and intercepting migrants using artificial intelligence and high-risk surveillance technologies. Petra Molnar, human rights lawyer and anthropologist, underscores how border surveillance tech firms drive and profit from the migration criminalization agendas of the U.S., Canada, and the European Union. Yet the application of these insidious technologies, largely developed by Israeli tech firms and billionaire Peter Thiel’s Palantir, is not limited to borderlands and has dystopian human rights impacts on everyday life.


Talia Baroncelli
Hi, you’re watching theAnalysis.news, and I’m Talia Baroncelli. I’ll shortly be joined by lawyer and anthropologist Petra Molnar to speak about her work on artificial intelligence and border surveillance. If you like the work that we do, you can support us by going to our website, theAnalysis.news, and hitting the donate button at the top right corner of the screen. Make sure you get onto our mailing list; that way, we can send all of our content straight to your inbox. You can like and subscribe to the show on our YouTube channel and on podcast streaming services such as Apple or Spotify. See you in a bit with Petra Molnar.

Joining me now is Petra Molnar. She’s a human rights lawyer and anthropologist who co-runs the Refugee Law Lab at York University in Toronto. She also co-founded the Migration and Technology Monitor, which does work on human rights and technology. She is the author of a new book called The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence. Thanks so much for joining me today, Petra.

Petra Molnar
Thanks so much for having me.

Talia Baroncelli
Reading your book, it really gets into all the money and power in the world because it’s speaking about the technologies that are being deployed on people in the border zones, as well as the companies that are developing this surveillance technology and artificial intelligence, and then the people who are being affected by the use of this technology in border areas. What made you want to write a book about artificial intelligence and migration?

Petra Molnar
Sure, that’s a good place to start. I always like to start off by saying that I’m not a technologist. I’m not a tech person. Six, seven years ago, I barely knew what algorithms were or what artificial intelligence was. We’re talking Wikipedia-level knowledge.

I was working as a lawyer in Canada representing people in immigration detention and refugee determination proceedings, but I didn’t really think about tech at all. But then a colleague of mine and I came across the fact that back in 2018, the Canadian government was experimenting with a lot of algorithmic decision making that they were introducing into our immigration system without any public accountability or scrutiny around the human rights impacts that this was having on people.

We wrote a report about it, and I thought, you know what, that’s it. But this little report made its way around the world to various seats of government and the UN. For me, it also opened up this whole other lens through which to try and understand how migration is changing these days and how technology is impacting pretty much every single point of a person’s migration journey right now. That’s where it started. I think because I’ve always worked from a comparative perspective, I wanted to tell a story that was global and be able to draw on different case studies around the world that highlight this intersection of technology and migration.

I was lucky to get some funding to grow this project. It took me beyond Canada to Greece, to Kenya, to occupied Palestinian territories, to the U.S.-Mexico border, and six years later culminated in the book.

Talia Baroncelli
Well, could you speak a bit more about a lot of the algorithms that you see being used in the artificial intelligence and machine learning that’s the foundation for a lot of these technologies? In your book, you do write about how a lot of these technologies are based on biases. The data sets that they use are inherently biased. Could you speak about that element of the algorithms?

Petra Molnar
Sure. Algorithms and technology are a social construct, just like law, like language, like policies, and we can’t lose sight of that. If we know that a lot of the data that is collected for immigration purposes and border enforcement is part of our world, and it’s a world that is discriminatory, that is racist, that is violent against people on the move, then that data is used to feed the algorithmic decision making and machine learning models that are then used to make decisions. We can’t divorce the technological underpinning of a lot of these projects from this reality that exacerbates the biases that are already inherent in our world and also sometimes creates new ones.

There’s another additional element to this, too, and that is that a lot of the algorithms are actually very difficult to critique. A lot of people have critiqued algorithmic decision making generally for being inscrutable, difficult to open up and to understand how decisions get made and why. That, again, is another piece of this story.

In border and immigration decision making, again, I think we have to look at the ecosystem in which this happens. Decision making in this realm is already opaque, already discretionary. It’s very normal, for example, to have two different officers look at the exact same set of evidence and render two completely different, yet equally legally valid decisions. There’s so much discretion in the way that decisions get made at the border. Imagine what happens when you start augmenting or replacing human decision making with machine learning. Machine learning that is then predicated on biased data. It just creates this system that is replete with discrimination. That’s really what we need to pay attention to, lift the hood, so to speak, and look at the machinery that is powering a lot of the decisions that get made.

Talia Baroncelli
In your book, you also do mention that a lot of these technologies that are used are like a black box. They’re really hard to analyze because they’re proprietary. Obviously, the companies that develop them don’t really want to share the information or the data sets that they have because that could be a competition issue or could actually reveal a lot of the human rights abuses that are a result of the way these technologies are being used.


I guess my question would be, how do you compare the outcomes of these sorts of algorithms to maybe a human being biased in terms of how they would make the decisions? You alluded to it already, but what are the extreme examples of the outcomes of some of these biased technologies?

Petra Molnar
It’s not like I’m trying to argue that the system where human decision-making is the starting point is good. We need to critique human decision making in this space as well because it is biased as well, and it’s sometimes difficult to understand how human officers come to particular decisions. But I guess the concern is that we don’t want to take a broken system and put a bandaid solution on it that is also broken and is actually exacerbating biases even further.

It’s one thing to talk about machine learning bias or algorithms, but we really want to bring it to the ground. I think this is what the book tries to do, to show specific case examples of how this technology hurts people, whether that is surveillance at the border that pushes people into life-threatening terrain, whether that’s increasingly high-risk untested technologies like AI lie detectors, Robo-dogs, drones, or whether that is the discriminatory algorithmic nature of the decisions that maybe prevent you from getting a work visa without you even knowing that an algorithm made a decision.

There’s a case example that might be interesting to focus on, particularly, and that is a few years ago, over 7,000 students were wrongfully deported out of the U.K. because an algorithm made a mistake and wrongfully accused them of cheating on a language acquisition test. This algorithm was wrong. It made a determination that was inaccurate, and 7,000 people had their lives turned upside down. There are real-world consequences for this unbridled commitment to technology in the border and in immigration decision making. It’s not just some abstract theoretical idea.

Talia Baroncelli
To pick up on another example that you mentioned in your book, you were speaking about the app that is used by a lot of asylum seekers and migrants at the U.S.-Mexico border. When they’re waiting in Mexico to enter the U.S., a lot of times they need to use the CBP One app, which is the U.S. Customs and Border Protection One app. They basically have to fill out their data in order to be able to enter the U.S. and be given a chance at having their case actually litigated or reviewed. I think one thing you mentioned in your book is that for a lot of people with darker skin, the facial recognition of the app doesn’t actually work, and so they can’t finish entering all of their data into the app. It’s a huge issue. In a sense, it’s actually preventing them from applying for asylum. Can you expand on the way this particular technology works and what the impact of that is?

Petra Molnar
Yeah, that’s such a good example of a real-world application that’s currently being used and made mandatory by the Biden administration at the U.S.-Mexico border. The CBP One app is essentially a facial recognition app that a person has to download in order to then be given an appointment for asylum processing in the United States. At first, you’re like, “Oh, great. It’s an app. It’s going to be faster, more efficient, and maybe more accurate.” But again, it doesn’t take into consideration the local context and also just the operational reality of this application. People have talked about not having a good enough internet connection to be able to load this application on their phone.

My colleagues at the Migration Technology Monitor, like Verónica Martínez, who’s an amazing journalist who lives in Juárez, she’s been looking at the discriminatory impacts of this face-recognition app on people with darker skin, on women, again, on people who don’t have access to reliable internet and what that is doing. It’s interesting because governments like to present technology as this solution to an inefficient, broken, slow system. What we’ve been seeing is that the CBP One app actually exacerbates the problems that are already inherent in the system and then also creates new ones, creates further inefficiencies in the system, too.

Talia Baroncelli
Well, under the Trump administration, we obviously saw the wall being built, and I think your book also shows a wall in a sense. The walls have eyes. One thing that you speak about in your book is that these various surveillance technologies and the algorithms they deploy function as an additional wall. Even if there isn’t a physical barrier in border zones, the technologies being deployed have certain effects on people and on which routes people take. They might be taking a more dangerous route through the Sonora Desert, for example, where there’s no water and they could potentially die. Or even in the European Union context, they might be taking more precarious routes in order to get onto European Union territory to apply for asylum.

One thing I wanted to ask you about is the continuity or continuation of the use of these technologies in this belief in techno-solutionism. It didn’t just start under Trump. This is something that you continue to see under the Biden administration, and that will probably stay the same or maybe even get worse under Trump. He has promised to deport millions of people, and he has appointed Stephen Miller as Deputy Chief of Staff. Stephen Miller is a xenophobe, a white nationalist of sorts who doesn’t want to see anyone claim asylum in the United States. Maybe you could just speak about that ideology of techno-solutionism and how it benefits a segment of the elite. It benefits the for-profit sector.

Petra Molnar
Yeah, absolutely. So much to unpack there. I think I’ll start off by saying that two things can be true at the same time. I think we’re definitely going to be seeing a massive ramping up of surveillance, deportation, border violence under the upcoming Trump administration. I’ll get to that maybe a little bit later. I think it’s also important not to take an ahistorical perspective on what’s happening because so much of this technology and this techno-solutionism that underpins a lot of what is innovated on and why very much predates Trump.

The smart border expansion was actually introduced under Democratic administrations like the Obama administration, under the guise also of being somehow “more humane” than a physical wall. But we know that actually smart border tech and surveillance are a part of a violent border regime already. It does push people into life-threatening terrain. It is this short-sightedness on the part of a lot of states where they say, “Well, we need more technology. We need to strengthen the border to stop people from coming.” Except statistics don’t bear that out.

What happens is people don’t stop coming. A lot of people are desperate. They’re also exercising their internationally protected right to asylum. That is the underpinning to all of this that we can’t forget about. But what happens is instead of not coming, people continue to come, but they take riskier routes to do it. That’s how people end up dying in the middle of the Sonora or drowning in the Mediterranean or the Aegean. People have documented a nearly three-fold increase in the number of deaths in the Sonora Desert since the introduction of the smart border technology.

I think, again, it goes back to what you were so nicely saying, and that is the need to pay attention to all the different power brokers in the conversation. States are just one actor here. We need to pay attention to the private sector so closely because they really are the ones who are normatively setting the stage on what we innovate on and why. A lot of this has to go back to those foundational logics of how the state is able to talk about migration and the private sector benefits from that.

If states are saying, because it’s politically expedient, that migration is a problem, that refugees are threats, frauds, terrorists, and that we need to solve this problem, then the private sector says, well, we have a solution for you. That solution is a Robo-dog, a drone, or an AI lie detector. This has given rise to what my colleague Todd Miller, who is an amazing journalist in Arizona, what Todd has called the border industrial complex, a multibillion-dollar industry now that has grown up around this so-called border crisis, and the private sector is making massive amounts of money off of this. Not just the big players like Palantir under Peter Thiel or Musk, but also a lot of small companies that are selling their wares to governments, again, under the guise of solving this migration problem.

Talia Baroncelli
Well, your book is also really interesting in terms of the monopoly capitalism element of it, because you’re looking at various companies such as Palantir, which are involved in surveillance technologies. Palantir, founded by Peter Thiel, who we all know funded the rise of JD Vance and is a huge supporter of Donald Trump, is considered to be little tech versus larger monopolies such as Google and Amazon, which are considered to be big tech.

If you look at briefs written by the Department of Defense and the Department of Homeland Security, they oftentimes view little tech as being far more innovative when it comes to how they actually produce their technologies. I think that’s really interesting because a company like Palantir is able to surveil and track migrants in the border zones, but they’ve also managed to get involved in a lot of other sectors, such as the health sector. Palantir signed a contract with the NHS, the National Health Service in the United Kingdom, and they’re involved in the processing of really, really sensitive health data, which means that if human rights are being eroded in one sector, then they’re most likely being eroded in other sectors in which these technologies are being deployed.

Could you speak about how these technologies are becoming more and more normalized and widespread and the way human rights are then actually being eroded in other sectors?

Petra Molnar
Yeah, I’ve been trying to make this argument for a while, along with colleagues, that the border is this laboratory of high-risk experimentation. It makes sense when you look at migration corridors and borders from a historical perspective. They have been spaces of exception, this frontier attitude that informs what we develop in terms of technology, law, and crisis management. Then you really see this play out across different contexts around the world. Because borders are so opaque and so discretionary, they are this perfect laboratory because you can test out technology there, often also under the guise of national security, with even fewer laws and regulations and oversight mechanisms. Not that that many exist anyways. But the border is this free-for-all.

In my work, I pretty much live and breathe this stuff. I’ve been working in migration since 2008. I have my own migration story. I think, obviously, borders and migration are very important to pay attention to in and of themselves. But that point you just made is crucial because the stuff that’s tested out at the border doesn’t just stay at the border. I can give you a direct example of this bleed over.

In 2022, I was working in Arizona. I got the chance, really a privilege, to work with a lot of the search and rescue groups that are going into the Sonora Desert to drop water, to assist people in distress, and also to deal with human remains of people who’ve passed away making the journey into America. With one of these groups, the Battalion Search and Rescue, we went into the Sonora to visit a memorial site of a young man, Mr. Elias Alvarado, who died in the Sonora in search of a better life. It was such a direct example of this beautiful but inhospitable environment essentially killing people and the surveillance dragnet pushing people into life-threatening terrain. This is really one of the more surreal moments in my career. If you choose to read the book, you see I’ve had quite a few.

In that very same week, when we were on the sands of the Sonora, the Department of Homeland Security announced that AI towers and drones are not enough, that they need a new tool to augment this migration management reality. That tool is a Robo-dog, this military-grade, quadruped piece of technology that might be familiar to sci-fi fans because they’ve been on episodes of Black Mirror. This is the piece of technology that is joining the global arsenal of border tech. There’s something really visceral and disturbing about it. If you look at it, you see it, you think about it. But the crazy thing is it doesn’t just stay at the border because a year later, the New York City Police Department, on TikTok, no less, announced that they wanted to use Robo-dogs to keep the streets of New York safe. They even took one and painted it white with black spots on it like a Dalmatian.

It’s important to pay attention to what happens at the border because this tech doesn’t just stay there. It actually bleeds over into other facets of public life, whether that’s Robo-dogs on the streets of New York or facial recognition in sports stadiums, the use of sentencing algorithms in criminal justice, welfare algorithms, biometric mass surveillance in public spaces. Again, a lot of this is first tested out at the border.

Talia Baroncelli
You’ve just been watching part one of my discussion with Petra Molnar. In part two, we get into how border and surveillance technology drives foreign policy. Thanks for watching.


Petra Molnar is a lawyer and anthropologist specializing in migration and human rights.

A former classical musician, she has been working in migrant justice since 2008, first as a settlement worker and community organizer, and now as a researcher and lawyer. She writes about digital border technologies, immigration detention, health and human rights, gender-based violence, as well as the politics of refugee, immigration, and international law.

Petra has worked all over the world including Jordan, Turkey, the Philippines, Kenya, Colombia, Canada, Palestine, and various parts of Europe. She is the co-creator of the Migration and Technology Monitor, a collective of civil society, journalists, academics, and filmmakers interrogating technological experiments on people crossing borders. She is the Associate Director of the Refugee Law Lab at York University and a Faculty Associate (and former Fellow) at the Berkman Klein Center for Internet and Society at Harvard University. Petra’s first book, The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence, was published by The New Press in 2024.

theAnalysis.news theme music

written by Slim Williams for Paul Jay’s documentary film “Never-Endum-Referendum“.  
