The first time Karl Ricanek was stopped by police for “driving while Black” was in the summer of 1995. He was twenty-five and had just qualified as an engineer and started work at the US Department of Defense’s Naval Undersea Warfare Center in Newport, Rhode Island, a wealthy town known for its spectacular cliff walks and millionaires’ mansions. That summer, he had bought his first nice car—a two-year-old dark green Infiniti J30T that cost him roughly $30,000 (US).
One evening, on his way back to the place he rented in First Beach, a police car pulled him over. Karl was polite, distant, knowing not to seem combative or aggressive. He knew, too, to keep his hands in visible places and what could happen if he didn’t. It was something he’d been trained to do from a young age.
The cop asked Karl his name, which he told him, even though he didn’t have to. He was well aware that if he wanted to get out of this thing, he had to cooperate. He felt at that moment he had been stripped of any rights, but he knew this was what he—and thousands of others like him—had to live with. This is a nice car, the cop told Karl. How do you afford a fancy car like this?
What do you mean? Karl thought furiously. None of your business how I afford this car. Instead, he said, “Well, I’m an engineer. I work over at the research centre. I bought the car with my wages.”
That wasn't the last time Karl was pulled over by a cop. In fact, it wasn't even the last time in Newport. And when friends and colleagues shrugged, telling him that getting stopped and being asked some questions didn't sound like a big deal, he let it lie. But they had never been stopped simply because of the colour of their skin; they hadn't been subjected to the humiliation of being questioned as law-abiding adults, purely based on their visual identity; they didn't have to justify their presence and their choices to strangers and be afraid for their lives if they resisted.
Karl had never broken the law. He’d worked as hard as anybody else, doing all the things that bright young people were supposed to do in America. So why, he thought, can’t I just be left alone?
Karl grew up with four older siblings in Deanwood, a primarily Black neighbourhood in the northeastern corner of Washington, DC, with a white German father and a Black mother. When he left Washington, DC, at eighteen for college, he had a scholarship to study at North Carolina A&T State University, which graduates the largest number of Black engineers in the US. It was where Karl learned to address problems with technical solutions, rather than social ones. He taught himself to emphasize his academic credentials and underplay his background so he would be taken more seriously amongst peers.
After working in Newport, Karl went into academia, at the University of North Carolina, Wilmington. In particular, he was interested in teaching computers to identify faces even better than humans do. His goal seemed simple: first, unpick how humans see faces, and then teach computers how to do it more efficiently.
When he started out back in the '90s, Karl was developing AI technology to help the US Navy's submarine fleet navigate autonomously. At the time, computer vision was a slow-moving field, in which machines were merely taught to recognize objects rather than people's identities. The technology was nascent—and pretty terrible. The algorithms he designed were trying to get the machine to say: that's a bottle, these are glasses, this is a table, these are humans. Each year, they made incremental, single-digit improvements in precision.
Then, a new type of AI known as deep learning emerged—the same discipline that allowed miscreants to generate nonconsensual, sexually explicit deepfakes of Helen Mort and Noelle Martin, and the approach that underpins ChatGPT. The cutting-edge technology was helped along by an embarrassment of data riches—in this case, millions of photos uploaded to the web that could be used to train new image recognition algorithms.
Deep learning catapulted the small gains Karl was seeing into real progress. All of a sudden, what used to be a 1 percent improvement was now 10 percent each year. It meant software could now be used not just to classify objects but to recognize unique faces.
When Karl first started working on the problem of facial recognition, it wasn’t supposed to be used live on protesters or pedestrians or ordinary people. It was supposed to be a photo analysis tool. From its inception in the ’90s, researchers knew there were biases and inaccuracies in how the algorithms worked. But they hadn’t quite figured out why.
The biometrics community viewed the problems as academic—an interesting computer-vision challenge affecting a prototype still in its infancy. They broadly agreed that the technology wasn’t ready for prime-time use, and they had no plans to profit from it.
As the technology steadily improved, Karl began to develop experimental AI analytics models to spot physical signs of illnesses like cardiovascular disease, Alzheimer's, or Parkinson's from a person's face. For instance, a common symptom of Parkinson's is frozen or stiff facial expressions, brought on by changes in the face's muscles. AI technology could be used to analyse these micro-muscular changes and detect the onset of disease early. He told me he imagined inventing a mirror you could look into each morning that would tell you (or notify a trusted person) if you were developing symptoms of degenerative neurological disease. He founded a for-profit company, Lapetus Solutions, which predicted life expectancy through facial analytics, for the insurance market.
His systems were used by law enforcement to identify trafficked children and notorious criminal gangsters such as Whitey Bulger. He even looked into identifying the faces of people who had changed gender, by testing his systems on videos of transgender people undergoing hormonal transition, an extremely controversial use of the technology. He became fixated on the mysteries locked up in the human face, regardless of any harms or negative consequences.
In the US, it was 9/11 that, quite literally overnight, ramped up the administration's urgent need for surveillance technologies like face recognition, supercharging investment in and development of these systems. The issue was no longer merely academic, and within a few years, the US government had built vast databases containing the faces and other biometric data of millions of Iraqis, Afghans, and tourists visiting the US from around the world. The government invested heavily in commercializing biometric research like Karl's; he received military funding to improve facial recognition algorithms, working on systems to recognize obscured and masked faces, young faces, and faces as they aged. American domestic law enforcement adapted counterterrorism technology, including facial recognition, to police street crime, gang violence, and even civil rights protests.
It became harder for Karl to ignore what AI facial analytics was now being developed for. Yet, during those years, he resisted critique of the social impacts of the powerful technology he was helping create. He rarely sat on ethics or standards boards at his university, because he thought they were bureaucratic and time-consuming. He described critics of facial recognition as "social justice warriors" who didn't have practical experience of building this technology themselves. As far as he was concerned, he was creating tools to help save children and find terrorists, and everything else was just noise.
But it wasn't that straightforward. Technology companies, both large and small, had access to far more face data and had a commercial imperative to push forward facial recognition. Corporate giants such as Meta and Chinese-owned TikTok, and start-ups like New York–based Clearview AI and Russia's NtechLab, own even larger databases of faces than many governments do—and certainly more than researchers like Karl do. And they're all driven by the same incentive: making money. These private actors soon uprooted systems from academic institutions like Karl's and started selling immature facial recognition solutions to law enforcement, intelligence agencies, governments, and private entities around the world. In January 2020, the New York Times published a story about how Clearview AI had scraped billions of photos from the web, including from sites like LinkedIn and Instagram, to build powerful facial recognition capabilities bought by several police forces around the world.
The technology was being unleashed from Argentina to Alabama with a life of its own, blowing wild like gleeful dandelion seeds taking root at will. In Uganda, Hong Kong, and India, it has been used to stifle political opposition and civil protest. In the US, it was used to track Black Lives Matter protests and Capitol rioters during the uprising in January 2021, and in London to monitor revellers at the annual Afro-Caribbean carnival in Notting Hill.
And it's not just a law enforcement tool: private businesses use facial recognition to catch pickpockets and petty thieves. It is deployed at the famous Gordon's Wine Bar in London, scanning for known troublemakers. It's even been used to identify dead Russian soldiers in Ukraine. The question of whether it is ready for prime-time use has taken on new urgency as it impacts the lives of billions around the world.
Karl knew the technology was not ready for widespread rollout in this way. Indeed, in 2018, Joy Buolamwini, Timnit Gebru, and Deborah Raji—three Black female AI researchers—had published a study, alongside collaborators, comparing the accuracy of face recognition systems built by IBM, Face++, and Microsoft. They found the error rates for light-skinned men hovered at less than 1 percent, while that figure touched 35 percent for darker-skinned women. Karl knew that New Jersey resident Nijeer Parks spent ten days in jail in 2019 and paid several thousand dollars to defend himself against accusations of shoplifting and assault of a police officer in Woodbridge, New Jersey. The thirty-three-year-old Black man had been misidentified by a facial recognition system used by the Woodbridge police. The case was dismissed a year later for lack of evidence, and Parks later sued the police for violation of his civil rights. Not long after, Robert Julian-Borchak Williams, a Detroit resident and father of two, was arrested for a shoplifting crime he did not commit, due to another faulty facial recognition match. The arrest took place in his front garden, in front of his family.
Facial recognition technology also led to the incorrect identification of American-born Amara Majeed as a terrorist involved in Sri Lanka’s Easter Day bombings in 2019. Majeed, a college student at the time, said the misidentification caused her and her family humiliation and pain after her relatives in Sri Lanka saw her face, unexpectedly, amongst a line-up of the accused terrorists on the evening news.
As his worlds started to collide, Karl was forced to reckon with the implications of AI-enabled surveillance—and to question his own role in it, acknowledging it could curtail the freedoms of individuals and communities going about their normal lives. “I think I used to believe that I create technology,” he told me, “and other smart people deal with policy issues. Now I have to ponder and think much deeper about what it is that I’m doing.”
And what he had once dismissed as technical glitches, such as algorithms that worked far better on Caucasian and male faces while struggling to correctly identify darker skin tones and female faces, he came to see as much more than that.
“It’s a complicated feeling. As an engineer, as a scientist, I want to build technology to do good,” he told me. “But as a human being and as a Black man, I know people are going to use technology inappropriately. I know my technology might be used against me in some manner or fashion.”
In my decade of covering the technology industry, Karl was one of the only computer scientists ever to express moral doubts out loud to me. Through him, I glimpsed the fraught relationship that engineers can have with their own creations and the ethical ambiguities they grapple with when their personal and professional instincts collide.
He was also one of the few technologists who comprehended the implicit threats of facial recognition, particularly in policing, in a visceral way.
“The problem that we have is not the algorithms but the humans,” he insisted. When you hear about facial recognition in law enforcement going terribly wrong, it’s because of human errors, he said, referring to the over-policing of African American males and other minorities and the use of unprovoked violence by police officers against Black people like Philando Castile, George Floyd, and Breonna Taylor.
He knew the technology was rife with false positives and that humans suffered from confirmation bias. So if a police officer already believed someone to be guilty of a crime and the AI system appeared to confirm it, innocent people were likely to be targeted. "And if that person is Black, who cares?" he said.
He admitted to worrying that the inevitable false matches would result in unnecessary gun violence. He was afraid that these problems would compound the social malaise of racial or other types of profiling. Together, humans and AI could end up creating a policing system far more malignant than the one citizens have today.
“It’s the same problem that came out of the Jim Crow era of the ’60s; it was supposed to be separate but equal, which it never was; it was just separate . . . fundamentally, people don’t treat everybody the same. People make laws, and people use algorithms. At the end of the day, the computer doesn’t care.”
Excerpted from Code Dependent: Living in the Shadow of AI by Madhumita Murgia. Published by Henry Holt and Company. Copyright © 2024 by Madhumita Murgia. All rights reserved.