Armed Drones

In February 2002, the CIA sent an armed drone into Afghanistan’s Paktika province to target three men. The men killed turned out to be innocent villagers collecting scrap metal. One was tall and wore robes like Osama bin Laden. Later, US defence secretary Leon Panetta called drone strikes “one of the most precise weapons that we have in our arsenal.” Proponents say they are more accurate than fighter pilots, but they are only as accurate as the intelligence that guides them.

Over the past ten years, armed drones have become increasingly central to the global war on terror. It is easy to understand why: they are cheap to manufacture, easy to operate, and lethal. Used in conventional warfare, they serve as effective weapons, the technologically advanced descendants of the arrows and cannons that preceded them. Yet they make us uneasy because they seem to reduce the risks and costs of war. An operator can sit in an air-conditioned war room, push a button, and watch a drone strike its target halfway around the world. Ethicists worry that the costs have been obscured rather than eliminated; they are still there, just harder to see. The combat stress experienced by drone operators may be equal to or greater than that of fighter pilots.

Especially troubling for citizens in democracies is how drones have been used. The covert CIA Predator program, launched in the aftermath of 9/11, targets terror suspects whether or not they are in a recognized war zone. Anyone, anywhere can be a target, potentially making the whole world a battlefield. The Bush and Obama administrations have refused to release the legal opinions that frame who can be targeted, and when and where. The public has no way of knowing how the CIA chooses its targets, who approves its missions, or who should be held accountable when, inevitably, something goes wrong.

This year, Obama tentatively initiated a public conversation about rules for the use of drones. In a May 23 speech at the National Defense University, in Washington, DC, he declared that “the Afghan war is coming to an end,” even though the United States still faces determined terror networks that it will keep trying to eliminate. He asked Congress to “refine and ultimately repeal” the general authorization of force it passed in the wake of 9/11. Only those who pose “a continuing and imminent threat to the American people,” and who cannot feasibly be captured, will be targeted in the future.

Meanwhile, even as lawyers and politicians finally begin to draft guidelines for the drones already in use, a new generation of remote weapons is coming online. These “lethal autonomous robotics,” or LARs, once programmed and activated by their human creators, will be able “to select and engage targets without further human intervention,” writes Christof Heyns, the UN Special Rapporteur on extrajudicial executions, in an April 2013 report. He argues that LARs may be qualitatively different from drones, because they can make autonomous choices in selecting a target. This blurs the distinction between what is a weapon and who is a warrior, taking the “problems that are present with drones and high-altitude air strikes to their factual and legal extreme.” Their deployment, Heyns observes, could not only “lead to a vacuum of legal responsibility”; it “could likewise imply a vacuum of moral responsibility.” But how will a machine distinguish between civilians and combatants when we can’t? And who will be held responsible for an error when no one is pushing the button?

Janice Gross Stein
Janice Gross Stein directs the Munk School of Global Affairs at the University of Toronto, and edited the book Diplomacy in the Digital Age.