On August 29, a white car left a house in Kabul. It made several stops around the city’s mostly dense neighbourhoods. It stayed for some time at a location that the military later claimed they believed was an ISIS warehouse. Some men were seen loading canisters into the trunk. Then it drove away and parked a few miles away from the airport.
For a drone operator sitting somewhere in the US, these snippets of information were enough to launch a strike. An unmanned military weapon flew over the location and fired its shot. Ten people were killed, including seven children. The US military claimed it was three.
Just two days before its hasty exit from Afghanistan, the US struck the country's people one last time, mistaking an innocent aid worker's daily commute to and from work for the movements of a militant, and killing him along with nine of his relatives, seven of them children. The supposed ISIS safe house was in fact the office of an American NGO. As for the operator who pressed 'send', they probably logged off at the end of their shift and went home to their family.
Welcome to modern warfare. Now you can order groceries and military strikes on faraway lands with just one click!
This week in The Global Tiller, we try to understand what makes drones so attractive to militaries around the world despite the psychological toll they have taken on the populations they target. How does artificial intelligence make the waters even murkier? Is there a humane way of fighting war?
"The armed drone is really one of the defining weapons of post 9/11 period," says Chris Woods, an investigative journalist who tracks military actions in conflict zones. He describes this kind of warfare as "disconcerting", having a pilot run controls at his office desk and then logging off to go home and having dinner with family at the end of the day.
While drones are prominent in our minds when we think of remote fighting, ships firing cruise missiles thousands of miles away, or military pilots dropping bombs on the horizon are all part of remote warfare. Even 'unmanned aerial vehicles', or what we now know as drones, were being developed as early as the First World War.
The technology has been improved and honed to such an extent that these weapons are now able to conduct facial recognition. Drone supporters insist this way of fighting is more humane — militaries can identify targets and kill them, without having to take down entire villages.
This justification has led to more than 14,000 US drone strikes between 2010 and 2020 in Afghanistan, Pakistan, Somalia and Yemen, according to the Bureau of Investigative Journalism. The death toll is estimated at a staggering 16,901, more than 400 of whom were children. And the ones who survived are growing up with the 'fear of the skies'. It's not just the US either: as of 2019, nearly 100 countries owned military drones, and many of them are not shy about using them.
If this already seems inhumane to you, brace yourself for what’s coming next: military AI. Warmongers are convinced that artificial intelligence is the answer to preventing mistakes, like the one that killed Zemari Ahmadi in Kabul two weeks ago. Some of them argue that military robots could police skies and prevent torture by dictators; others believe robots will be immune to human weaknesses, such as anger, sadism or cruelty.
Others are not so convinced about the infallibility of AI. The technology has not advanced enough to reach the kind of precision that drone supporters promise, not to mention the biases built into machine learning.
If automated weapons are to become the future of war, there are many questions to consider. Can AI ever become precise enough to correctly identify targets? How will we protect AI drones from being hacked by terrorists? How will we redefine the idea of 'war veterans' if they are no longer looking the enemy in the eye but obliterating them as in a video game? Do we even want automated weapons to become our future? When we are already facing unprecedented challenges from climate change, is it really wise to pour our resources into weapons? Or could this technology instead help us offset some of the challenges climate change will bring?
Until next week, take care and stay safe!
Hira - Editor - The Global Tiller
If you’d like to read our previous issues, you can access our archives here.
Dig Deeper
What happens when automated weapons become cheap and ubiquitous? Watch Slaughterbots, an attempt by AI researchers, who are calling for a ban on these kinds of weapons at the United Nations, to show just how bleak that future could be.
…and now what?
In 1950, Alan Turing, the famous “inventor” of modern computers, devised a test to determine, as machines became more and more common, when they would become as human as we are. It was designed to assess a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.
A few years ago, as I was starting to get interested in AI and studying its development, I reflected on the deep nature of this test and what it was saying, not about our machines but about our own humanity.
Because, up until today (and for a few more years), machines are the product of human will and knowledge. They are the physical and mechanical expressions of our intellectual abilities and our fight to survive. They are the direct descendants of other things humans created: the wheel, the hammer, the printing press and so on.
Unfortunately, over the centuries, we somehow managed to create efficient weapons before almost anything else. Military development has always been at the forefront of technological evolution. Many of the machines we use, including the computer I’m typing on, a descendant of Turing’s work for the Allies during World War II, began as military technology. Ironically, that is true even of the machines that now save lives.
So yes, military technology has been a strong driver of our technological evolution. But does that legitimise the approach of aiming to kill first?
Not everyone has approached invention the same way. Many cultures across the world created tools and machines that could have been used to kill, but they did not make that choice. Polynesians used bows and arrows for games; they never found them useful for war. The Chinese used gunpowder for fireworks, never for cannons.
As we push our technology forward, perhaps we can develop machines without going through the killing phase first. Since the world is commemorating the 20th anniversary of 9/11, it may be pertinent to ask this question now. After all, drone strikes were a direct consequence of those attacks, even though on that day the then-US President George W. Bush announced his plan to "bring those people to justice". It’s been a while since I was a lawyer, but I don’t remember justice meaning killing people from a distance without due process.
Maybe machines are just a reflection of the arm holding and controlling them: the same arm that bypassed its own principles of justice to take revenge. So perhaps the debate around modern weapons, such as drones or AI, is actually about questioning our own ability to be human. Are we able to take up Turing’s challenge and test our own humanity?
In a recent conversation with Audrey Tang, Taiwan’s digital minister, I asked her whether technology becomes a risk depending on who uses it. She rejected this premise, suggesting that we have the power to design technologies that have no ability to harm. “A rice cooker is not dangerous,” she told me. So why should AI be? Just because it has the capacity to harm? Does that mean that every time we face the choice, we will always choose the path of destruction? Is it in our very nature?
Not according to the latest research on evolution, which suggests our friendliness is our greatest advantage. As we remember Alan Turing’s work at a time when we’re analysing the consequences of a world-changing event, maybe it’s time to test our ability to remain human and to call on our capacity to do good. Because if that’s what we believe we are, we should be able to live by it. And machines, these matrices of logic and algorithms, can then help us recover and climb out of this dark hole of violence.
Philippe - Founder - Pacific Ventury