There was once a quite stupid Star Trek episode in which two planets at war with each other fought entirely by computer. When one computer successfully attacked the other and simulated the destruction of a city, all of that city's inhabitants would report to a disintegration chamber to be vaporized. In this way, war was nice and clean.
The writers ensured that the war had been fought for hundreds of years, thereby giving Kirk a reason to once again violate the Prime Directive and interfere with the compu-war.
The point, of course, is that war is ugly, and when people mask that ugliness, it becomes easier for them to kill far more innocent people.
A prominent British professor seems to have seen that episode, and internalized it:
Noel Sharkey of the University of Sheffield said that a push toward more robotic technology used in warfare would put civilian life at grave risk.
Technology capable of distinguishing friend from foe reliably was at least 50 years away, he added.
However, he said that for the first time, US forces mentioned resolving such ethical concerns in their plans.
"Robots that can decide where to kill, who to kill and when to kill is high on all the military agendas," Professor Sharkey said at a meeting in London.
"The problem is that this is all based on artificial intelligence, and the military have a strange view of artificial intelligence based on science fiction."
Professor Sharkey, a professor of artificial intelligence and robotics, has long drawn attention to the psychological distance from the horrors of war that is maintained by operators who pilot unmanned aerial vehicles (UAVs), often from thousands of miles away.
"These guys who are driving them sit there all day...they go home and eat dinner with their families at night," he said.
"It's kind of a very odd way of fighting a war - it's changing the character of war dramatically."
The rise in technology has not helped in terms of limiting collateral damage, Professor Sharkey said, because the military intelligence behind attacks was not keeping pace.
Between January 2006 and April 2009, he estimated, 60 such "drone" attacks were carried out in Pakistan. While 14 al-Qaeda were killed, some 687 civilian deaths also occurred, he said.
Notice that Sharkey is conflating two completely different scenarios - that of a fully automated robotic war machine, and that of weapons operated remotely by a human.
His point that fully automated weapons systems will not be ready for a while is quite true, but the example he cites from the use of drones in Pakistan does not prove it. What it does show, as he himself mentions, is that military intelligence has to be far more accurate for such weapons to be effective in targeting the bad guys.
I don't know whether the casualty numbers he gives for drone strikes in Pakistan are correct, but even if they are, they say nothing about the morality of using drones. All they say is that the drones are not being used correctly - that the proper intelligence is not being gathered before the decision to shoot is made.
In Gaza, Israeli drone operators did make a few mistakes - and far more legitimate hits. The percentage of civilian casualties from drone attacks was very small compared to the numbers he quotes. Arguably, the ratio of civilian casualties to fighters from Israel's use of drones and other long-distance weapons is lower than in close-proximity fighting.
This means that Sharkey's implication that remotely controlled weapons are inherently less moral (his dinner scenario emphasizes that point) is not true at all. Consistent policy, a clear moral code, good intelligence, more accurate targeting and superior optics would all contribute to a much better fighter-to-civilian casualty ratio.
Moreover, Sharkey does not even consider the value of the lives on the other side of the equation - those of the operators themselves. Creating remote-controlled weapons means precisely that the person doing the targeting will be alive to have dinner with his family that night. In Sharkey's moral universe, this is a bad thing - it would be much better if the operator were within Katyusha or Kalashnikov range of the enemy. This is an absurd notion, in which the point of a war - to decisively defeat the enemy - is mixed up with a childish concept of "fairness."
The irony is that these weapons are designed specifically to save lives, both of one's own soldiers and of enemy civilians. They are not inherently more lethal than cheap, dumb mortars, IEDs, suicide vests or rockets, and when used correctly, they are significantly better both in accuracy and in limiting collateral damage. Such weapons should be encouraged, and Professor Sharkey displays a moral inversion in opposing them.
(h/t cyberlens, who linked to an earlier article of mine.)