You may not know what a 9 mm Parabellum firearm round looks like, but you’ll certainly have seen one in the movies or on TV: a snub-nosed, usually brass-jacketed cartridge fed through ammunition belts or stacked in pistol and submachine gun magazines. It was designed by Georg Luger in 1901 and is the standard pistol and submachine gun round for US and Nato forces, because it’s cheap, easy to load, has moderate recoil, doesn’t excessively wear out the gun, and is readily available just about anywhere. There are heavier calibres, and expanding designs such as the hollow point – developed by the colonial British at their Dum Dum arsenal near Kolkata – which mushroom and fragment on entering the body, but for general-purpose killing at ranges of 160 feet or more, the 9 mm Parabellum cartridge is considered highly effective by ‘the modern science of wound ballistics’ (bit.ly/3yORWIS). The name comes from an old Latin proverb, since adopted as the Royal Navy’s motto: Si vis pacem, para bellum – if you want peace, prepare for war.
A socialist would of course say, if you want peace, prepare for socialism, because you won’t get peace any other way. Capitalist rulers constantly prepare for war, but they don’t like to admit this too baldly, so they call it ‘defence’ and engage in worthy and much-publicised conferences about what is and what is not a civilised way of killing someone. Thus, in the history of firearms ammunition, we find that there is by implication such a thing as an ethical bullet. The Hague Convention of 1899 swiftly pronounced itself shocked by the British invention of the expanding bullet, and moved to outlaw such rounds for military use on the grounds that they caused an unacceptable (ie, horrific) level of injury. This ban did not apply to civilian use, however, and what’s possibly even more horrific is the accepted argument made by police forces that, because such bullets expand or fragment inside the body, they don’t pass right through and kill some innocent bystander. Believe it or not, this is why police forces to this day are allowed to use these vicious rounds, at least in certain circumstances.
The question of what is the ‘ethical’ way to kill someone reappears every time weapons technology gets an upgrade, or when the opposite side is found to be using some weapon you haven’t got (or do not admit to having). Much was made, in the early days of the Ukraine war, of Russian forces using cluster bombs, which are banned under the Convention on Cluster Munitions, in force since 2010. These are indeed nasty because they scatter little bomblets which explode all over the place and, worse, often fail to go off immediately, exploding later when picked up by children who think they’re bright orange toys. But media talk of Russian ‘war crimes’ obscured the fact that Ukrainian forces were also using cluster bombs, and that neither side in the conflict had actually signed up to the ban. As it turns out, the ban is supported most heavily by countries that have either suffered directly from such munitions or never possessed them, and least by countries that have stockpiled them, a category including the USA, China, India, Pakistan, Israel, Brazil and Finland. The ugly truth is that cluster bombs are highly effective and unlikely to blow back unexpectedly on the user, unlike unpredictable chemical and biological weapons, which may explain why the ban on those is more universally adhered to.
Now the debate is moving onto the question of regulating autonomous weapons, so-called killer robots, that can make their own battlefield decisions without any human agency at all. As the Tofflers pointed out years ago (War and Anti-War: Survival at the Dawn of the 21st Century, 1993), weapons development since the Bronze Age has been all about separating combatants to ever greater distances, from primitive hand-to-hand stab and slash to modern remote push-button drones, with a consequent desensitisation to the weapons’ effects and, it is supposed, an increasing readiness to use them. Now humans are on the point of being removed from the picture entirely, as machines decide who to kill and when – or, where both sides are equally equipped, which other machines to destroy and when.
The argument for battlefield autonomy follows inexorably from the logic of drone warfare. Drone communications can be hacked or jammed, cutting off the human controller. In such circumstances it’s either lose the battle or allow the drone to make its own decisions. Both sides in Ukraine are already using ‘loitering munitions’, which can lurk over an area waiting for a suitable target to appear and then destroy it (bit.ly/3chkSS6). These munitions can easily be made autonomous, though the accuracy of their target recognition remains a work in progress. Here is where the call for regulation becomes loudest, not so much in moral outrage at the idea of being killed by a machine instead of a human as at the prospect of autonomous drones mistaking a school bus for a tank or a hospital for a missile bunker. What’s not up for debate, though, is that as soon as one country ups the ante with killer drones, the rest will surely follow. Wars are about winning a fight to the death, not storming the moral high ground and hoisting pretty flags.
Where technology is concerned, science fiction can often have profound things to say. In a 1967 Star Trek episode called ‘A Taste of Armageddon’, two planets are locked in perpetual war, but they’ve agreed to keep it clean and free of physical destruction by computerising the whole business. Thus, citizens are randomly selected as virtual casualties and must then report to termination booths to be killed in reality. This arrangement is regarded as affordable and acceptable to both planets, and is the reason why the war never ends. As the show pithily observes, attempts to take the horror and destruction out of war are really attempts to perpetuate it.
In socialism, where wealth is commonly owned and thus not something to be fought over, autonomous drones could be used as weather gauges, for crop monitoring, or as solar-powered swarms in large-scale search and rescue missions, criss-crossing thousands of square miles of ocean to look for crash debris. And if society wills it, they could explore other worlds in our solar system, whose distance in light-minutes would make Earth-based decision making impractical. Autonomous drones could be part of a fantastic future for humans, just not as weapons.
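To put those light-minutes in numbers, here is a minimal Python sketch of the round-trip signal lag an Earth-based controller would face; the closest-approach distances are rough figures assumed for illustration only:

```python
# Rough sketch of round-trip radio delay at solar system distances.
# Distances are approximate closest-approach figures, assumed for
# illustration only.

SPEED_OF_LIGHT_KM_S = 299_792.458  # kilometres per second

# Approximate closest-approach distance from Earth, in kilometres.
DISTANCES_KM = {
    "Moon": 384_000,
    "Mars": 54_600_000,
    "Jupiter": 588_000_000,
    "Neptune": 4_300_000_000,
}

for body, km in DISTANCES_KM.items():
    one_way_min = km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{body}: one-way {one_way_min:.2f} min, "
          f"round trip {2 * one_way_min:.2f} min")
```

Even at Mars’s closest approach, a drone asking Earth what to do next waits over six minutes for an answer, which is why on-board autonomy becomes a practical necessity.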
Paddy Shannon