Save Lives - Build 'Killer Robots'


Yesterday Stephen Hawking and other scientists and technologists published an open letter arguing that military AI is a bad idea. Essentially the argument is: build 'killer robots' and some time down the line everyone dies. But this is not the case. 'Killer robots' will save lives.

Modern Western foreign policy is built on the principle of defence. Western nations are not bully boys that launch unprovoked attacks against smaller states. Any Western 'killer robots' would not be out to massacre populations, but to protect populations from being massacred.

But the West exists in a very dangerous world with many enemies. The old and highly successful principle of deterrence still applies to traditional states, but foreign policy discourse has moved on from states fighting states. We now face illogical and psychotic terrorists who absolutely will not stop. It is worth remembering that terrorists will always try to kill us in more effective ways, and will therefore pursue new AI super-weapons. Whatever preventative measures are taken, the technology will one day become available. When terrorists do develop 'killer robots', I think we would all be a lot safer if we had them as well to fight back.

But most importantly, 'killer robots' raise the interesting prospect of wars conducted almost purely by AI. When you take troops off the battlefield and replace them with machines, the simple fact is that people do not die; only robots do. In other words, killer robots have a chance of limiting deaths in war. If conflict is going to continue in the future, and I hope it does not, this is surely a good thing.

Bring on the robot armies!

Elliott Johnson is the Political Editor of Conservative Way Forward.

Follow him on Twitter


  • commented 2015-07-29 11:57:37 +0100
    There is a world of difference between a battlefield ‘robot’ following a set of pre-programmed parameters or remotely controlled by a human operator (Reaper drones, for example) and an autonomous machine intelligence that has the power to decide what’s a legitimate target and what isn’t… I strongly suspect that Hawking et al are referring to the latter, because with independent intelligences comes independent logic, and it’s only a matter of time before those T-1000s decide those little squishy things running around the place really are obsolete…