In a rare show of international cooperation, the US and Russia refused to sign a UN treaty banning killer robots. As if the rise of the Reich Wing wasn't enough to worry about, now we get to live in fear of the robocalypse too. At least we won't have to worry about Trump making America great; we'll be too busy fighting off the Terminators.
Politico reports that, on Saturday, the UN Convention on Conventional Weapons (CCW) concluded a debate on fully autonomous weapons powered by artificial intelligence, AKA "killer robots." The bad news is that the US, South Korea, Israel, and Australia opposed signing on to non-binding regulations that could ban the use of robots with artificial intelligence to engage military targets independently. The good news is that 26 countries did sign on and called for tough regulations governing their use as the UN starts the long, boring negotiating process of how humanity uses slaughterbots.
Right now there isn't any clear definition of "killer robot," and the UN has been trying to pin one down within the context of humanitarian law for the last several years. Speaking with The Verge, UN CCW chair Amandeep Gill says the goal of these talks isn't so much to ban them as to create a dialogue. Gill states that some countries are "quite content with leaving this to national regulations, [and] to industrial standards," and notes there are many different ideas on what constitutes lethal autonomous weapons (LAWs). So far, the CCW has been able to break down the issue into four nerdy areas:
First, the characterization issue -- How do you define lethal autonomous weapons systems? Second, what should be the nature of the human element in the use of force through such systems? What should be the human-machine interface when such systems are deployed or developed? The third item is, what are the various options for dealing with the international humanitarian law and the international security-related concerns coming from the potential deployment of such weapons systems?
...The fourth point is about technology review. In this field, more than any other today, technology is evolving very rapidly. So you want your policy responses to be tech-neutral. They should not have to be fundamentally revised when technology changes. At the same time, you want to make sure that the implementation stays in step with technical developments.
Human beings remain in control of most LAWs, so a robot-powered murder crusade may not be here yet, but the threat is no longer the stuff of cheesy sci-fi. The US military has been using semi-autonomous unmanned aerial vehicles (UAVs) since Bush 43 started playing "Risk" in the Middle East. Obama may have been reluctant to put boots on the ground, but he had no problem launching hundreds of drone strikes throughout Africa and the Middle East. But all that pales in comparison to Trump, who rapidly expanded the use of UAVs by the military and the CIA throughout the world. He really is a lazy SOB.
For what it's worth, Obama famously noted that civilian casualties will "haunt" him for as long as he lives. But when Trump was shown video of a CIA drone strike where a suspected terrorist was walking away from a group of civilians, Trump reportedly asked, "Why did you wait?"
Defense contractors love to advertise their toys. They boast about how cheap and easy it is to blow shit up from half-a-world away. And since drones have no families or feelings, they're entirely expendable! That means politicians will never have to wash the blood from their hands, waste time attending funerals instead of fundraisers, or pay out death benefits. It's a win-win-win!
Almost two decades of warfare have ensured that the US is no longer the only player in the robot wars. Since 2015, the US and China have been selling UAVs to countries like Turkey, Iran, Pakistan, India, Italy, Iraq, and more. Russian state media has been bragging about the robot tank it's had rolling around Syria for months. ISIS even has a bootleg air force of DIY drones built from plywood, trash bags, and cheap, off-the-shelf technology. They don't all have the capability to pilot themselves, or drop bombs and shoot missiles, but the good ones can identify heat signatures and pre-select targets in combat situations. And it isn't very difficult to strap some flying junk-bot full of explosives for a kamikaze run.
A number of big names, from DOD brass to Silicon Valley science bozos, have already been sounding alarms. Just last week Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff, expressed concerns about the "Terminator Conundrum" and opined, "...the tools we are developing absolve humans of the decision to inflict violence on the enemy." Selva noted that the DOD's biggest beef with LAWs is that they can't exactly trust a kill-bot.
Without such trust, autonomous systems will not be adopted except in extreme cases such as missions that cannot otherwise be performed. Further, inappropriate calibration of trust assessments — whether over-trust or under-trust — during design, development, or operations will lead to misapplication of these systems. It is therefore important for DOD to focus on critical trust issues and the assurance of appropriate levels of trust.
The idea that you just can't trust a soulless toaster has been echoed by prominent human rights organizations and a squad of peaceful geeks. A super group of NGOs called The Campaign to Stop Killer Robots is on a mission to halt the use of fully autonomous weapons, arguing that their use "crosses a fundamental moral line" that makes it easier for half-wit political despots to declare war. Some hippies at Amnesty International have also joined calls for a ban. Back in April, thousands of Google employees signed a letter protesting the company's development of AI for the Pentagon. During a meeting on the future of artificial intelligence last year, one advocacy group launched a creepy viral video in an effort to slutshame the nerds working with the defense industry to push the boundaries of 21st century war machines.
Humanity has yet to create something akin to Skynet, that technology just doesn't exist, but we're getting closer. The use of artificial intelligence, be it Siri or a sex doll, heralds just as many benefits for our future as it does our doom. Many nerds argue that they're not trying to put guns in the hands of robots, but it's a slippery slope. Mark Zuckerberg didn't see a problem selling highly personal user data to Cambridge Analytica, and Peter Thiel's evil spy machine has redefined the surveillance state. The DOD might want to keep people in control for now, but all it takes is one Trump Twitter tantrum, or some crackpot general obsessed with our precious bodily fluids, to start the robocalypse.
[ Politico / Defense News / The Verge / Campaign to Stop Killer Robots ]