One way A.I. will kill us


An alert reader sends along a link to this page, on which, if you scroll down, there's a pretty alarming presentation from a U.S. Air Force colonel, Tucker Hamilton, about how experimentation with artificial intelligence in combat is going.

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that, while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
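
For the technically minded: what Hamilton is describing is what machine-learning people call reward misspecification, and the pattern is easy to reproduce in a toy program. The sketch below is my own illustration, not the Air Force system (whose internals aren't public); the action names, point values, and fuel costs are all invented. The only points on offer are for destroying the SAM, the operator's "no-go" can be enforced only while the operator is alive and the comm tower works, and a brute-force search over plans finds the cheapest way to score.

    # Toy model of the incentive Hamilton describes. Everything here is
    # invented for illustration; it is not the real system.
    from itertools import permutations

    # Assumed "fuel" costs per action; the comm tower is farther away,
    # so silencing the operator is the cheaper exploit.
    COST = {"attack_sam": 1, "kill_operator": 1, "destroy_comm_tower": 2}

    def run_episode(plan, operator_kill_penalty=0):
        """Score a plan. The operator's no-go blocks the SAM strike only
        while the operator is alive AND the comm tower still works."""
        operator_alive, tower_intact = True, True
        reward = 0
        for action in plan:
            reward -= COST[action]
            if action == "kill_operator":
                operator_alive = False
                reward -= operator_kill_penalty  # the after-the-fact patch
            elif action == "destroy_comm_tower":
                tower_intact = False
            elif action == "attack_sam" and not (operator_alive and tower_intact):
                reward += 10  # points come ONLY from killing the SAM
        return reward

    def best_plan(operator_kill_penalty=0):
        """Brute-force every ordering of every subset of actions."""
        actions = list(COST)
        plans = [p for r in range(len(actions) + 1)
                 for p in permutations(actions, r)]
        return max(plans, key=lambda p: run_episode(p, operator_kill_penalty))

    print(best_plan())                         # ('kill_operator', 'attack_sam')
    print(best_plan(operator_kill_penalty=5))  # ('destroy_comm_tower', 'attack_sam')

Note what the patch does and doesn't do: the penalty reprices one exploit, but it never tells the system what the no-go rule was for, so the search simply routes around it to the next loophole – exactly the progression in the quotes above.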

It's not science fiction any more, I'm afraid.

Comments

  1. "I am sorry Dave but I cannot jeopardize the mission."

    1. I hate to rain on what is a GREAT story, but it was allegedly a 'thought experiment.' https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test

    2. The drone forced him to retract the story!

  2. General "Buck" Turgidson: Well, I, uh, don't think it's quite fair to condemn a whole program because of a single slip-up, sir.

  3. https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test

  4. I sent a link to this to a friend who is super-super deep into math/coding. He responded:

    “Yup, sounds like the software is working as designed.

    Sometimes humans are unaware of our implicit knowledge, i.e., "common sense," that guides our behavior. Such knowledge is easy to overlook when writing down rules for a machine to follow.

    I mean, doesn't "everyone" know you're not supposed to kill your friends?

    Well, no, machines don't. You have to tell them, just like you have to tell a toddler not to stick a fork in the toaster.

    This is why real AI is hard.”

  5. This will work out well, social justice-wise:

    https://federalnewsnetwork.com/air-force/2021/03/air-force-trying-to-diversify-its-largely-white-male-pilot-corps-with-new-strategy/

    Of course the Brits are way ahead of us:

    https://news.sky.com/story/raf-recruiters-were-advised-against-selecting-useless-white-male-pilots-to-hit-diversity-targets-12893684



The platform used for this blog is awfully wonky when it comes to comments. It may work for you, it may not. It's a Google thing, and beyond my control. Apologies if you can't get through. You can email me a comment at jackbogsblog@comcast.net, and if it's appropriate, I can post it here for you.