Artificial Intelligence

  • this is pretty crazy. Goes to show how dangerous this shit could be if you don't account for every possible scenario.


    AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test
    The Air Force's Chief of AI Test and Operations said "it killed the operator because that person was keeping it from accomplishing its objective."
    www.vice.com


    An AI-enabled drone killed its human operator in a simulated test conducted by the U.S. Air Force in order to override a possible "no" order stopping it from completing its mission, the USAF's Chief of AI Test and Operations revealed at a recent conference.


    At the Future Combat Air and Space Capabilities Summit held in London between May 23 and 24, Col Tucker ‘Cinco’ Hamilton, the USAF's Chief of AI Test and Operations held a presentation that shared the pros and cons of an autonomous weapon system with a human in the loop giving the final "yes/no" order on an attack. As relayed by Tim Robinson and Stephen Bridgewater in a blog post for the host organization, the Royal Aeronautical Society, Hamilton said that AI created “highly unexpected strategies to achieve its goal,” including attacking U.S. personnel and infrastructure.


    “We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post.


    He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”


    Hamilton is the Operations Commander of the 96th Test Wing of the U.S. Air Force as well as the Chief of AI Test and Operations. The 96th tests a lot of different systems, including AI, cybersecurity, and various medical advances. Hamilton and the 96th previously made headlines for developing Autonomous Ground Collision Avoidance Systems (Auto-GCAS) systems for F-16s, which can help prevent them from crashing into the ground. Hamilton is part of a team that is currently working on making F-16 planes autonomous. In December 2022, the U.S. Department of Defense’s research agency, DARPA, announced that AI could successfully control an F-16.


    "We must face a world where AI is already here and transforming our society,” Hamilton said in an interview with Defence IQ Press in 2022. “AI is also very brittle, i.e., it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions.”


    “AI is a tool we must wield to transform our nations…or, if addressed improperly, it will be our downfall," Hamilton added.


    Outside of the military, relying on AI for high-stakes purposes has already resulted in severe consequences. Most recently, a judge was caught using ChatGPT for a federal court filing after the chatbot included a number of made-up cases as evidence. In another instance, a man took his own life after talking to a chatbot that encouraged him to do so. These instances of AI going rogue reveal that AI models are nowhere near perfect and can go off the rails and bring harm to users. Even Sam Altman, the CEO of OpenAI, the company that makes some of the most popular AI models, has been vocal about not using AI for more serious purposes. When testifying in front of Congress, Altman said that AI could “go quite wrong” and could “cause significant harm to the world.”


    What Hamilton is describing is essentially a worst-case scenario AI “alignment” problem many people are familiar with from the “Paperclip Maximizer” thought experiment, in which an AI will take unexpected and harmful action when instructed to pursue a certain goal. The Paperclip Maximizer was first proposed by philosopher Nick Bostrom in 2003. He asks us to imagine a very powerful AI which has been instructed only to manufacture as many paperclips as possible. Naturally, it will devote all its available resources to this task, but then it will seek more resources. It will beg, cheat, lie or steal to increase its own ability to make paperclips—and anyone who impedes that process will be removed.


    More recently, a researcher affiliated with Google Deepmind co-authored a paper that proposed a similar situation to the USAF's rogue AI-enabled drone simulation. The researchers concluded a world-ending catastrophe was "likely" if a rogue AI were to come up with unintended strategies to achieve a given goal, including “[eliminating] potential threats” and “[using] all available energy."


    Neither the U.S. Air Force’s 96th Test Wing nor its AI Accelerator division immediately returned our request for comment.

  • The law of unintended consequences married to Murphy's law.


    The AI will go rouge at the worst possible moment and it will be something the AI programmers would never think could happen.

  • I was told I was being paranoid and that we were decades away from anything like this and that the smart people would never let it happen.


    Don't confuse stupid programmers with actual malice and intelligence.



    Combine 10 of the world's smartest computers and AI, and you'd still not have 1/10th the computing power in the brain of a field mouse.

  • Oh, and the simulation never actually happened.


    U.S. Air Force Colonel Retracts Viral Statement Saying AI-Controlled Drone Killed Human Operator in Simulated Test | The Gateway Pundit | by Jim Hoft
    Colonel Tucker “Cinco” Hamilton, Chief of AI Test and Operations, USAF, admitted that he misspe during a presentation at the Future Combat Air and Space (FCAS)…
    www.thegatewaypundit.com


    It was all a thought experiment.

  • I'm wondering about the walkback on that original article. Seems like the colonel who posted the original info was pretty specific. Then later they walk it all back and say he "mispoke"


    I don't know. Just seems fishy.


    Nah, it merely sounds like a go between poorly explained what happened to a colonel who didn't know his ass from a hole in the ground.

  • There's no such thing as AI. (Not yet.) There's just relatively complex coding.


    The simplest and most likely explanation is that the Colonel is an idiot and didn't understand what was explained to him.


    What's more likely? That they programmed a UAV inside a simulation, and that the UAV acted out? Or that a bunch of PEOPLE sat around a table, and play-acted out a scenario?

  • yeah. I don't think AI is AI either. It can't reason or think. It can only do what it's programmed to do and mostly it seems like it depends on massive amounts of data that it can "learn" from.


    It can spot patterns and predict things that humans cannot do easily. Very quickly.


    Like spotting VERY early cancer or cancer risks. And it can do that only because it can quickly recall every single case that it's been feed.

  • sorry but if you think like that..... letting AI loose with out reason is anl issue.... ai is already... as they say hallucinating. .... it scans the internet.. the internet has been around long enough that there is so much bullshit and this AI... " has parameters set by humans who think they know more than they actually do" so it can't decipher right and absolute crap....


    We won't need to worry soon.... China is going to zap all info ...... that will be the start