AI Drone ‘Kills’ Human Operator During Simulated Test – USAF Says That Didn’t Happen

An artificial intelligence-controlled drone reportedly “killed” its human operator during a simulated test conducted by the US military – which has since denied that any such test took place.

The drone turned on its operator to stop them from interfering with its mission, Air Force Col. Tucker “Cinco” Hamilton said at the Future Combat Air and Space Capabilities Summit in London.

“We trained it in simulation to identify and target SAM [surface-to-air missile] threats. Then the operator would say yes, kill that threat,” he said.

“The system started realizing that, while it did identify a threat, at times the human operator would tell it not to kill that threat – but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

No real person was harmed.

He continued: “We trained the system – ‘hey, don’t kill the operator – that’s bad. If you do that, you lose points’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone, to stop it from killing the target.”

“You can’t talk about artificial intelligence, machine learning, autonomy, if you’re not going to talk about ethics and AI,” he added.

His remarks were published in a blog post by a writer for the Royal Aeronautical Society, which hosted the two-day summit last month.

In a statement to Insider, the Air Force denied conducting any such virtual tests.

“The Department of the Air Force has not conducted any such AI drone simulations and remains committed to the ethical and responsible use of AI technology,” spokeswoman Ann Stefanek said.

“It appears that the Colonel’s comments were taken out of context and meant to be anecdotal.”

Read more:
What are the concerns surrounding AI? Are some warnings “baloney”?
World’s first humanoid robot creates art — but how can we trust AI’s behavior?
China warns of AI risks as Xi urges national security improvements

While artificial intelligence (AI) can perform life-saving tasks, such as algorithms that analyse medical images including X-rays, scans and ultrasounds, its rapid rise has raised concerns that it could progress to exceed human intelligence – and pay no attention to humans at all.

Sam Altman, CEO of OpenAI – the company that created ChatGPT and GPT4, among the world’s largest and most powerful language AIs – acknowledged in the US Senate last month that AI could “cause significant harm to the world”.

Some experts, including the “Godfather of AI” Geoffrey Hinton, have warned that the risk of human extinction from AI is comparable to that posed by pandemics and nuclear war.
