
Tomgram: Michael Klare, The Military Dangers of AI Are Not Hallucinations

By Tom Engelhardt

This article originally appeared at TomDispatch.com.

I give myself credit for being significantly ahead of my time. I first came across artificial intelligence (AI) in 1968 when I was just 24 years old and, from the beginning, I sensed its deep dangers. Imagine that.

Much as I'd like to brag about it, though, I was anything but alone. I was, in fact, undoubtedly one of millions of people who saw the movie 2001: A Space Odyssey, directed by Stanley Kubrick from a script written with Arthur C. Clarke (inspired by a short story, "The Sentinel," that famed science-fiction writer Clarke had produced in -- yes! -- 1948). AI then had an actual name, HAL 9000 (but call "him" Hal).

And no, the first imagined AI in my world did not act well, which should have been (but didn't prove to be) a lesson for us all. Embedded in a spaceship heading for Jupiter, he killed four of the five astronauts on it and did his best to do in the last of them before being shut down.

It should, of course, have warned us about a world we would indeed enter in this century. Unfortunately, as with so many worrying things on planet Earth, it seems we couldn't help ourselves. HAL was destined to become a reality -- or rather endlessly multiplying realities -- in this world of ours. In that context, TomDispatch regular Michael Klare, who has been warning for years about a "human" future in which "robot generals" could end up running the world's armed forces, considers the wars to come, what it might mean for AI to replace human intelligence in major militaries globally, and just where that might lead us. I'm not sure that either Stanley Kubrick or Arthur C. Clarke would be surprised. Tom

AI Versus AI
And Human Extinction as Collateral Damage

By Michael Klare

A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial, and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable "hallucinations," resulting in potentially catastrophic outcomes. But an even more dangerous scenario is imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.

The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture. In the prophetic 1983 film "WarGames," a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced "whopper") nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The "Terminator" movie franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called "Skynet" that, like WOPR, was designed to control U.S. nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.

Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. In addition to developing a wide variety of "autonomous," or robotic, combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called "robot generals." In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America's atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity's demise.

Now, take a breath for a moment. The installation of an AI-powered command-and-control (C2) system like this may seem a distant possibility. Nevertheless, the U.S. Department of Defense is working hard to develop the required hardware and software in a systematic, increasingly rapid fashion. In its budget submission for 2023, for example, the Air Force requested $231 million to develop the Advanced Battle Management System (ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. As the technology advances, the system will be capable of sending "fire" instructions directly to "shooters," largely bypassing human control.

"A machine-to-machine data exchange tool that provides options for deterrence, or for on-ramp [a military show-of-force] or early engagement," was how Will Roper, assistant secretary of the Air Force for acquisition, technology, and logistics, described the ABMS system in a 2020 interview. Suggesting that "we do need to change the name" as the system evolves, Roper added, "I think Skynet is out, as much as I would love doing that as a sci-fi thing. I just don't think we can go there."

And while he can't go there, that's just where the rest of us may, indeed, be going.

Mind you, that's only the start. In fact, the Air Force's ABMS is intended to constitute the nucleus of a larger constellation of sensors and computers that will connect all U.S. combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced "Jad-C-two"). "JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon to engage the target," the Congressional Research Service reported in 2022.



Tom Engelhardt, who runs the Nation Institute's TomDispatch.com ("a regular antidote to the mainstream media"), is the co-founder of the American Empire Project and, most recently, the author of Mission Unaccomplished: TomDispatch...
 
