
Tomgram: Michael Klare, Swarming Our World

By Tom Engelhardt

This article originally appeared at TomDispatch.com. To receive TomDispatch in your inbox three times a week, click here.

Make no mistake, artificial intelligence (AI) has already gone into battle in a big-time way. The Israeli military is using it in Gaza on a scale previously unknown in wartime. They've reportedly been employing an AI target-selection platform called (all too unnervingly) "the Gospel" to choose many of their bombing sites. According to a December report in the Guardian, the Gospel "has significantly accelerated a lethal production line of targets that officials have compared to a 'factory.'" The Israel Defense Forces (IDF) claim that it "produces precise attacks on infrastructure associated with Hamas while inflicting great damage to the enemy and minimal harm to noncombatants." Significantly enough, using that system, the IDF attacked 15,000 targets in Gaza in just the first 35 days of the war. And given the staggering damage done and the devastating death toll there, the Gospel could, according to the Guardian, be thought of as an AI-driven "mass assassination factory."

Meanwhile, of course, in the Ukraine War, both the Russians and the Ukrainians have been hustling to develop, produce, and unleash AI-driven drones with deadly capabilities. Only recently, in fact, Ukrainian President Volodymyr Zelensky created a new branch of his country's armed services specifically focused on drone warfare and is planning to produce more than one million drones this year. According to the Independent, "Ukrainian forces are expected to create special staff positions for drone operations, special units, and build effective training. There will also be a scaling-up of production for drone operations, and inclusion of the best ideas and top specialists in the unmanned aerial vehicles domain, [Ukrainian] officials have said."

And all of this is just the beginning when it comes to war, AI-style, which is going to include the creation of "killer robots" of every imaginable sort. But as the U.S., Russia, China, and other countries rush to introduce AI-driven battlefields, let TomDispatch regular Michael Klare, who has long been focused on what it means for the globe's major powers to militarize AI, take you into a future in which (god save us all!) robots could be running (yes, actually running!) the show. Tom

"Emergent" AI Behavior and Human Destiny
What Happens When Killer Robots Start Communicating with Each Other?

By Michael Klare
Yes, it's already time to be worried -- very worried. As the wars in Ukraine and Gaza have shown, the earliest drone equivalents of "killer robots" have made it onto the battlefield and proved to be devastating weapons. But at least they remain largely under human control. Imagine, for a moment, a world of war in which those aerial drones (or their ground and sea equivalents) controlled us, rather than vice versa. Then we would be on a destructively different planet in a fashion that might seem almost unimaginable today. Sadly, though, it's anything but unimaginable, given the work on artificial intelligence (AI) and robot weaponry that the major powers have already begun. Now, let me take you into that arcane world and try to envision what the future of warfare might mean for the rest of us.

By combining AI with advanced robotics, the U.S. military and those of other advanced powers are already hard at work creating an array of self-guided "autonomous" weapons systems -- combat drones that can employ lethal force independently of any human officers meant to command them. Called "killer robots" by critics, such devices include a variety of uncrewed or "unmanned" planes, tanks, ships, and submarines capable of autonomous operation. The U.S. Air Force, for example, is developing its "Collaborative Combat Aircraft," an unmanned aerial vehicle (UAV) intended to join piloted aircraft on high-risk missions. The Army is similarly testing a variety of autonomous unmanned ground vehicles (UGVs), while the Navy is experimenting with both unmanned surface vessels (USVs) and unmanned undersea vessels (UUVs, or drone submarines). China, Russia, Australia, and Israel are also working on such weaponry for the battlefields of the future.

The imminent appearance of those killing machines has generated concern and controversy globally, with some countries already seeking a total ban on them and others, including the U.S., planning to authorize their use only under human-supervised conditions. In Geneva, a group of states has even sought to prohibit the deployment and use of fully autonomous weapons, citing a 1980 U.N. treaty, the Convention on Certain Conventional Weapons, that aims to curb or outlaw non-nuclear munitions believed to be especially harmful to civilians. Meanwhile, in New York, the U.N. General Assembly held its first discussion of autonomous weapons last October and is planning a full-scale review of the topic this coming fall.

For the most part, debate over the battlefield use of such devices hinges on whether they will be empowered to take human lives without human oversight. Many religious and civil society organizations argue that such systems will be unable to distinguish between combatants and civilians on the battlefield and so should be banned in order to protect noncombatants from death or injury, as is required by international humanitarian law. American officials, on the other hand, contend that such weaponry can be designed to operate perfectly well within legal constraints.

However, neither side in this debate has addressed the most potentially unnerving aspect of using them in battle: the likelihood that, sooner or later, they'll be able to communicate with each other without human intervention and, being "intelligent," will be able to come up with their own unscripted tactics for defeating an enemy -- or something else entirely. Such computer-driven groupthink, labeled "emergent behavior" by computer scientists, opens up a host of dangers not yet being considered by officials in Geneva, Washington, or at the U.N.

For the time being, most of the autonomous weaponry being developed by the American military will be unmanned (or, as they sometimes say, "uninhabited") versions of existing combat platforms and will be designed to operate in conjunction with their crewed counterparts. While they might also have some capacity to communicate with each other, they'll be part of a "networked" combat team whose mission will be dictated and overseen by human commanders. The Collaborative Combat Aircraft, for instance, is expected to serve as a "loyal wingman" for the manned F-35 stealth fighter, while conducting high-risk missions in contested airspace. The Army and Navy have largely followed a similar trajectory in their approach to the development of autonomous weaponry.

The Appeal of Robot "Swarms"

However, some American strategists have championed an alternative approach to the use of autonomous weapons on future battlefields in which they would serve not as junior colleagues in human-led teams but as coequal members of self-directed robot swarms. Such formations would consist of scores or even hundreds of AI-enabled UAVs, USVs, or UGVs -- all able to communicate with one another, share data on changing battlefield conditions, and collectively alter their combat tactics as the group-mind deems necessary.

"Emerging robotic technologies will allow tomorrow's forces to fight as a swarm, with greater mass, coordination, intelligence and speed than today's networked forces," predicted Paul Scharre, an early enthusiast of the concept, in a 2014 report for the Center for a New American Security (CNAS). "Networked, cooperative autonomous systems," he wrote then, "will be capable of true swarming -- cooperative behavior among distributed elements that gives rise to a coherent, intelligent whole."


Tom Engelhardt, who runs the Nation Institute's Tomdispatch.com ("a regular antidote to the mainstream media"), is the co-founder of the American Empire Project and, most recently, the author of Mission Unaccomplished: Tomdispatch...
 
