
Robowar

Written by Gregory Benford
Illustrated by Laura Givens

Most likely, robots will make our battlefields less bloody

. . .for some.

People seem to especially like to order others around. That may be the greatest social use of robots.

—Isaac Asimov, in conversation

In 1994 Marc Thorpe, a former model maker at George Lucas' Industrial Light and Magic, began public, live robot fights in San Francisco. These Robot Wars began as displays of engineering craft and imagination, allowing the area's geek community to show off their inventive, destructive talents. Most of the gladiators looked like moving junk piles, springing clever knives, hammers, spikes, electrical arcs and other instruments of mayhem upon their opponents.

Thorpe quickly drew a large audience. Hundreds of technonerds proved quite willing to spend thousands of dollars and hours of labor to make combatants that they hoped might survive for a few minutes in the ring.

A few of the battlers looked benign, but even that was a disguise. A 14-year-old girl brought a ladybug-looking robot whose pretty red shell lifted to deploy a hook, which then skewered rivals. Their very names aimed at intimidation—Toecrusher, Mauler, The Hammer, Stiletto.

Their audience grew steadily until a legal dispute closed the games, but not before a promoter saw their potential. Similar contests went through a brief pay-per-view series, then ended up in 2000 with a slot on TV's Comedy Central. There, "battlebots" showed their dual nature—focus for malicious mayhem, plus inadvertent comedy. They offer ritual violence directed by their creators using remote control, so they are only the simplest sort of robot, incapable of independent thinking and action.

These are techno versions of aggression, a weird blend of the "sports" of cock fighting and tractor pulls—and the direct descendants of demolition derbies. The audience experiences both jolts of slashing, banging violence and the hilarity of absurd scrap heap machines doing each other in. Robot toys have been around for decades, but they were weak, simple and did no real damage. Robowar fighters are genuinely dangerous.

We have become used to connecting with events like these through adroit identification with technology. Since the 1950s children could buy robot toys, which steadily got better. Sojourner's 100-meter voyage on Mars in the 1990s, which took an agonizing month to accomplish, enraptured millions. The adventures of later plucky Mars rovers (Spirit, Opportunity) took them on many-kilometer journeys lasting years. With human telepresence guidance, even time-delayed at Mars, this capability is developing very quickly as software takes over the routine navigation and piloting between commanded destinations.

The Gulf War of 1991 and then Gulf War II of 2003 onward both provided robo-conflict without Allied blood. In Gulf War I the machines died (at least on the allied side) far more scenically (smart bombs, etc.) than the few allied casualties; Iraqi losses got much less play. In the Second Gulf War several hundred robots dug up roadside bombs, and a robot attack plane, the Predator, quietly prowled the skies day and night, inflicting casualties usually without warning. In 2005 the first robot bomb disposer appeared in Baghdad; it could shoot back with good aim at 1,000 rounds a minute. A soldier nearby controlled it with a wireless laptop, making it the first offensive robot used in combat. Some got destroyed. Newscasters then used "kill" to describe the destruction of both people and machines. Then in 2005 came robot infantry, able to assault and fire while moving forward on tank-like treads. Plainly, more is to come.

Commanding these at a distance has a video game quality, and indeed, the troops using them have a long background in such skills. Fresh into the millennium, this gives us TV's BattleBots with its three-minute slam-bang bouts. Here is violence both real and absurd, calling up memories of the delicious humor of the old Road Runner cartoons. Robots assault each other with clippers, buzz saws, spikes, crushing jaws, and other ingenious devices, often cobbled together from domestic machines like lawn mowers and power tools. In their BattleBox they can use any strategy, inflicting mortal wounds, while the human audience sits safely beyond the BattleBox's shatterproof glass walls, watching flying debris clatter against the barrier. The Box has its own tricks, with sledges, rods, saws and other bedevilments which pop up randomly to wound one or both of the combatants.

All this gets played for ironic laughter, with over-the-top commentary from the sidelines. The rules of human combat get satirized into weight categories: lightweights below 87 pounds, up to super-heavyweight between 316 and 488 pounds. Their creators range from aging engineers to 12-year-old Junior High amateurs. A certain cachet attaches to one who has made his or her robot from the least expensive parts, and especially from scrap.

More such shows are coming, like the SciFi Channel's "Robodeath." Though the robots now contending have budgets of only a few thousand dollars, inevitably under the pressure of ratings the sums will rise. Roller robots built from blenders will give way to walking, stalking specialists with specifically designed pincers or scythes or guns.

History's arms race between human armies will be rerun in madcap, technofreak fashion on full fast forward. The warriors will get heavier, their armor thicker. Instead of being run by their creators from the sidelines, there will come competitions for robots that can direct themselves, concoct strategies in real time, assess opponents' weaknesses and find new uses for their own armaments.

All in good fun, of course. But as the human battlefield begins to accept more machines with greater capabilities, the comic mayhem of BattleBots will blend in the evolutionary chain with the coming of combat robots fighting among humans—and finally, inevitably, against them.

In the 1984 film The Terminator, a woman, confronted with the nonstop violence between a robot killing machine and a man sent to save her, asks, "It will never be over, will it?" Once machines can fight on equal terms with humans, what social force could stop their use? Worse, if directed by artificial intelligences, would fighter robots not carry out the competition between these two intelligent "species" inhabiting the Earth?

The relentless energy of the Terminator class of robot (Arnold Schwarzenegger) confirms this woman's wary prediction as it pursues the two humans with single-minded ferocity, until crushed by a foundry press. That advanced robotic intelligence could have the fanatical concentration of humans, with immense strength and endurance added, makes their use as soldiers seem inevitable.

Robot Armies?

Most robot research funding comes from the US Department of Defense. Obviously armies would rather lose a machine than a man. Robots don't get hungry, feel fear, forget orders, or care if the robot next to them gets killed. Even better, for the accountants, they have no downstream medical or retirement plans. In 2005 the Pentagon owed its soldiers, sailors and airmen $653 billion in future retirement benefits, which it had no clear plan to pay. Indeed, each fighting man costs $4 million over his median lifetime. Robot fighters will certainly cost less than a tenth of that. They can even be retrofitted later for domestic jobs and sold off.

The Bosnian conflict of the late 1990s was the first campaign fought without a single casualty on one side, because the U.S. used only aircraft. None were robots, but that lossless victory whetted appetites and has probably set the mold. In 2000 Congress told the armed services to develop robotic ground vehicles and deep-strike aircraft within a decade, with the goal of making about a third of all such machines independent. The larger goal is combat without casualties.

And some things machines can always do better than people anyway. Some innovations are about to be deployed in the field.

Crawlers

The Micro Unattended Mobility System (MUMS) device, currently under development, is a small, autonomous vehicle no larger than 3 inches across and 12 inches long. It will be robust enough to travel on its own and survive the high accelerations and decelerations associated with ground penetration, suffering peak impacts of 1,500 G. (In the long run, it might be sent forth from a grenade launcher.)

This crawler uses two side-by-side wheels that drag behind them an active tail, which can double as an antenna. Its central body houses electronics and a suite of navigation and surveillance sensors, including a modular GPS antenna, communications antenna, seismic sensor, microphone, electromagnetic detectors, and perhaps chemical sniffers more sensitive than a human nose.

The MUMS rover's embedded intelligence system will be controlled by iRobotics' own Behavior Control software, featuring, as a brochure has it, "redundant sensing and flexible system architecture." Overlapping and redundant sensing makes systems robust in the face of sensor noise, failure, or unexpected conditions, such as loss of primary communication or sensors. Flexible system architecture adds supervisory layers that watch the performance of the lower layers and notice problems. At present, self-moving robots often repeatedly run into the same obstacle or get caught in a cyclical path. The MUMS higher levels introduce a random action or series of actions, a simple way to add an element of "creativity" that often allows the robot to overcome or "solve" unexpected situations.
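In software terms the trick is simple enough to sketch. The Python fragment below is a minimal, hypothetical illustration of such a supervisory layer: a stuck-detector that, when the robot keeps revisiting the same spot, overrides the planner with a random escape maneuver. The class and method names are invented for the example and do not describe iRobotics' actual Behavior Control software.

import random

class SupervisoryLayer:
    """Hypothetical sketch of a supervisory layer that injects a random
    'escape' action when a lower-level planner seems caught in a cycle.
    Illustrative only, not any vendor's actual software."""

    def __init__(self, history_len=8):
        self.history_len = history_len
        self.recent_cells = []              # coarse position history

    def is_stuck(self, cell):
        """Crude cycle detector: the robot keeps revisiting the same cell or two."""
        self.recent_cells.append(cell)
        self.recent_cells = self.recent_cells[-self.history_len:]
        return (len(self.recent_cells) == self.history_len
                and len(set(self.recent_cells)) <= 2)

    def choose_action(self, cell, planned_action):
        """Pass the planner's action through unless the robot is stuck,
        in which case substitute a short random maneuver."""
        if self.is_stuck(cell):
            return random.choice(["turn_left", "turn_right", "back_up"])
        return planned_action

In practice the random maneuver would run for a few seconds before control returns to the normal behaviors, which is usually enough to jostle the robot out of the trap.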

Again, such mobile sensor systems will first be used for covert surveillance and reconnaissance, but the need to travel unnoticed into hostile environments is not unique to the military. Since MUMS robots do not require airdrop, they can also help law enforcement agencies that need to covertly position sensors to collect intelligence during standoff situations.

The next generation will feature combined wearable computers and mobile robots. For military use, the robot becomes part of a reconnaissance team, able to respond to verbal orders with local initiative and intelligence. The robot moves in advance of its human team members, keeping them in a safe position while sending back video images and gathered intelligence.

A soldier will direct and monitor the robot's progress through an intuitive wearable interface, at a distance of about a kilometer. The system will use natural voice recognition, a head-mounted display and head tracking, so the robot will know that the command is, "Go in the direction I'm looking." The soldier will use a head-mounted display with computer-generated graphic overlays. At first the robots will look like deadly toy trucks on treads, with camera snouts pointing front, side and rear, a machine gun that can be slaved to the cameras, and sensors that let them hear and smell. Weighing around 100 pounds, they will cruise at about walking speed and keep it up for four hours on lithium-ion batteries.
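To make the "go where I'm looking" idea concrete, here is a small, hypothetical sketch of how a head-tracker reading might be turned into a steering command. The function name, coordinate conventions and one-meter-per-second speed are assumptions for the example, not a description of any fielded system.

import math

def gaze_to_heading_command(head_yaw_deg, soldier_heading_deg, speed_mps=1.0):
    """Convert a head-tracker yaw (relative to the soldier's body) plus the
    soldier's compass heading into a world-frame velocity command."""
    world_heading = (soldier_heading_deg + head_yaw_deg) % 360.0
    heading_rad = math.radians(world_heading)
    # Compass convention: north is +y, east is +x.
    vx = speed_mps * math.sin(heading_rad)
    vy = speed_mps * math.cos(heading_rad)
    return {"heading_deg": world_heading, "vx": vx, "vy": vy}

# Example: soldier facing due east (90 degrees), looking 30 degrees to his left.
print(gaze_to_heading_command(-30.0, 90.0))

A fielded system would presumably layer voice confirmation and obstacle avoidance on top of such a raw command, but the geometry is no more complicated than this.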

The soldier will be able to hear what the robot does, and maneuver it with a hand-held joystick, so combat will ape home computer games. This is no accident. A generation has trained using these entertainments, which in turn have been shaped by market forces to be the easiest and most responsive to use.

Beyond that era, robo-fighters will need less supervision. They will increasingly react, see and think like people, while going places we could not.

Underwater Rovers

 


 

This class of autonomous robots seeks to equal the efficiency, acceleration, and maneuverability of fish. Biologically inspired, they use flexible, wiggling, actuated hulls able to produce the large accelerations needed for fishlike bursts of speed and sudden swerves. They mechanically approximate a fish's fluid swimming motion and navigate environments previously considered inaccessible.

The prototype, named Dart, developed by iRobotics Corp. in cooperation with MIT's Department of Ocean Engineering, is roughly three feet long. It consists of a series of linked actuators, a spring-wound exoskeleton, flexible lycra skin, and a rigid caudal fin. Modeled after a pike, its flow-foil mechanism "flaps" to create vortices that produce jets to propel it efficiently.

A microprocessor housed inside the head provides the interface between control electronics and the Dart's body. The software, designed to allow rapid development of embedded routines, lets the driver dictate all swimming, starting, and turning parameters from an off-board computer via a graphical interface.

These swimmers can covertly gather intelligence close to shore. Their fishlike locomotion will reduce power requirements, make detection more difficult, and facilitate escape. On radar and sonar they will look very much like ordinary fish, particularly once "stealth" surfaces appear that outsmart reflected acoustic and electromagnetic waves.

For commercial and research use, where negotiation of hostile environments is essential, they can navigate intricate structures. For harbor cleaning this will help find hazardous materials. In mining, swimming prospectors will prowl far larger areas than human crews in submarines can. Exploring the deep ocean will be open to tough, independent robo-swimmers which can monitor, for long stretches, the countless valleys, caverns and geothermal vents we have only begun to fathom.

Fetchers—Counter Mine Intelligence

These offer a new approach to a global plague—the mines left behind in wars. A team of low-cost, robotic mine hunters can provide rapid and complete coverage of a mine field. A swarm of robots will ultimately be capable of cooperatively clearing a field of land mines under the supervision of a single operator.

Designed for low-cost duplication, because they can make mistakes and trigger the mines, these robots are just a few years from deployment. Already they have successfully detected, retrieved, and safely deposited munitions in the real world, visiting areas replete with unfavorable obstacles, terrain slopes, and poor traction.

There are common problems that will arise whenever robot teams do a job. How can a lightly trained technician operate such a complex system? How can the robots cooperate with one another to perform the task most effectively? IS Robotics' Fetch II robots perform their tasks autonomously but with the supervision of a single operator. Learning, "behavior-based" software keeps track of what the robots are doing and anticipates problems for the human. Without explicit instruction, this software mediates robot-robot interference within the swarm and supports cooperation among them.
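A minimal sketch can show what "mediating interference" might mean at the level of code. In the hypothetical coordinator below, the team divides the minefield into a grid of cells and each robot claims the nearest unswept cell, so no two robots work the same ground or collide over a target. This is an invented illustration of the general idea, not IS Robotics' actual Fetch II software.

import itertools

class SweepCoordinator:
    """Hypothetical shared blackboard through which a team of mine-hunting
    robots divides a field into cells and avoids duplicating one another's
    work. Illustrative only."""

    def __init__(self, width, height):
        self.unswept = set(itertools.product(range(width), range(height)))
        self.claims = {}                       # robot_id -> cell being swept

    def claim_next_cell(self, robot_id, position):
        """Hand the robot the nearest unswept cell not held by any other robot."""
        available = self.unswept - set(self.claims.values())
        if not available:
            return None                        # field covered or fully claimed
        cell = min(available,
                   key=lambda c: abs(c[0] - position[0]) + abs(c[1] - position[1]))
        self.claims[robot_id] = cell
        return cell

    def report_swept(self, robot_id):
        """Mark the robot's current cell as cleared so it can ask for another."""
        cell = self.claims.pop(robot_id, None)
        if cell is not None:
            self.unswept.discard(cell)

With bookkeeping like this handled automatically, the single human operator only has to watch for the cases the software flags, rather than steer each robot individually.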

Terminator...?

The above military 'bots snoop more than they fight. It does not take much imagination to see that modern tanks, outfitted with omnidirectional sensors in many frequencies, assisted by smart software and fast chips, could make their way through a future battleground without humans aboard. Current Pentagon plans are for combat units to have robot complements making up less than ten percent of the "troop" strength.

How good can they become? Much science fiction features fighting machines of the future outwitting human antagonists, and even cyborged people with formidable abilities of their own.

Perhaps this could happen, as munitions become smarter and warfare more mechanized. The Terminator robot of the science fiction films was a marvel of untiring ability, though at present utterly unrealistic. Just imagine what power source could run a Schwarzenegger-sized machine that can fight for even the duration of a two-hour film; lithium-ion batteries won't do it.

Experts like Robert Finkelstein, president of Robotic Technology, have told the Pentagon that a true robot that moves, thinks and fights like a soldier will not appear on battlefields for another 30 years. Today's best attempt is a boxy prototype on treads, with a Cyclops eye. Its right arm is a gun and its left is an all-purpose tool that can open doors, lift blocks and cut holes. Told to fire, it locates, identifies and then quickly shoots a Pepsi can ten meters away.

Of course, it will be quite a while before a shooter robot gets an order to find, identify and kill a human enemy all on its own. For now, we get good performance by specializing machines. We have scouts that can prowl buildings, caves and tunnels. Others drone overhead, staying patiently aloft for tens of hours. Big haulers carry tons of weapons and gear, while others scout and report back. Others will endlessly follow their rounds on security watch, often in the dark since they can see by infrared. More savvy types will sneak behind enemy lines, eavesdrop, even conduct psychological war; making the enemy look over his shoulder at every odd sound or movement wears him down.

But the need for an all-purpose machine will persist. Robot intelligence is increasing, as chips shrink and software gets smarter. Perception is the fulcrum of improvement. With a bit more progress, quarter-ton trucks will have robot drivers in combat zones. With digital road maps and Global Positioning satellites, robot convoys are only a decade away.

Today's robots work at the level of perception of not terribly bright mammals. In a generation, robots will work at the level of primates. At that level, it will be possible to let machines fight on their own. Monkey see, monkey shoot.

 


 

Still, many conflicts are messy matters of mud and blood. Machines cannot easily fight in trenches, snow, jungles or in house-to-house, hand-to-hand guerrilla conflicts. Robots will not fare well there, amid grit, smoke and rust.

So pressure will make them better—more rugged, savvy, perceptive. Doctrine always lags technology—the longbow, cannon, tank, plane and nuclear weapon all outran the strategies first used to employ them. So it will be with machines. Asimov's Three Laws will not apply to a combat robot, so they will need no tricky moral calculus. But they will need to tell friend from enemy, a surrendering foe from a fighting one, and enemies lying doggo. Our doctrines will change, too: will few casualties on one side make war with technologically inferior societies more tempting?

And what of the robots? If the machines are smart enough to outwit humans amid difficult terrain, they might very well have to be smart enough to question why they are doing it—a point seldom noted by film makers, who assume all advanced machines will still be absolutely obedient and have no desires other than perhaps malformed human motivations.

A common movie idea, which applies so broadly it includes many tales of alien contact, is The Menace Theme:

An intelligence we do not understand goes crazy (by our definitions, but maybe not its own). So it does evil things outside our moral code—mostly destruction of people and cities.

Robots are just one category of menace. Why do we like this idea so much that it has spawned hundreds of films?

Perhaps it's because we derive some unacknowledged gratification from watching the destruction. Many love Godzilla, even though it has a grudge against Tokyo. We watch BattleBots not out of love for the robots, but for the smashups. We can wash our hands of any guilt feelings because they are just machines, after all; so are cities. Though they might get smart, they won't be human.

This extends to warfare. Robots can take the risks for us only in stylized, well-defined physical situations. The advanced nations will probably seize upon this in the future, trying to make their conflicts resemble the Gulf Wars rather than Vietnam. Their antagonists will do the opposite, trying to pin down vulnerable infantry. The success of the NATO air war against the Serbs in 1999 shows that even messy conflicts can be won with high tech, especially if one attacks the obvious, fixed economic infrastructure rather than only troops in the field.

Robots will make these contrasts ever greater. We will always see men with guns and bombs seeking power, but as the technological gap between societies widens further, such groups will have to resort to terrorism (which itself gets ever more complex and technological) to make their bloody points. Against them will stand robots of ever-greater sophistication, patience, savvy and strength. Under enemy fire they will haul ammo, reconnoiter, search buildings, find the wounded.

They will have many shapes—crawlers like caterpillars or cockroaches, heavy assault craft like tanks or tractors, fliers looking like hummingbirds, or even "smart dust" swarms of robo-insects. Some will resemble animals and insects, to escape notice. Others will intentionally look bizarre, to frighten or intimidate. Few will be able to pass as human, even at a distance and at night, for quite a while.

Their inner minds will be odd, stylized, but steadily improving. We may come to see these metallic sentinels as our unique heroes, the modern centurions. The other side will see them as pure, walking terrors, killing whatever romance might still be left in war.

Or perhaps not. For we do have some historical precedent to instruct us. Medieval warfare in the centuries-long age of knights developed conventions quite unlike those we know in too many modern wars. Knights required a large support team, a hundred or more who carried out the heavy-lifting jobs in the logistics of horse and armor. These were in the army, but were kept outside the bounds of battle, and even if overrun were not killed—though their gear might get stolen. Knights themselves were fair game, but here, too, a thrifty ethics ruled. They were most often not killed but instead cornered or injured, then captured and ransomed for large sums; then they could fight again, for capture was no disgrace.

The prevailing rule was simple: fight only the fighters.

No one attacked the camps supporting the knights, or executed prisoners, since they could be ransomed or sold as slaves. To kill non-combatants was an atrocity, often punished. So until around 1650, European war was a conflict of big metal war machines that happened to have humans inside.

This suggests a strategy: Remove the humans, use robots in combat wherever possible, and knowingly drive the war culture toward a different moral standard. Use international standards, such as the rather outmoded Geneva Conventions, to create a new view. We could see the eventual evolution of robot warfare back to such a code. Of course, medieval times had plagues and starvation that ran alongside wars, but these messy side effects exist now, too; the four horsemen of the apocalypse often ride together, led by War. A semi-medieval code would be in some ways superior to our current style of total war. The second half of the 20th century saw common terror, atrocity and wholesale destruction, even in "advanced" nations like those in the Bosnian-Serbian conflict, which lasted a decade and slaughtered half a million.

A robot war culture does not have to be worse than our moral standards today. This may seem a radical conclusion, given the pervasive imagery of The Terminator, Robocop, etc. But it is important to believe that our future can be better than the worst case scenario. Indeed, it is essential.

* * *

Gregory Benford is the author of many novels and short stories, and has edited a number of anthologies.
