December 2010



By Thomas Vincent

Interested readers are invited to check out Tom's Political Blog "Certain Doubt"


stupid (adj.):

- Slow to learn or understand; obtuse.

- Tending to make poor decisions or careless mistakes.

- Marked by a lack of intelligence or care; foolish or careless.

- Dazed, stunned, or stupefied.

Will machines surpass people? Like a pesky weed, the question keeps popping up again and again. Some are sanguine about the possibilities:

"It seems plausible that with technology we can, in the fairly near future," says sci-fi legend Vernor Vinge, "create (or become) creatures who surpass humans in every intellectual and creative dimension."

Others are less so:

The science fiction author Ken MacLeod described the idea of the singularity (the point where machines surpass us) as “the Rapture of the nerds.” Kevin Kelly, an editor at Wired magazine, notes, “People who predict a very utopian future always predict that it is going to happen before they die.”

I believe if – or even when – machines do overtake man, it will not be because of advances in artificial intelligence but instead because of retreats on the human front. To put it bluntly, humans are not losing the race against machines because machines are speeding up; they are losing because man is slowing down.

By any measure, mankind – at least the American version – ain’t getting any smarter. For example, you can't pick up a paper today without reading about declining test scores and failing schools. One need only take a ride on any inner city bus to wonder if our intellectual gene pool isn't leaking. And no wonder. Survivor, Dancing with the Stars, Monday Night Football, Fox News, and Sarah Palin. At the rate we’re going, give us another decade and the average IQ of all Americans will be about as robust as soggy toast.

In many ways, I feel the tipping point at which machines pass us by has already been reached. The best evidence I can give of this is our use of machines in war. In a recent front-page article entitled “War Machines: Recruiting Robots for Combat,” John Markoff presents a telling example of how we have already lost the battle:

In a mock city here used by Army Rangers for urban combat training, a 15-inch robot with a video camera scuttles around a bomb factory on a spying mission. Overhead an almost silent drone aircraft with a four-foot wingspan transmits images of the buildings below. Onto the scene rolls a sinister-looking vehicle on tank treads, about the size of a riding lawn mower, equipped with a machine gun and a grenade launcher.

Three backpack-clad technicians, standing out of the line of fire, operate the three robots with wireless video-game-style controllers. One swivels the video camera on the armed robot until it spots a sniper on a rooftop. The machine gun pirouettes, points and fires in two rapid bursts. Had the bullets been real, the target would have been destroyed.

In his piece, Markoff dutifully trots out arguments for and against the use of robotics in warfare. The arguments against largely fall under the headings of morality, ethics, legality, and foreign policy.

“Wars will be started very easily and with minimal costs” as automation increases, predicted a scholar at the Yale Interdisciplinary Center for Bioethics and chairman of its technology and ethics study group… “The short-term benefits being derived from roboticizing aspects of warfare are likely to be far outweighed by the long-term consequences,” said Mr. Wallach, the Yale scholar, suggesting that wars would occur more readily and that a technological arms race would develop.

On the pro side of man-machine interaction in war, Markoff quotes an array of “military strategists, officers, and weapons designers” whose defense of robots in war focuses on the “practical” benefits that the machines offer – they are never distracted, they never panic, they never tire, they are more precise in targeting – in other words, they are more effective at killing than humans. As an added bonus, manufacturers never miss an opportunity to note how robots take soldiers out of the line of fire. Lastly, the claim is made that civilian casualties can be reduced because of the aforementioned precision and the fact that because they are machines “they can fire second.”

The claim about reduced civilian casualties seems somewhat dubious in light of the high rate of “collateral damage” in recent Predator drone strikes in Pakistan.

The point here is that Markoff and those he quotes who have been examining the issue of robots in war all raise straw-man arguments. Drones and robots may keep soldiers out of harm's way and even cut down on civilian casualties. So what? So does not fighting wars in the first place. Robotic sentries like MAARS may be able to “follow the military rules of engagement” by “using voice warnings and tear gas before firing guns.” Super. Wouldn't it be better to simply close our bases and bring our occupying forces home? I fail to see how even the best robotic sentry in the world can win us friends and influence in foreign lands when what the people there really need are good jobs building roads and bridges and hospitals.

Sadly, even some of the arguments against using robots in war border on the absurd. For example, how can one be guilty of “war crimes” for employing a weapons system when there are no existing laws, international or otherwise, governing the use of that weapon? As for the argument that robotic soldiers will make war more likely: humans had been killing each other for millennia before robots came along. Not being able to keep soldiers “out of harm’s way” never stopped presidents from finding ways of entering foreign wars before now. Even if the U.N. decided to outlaw drones and bots tomorrow, man would still be just as likely to engage in war.

The most ridiculous discussion about robots in war, however, has to be the imbroglio over whether to allow robots to make autonomous life-and-death battlefield decisions or whether to require that humans remain the ones pulling the trigger. Supporters of robotic warfare may try to reassure us that the United States will always hold to the convention that humans must remain in control. All it will take for that convention to be swept away, however, is for someone we are fighting against to decide to allow autonomous robots to do their fighting for them. The superior tactical advantage enjoyed by those employing automated killing machines means that all armies would have to follow suit or risk losing the next war. Given their obvious disregard for life and liberty, it seems clear that should they get their hands on robotic military technology, al Qaeda would not hesitate to send autonomous robots against us.

With regard to the debate over human soldiers versus machines, I feel it is too late already. The genie is out of the bottle. The more prevalent robotics becomes, the sooner the day will come when we face killing robots deployed by an enemy without even the few meager scruples we still possess.

The final absurdity – the ultimate in surrealism – is a battlefield where the parties on both sides of a conflict employ fully autonomous killing machines. After all, since robots are so much better and more efficient at killing than humans, why involve humans at all? Of course, if no humans are involved in the conflict, why fight at all? Therein lies the real question in the debate. It is not important whether humans control the machines or whether the machines should be let loose like savage hunting dogs. What is truly important is whether mankind can ever grow and evolve his way out of his need to rain down death on his fellow man. I’m sad to say I see little hope on that score. From Vietnam to Iraq and Afghanistan and on to Yemen, Iran, and North Korea, the people who make the decisions to deploy our armies seem to have learned nothing. If anything, they seem less moral, less sensible, and less intelligent in their decisions to go to war than ever before. And it is this decision making that is the critical part of the equation.

In the end I am not worried about robots getting smarter. I’m more concerned that we humans are getting stupider.