Forum: Robots 'R' us?

The machines are getting smarter every day. Human beings better be thinking, says CHARLES RUBIN, about science fiction becoming reality


The recent unveiling of the "Crusher" robotic combat truck by the Carnegie Mellon University Robotics Institute makes it clear that Pittsburgh is a leader in this increasingly important area of technology. After decades of slow change and unfulfilled promise, it may be that robots and artificial intelligence are on the verge of transforming what people do and how they do it.

Yet popular culture has long reflected that the rise of robots is not a prospect everyone greets with enthusiasm. If people's fears are to be addressed honestly, the hopes behind the serious work of invention going on here will need to be matched by equally serious thought about the consequences these cutting-edge efforts will have for the human future.

At first glance, the benefits of ever more sophisticated robots are obvious. For example, unlike the Iranians in the Iran-Iraq war, we are not interested in using children as mine-detectors. If a job like that can be done by a machine, why put any human life at risk? A main thrust behind developing the present generation of robots is to reduce human exposure to jobs that are dangerous, dirty, difficult or demeaning.

Yet using robots this way is on a collision course with (controversial) claims that in the not-too-distant future, robotic or artificial intelligence will be indistinguishable from, or indeed increasingly superior to, human intelligence.

In other words, serious people think that the robot of science fiction is not too far away from becoming a reality. Carnegie Mellon's own Hans Moravec, a pioneer in robotics, has been in the forefront of those who foresee a blurring of the lines between man and machine. We will, he expects, use the abilities of our machine creations to enhance and redesign ourselves, even to the point of "downloading" our minds into more durable and capable robotic bodies.

But popular culture suggests there is a problem with the rise of robots and hybrid forms of humanity. While sometimes they are portrayed as willing servants (think Stanley Kubrick and Steven Spielberg's underappreciated "A.I. Artificial Intelligence"), more often than not they are resentful and dangerous (HAL 9000 in "2001: A Space Odyssey" or the replicants in "Blade Runner"). They know they are doing our dirty work, they don't like it, and they want the respect they deserve for being better than we are.

In our world of dumb robots and dangerous jobs, concerns about out-of-control artificial intelligence are easy to dismiss as too speculative. But had you presented today's technologies to the "greatest generation" back when they were young, many would have sounded just as implausible -- to say nothing of how they would have sounded to generations now past. Indeed, it is a truism among those who think about the implications of the accelerating rate of technological change that if speculation does not sound like science fiction, it is probably missing the boat.

So what might be the outcome of the tension between using robots to solve the servant problem and efforts to create ever more capable artificial intelligence? Perhaps, as in "Blade Runner" and "A.I.," we will create legal and moral distinctions that discriminate between the human and the nonhuman, so that we can continue to treat machines in ways that we would not treat humans, even as we rely on them more and more.


At another extreme, imagine an extended sphere of moral concern, akin to that urged by animal rights advocates, which would protect robots on a par with humans, much as Lieutenant Commander Data was treated by the crew of the starship Enterprise.

Perhaps, in techno-utopian fashion, technology will provide sufficient plenty with sufficient safety to make it ever less necessary for any kind of being to exploit another. Or perhaps the problem will be solved when we (that is, human-embodied intelligence, merely an evolutionary phase) are replaced by something so far superior to ourselves that we can't really imagine it.

Our tools to predict futures like these are poor. Yet it would be wrong to conclude from such uncertainty that we should put technological change in the driver's seat, sit back, and enjoy the ride. For the question of what will happen is not nearly so important as the question of what should happen. Along with the benefits, there will be risks and costs of living in any world of useful robots and powerful artificial intelligences.

Yet each of the scenarios above suggests a very different world. If one sounds better than another, that is a start towards clarifying what we would like our future to look like.

For example, in Japan, part of the push behind developing robots is an anticipated need for caregivers for the elderly. A neat solution, one might think, but one that raises the question: Why is one generation unwilling or unable to care for its progenitors? How attractive is a future based on that premise?

People will have different answers to fundamental questions like this one, answers that define our visions of progress. Perhaps the greatest virtue of liberal democracies is that they broadly empower citizens to shape the future in accordance with the visions of a good life, the values and norms that they believe in. Already, novel technological possibilities, like those inherent in genetic engineering, are creating new political coalitions of concern that cross today's conservative/liberal boundaries.

The question of whether a given technological development actually is an improvement of the human condition will have increasing political salience.


Inevitably, there will be concern that debates about progress will mean that "politics" stands in the way of some research agenda, just as we are already warned about the consequences of even the limited ban on federal funding for stem cell research.

In "Gulliver's Travels," Jonathan Swift presents a different view of the matter as he satirizes a society infatuated with science and technology. Its scientists, with their minds only on the intricacies of their own research projects, are constantly in danger of hurting themselves, and must be accompanied everywhere by "Flappers," who periodically strike them with inflated bladders to call their attention back to some bit of business or danger in the real world.

How can those who are uneasy about where technology is taking us best take on this helpful role? A certain kind of skepticism is in order.

When faced with amazing promises for the future, we should surely remember that not everything that is imaginable is possible, but even more that not everything that is possible is desirable. How we will deal with the very latest in technology still rightly depends on the very old question of what makes a good human life.

(Illustration: Stacy Innerst, Post-Gazette)

Charles T. Rubin is an associate professor of political science at Duquesne University (ctrubin@worldnet.att.net). He is at work on a book titled "Why Be Human? Defending Progress Against Its Friends."



