The Next Page / Personal robots: man's new best friend?

Personal robots will be a ubiquitous part of our lives one day, predicts Carnegie Mellon University robotics professor Illah Nourbakhsh. In his new book, "Robot Futures," he explores the pitfalls and possibilities.


HOW THE BOOK CAME TO BE

In 1977, I walked into the first run of "Star Wars" with my parents, not knowing what to expect. Two hours later, I was transformed, branded with images of C-3PO and R2-D2. This is how my love affair with robots started, and it is also how an entire generation of robotics researchers, about my age, set its eyes on robots for life.

I have participated in the past two decades of robotics research, during which literally thousands of research groups across the planet have worked to close the gap between the promise of science fiction's robots and the reality of commercial robotics.

The ambition of robotics is no longer limited merely to copying us -- making walking, talking androids that are indistinguishable from humans. Robotics has grown out of that mold. Modern robotics is about how anything can perceive the world, make sense of its surroundings, then act to push back on the world and make change.

There is one special quality of modern robotics that is relevant to how our world is changing: Robots are a new form of glue between our physical world and the digital universe we have created.


Robots have physical sensors and motors -- they can operate in the real world just as well as any software program operates on the Internet. They will be embedded in our physical spaces -- our sidewalks, bedrooms and parks -- and they will have minds of their own thanks to artificial intelligence.

Yet robots also are fully connected to the digital world -- they are far better at navigating, sharing information and participating in the online world than humans ever can be. We have invented a new species, part material and part digital, that eventually will have superhuman qualities in both worlds at once.

The questions that remain are: How will we share our world with these new creatures, and how will this new ecology change who we are and how we act?

As a roboticist, I ponder these questions often. And as the pace of robotics development accelerates, I'm convinced these are questions that our entire society should collectively answer. So, two years ago, I began to apply my understanding of robotics technology to predict how we someday will experience robots in the wild, a writing exercise that yielded my new book, "Robot Futures."

The book doesn't dwell on technologies, but on the possible human side effects. Could the creation of "do-it-yourself" robots and the proliferation of cheap, but intelligent, toys result in a zoo of obnoxious, exotic new creatures? Will robots, with limitations that will be easy for humans to take advantage of, bring out the worst in people, resulting in bullying behaviors and other abuse?

By imagining these possible futures, I mean neither to diminish the promise of robotics nor to minimize the potential risks. My hope is that doing so will help us envision, discuss and prepare for change, so that we can influence how the robot future unfolds.

• • •



Excerpts from "Robot Futures," to be published March 22 by MIT Press:

Hearing, Senate Subcommittee on Waste Disposal & Public Safety, Washington, D.C., April 2040.

[partial transcript]

MR. LAMB. And we hit on a good design. I mean, it makes for a compelling interactive experience. Clearly, the consumers love it.

MR. HOBSON. Interactive experience!? OK. Please look at this picture, Mr. Lamb. Do you notice how many people are wearing very dark sunglasses in Central Park here? These are from the surveillance cameras, taken last week. Here, please note Exhibit Five, and put this on display, clerk.

MR. LAMB. I see the sunglasses, sir. Yes.

MR. HOBSON. OK. How about the people without sunglasses on? What are they doing? Describe it for the transcript.

MR. LAMB. They seem to be looking down at the sidewalk.

MR. HOBSON. No. They are stooped over, looking at their feet as they walk, Mr. Lamb. Here. Here is a time sequence from six years ago. One year before you invented botigami. Same park. Same time of year. Cloudy day. Notice the difference? Everyone is walking upright. Talking to each other. Laughing. Relaxed. Do you, Mr. Lamb, notice the difference when you yourself are out, walking in the park?

MR. LAMB. Senator, I am running a very large corporation. Unfortunately, I do not have time for a stroll in the park.

MR. HOBSON. Well, let me tell you what the sidewalks are like, since you do not go there. Everyone is afraid of being spotted by one of your robots, and making eye contact with it. Because off they fly and start circling round and round ... Nobody looks around any more, Mr. Lamb. In five years, you've singlehandedly changed how people stroll through the park, with a $30 child's toy. My time has expired, and I will recognize Mr. Remus.

• • •



By 2050, robotic machines with significantly human physical capability should be available. Robots will be able to go everywhere people go, and they will be able to manipulate objects with at least the dexterity of the human hand. We will be approaching the point at which, from a mechanical point of view, robots can extend a human's physical presence with high fidelity. Yet that physical extension will be of limited benefit if a human's undivided attention is required to operate that robot in real time. But such direct control will be unnecessary. The perceptual and cognitive intelligence available to any tangible device will have advanced just as much. Robots will be able to take care of the motor-control details of common activities, such as walking, running and manipulating the objects in a home. Cognitively, dialogue systems will be able to track and label conversations between people and engage in directed conversations using human language. Visual perception will have advanced to the point that identification of objects throughout the natural and designed worlds is a solved problem.

Human interface systems, thanks to the advances of communications technology, will be in spaces hard to imagine today. It is enough to note that immersing oneself in audio, visual and perhaps tactile realities thousands of miles away will be de facto aspects of how we plug into each other's lives for social and business visits. If robots are beginning to bridge the real world and the Internet world, then the physically realized, home-capable robot will begin to bridge our own physicality of location. Being there and not being there will become a blurred distinction; just where we are at any given point in time will have less meaning than ever before in our cultural experience.

Unlike Skype, FaceTime, and all other communication portals that place an image and audio source on the table or in a pocket, this time the space traveler will jump off the table, out of the pocket, and push back on the distant world with his own volition, sharing physical space with locals more literally. Three key ingredients combine to offer this blurred physicality in which people may interact with the world: the immersive human interface system, the spatial extension provided by physical robots and the seamless adjustable autonomy offered on those robots. Together, these three suggest a future in which people live multiple lives simultaneously, thanks to the ability to deputize robots that act out a portion of each life for them. The robot takes a walk with my friend and reports back when it is time for me to join in. The robot agent finishes a conversation for me because the key bits are done and now the topic is simply scheduling a lunch date next week. The robot goes running with my wife when I am away, with me along for the ride so I can still engage in dialogue and companionship with her even though half my attention is on a conference presentation.

In case this seems too far-fetched, note that in October 2011, Apple released the iPhone 4S with Siri, a personal digital agent already on its way to scheduling dates and arranging dinners, all in plain, spoken English.

As artificial intelligence advances, such agents will become only more sophisticated, able to better mimic my speech patterns and my interests, and even able to decide when and how I should be brought into the conversation personally. Taking this forecast to the limit casts a shadow on human-human relationships. As my robots and agents conform to me more closely and need my help less frequently, I become CEO of a company whose employees -- my agents -- need me less and less. Soon I am strategist-in-chief and nothing more. Of course, I can choose when I want to be present, and I can pick and fall into every experience that I deem most valuable to personally witness. In the limit, life becomes fragments of high-value experiences, with little time for the redundant, boring or undesired.

I call this condition "attention dilution disorder" because it technologically instantiates nearly the same psychological handicap that is diagnosed today as attention deficit disorder. The one difference is that, in the eyes of many, the new ADD will be a desirable station in life rather than a condition to be treated.


Illah Nourbakhsh is a professor of robotics in Carnegie Mellon University's Robotics Institute. His research interests include educational and social robotics and the use of robotic technologies to empower individuals and communities. He blogs about the future of robotics at http://robotfuturesbook.org.

