CMU knows what's on your mind
The PG's Mark Roth undergoes an fMRI brain scan as part of an analysis to identify brain activity linked to thinking about certain words. He is at left, behind screened glass in the MRI room; his scan is at right.
The PG's Mark Roth studies images and words before undergoing an fMRI brain scan as part of a CMU study attempting to predict the words people are thinking of from their brain activity.
Dr. Marcel Just, director of the CMU Center for Cognitive Brain Imaging, checks the computer's predictions of the words the PG's Mark Roth was thinking while undergoing an fMRI brain scan.
Two hours before, I had been lying inside a brain scanner, concentrating on individual words. Now, with remarkable accuracy, a computer program was telling me what I had been thinking.
By analyzing my brain activation patterns, the Carnegie Mellon University program knew I had been looking at "eye" and not "closet," "corn" and not "chimney," "hammer" and not "house," and so on down the line, reading my mind correctly in nine out of 10 cases.
The computer algorithm, developed as a joint effort of Carnegie Mellon's psychology and computer science departments, only stumbled on the last pair of concrete nouns, believing I was thinking of a knife rather than an apartment. It was impressive to watch the computer perform so nimbly, and there was some satisfaction in knowing I did almost as well as a producer from the CBS program "60 Minutes," who underwent the same scanning exercise for broadcast this evening at 7.
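The pairwise guessing game described above can be sketched as a simple pattern match: given a fresh scan and the stored activation patterns for two candidate words, pick whichever pattern the scan correlates with more strongly. This is only a minimal illustration of the idea, not the CMU classifier itself; the voxel values and word patterns below are invented.

```python
from math import sqrt

def correlation(xs, ys):
    """Pearson correlation between two equal-length activation vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def decode_pair(scan, pattern_a, pattern_b):
    """Guess which of two words a scan reflects by picking the
    reference pattern that correlates better with the observation."""
    return "a" if correlation(scan, pattern_a) > correlation(scan, pattern_b) else "b"

# Toy reference patterns over six "voxels" (illustrative numbers only).
hammer = [0.9, 0.1, 0.8, 0.2, 0.7, 0.1]
house = [0.1, 0.9, 0.2, 0.8, 0.1, 0.9]

# A noisy scan recorded while the subject thought of "hammer".
noise = [0.05, -0.02, 0.03, 0.04, -0.05, 0.02]
scan = [h + n for h, n in zip(hammer, noise)]

print(decode_pair(scan, hammer, house))  # "a" -- the hammer pattern wins
```

Real fMRI decoding works over thousands of noisy voxels and uses trained statistical classifiers, but the pairwise-comparison logic is the same.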
But while it is fair to say that the Carnegie Mellon algorithm can, in fact, read people's minds, so far it does so in only a very limited way.
Marcel Just, a Carnegie Mellon psychologist who helps head up the research effort, said that "people often ask what are the implications of this -- are we ready to roll this out in a Guantanamo Bay?
"And the answer is no, this is really a scientific tool. You need an extremely cooperative person in the scanner who's going to think about these objects in a very consistent way."
But down the road, he said, it will be another story.
"Fifty years from now," Dr. Just said, "I think it'll be plausible that we'll be able to identify people's thoughts with less cumbersome equipment than an MRI scanner, just the way we identify a person's speech today."
Just as important scientifically as the computer's ability to guess what someone is thinking is why it can do so: even for something as particular as a single word, we all tend to use the same parts of our brain to think about it.
Researchers didn't know that's what they would find when the experiments began, said Tom Mitchell, a Carnegie Mellon computer science professor who helped design the mind-reading algorithm.
"Even though we're obviously very different and have had different experiences, so that when you think of a Ford Edsel you probably think of something different than what I think of," Dr. Mitchell said, "nevertheless, we're similar enough that these [computer] programs can tell us quite a bit about what we're thinking. There's a lot of commonality."
That's important not just for divining what people are thinking, but for figuring out how the brain works, he said.
"It bodes very well for the feasibility of developing a real theory of how the brain represents things. If all our brains were doing totally different things, you'd need 3 billion different theories, but if there's something common, we can kind of aspire to developing such a theory."
Dr. Just said it makes sense to him that the same areas of people's brains might be involved in thinking about specific words.
"For instance," he said, "one of the semantic properties of objects is how you hold them and it has a particular location in the motor cortex, so when we do our studies and say 'think about apple,' you get activation in the motor cortex for holding an apple and it's going to be the same in your brain as in anybody else's brain because we all hold apples similarly."
The experiment I did compared my brain pattern with the patterns of other people thinking of the same word. But the Carnegie Mellon algorithm can do more than that -- it also can predict what someone's brain pattern will be for a new word.
In the concrete noun tests, it did that by first finding out how often 60 nouns were associated with 25 sensory verbs in a huge lexical database from Google. This technique discovered, for instance, that the word "celery" was associated strongly with the word "eat," and less strongly with the words "taste" and "fill."
Once the computer program has learned which brain areas light up when people think of celery, it can then predict what the pattern will be when people think of a different word that has different proportional linkages to the same verbs. Using that approach, the program is able to predict 80 percent of the time which words people are thinking of, based solely on their brain patterns, Dr. Mitchell said.
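The prediction step described above amounts to a linear model: each verb has a learned activation pattern over brain voxels, each noun is described by how strongly it co-occurs with those verbs, and the predicted pattern for a new noun is the feature-weighted sum of the verb patterns. The sketch below uses only 3 verbs instead of the study's 25, and every number in it is invented for illustration.

```python
# Hypothetical learned activation pattern per verb, over four voxels.
verb_patterns = {
    "eat":   [0.8, 0.1, 0.3, 0.0],
    "taste": [0.2, 0.7, 0.1, 0.0],
    "hold":  [0.0, 0.1, 0.2, 0.9],
}

def predict_pattern(features):
    """Predicted voxel activations for a word, computed as the
    co-occurrence-weighted sum of the per-verb patterns."""
    n_voxels = len(next(iter(verb_patterns.values())))
    pattern = [0.0] * n_voxels
    for verb, weight in features.items():
        for i, activation in enumerate(verb_patterns[verb]):
            pattern[i] += weight * activation
    return pattern

# "celery" co-occurs strongly with "eat", weakly with "taste" and "hold"
# (weights invented; the study derived them from Google text statistics).
celery = {"eat": 0.9, "taste": 0.3, "hold": 0.1}
print(predict_pattern(celery))
```

Because the verb patterns are learned once from training words, the model can predict a pattern for a noun it has never seen scanned, which is what lets it generalize to new words.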
Even more surprisingly, new experiments have shown that the algorithm may prove just as accurate in predicting when people are thinking of such abstract words as love, justice and democracy, he said.
The programs for now are a long way from being able to predict the patterns even for pairs of words, let alone whole sentences, but as they become faster and more accurate, Dr. Mitchell said, he is fully aware of the privacy concerns they will raise.
"You always take for granted that you can think things and not get in trouble for thinking them. We can say things that can get us into trouble, but we get to make that call. So, we're talking about maybe changing that borderline, which is a very significant change."
Like any powerful technology, he said, this one will be able to be used for beneficial or malevolent purposes, and the public will need to debate those ethical guidelines.
He himself is most interested in one day using these programs to translate thought into synthetic speech for people with strokes or conditions like amyotrophic lateral sclerosis who have lost the ability to talk.
In a similar way, Dr. Just said, the algorithm might one day be used to show how conditions like autism affect people's thinking patterns. "In autism, for example, the concept of friendship and the kinds of relationships that people form are slightly different. It seems to me we could use this technique to ask a person with autism to think of what 'friend' means and could identify how the word friend is represented differently in the mind of a person with autism."
The Carnegie Mellon mind-reading program is part of a new discipline known as machine learning, in which computer programs predict outcomes based on patterns they already have learned.
A common example of machine learning that is widely used today is credit card fraud detection programs, Dr. Mitchell said. By examining patterns of behavior associated with stolen or fraudulent cards, such programs are able to predict when a card is being misused.
"It might be something like seeing that somebody makes a purchase for a very small amount of gas, to make sure the credit card is good, and then makes a very big purchase nearby in a jewelry store."
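The fraud example above can be sketched as a tiny learner: it tallies how often each behavior pattern appeared in labeled past cases, then scores new activity by those tallies. The feature names, thresholds and transactions below are invented for illustration; real systems use far richer features and statistical models.

```python
def featurize(transactions):
    """Boolean pattern features over a short transaction history."""
    small_probe = any(t["amount"] < 5 and t["category"] == "gas"
                      for t in transactions)
    big_luxury = any(t["amount"] > 1000 and t["category"] == "jewelry"
                     for t in transactions)
    return (small_probe, big_luxury)

def train(history):
    """Count how often each feature pattern was labeled fraudulent."""
    counts = {}
    for transactions, is_fraud in history:
        key = featurize(transactions)
        fraud, total = counts.get(key, (0, 0))
        counts[key] = (fraud + int(is_fraud), total + 1)
    return counts

def fraud_probability(counts, transactions):
    """Fraction of past cases with this pattern that were fraud."""
    fraud, total = counts.get(featurize(transactions), (0, 0))
    return fraud / total if total else 0.0

# Labeled past cases: the gas-probe-then-jewelry pattern was fraud.
history = [
    ([{"amount": 3, "category": "gas"},
      {"amount": 2500, "category": "jewelry"}], True),
    ([{"amount": 40, "category": "groceries"}], False),
    ([{"amount": 3, "category": "gas"},
      {"amount": 1800, "category": "jewelry"}], True),
]
counts = train(history)
```

A new small gas purchase followed by a big jewelry purchase now scores as high-risk, because that pattern was always fraudulent in the training data.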
Voice recognition systems on computers are also examples of machine learning programs. Such programs are trained to translate speech into text by listening to many previous speakers, then become more accurate for an individual user by noting which words that person retypes when the system makes a mistake.
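That learn-from-corrections loop can be sketched as follows. Real recognizers adapt their acoustic and language models statistically; this toy version, with invented class and method names, just records the user's retyped word as an override for the audio pattern the system misheard.

```python
class AdaptiveRecognizer:
    """Toy recognizer that personalizes itself from user corrections."""

    def __init__(self, base_model):
        self.base_model = base_model  # generic mapping: audio pattern -> word
        self.corrections = {}         # per-user overrides learned over time

    def recognize(self, audio_pattern):
        # A user's past correction takes precedence over the generic model.
        if audio_pattern in self.corrections:
            return self.corrections[audio_pattern]
        return self.base_model.get(audio_pattern, "?")

    def correct(self, audio_pattern, retyped_word):
        """Called when the user retypes a word the system got wrong."""
        self.corrections[audio_pattern] = retyped_word

recognizer = AdaptiveRecognizer({"pattern-1": "there"})
recognizer.correct("pattern-1", "their")   # user fixes a mistake
print(recognizer.recognize("pattern-1"))   # now prints "their"
```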
Computer software started out as standardized programs that operated the same way for everyone each time they were used.
Machine learning programs, Dr. Mitchell said, are the "next natural progression of computer science, in which we'll build software that's self-reflective, adaptive and changes with your needs."
First Published January 4, 2009 12:00 am