Sarcasm lost in translation when companies try to monitor social media


From the moment the first disgruntled customer vented his rage to an online message board, companies have been seeking ways beyond human intervention to monitor what's being said about their brands -- good or bad.

And the market has responded to the call with automated sentiment analysis -- the use of computer programs to analyze text for keywords that decode the tone of a message.

Today hundreds of companies, including well-known brands such as Kia Motors, Kraft Foods and Best Buy, get instant feedback on public reactions to products, advertising campaigns, company rumors or just about any topic. A 2009 report by Forrester Research predicted the text analysis industry could nearly double from what was a $499 million market last year to $978 million by 2014.

Every day, every few seconds really, computers around the world are studying comments -- messages such as "This wuz my fvrite movie evr. No. Really." or "Wld luv to meet this guy" -- and rating them as positive, negative or neutral.

The trouble is computers just don't get sarcasm. It's one of the last emotional frontiers in the digital world. Even people can't always pick up on it.

And that casts doubt on exactly how accurate all that analysis is.

For all the growth in automated sentiment analysis, critics say there has been far less progress in understanding the nuances of individual comments.

Most text analysis programs are designed to weight words perceived to be positive or negative, such as "love" and "hate," to gauge the overall tone of a message. But a phrase calling a company's service "fast and reliable, but unaffordable" could be rated positive if the program does not count the word "unaffordable" as negative.
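A minimal sketch of the kind of keyword weighting described above (the word lists and weights here are illustrative, not any vendor's actual lexicon):

```python
# Keyword-weighted sentiment scoring: sum lexicon weights for each
# word in a message. Word lists and weights are made up for illustration.
POSITIVE = {"love": 1.0, "great": 1.0, "fast": 0.5, "reliable": 0.5}
NEGATIVE = {"hate": -1.0, "unaffordable": -1.0, "slow": -0.5}

def score(message: str) -> str:
    """Rate a message positive, negative or neutral by summed word weights."""
    words = message.lower().replace(",", " ").replace(".", " ").split()
    total = sum(POSITIVE.get(w, 0.0) + NEGATIVE.get(w, 0.0) for w in words)
    if total > 0:
        return "positive"
    if total < 0:
        return "negative"
    return "neutral"
```

With "unaffordable" in the negative lexicon, "fast and reliable, but unaffordable" scores neutral (0.5 + 0.5 - 1.0); remove that one entry and the same complaint scores positive, which is exactly the failure mode the article describes.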

The problem was highlighted in a 2010 study by U.K.-based market research company FreshMinds. It found that only 30 percent of comments assessed by seven different automated sentiment analysis tools were categorized properly. Aggregate accuracy levels of automated sentiment analysis tools fall between 60 and 80 percent, according to the report.

Add sarcasm to the mix and all bets are off, said Annie Pettit, chief research officer for Conversition. The Toronto-based social media market research company runs TweetFeel, a site designed to analyze the overall sentiments of Twitter messages surrounding certain topics or products.

"Most sarcasm cannot be accurately measured," she said. "It requires personal knowledge between the two speakers and that just isn't possible when you're working with millions of datapoints.

"Indeed, while friends know each other intimately, they often don't know when one person is being sarcastic. We can identify certain phrases like 'yeah right' and 'not!' but for the most part sarcasm will be measured wrong."
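The phrase-matching Ms. Pettit describes can be sketched as a short list of explicit cues; the marker list below is illustrative (only "yeah right" and "not!" come from her remarks), and anything subtler slips straight through:

```python
import re

# Crude sarcasm-marker check: flag a message only when it contains one
# of a few explicit cues. Everything subtler goes undetected.
SARCASM_MARKERS = [
    r"\byeah,? right\b",  # "yeah right" / "yeah, right"
    r"\bnot!",            # trailing "Not!" reversal
    r"\bas if\b",         # assumed additional cue, not from the article
]

def looks_sarcastic(message: str) -> bool:
    """Return True if the message contains an explicit sarcasm marker."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in SARCASM_MARKERS)
```

A message like "This wuz my fvrite movie evr. No. Really." contains none of these cues, so it would pass through unflagged and, as the article notes, be measured wrong.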

There are no exact statistics to determine how often sarcasm is used on the Internet, but the topic has gained enough interest that there are websites dedicated to doing it properly as well as the Facebook page titled, "I hate when you use sarcasm on the Internet and they take it seriously." As of Friday, the page had more than 11,000 followers.

As a backup, companies using sentiment analysis programs have people review the information compiled by the computers, checking whether the content has been categorized correctly and whether it reveals any sentiments the computer didn't detect.

Jan Wiebe, co-director of the University of Pittsburgh's Intelligent Systems Program and a computer science professor whose research focuses on artificial intelligence and natural language processing, said words with several definitions, slang and many other features of natural language make it difficult for computers to interpret meaning even in straightforward messages.

She said researchers and software engineers will have to address that critical obstacle before even beginning to explore automated sarcasm analysis.

"If I say 'bank,' do I mean a river bank or a financial institution? If I say 'alarm,' I can be talking about the alarm on a clock or a fire alarm," she said.

Bryan Jennewin, product manager for New Brunswick, Canada-based social media marketing company Radian6, admitted sarcasm is a "challenge to detect." He said the company is working with partners to expand its Salesforce Radian6 tool to cover the topic.

One notable challenge, he said, will be determining what indicates sarcasm in different online communities.

"Tweets might indicate sarcasm with capital letters while Facebook posts -- which are longer -- might rely on particular turns of phrase and verb choice," he said in an email message.

Although he's upbeat about the company's research and the effectiveness of Salesforce Radian6, he acknowledged that the only way to achieve 100 percent accuracy with any automated sentiment analysis program is through human verification.

"When it comes to interpreting either spoken or written language, nothing beats the human mind," he conceded.

Ms. Pettit accepts inaccuracy as par for the course with automated sentiment analysis research and says the best course of action is for human beings to provide the final checks on what's interpreted through computers.

Even then, she said there's no guarantee a few ideas won't get lost in translation.

"New slang, new acronyms, new emoticons, misspellings and bad grammar all make sentiment analysis far more difficult than people realize. People reading and interpreting comments one by one make errors 15 percent of the time," she said.

"It continually gets better but will never be perfect."


Deborah M. Todd: or 412-263-1652. First Published June 26, 2012 4:00 AM

