
May/June 2015 Issue

Technology Trends: Social Media And Mental Health — Risks vs. Benefits
By Sue Coyle, MSW
Social Work Today
Vol. 15 No. 3 P. 8

Last year, Social Work Today reported on the rising role of social media in mental health issues, specifically suicide prevention. Over the past year, that role has grown as the use of social media in everyday life has grown.

"For younger generations, their online life is perhaps more real to them than their real life. Social media is very important to these people. [With it], there is a real opportunity to help this generation," says Glen Coppersmith, PhD, a research scientist at Johns Hopkins University.

But with that growth has come concern that the programs being created could cause more harm than good. So we set out to discover whether the worry is warranted and what can be done to offset the risks.

The Risks
The app that fired up the "should we or shouldn't we" debate came from The Samaritans, a British suicide prevention organization. It notified users when their friends appeared to be in emotional distress. The problem? The users weren't always friends. They could be bullies or stalkers, who were then alerted when their victims were most vulnerable. It didn't take long for The Samaritans to disable the app, but the episode raised a valid question: Could apps meant to help people end up hurting them?

"The unfortunate reality is that people who wish to do harm to others will invariably pervert or otherwise take advantage of tools that are designed to help," says Craig Bryan, PsyD, ABPP, associate director of the National Center for Veteran Studies. "Some individuals will activate fire alarms in buildings just to create chaos and inconvenience for others; that certainly isn't a reason for removing all fire alarms from buildings, though."

Beyond alerting predators, another concern is that the information could lead to the mislabeling of individuals, particularly given that so many people use social media to overtly or covertly judge one another. For example, says W. J. Casstevens, PhD, LCSW, an associate professor in the department of social work at North Carolina State University, "we know that employers, if they can access Facebook, they do these days." That's not to say that the employers will be the ones labeling and stigmatizing but rather that the opportunity to survey and assume is consistently present and used.

But, Bryan cautions, "I can't help but wonder if the answer to this is similar to the steps we take to minimize the risks involved in using telephones, or any other 'old' technology, as a suicide prevention tool?

"I suppose what we need to do as a society," he continues, "is create the expectation that it is unacceptable to harm others, especially those who might be vulnerable. I think it's important to keep in mind that the risk involved here has very little to do with the technology of social media and everything to do with the character of that minority of individuals who have little concern for the well-being of others."

But what of those who do have concern for the well-being of others—the actual friends and family members worried about their loved one? There is a risk in notifying them that goes beyond harm to the vulnerable: the burden of responsibility. Notifying the user that a friend is in distress or asking a friend to notify the social media platform of a concern is "putting the burden on the friend," Casstevens says. "You're putting the friend in the position of middle man. No matter how well-intentioned a friend is, if something doesn't happen or get opened in a timely way, it could lead to feeling a great deal of guilt as a result."

Research
A crucial step in mitigating all of these risks is research. Coppersmith is currently studying the use of language on Twitter. Having gathered and analyzed the tweets of users who publicly spoke about their mental health conditions, Coppersmith and his team are looking for identifiers of depression, bipolar disorder, PTSD, and seasonal affective disorder. "There is a class of words that evoke some kind of psychological meaning," he explains. "For example, how often do you use 'I' rather than 'we'? If you're using 'I' more than most people, that might be a signal. You are talking about yourself more and you're talking about yourself alone."
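To make the idea concrete, a pronoun-frequency signal of the kind Coppersmith describes can be sketched in a few lines of code. The sketch below is illustrative only, not his team's actual analysis; the word lists, the sample tweets, and the baseline rate are assumptions made for the example.

```python
# Illustrative sketch of a simple linguistic signal: the rate of first-person
# singular vs. first-person plural pronouns in a set of tweets, compared with
# an assumed population baseline. Not the researchers' actual method.
import re
from typing import Iterable, Tuple

FIRST_SINGULAR = {"i", "i'm", "i've", "i'll", "i'd", "me", "my", "myself"}
FIRST_PLURAL = {"we", "we're", "we've", "we'll", "we'd", "us", "our", "ourselves"}

def pronoun_rates(tweets: Iterable[str]) -> Tuple[float, float]:
    """Return (singular_rate, plural_rate) as fractions of all word tokens."""
    singular = plural = total = 0
    for tweet in tweets:
        for token in re.findall(r"[a-z']+", tweet.lower()):
            total += 1
            if token in FIRST_SINGULAR:
                singular += 1
            elif token in FIRST_PLURAL:
                plural += 1
    if total == 0:
        return 0.0, 0.0
    return singular / total, plural / total

# Hypothetical usage: compare one user's rate against an assumed baseline.
singular_rate, plural_rate = pronoun_rates([
    "I can't sleep and I keep going over it alone",
    "we had a good day at the park",
])
BASELINE_SINGULAR_RATE = 0.04  # assumed population average, for illustration only
if singular_rate > BASELINE_SINGULAR_RATE:
    print("Elevated first-person-singular usage: one weak signal among many, not a diagnosis.")
```

Real systems weigh many such signals together and validate them against clinical data; a single pronoun count on its own proves nothing, which is why Coppersmith stresses that the work is still in its early days.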

Through this analysis, Coppersmith and his team were able to find signals that could indicate the prevalence of a mental health disorder. For instance, based on Twitter usage, there is a higher incidence of PTSD-related language on military bases. What's more, the incidence is higher on bases that deploy regularly than on those that do not.

"The take-home message," Coppersmith says, "is that we can find data that is relevant to mental health. There are quantifiable signals. They have some predictable validity. It sets a foundation.

"[However], this is the early days," he says. "You're seeing the early prototypes and implementations of these ideas. We're seeing some interesting things. I'm very, very excited, but there's a long way to go."

The Now
In the meantime, there are still apps being presented to the public as viable tools for prevention and assistance. The key is knowing whether there is solid science behind them, and having patience. Casstevens recommends KnowBullying and Suicide Safe, available at no cost through the Substance Abuse and Mental Health Services Administration at www.store.samhsa.gov. The former helps parents talk to their children about bullying, while the latter aids clinicians in suicide assessment.

For the everyday user, Facebook just rolled out an app that allows users to notify the company when they think a friend is experiencing emotional distress. Facebook responds first with resources for the user and then, if the user is still concerned, contacts the friend in question. While this carries the burden of responsibility that Casstevens warned about, it also has behind it a team of researchers who are aware of past mistakes and are keeping a keen eye out for future ones.

"Before, if you saw content you were concerned about, you could report that content in the same way you could report child pornography or bullying," says Ursula Whiteside, PhD, developer of NowMattersNow.org. "The language was the same. So if you were concerned, you could ask Facebook to take a look, but the way that the language was reported, it was like you violated your community standards. It was just because they were using the same tool."

"We are helping Facebook improve the way that suicidal people are treated on the website. I bring [to the project] myself, my clinical experiences, personal experiences, and a team of people who have lived experiences."

Whiteside cites the inclusion of people with lived experiences as a critical aspect of tool creation. "The suicidal experience is so misunderstood," she says. "The noninclusion of people with those experiences when decisions are being made is harmful to the community. Including them is not only making the tool better, but it's also sending the message that their voice matters."

Whiteside adds that nothing like this has ever been done before. "It is an ongoing improvement project."

And that ongoing improvement is part of the process. As society continues to find ways to help individuals through the tools and toys most prevalent in their lives, there must be room for repeated attempts.

"There are going to be fits and starts," Coppersmith says. "If [researchers are] taking the best of the science that we've got and they're trying to apply it to the real world, I have to applaud them. I am cautiously optimistic. We're going to refine how we intervene and interact. We're going to refine the sorts of information that we're using."

But remember, he warns, "They might not be right the first or second time."

— Sue Coyle, MSW, is a freelance writer in Philadelphia.