
Social Services Innovations: Using Social Media to Predict Behavioral Health Risk
By Susan A. Knight
Social Work Today
Vol. 17 No. 5 P. 8

Every day, millions of people turn to social media platforms to express their thoughts, feelings, and opinions. With users posting content frequently and abundantly, these platforms provide a window into what people are thinking and experiencing, not just in the moment but over an extended period of time. Along with the incredible growth of social media, technologies for content analysis have also been advancing. Now, researchers are taking a closer look at how these technologies can be applied to all of that social media data, and exploring how the available data might be used to identify individuals most at risk of behavioral health issues such as depression.

Natural Language Processing
Natural language processing technology is a key element in using social media content for predictive purposes. By examining the relationships between words and the context in which they appear, it allows content to be analyzed for the meaning and intent behind the literal definitions of the words.

This overlaps with sociolinguistics, which involves the analysis of language in a social and cultural context. Past research has shown that, through linguistic analysis of speech, it is possible to classify individuals experiencing depression and paranoia. The ability to use modern technologies for this type of analysis, combined with the ability to apply it within a social media context, offers enormous potential for behavioral health care.
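As a rough illustration of the kind of linguistic markers this type of analysis can start from, the hypothetical Python sketch below counts first-person pronouns and negative-emotion words in a single post. The word lists and function name are invented for illustration; production natural language processing systems model context and word relationships rather than relying on raw counts.

```python
import re
from collections import Counter

# Hypothetical word lists for illustration only; real systems use
# validated lexicons and contextual language models, not raw counts.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEGATIVE_EMOTION = {"sad", "hopeless", "worthless", "alone", "tired", "guilty"}

def linguistic_features(post: str) -> dict:
    """Return simple linguistic markers for a single social media post."""
    words = re.findall(r"[a-z']+", post.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return {
        "first_person_rate": sum(counts[w] for w in FIRST_PERSON) / total,
        "negative_emotion_rate": sum(counts[w] for w in NEGATIVE_EMOTION) / total,
        "word_count": len(words),
    }

print(linguistic_features("I feel so tired and alone lately."))
```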

"If the question is whether we can detect changes in behavioral patterns on social media that correlate with the behavior of high-risk individuals, the answer is a clear yes," says Dan Goldwasser, PhD, an assistant professor at the department of computer science at Purdue University. He continues, "Some of these changes can be directly observed and analyzed, for example, analyzing changes to individuals' social interactions [and] changes in linguistic patterns." He adds that the way an individual responds to key events happening around them on social media is another element that can be directly observed and analyzed. "In scenarios where most people express empathy for others, does a given individual express it?"

It's also possible to analyze users' behavior indirectly, Goldwasser says, by looking at indicators such as changes to typing rate, spelling, and intensity of social media usage.
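A minimal sketch of one such indirect indicator, assuming post timestamps are available: the hypothetical function below computes posting intensity (posts per day over a recent window), the kind of quantity whose sustained drop or spike would be an indirect behavioral signal of the sort Goldwasser describes. The window size and function name are illustrative choices, not validated thresholds.

```python
from datetime import datetime, timedelta

def posting_intensity(timestamps: list[datetime], window_days: int = 7) -> float:
    """Average number of posts per day over the most recent window."""
    if not timestamps:
        return 0.0
    cutoff = max(timestamps) - timedelta(days=window_days)
    recent = [t for t in timestamps if t >= cutoff]
    return len(recent) / window_days

# Toy data: one post per day at noon for two weeks.
posts = [datetime(2017, 8, d, 12) for d in range(1, 15)]
print(posting_intensity(posts))  # posts per day over the last 7 days
```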

Behavioral Attributes and Signals
In the 2013 Microsoft report Predicting Depression via Social Media, researchers developed a model for detecting and diagnosing major depressive disorder within social media platforms based on users' behavioral attributes and signals. Crowdsourcing was used to find Twitter users who reported being diagnosed with clinical depression. Their postings from the year preceding the onset of their depression were then analyzed in order to develop the prediction model.

Along with language and linguistic styles, the researchers measured behavioral attributes (social engagement, emotion, etc.) relevant to an individual's thinking, mood, communication, activities, and socialization. By analyzing the language used, the researchers identified where users appeared to be expressing the feelings of worthlessness, guilt, helplessness, and self-hatred that characterize major depression. Certain behavioral attributes provided signals that characterized the onset of depression in individuals, such as a decrease in social activity and heightened negative affect. Applying their prediction model, the researchers were able to predict outcomes related to depression with an accuracy rate of 70%.
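A minimal sketch of the general approach, not the Microsoft researchers' actual model or features: the toy example below fits a standard classifier on invented behavioral features (posting rate, negative affect, social engagement) and reports accuracy on held-out data, assuming scikit-learn is available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Each row: [posts_per_day, negative_affect, social_engagement] -- toy values.
X = np.array([
    [5.0, 0.1, 0.8], [4.5, 0.2, 0.7], [1.2, 0.6, 0.2],
    [0.9, 0.7, 0.1], [3.8, 0.3, 0.6], [1.0, 0.8, 0.2],
    [4.2, 0.2, 0.9], [0.7, 0.9, 0.1],
])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # 1 = reported depression diagnosis

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```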

"I do anticipate social media being able to predict some behavioral health risks, and look forward to that being a valuable tool if it's designed and implemented appropriately," says Christel Hyden, EdD, a research assistant professor in the department of family and social medicine, research division at Albert Einstein College of Medicine. She sees plenty of opportunities for how this type of language analysis capability could be integrated into existing interventions to identify at-risk patients.

Integrating New Technologies Into Existing Interventions
As an example, Hyden describes a current program she and her colleagues use that employs automated text messaging to check in on behavioral health interventions, but with generic rather than customized messages. "We're not doing crisis counseling," she explains, "we're asking things like, 'Hey, how's it going with that goal you set?'" The general intent is to remind users about their goals; more information is sent if needed.

Currently, the user might be prompted to provide a response of "great," "just OK," or "not so good," and the texting program will send back a preprogrammed message based on the user's response. The addition of reliable language processing technology would increase the program's capacity significantly.
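A hypothetical sketch of that kind of preprogrammed dialogue, with invented message text: canned replies map to fixed responses, and free-text replies fall through to a generic fallback, which is exactly the point where reliable language processing could flag messages for human follow-up.

```python
# Invented check-in messages for illustration; not the program Hyden describes.
CANNED_REPLIES = {
    "great": "Glad to hear it! Keep working toward that goal.",
    "just ok": "Thanks for checking in. Want a reminder of your goal steps?",
    "not so good": "Sorry to hear that. Here are some resources that may help.",
}

def respond(user_reply: str) -> str:
    key = user_reply.strip().lower()
    if key in CANNED_REPLIES:
        return CANNED_REPLIES[key]
    # Free-text replies fall outside the preprogrammed dialogue; this is
    # where language analysis could flag a message for a human counselor
    # instead of returning a generic fallback.
    return "Thanks for your message. A team member will follow up with you."

print(respond("just OK"))
print(respond("I haven't been sleeping and everything feels pointless"))
```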

"It would be great," Hyden says, "to have a valid method for flagging messages that show a user has gone beyond the preprogrammed dialogue and/or is at higher behavioral risk that might need an added layer of personal communication."

The thought of having such capabilities integrated into social media to support behavioral health risk prediction and early intervention will undoubtedly make many people uncomfortable, but Hyden sees it as an inevitable progression. "I do think social media will have an accepted role [in early intervention] as both the technology and our attitudes continue to evolve, and the latter might largely rely on generational turnover in the field." She cites the human resources field for comparison, where in just a few years, conducting a job interview by text message has shifted from being unthinkable to being acceptable and, in some cases, even preferable. "[T]he human resources field is seeing that their newest recruiters and candidates are neither interested in nor comfortable with long phone calls, and now there's a growing market for apps that facilitate initial interviews by text or chatbot."

Whether it's human resources or behavioral health care, this type of progression is bound to prove challenging for some individuals, while working well for others. But often, even if there is resistance from some corners, the tools are welcomed and embraced by the target user group. "What I often see with our online or social media content," Hyden states, "is that the people who complain about it aren't the people we built it for." She continues, "As behavioral interventionists, we're expected to understand who we're working with and to try to meet them where they are, and to some degree technology is no exception to that."

Privacy Concerns
Receptiveness to technology's influence and expansion is often tied to views on privacy. Working with adolescents and young adults, Hyden sees that there is often a generational aspect to this as well. Younger generations have "grown up using these platforms to share their lives either publicly or to a select audience, so it's a different realm for them, and I believe it feels like a safer space for them than it might to someone older." Comfort with sharing one's life publicly, ease around the various social media platforms, and a perception of safety within these environments all make it far more likely that younger generations will be accepting as social media is increasingly used to support more expansive behavioral health care tools and interventions. And with this acceptance, they're likely to have fewer concerns about privacy.

In the case of adolescents and young adults, however, Hyden cautions that their lack of concern regarding privacy might not always be a good thing, especially if they're failing to implement appropriate privacy measures. "I think the vigilance we do try to maintain about privacy is important even if it does seem like less of a concern to our consumers because (at the risk of sounding paternalistic) I don't know that their choices are always made with a full understanding of the potential consequences." She points out that this is a population whose brains haven't fully developed the link between actions and consequences, which can impact their judgment and decision-making.

Ethical Considerations
Along with privacy, there are broader ethical considerations that need to be taken into account before any type of widespread implementation occurs. "The fact that the ability is there," Goldwasser says, "does not necessarily mean it should be widely used, and there are many ethical questions that should be addressed." He cites user consent and potential commercial use of the data collected as just two of the many areas that need to be discussed thoroughly beforehand.

Goldwasser also refers to the potential use of "shallow indicators" and the issues that can arise from this, such as keeping a list of "bad words" and firing an alert whenever these words are used. "This can lead to a high rate of false positives," he explains, "which would reduce the trust in such systems. Instead, I believe these systems can be used most effectively when teamed up with human health professionals."
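A toy example of the shallow-indicator problem Goldwasser describes, with an invented word list: a bare keyword match fires on figurative language (a false positive) while missing a genuinely concerning message that uses none of the listed words.

```python
# Hypothetical "bad words" list; real systems need context, not keywords.
ALERT_WORDS = {"kill", "die", "hopeless"}

def shallow_alert(post: str) -> bool:
    return any(word in post.lower().split() for word in ALERT_WORDS)

print(shallow_alert("This commute is going to kill me, ugh"))      # True: false positive
print(shallow_alert("I don't see the point of anything anymore"))  # False: missed signal
```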

Ultimately, the ideal situation would be one where the available technology is used to its fullest potential, but in a way that is safe, responsible, and highly accountable. This means that even in cases where consumers are receptive and not overly concerned about the risks, technology developers and health service providers need to ensure that all the risks are communicated clearly.

With openness, transparency, and accountability in how the tools are used, the ability to accurately predict behavioral health risk via social media could have a positive impact on behavioral health outcomes in the future.

— Susan A. Knight works with organizations in the social services sector to help them get the most out of their client management software.