Social Services Innovations: Using Social Media to Predict Behavioral Health Risk
Every day, millions of people turn to social media platforms to express their thoughts, feelings, and opinions. With users posting content frequently and abundantly, these platforms provide a window into what people are thinking and experiencing not just in the moment but also over an extended period of time. Along with the incredible growth of social media, technologies for content analysis have also been advancing. Now, researchers are taking a closer look at how these technologies can be applied to this wealth of social media data, and exploring how the available data might be used to identify individuals most at risk of behavioral health issues such as depression.
Natural Language Processing
Natural language processing, the computational analysis and interpretation of human language, overlaps with sociolinguistics, which involves the analysis of language in a social and cultural context. Past research has shown that, through linguistic analysis of speech, it is possible to classify individuals experiencing depression and paranoia. The ability to use modern technologies for this type of analysis, combined with the ability to apply it within a social media context, offers enormous potential for behavioral health care.
"If the question is whether we can detect changes in behavioral patterns on social media that correlate with the behavior of high-risk individuals, the answer is a clear yes," says Dan Goldwasser, PhD, an assistant professor at the department of computer science at Purdue University. He continues, "Some of these changes can be directly observed and analyzed, for example, analyzing changes to individuals' social interactions [and] changes in linguistic patterns." He adds that the way an individual responds to key events happening around them on social media is another element that can be directly observed and analyzed. "In scenarios where most people express empathy for others, does a given individual express it?"
It's also possible to analyze users' behavior indirectly, Goldwasser says, by looking at indicators such as changes to typing rate, spelling, and intensity of social media usage.
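The directly observable linguistic signals Goldwasser describes can be illustrated with a minimal sketch. The negative-affect word list below is invented for demonstration only; real systems rely on validated lexicons and far richer models than simple word counting.

```python
import re

# Hypothetical, illustrative lexicon; production systems would use
# validated instruments (e.g., psycholinguistic dictionaries), not a
# handcrafted five-word list.
NEGATIVE_AFFECT = {"sad", "alone", "worthless", "tired", "hopeless"}

def negative_affect_rate(posts):
    """Fraction of tokens across posts that fall in the negative-affect lexicon."""
    tokens = [w for p in posts for w in re.findall(r"[a-z']+", p.lower())]
    if not tokens:
        return 0.0
    return sum(t in NEGATIVE_AFFECT for t in tokens) / len(tokens)

def linguistic_shift(earlier_posts, later_posts):
    """Change in negative-affect rate between two time windows of posts."""
    return negative_affect_rate(later_posts) - negative_affect_rate(earlier_posts)
```

Comparing two time windows, rather than inspecting single posts, is what makes this a longitudinal behavioral signal instead of a one-off keyword match.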
Behavioral Attributes and Signals
Along with language and linguistic styles, the researchers measured behavioral attributes (e.g., social engagement and emotion) relevant to an individual's thinking, mood, communication, activities, and socialization. By analyzing the language used, the researchers identified where users appeared to be expressing feelings of worthlessness, guilt, helplessness, and self-hatred that characterize major depression. Certain behavioral attributes provided signals that characterized the onset of depression in individuals, such as decreased social activity and increased negative affect. Applying their prediction model, the researchers were able to predict outcomes related to depression with an accuracy rate of 70%.
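The article does not describe the study's actual model, but the general idea of combining behavioral signals such as decreased social activity and increased negative affect into a single risk indicator can be sketched as follows. The signal names and weights here are illustrative assumptions, not values from any published model.

```python
from dataclasses import dataclass

@dataclass
class WeeklySignals:
    posts_per_day: float     # volume of social activity
    replies_per_day: float   # social engagement with others
    negative_affect: float   # 0..1 share of negative-affect language

def risk_score(baseline: WeeklySignals, current: WeeklySignals) -> float:
    """Toy composite score: rises when social activity drops and negative
    affect climbs relative to an individual's own baseline.

    The 0.4/0.3/0.3 weights are arbitrary illustrations, not study values.
    """
    activity_drop = max(0.0, (baseline.posts_per_day - current.posts_per_day)
                        / max(baseline.posts_per_day, 1e-9))
    engagement_drop = max(0.0, (baseline.replies_per_day - current.replies_per_day)
                          / max(baseline.replies_per_day, 1e-9))
    affect_rise = max(0.0, current.negative_affect - baseline.negative_affect)
    return 0.4 * activity_drop + 0.3 * engagement_drop + 0.3 * affect_rise
```

Note that the score is relative to each person's own baseline; a change in behavior, not the behavior itself, is treated as the signal.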
"I do anticipate social media being able to predict some behavioral health risks, and look forward to that being a valuable tool if it's designed and implemented appropriately," says Christel Hyden, EdD, a research assistant professor in the department of family and social medicine, research division at Albert Einstein College of Medicine. She sees plenty of opportunities for how this type of language analysis capability could be integrated into existing interventions to identify at-risk patients.
Integrating New Technologies Into Existing Interventions
Currently, in a typical text message-based intervention, a user might be prompted to respond with "great," "just OK," or "not so good," and the texting program sends back a preprogrammed message based on that response. The addition of reliable language processing technology would increase such a program's capacity significantly.
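A preprogrammed flow like the one described can be sketched in a few lines. The reply texts and the FLAG_FOR_REVIEW escalation value are invented for illustration; the fallback branch is where a language-processing layer, or a human reviewer, could be slotted in.

```python
# Hypothetical canned replies keyed to the expected user responses.
CANNED_REPLIES = {
    "great": "Glad to hear it! Keep up the good work this week.",
    "just ok": "Thanks for checking in. Remember the coping steps we discussed.",
    "not so good": "Sorry to hear that. A counselor will follow up with you soon.",
}

def respond(message: str) -> str:
    """Return the preprogrammed reply, or a fallback for unrecognized input.

    Messages outside the expected responses are exactly the ones that
    could be flagged for an added layer of personal communication.
    """
    reply = CANNED_REPLIES.get(message.strip().lower())
    if reply is None:
        return "FLAG_FOR_REVIEW"  # placeholder for escalation to a person
    return reply
```

Without a language-processing layer, everything outside the three expected responses lands in the fallback, which is precisely the limitation described above.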
"It would be great," Hyden says, "to have a valid method for flagging messages that show a user has gone beyond the preprogrammed dialogue and/or is at higher behavioral risk that might need an added layer of personal communication."
The thought of having such capabilities integrated into social media to support behavioral health risk prediction and early intervention will undoubtedly make many people uncomfortable, but Hyden sees it as an inevitable progression. "I do think social media will have an accepted role [in early intervention] as both the technology and our attitudes continue to evolve, and the latter might largely rely on generational turnover in the field." She cites the human resources field for comparison, where in just a few years, conducting a job interview by text message has shifted from being unthinkable to being acceptable and, in some cases, even preferable. "[T]he human resources field is seeing that their newest recruiters and candidates are neither interested in nor comfortable with long phone calls, and now there's a growing market for apps that facilitate initial interviews by text or chatbot."
Whether it's human resources or behavioral health care, this type of progression is bound to prove challenging for some individuals, while working well for others. But often, even if there is resistance from some corners, the tools are welcomed and embraced by the target user group. "What I often see with our online or social media content," Hyden states, "is that the people who complain about it aren't the people we built it for." She continues, "As behavioral interventionists, we're expected to understand who we're working with and to try to meet them where they are, and to some degree technology is no exception to that."
In the case of adolescents and young adults, however, Hyden cautions that their lack of concern regarding privacy might not always be a good thing, especially if they're failing to implement appropriate privacy measures. "I think the vigilance we do try to maintain about privacy is important even if it does seem like less of a concern to our consumers because (at the risk of sounding paternalistic) I don't know that their choices are always made with a full understanding of the potential consequences." She points out that this is a population whose brains haven't fully developed the link between actions and consequences, which can impact their judgment and decision-making.
Goldwasser also refers to the potential use of "shallow indicators" and the issues that can arise from this, such as keeping a list of "bad words" and firing an alert when these words are used. "This can lead to a high rate of false positives," he explains, "which would reduce the trust in such systems. Instead, I believe these systems can be used most effectively when teamed up with human health professionals."
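The false-positive problem with such shallow indicators is easy to demonstrate; the ALERT_WORDS list below is a made-up example of the kind of "bad words" list Goldwasser cautions against.

```python
# Illustration of the "shallow indicator" problem: a bare keyword list
# fires on figurative or idiomatic language, producing false positives.
ALERT_WORDS = {"kill", "die", "hopeless"}  # invented example list

def shallow_alert(post: str) -> bool:
    """Fire an alert if any alert word appears as a token in the post."""
    words = set(post.lower().split())
    return bool(words & ALERT_WORDS)

# A genuine concern and a harmless idiom both trigger the same alert:
# shallow_alert("i feel hopeless and alone")  -> True
# shallow_alert("this commute will kill me")  -> True  (false positive)
```

Because the keyword match carries no context, both posts look identical to the system, which is why Goldwasser argues such tools work best as a first filter reviewed by human health professionals.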
Ultimately, the ideal situation would be one where the available technology is used to its fullest potential, but in a way that is safe, responsible, and highly accountable. This means that even in cases where consumers are receptive and not overly concerned about the risks, technology developers and health service providers need to ensure that all the risks are communicated clearly.
With openness, transparency, and accountability in how the tools are used, the ability to accurately predict behavioral health risk via social media could have a positive impact on behavioral health outcomes in the future.
— Susan A. Knight works with organizations in the social services sector to help them get the most out of their client management software.