Sunday, January 24, 2021

AI used Facebook data to predict mental illness

Sharath Guntuku, an assistant professor of computer science at the University of Pennsylvania who was not involved in the research, warns that while these algorithms produce impressive results, they are far from replacing the role of clinicians in diagnosing patients. “I don’t think there will ever be a time, at least in my life, when only social media data will be used to diagnose a person. It just won’t happen,” Guntuku says. But algorithms like the one designed by Birnbaum and his team could still play a crucial role in mental health care. “We’re increasingly looking to use them as a complementary data source to flag people at risk and see if they need additional care or additional contact from the clinician,” Guntuku explains.

Schwartz notes that diagnosing mental illness is an inexact science, one that could be improved with the addition of more data sources. “The idea is that you triangulate mental health,” he says. “Mental health assessment is an exercise that cannot rely on just one tool.” And since social media provides a continuous record of a person’s thoughts and actions over a substantial period of time, it could effectively complement the hour-long clinical interviews that are typically used for making diagnoses. In such an interview, says Schwartz, “you always rely on a patient to remember everything, to remember things about themselves. The clinician must determine when they are influenced by desirability biases,” i.e., when the patient tells the clinician what they think the clinician wants to hear. Perhaps, then, social media data could give a less skewed impression of a patient’s mental state.

Munmun de Choudhury, a professor of interactive computing at Georgia Tech who previously worked with Birnbaum but was not involved in this particular study, envisions an opt-in social media plugin that could warn users when they are at risk of mental illness. But such a plugin immediately raises privacy concerns – data about an individual’s psychiatric condition, if leaked, could be misused by insurance companies or employers, or force a person to reveal their mental illness before they are ready to do so. To work at all, de Choudhury says, the makers of the plugin would have to be fully transparent about how it handles and secures user data. But if such an algorithm could detect symptoms of mental illness a year and a half before a patient is typically diagnosed, it could make a huge difference in people’s lives. “If we catch these symptoms much sooner, there might be other mechanisms to alleviate these concerns that don’t necessarily require a doctor’s visit,” she says.

There is already a precedent for using social media to prevent mental health crises. “Facebook and Google, they’re already doing this on some level,” Guntuku says. If a user searches for suicide-related terms on Google, the National Suicide Prevention Lifeline number appears before all other results. Facebook uses artificial intelligence to detect posts that may indicate a risk of suicide and sends them to human moderators for review. If the moderators agree that the post indicates a real risk, Facebook may send suicide prevention resources to the user or even contact law enforcement. But suicide presents a clear and imminent danger, while mental illness itself often does not – social media users may be willing to sacrifice more privacy to prevent suicide than to detect the onset of schizophrenia a little earlier. “Any sort of public and large-scale detection of mental health, at the individual level, is very delicate and very ethically risky,” Guntuku says.
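The flag-then-review workflow described above can be sketched in a few lines. This is a purely hypothetical illustration, not Facebook’s actual system: the lexicon, the `risk_score` function, and the `triage` threshold are all invented for this example, and real deployments use trained machine-learning models rather than keyword lists.

```python
# Hypothetical sketch of a flag-then-review pipeline: an automated scorer
# flags posts, and flagged posts are queued for HUMAN review rather than
# acted on automatically. The lexicon and threshold are toy assumptions.

RISK_TERMS = {"hopeless", "can't go on", "no reason to live"}  # toy lexicon

def risk_score(post: str) -> float:
    """Fraction of toy lexicon terms appearing in the post (not a real model)."""
    text = post.lower()
    hits = sum(1 for term in RISK_TERMS if term in text)
    return hits / len(RISK_TERMS)

def triage(posts, threshold=0.3):
    """Route posts whose score exceeds the threshold to a human review queue."""
    return [p for p in posts if risk_score(p) > threshold]

posts = [
    "Had a great day at the park!",
    "I feel hopeless and like there's no reason to live.",
]
flagged = triage(posts)
print(flagged)  # only the second post lands in the review queue
```

The key design point the article raises is that the automated stage only *flags*; the consequential decision (sending resources, contacting authorities) stays with human moderators.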

For his part, Birnbaum sees a smaller but nonetheless impactful use case for this research. A clinician himself, he believes social media data could not only help therapists triangulate diagnoses but also help them monitor patients as they progress through long-term treatment. “Thoughts, feelings, actions – they’re dynamic, and they’re changing all the time. Unfortunately, in psychiatry we get a snapshot once a month at best,” he says. “Incorporating this type of information really allows us to have a more complete and contextual understanding of someone’s life.”

Researchers still have a long way to go in designing these algorithms and determining how to implement them ethically. But Birnbaum is optimistic that in the next five to ten years, social media data could become a normal part of psychiatric practice. “Someday, digital data and mental health are really going to combine,” he says. “And that will be our X-ray into someone’s mind. This will be our blood test to help support the diagnoses and procedures we recommend.”

