In an age dominated by social media, the need to effectively identify and respond to individuals in crisis, especially those expressing suicidal thoughts, is urgent.
Sentinet is at the forefront of leveraging the power of artificial intelligence to address this challenge, aiming not only to save lives but also to contribute to pioneering research in the field of mental health.
Sentinet was founded by Yasin Das, who lost a close friend to suicide. “People with suicidal thoughts often have a hard time sharing their feelings with family and friends,” Das said. “It can be easier for them to share their thoughts with strangers on the internet. That’s why I chose to tackle suicide prevention on social media platforms.”
The timing of Sentinet’s launch was also influenced by external factors. In 2022, many researchers reported an alarming surge in hate speech and cyberbullying on social media. This environment exacerbated issues related to suicidal thoughts and self-harm, which prompted Sentinet to debut online.
Sentinet’s primary mission is suicide prevention, using advanced AI tools to identify suicidal social media posts. Language models scan the unfiltered stream of posts and flag possible suicide-related content. “Many worrying posts from accounts with low followers go unnoticed,” Das says. “We’re aiming to make the biggest impact in this space, especially since there are a lot of accounts with low traction.”
Using its network of volunteers, Sentinet flags 150 to 200 posts per day on average, more than 1,000 posts per week, and it plans to increase this number. “We plan to broaden our reach and increase the number of posts detected and reported,” says Das. “By rough estimate, there are about 3,000 suicide-related posts per day. Our goal is to one day detect all of them. Sentinet’s next step is to roll out the system to more social media platforms.”
Sentinet isn’t just focused on suicide prevention; it goes a step further. Two of its co-founders, computer science graduate students Georgiy Nefedov and Yusuf Efe, oversee the organization’s comprehensive research into identifying key signs that people are on the brink of suicidal thoughts, digging into things like language patterns, affiliations with specific communities, and music and movie preferences.
Notably, Sentinet’s research goes beyond traditional studies to target specific cultures and online subgroups that have received little attention until now. “We study the specific meanings found in posts by suicidal people,” Das explains. “We work to identify trends among individuals who exhibit suicidal tendencies on social media, which can be traced back to the specific demographics and niche communities that these people belong to. We believe that our research will uncover some previously undiscovered patterns that will spark new discussions and ideas on the topic of suicide prevention.”
In a world where digital and physical realities merge, Sentinet is especially mindful of privacy and data handling. Access to sensitive information is limited to accredited researchers in the mental health field, and Sentinet has an unwavering commitment not to sell or release sensitive information, respecting the privacy of those suffering.
Combining AI technology with human oversight, Sentinet’s goal is unwavering: to provide timely support and comfort to people in difficult situations. In addition to fighting suicide, Sentinet is also laying the foundation for new research that could revolutionize our understanding of suicide prevention. Sentinet’s vision is clear: to be a force for good in this new AI era by leveraging technology to connect people, intervene, and ultimately save lives.
If you or someone you know is considering suicide, please call the Suicide and Crisis Lifeline at 988 or visit SpeakingOfSuicide.com/resources.