Facebook Launches an AI Technology that Seeks to Reduce Cases of Suicide

Facebook recently announced that it is using AI to detect worrisome posts from its users: posts that suggest a user may be on the brink of suicide. The system scans user posts, locates suspicious ones, and flags them for human moderators. This should drastically reduce the time between when such a post is made and when Facebook reaches out to help the individual involved. What’s more, the AI also ranks user reports by urgency.
While Facebook’s artificial intelligence system can identify posts from people in distress, are the human moderators ready to offer them assistance? TechCrunch reports that Facebook is dedicating a significant number of moderators to suicide prevention and training them to handle cases at any hour of the day. Facebook has also taken on new partners in its effort to stem suicide on its platform, including Save.org and the National Suicide Prevention Lifeline. So far, Facebook has initiated over 100 “wellness checks.” Guy Rosen, Facebook’s VP of product management, emphasizes that the AI is designed to shrink the time between when a distressed user posts a comment or uses Facebook Live and when first responders reach out to that user.
There is no doubt Facebook means well with this technology, but experts are raising questions about its use. Could it be misused? Facebook does not have all the answers at this point; Rosen explains that the company saw an opportunity to help and took it. Notably, users cannot opt out of having the AI scan their posts.
Rosen states that mental health experts had input in the development of Facebook’s AI. According to him, there are many ways of preventing suicide, but connecting distressed people with their family and friends beats them all. Facebook is in a unique position to connect people at risk not only to their friends and family but also to organizations that can assist them.

Facebook Dives into Curbing Suicide Cases through the Use of AI

Facebook has rolled out an artificial intelligence technology that aims to detect suicidal posts. The AI scans a user’s posts, and if they suggest suicidal intent, a message is sent to the user’s friends or to local first responders. By flagging worrisome posts and notifying human moderators, the AI shortens the time it takes to get help to the user. The system will scour billions of posts around the world, except in the European Union: the EU has stringent rules against profiling people based on sensitive information, so deploying such technology there is effectively prohibited by law.

The AI is designed to prioritize the most critical reports, and it also surfaces contact information for the first responders closest to the user. Additionally, Facebook is training more moderators to handle suicidal cases around the clock and has partnered with several organizations that provide support services to those at risk. Mark Zuckerberg praised the AI, saying that in the future it may also be able to identify cases of bullying, among other problems. People at risk are connected to health experts and organizations that can offer help.

How It Works

The AI detects and flags any Facebook post that expresses suicidal thoughts. When a flagged post reaches Facebook’s moderation team, the system highlights the passage that triggered the alert. The AI also prioritizes these reports over other content-policy violations. Once a report is verified (usually very quickly), a moderator calls the responders closest to the user at risk. The AI is about saving minutes that may prove vital in preventing a suicide. According to Guy Rosen, the VP of product management, the company’s goal is to have a response team that can offer support in a variety of languages.
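Facebook has not published its model or its queueing logic, so the following is only a hypothetical sketch of the flag-then-prioritize flow described above: a scoring function stands in for the trained classifier, the threshold value is an assumption, and all names (`risk_score`, `triage`, `RISK_PHRASES`) are invented for illustration.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical phrase weights; a real system would use a trained classifier,
# not a keyword list.
RISK_PHRASES = {
    "want to end it": 0.9,
    "no reason to live": 0.8,
    "goodbye forever": 0.7,
}

def risk_score(post: str) -> float:
    """Stand-in for the classifier: score a post by its riskiest matching phrase."""
    text = post.lower()
    return max((s for p, s in RISK_PHRASES.items() if p in text), default=0.0)

@dataclass(order=True)
class Report:
    priority: float          # negated score, so the heap pops most urgent first
    post: str = field(compare=False)

def triage(posts):
    """Flag posts above a threshold and return them most-urgent first."""
    queue = []
    for post in posts:
        score = risk_score(post)
        if score >= 0.5:  # threshold is an assumption, not Facebook's value
            heapq.heappush(queue, Report(priority=-score, post=post))
    return [heapq.heappop(queue).post for _ in range(len(queue))]
```

The point of the sketch is the ordering: flagged reports come off the queue by urgency rather than arrival time, which is what lets moderators act on the most critical cases first.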