One of the most ambitious efforts is being conducted by Facebook. The company recently announced that it was using artificial intelligence to scan posts and live video streams on its social network for signs of possible suicidal thoughts. If the system detects certain language patterns, such as friends posting comments like "Can I help?" or "Are you OK?", it may assign the post an algorithmic risk score and alert a Facebook review team.
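Facebook has not described the system's inner workings, but the behavior reported above, matching worried-sounding comments, scoring the post, and escalating high scores to a human review team, can be sketched in a few lines. The patterns, weights, threshold, and function names below are illustrative assumptions, not Facebook's actual implementation.

import re

# Hypothetical patterns in friends' comments that might signal concern.
# The phrases come from the article's examples; the weights are invented.
CONCERN_PATTERNS = {
    r"\bcan i help\b": 0.4,
    r"\bare you ok\b": 0.4,
}

# Assumed cutoff above which a post is escalated to human reviewers.
REVIEW_THRESHOLD = 0.5


def score_comments(comments):
    """Return a crude risk score based on pattern matches in comments."""
    score = 0.0
    for comment in comments:
        text = comment.lower()
        for pattern, weight in CONCERN_PATTERNS.items():
            if re.search(pattern, text):
                score += weight
    return min(score, 1.0)


def maybe_flag_for_review(post_id, comments):
    """Alert a (hypothetical) human review queue if the score is high enough."""
    score = score_comments(comments)
    if score >= REVIEW_THRESHOLD:
        print(f"Post {post_id}: flagged for review (score={score:.2f})")
    else:
        print(f"Post {post_id}: no action (score={score:.2f})")


maybe_flag_for_review("12345", ["Are you OK?", "Can I help?"])
# Prints: Post 12345: flagged for review (score=0.80)

In practice, a production system would rely on trained language models rather than a fixed keyword list, and a human reviewer, not the score alone, would decide whether to intervene.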
In some cases, Facebook sends users a supportive notice with suggestions like "Call a helpline." In urgent cases, Facebook has worked with local authorities to dispatch help to the user's location. The company said that, over a month, its response team had worked with emergency workers more than 100 times.

Some health researchers applauded Facebook's effort as well-intentioned, even as it wades into the complex and fraught realm of mental health. But they also raised concerns. For one thing, Facebook has not published a study of the system's accuracy and potential risks, such as inadvertently increasing user distress.