Instagram to notify parents of teen self-harm searches online: How it works

Instagram will alert parents if teens repeatedly search for suicide or self-harm-related terms, expanding its supervision tools with new safety notifications

Instagram’s new safety feature (Image: Meta)
Sweta Kumari New Delhi
3 min read Last Updated : Feb 27 2026 | 12:34 PM IST
Meta has announced a new safety feature for Instagram that will notify parents if their teen repeatedly searches for suicide or self-harm-related terms within a short period of time. According to Meta, the alerts will apply to families using Instagram’s parental supervision tools and are part of the platform’s broader efforts to strengthen protections for teen users. The company said Instagram blocks such search results and redirects users to support resources and helplines. The new alerts are intended to notify parents if repeated searches indicate that a teen might need extra support.

Instagram’s new safety feature: How the alerts will work

According to Meta, parents and teens enrolled in Instagram supervision will receive a notification explaining that these alerts are being introduced starting next week. If a teen repeatedly attempts to search for terms related to suicide or self-harm, including phrases suggesting they want to harm themselves or general terms such as “suicide” or “self-harm”, parents will be notified.
Meta stated that alerts will be sent through email, text message or WhatsApp, depending on the contact details linked to the account. Parents will also receive an in-app notification. When opened, the alert will explain that the teen has repeatedly tried to search for suicide or self-harm-related terms within a short timeframe. It will also provide expert-backed resources to help parents start a sensitive conversation with their child. The feature will first roll out in the US, UK, Australia and Canada, with other regions expected to follow later this year. 
Meta said that the goal is to help parents step in when necessary, without overwhelming them with unnecessary alerts. The company aims to ensure notifications are sent only when search patterns indicate repeated attempts, rather than one-off searches. 

Part of broader teen safety measures

Instagram already blocks content that promotes or glorifies suicide and self-harm. While users are allowed to share personal stories about their struggles, such content is hidden from teen accounts. Searches linked to suicide and self-harm are blocked, and users are redirected to mental health resources and local support organisations. Meta also said it will continue contacting emergency services in cases where someone appears to be at immediate risk. 
According to TechCrunch, the move comes at a time when Meta and other tech companies are facing multiple lawsuits that aim to hold social media platforms accountable for alleged harm to teens. 

AI support alerts

The alerts will first roll out for Instagram search, the company said. It added that as more teens turn to AI tools for support, it is developing similar parental notifications for certain AI interactions. While its AI systems are already designed to respond safely and provide relevant resources, the upcoming feature will inform parents if a teen engages in certain conversations related to suicide or self-harm with the AI. The company said it will share more details about these updates in the coming months.


First Published: Feb 27 2026 | 12:34 PM IST