A private member’s Bill has even proposed disabling the accounts of all under-16 users. Clearly, parents and governments are shaken by episodes where online harm appears to have played a role, including cyberbullying, content promoting self-harm, and addiction-like compulsive use. Social-media platforms are designed to maximise engagement, not wellbeing, and children are particularly vulnerable to manipulative feeds, recommendation loops, and hostile online behaviour.
In this context, Australia’s ban on social-media use for under-16s is now the most cited template. It places the burden on platforms, not families, and threatens fines of up to 49.5 million Australian dollars for repeated breaches. It also mandates reasonable steps for age assurance, including government IDs, facial recognition, or behavioural inference. Several European countries are exploring similar rules. In principle, the core idea behind the ban sounds reasonable: early exposure to social media is widely held to harm children. But Australia’s experience shows why blanket bans are a blunt instrument. Reports suggest that enforcement remains leaky: teenagers circumvent restrictions through virtual private networks, by entering false dates of birth, or by using their parents’ accounts. Moreover, bans may simply push children towards obscure, less moderated corners of the internet.
Data-privacy trade-offs are even sharper. Age verification at the scale of a billion internet users would require intrusive data collection, creating new surveillance risks in a country still building robust data-protection enforcement. There is also a cost in forgone benefits. For many teenagers, especially those in remote areas or living with disabilities, online spaces can provide community and information that offline environments deny. Social media is also where young people increasingly encounter news and civic discourse.

India’s challenge, therefore, is to put in place guardrails that are actually enforceable. The Economic Survey points in the right direction: a mandatory teen-safe design that disables autoplay and endless scroll by default; verified youth modes with stricter privacy settings and messaging controls; and clear platform liability backed by transparency obligations. Regulation, in other words, should focus on how platforms are built, not only on who is allowed to enter. At the same time, schools and families must cultivate healthier digital habits, and digital addiction should be treated as a public-health challenge, not merely a policing exercise. Given that Australia’s experiment is still too recent to yield meaningful evidence, the government would do well to approach this issue with care.