PRECISION SAFETY

When safety is enforced only at the door, attention shifts away from where it matters most: how systems respond when things go wrong.

To be clear, a safer internet is not just one with fewer bad actors. It’s one where harm is taken seriously, where victims are supported and where platforms are held accountable. That requires more than just gatekeeping – it requires a redesign of social media systems to ensure they can respond to failures and hold up under pressure.

Singapore’s model has been lauded as a frontrunner, but it is still evolving. While early legislation like the Protection from Online Falsehoods and Manipulation Act (better known as POFMA) raised concerns about its scope and ministerial discretion, it was designed to issue correction directions for falsehoods post-publication, not to impose blanket restrictions on social media platforms or services.

The Online Safety (Miscellaneous Amendments) Act 2022 expanded regulatory powers further, allowing authorities to direct platforms to remove or block access to egregious content such as child sexual exploitation, suicide promotion and incitement to violence. Still, it left room for ambiguity – especially around harms that fall outside these categories, including the non-consensual distribution of sexual content, targeted harassment and content promoting dangerous behaviours.

The next step is the Online Safety (Relief and Accountability) Bill. Once passed, it will establish a dedicated Online Safety Commission in 2026. It will also give regulators the authority to request user identity information – but only when serious harm has occurred and legally defined thresholds are met.

Under this approach, identity disclosure is not the starting point. Instead, the focus is on harm-based disclosure: targeted, post-incident and justified.
