Instagram has introduced new safety features in Canada, such as default private accounts and enhanced parental controls, to improve the online experience for teens. While these changes are a step in the right direction, they don’t fully address the challenges young people face online.
Social media platforms are integral to teen life, yet the associated risks are evolving at a rapid pace. While Instagram’s new initiative is positive, it barely scratches the surface of larger, more serious problems: cyberbullying, identity-based harassment and the broader challenge of helping teens learn to navigate digital life.
Cyberbullying among teens is widespread, yet Instagram’s new safety features don’t fully address the issue. A 2022 Pew Research Center survey found that 49 per cent of teens aged 13 to 17 have experienced cyberbullying from people they know.
This is a critical oversight.
Instagram focuses on external threats, but the real problem often lies within teens’ own social circles, where they face behaviours like name-calling, rumours and exclusion. Privacy settings and parental controls won’t solve these deeper social issues.
In addition to bullying from peers, identity-based harassment remains a significant issue online. A 2022 study by the Anti-Defamation League (ADL) found that 65 per cent of people from marginalized groups had experienced hate-based harassment. This figure hasn’t improved over the years, underscoring the lack of progress. LGBTQ+ teens, in particular, are targeted, with 66 per cent reporting harassment, and Asian Americans saw a notable rise in abuse, from 21 per cent in 2021 to 39 per cent in 2022. Instagram’s new features do little to tackle this growing problem.
The ADL report urges platforms like Instagram not just to create hate speech policies but also to redesign their algorithms to reduce the spread of harmful content. As it stands, Instagram hasn’t made significant efforts in this area, leaving teens exposed to real risks. The new measures feel more like a band-aid than a real solution.
Another crucial issue is that teens must be taught how to navigate online spaces safely. Experts from the American Academy of Pediatrics believe teens should learn how to make responsible decisions about the content they encounter. While parental controls can help in the short term, they aren’t a long-term solution. Teens need to build digital literacy and understand how to stay safe online. Instagram’s new features focus more on limiting access than on educating teens.
It’s not just about keeping teens safe from strangers; it’s about giving them the tools and knowledge to protect themselves.
To make online spaces safer, we can’t just count on Instagram or other platforms to police their own environments. We need better tools to stop hate, stronger ways to manage harmful content and better education on using the internet responsibly. It’s not realistic to expect companies, which care mostly about profit, to be the only ones keeping teens safe.
Governments, schools and parents need to work together to create rules and standards that protect young people from the dangers that exist online.
Instagram’s new protections are a step forward, but more complete measures would include proactive AI tools to detect harmful content, mandatory reporting mechanisms for cyberbullying, and greater transparency about how algorithms influence what teens see. It’s possible to align ethics and corporate interests, but only if companies like Instagram prioritize long-term societal well-being over short-term profits.
Teens shouldn’t be caught in the crossfire of corporate strategies, and it’s up to both regulators and the public to hold these platforms accountable.