A new study by Danish researchers claims that Instagram is actively facilitating the spread of self-harm content among teenagers through inadequate content moderation.
Key Findings
The researchers conducted an investigation by:
- Creating fake profiles posing as users as young as 13
- Sharing 85 pieces of self-harm-related content
- Gradually increasing the severity of shared images
- Including images depicting blood and razor blades, as well as messages encouraging self-harm
Moderation Concerns
The study found that Instagram’s content moderation is “extremely inadequate”. The researchers suggest that Meta (Instagram’s parent company) is:
- Failing to remove explicit self-harm images
- Encouraging users who engage with such content to connect with one another
Potential Risks
The research highlights significant concerns about:
- The impact of self-harm content on teenage mental health
- The role of Instagram’s algorithmic recommendation systems in connecting users around such content
- The lack of effective content filtering
Important Context
The study raises serious questions about social media platforms’ responsibility for protecting vulnerable young users from harmful content.
Meta has not yet publicly responded to the specific claims in this research.
Note: If you or someone you know is struggling with self-harm, please seek help from a mental health professional or a trusted support network.