Social media algorithms knew I was sick, so they showed me content that fuelled my illness
This blog is written by Mad Youth Organise Leader Rosie.
Content warning: self harm, suicide and eating disorders
Today marks another “Safer Internet Day”, and this year's theme is “smart tech, safe choices”, exploring the safe and responsible use of AI. The question I find myself asking is: how can we possibly make safer choices when we are at the mercy of big tech giants whose business models put their profit above our privacy and safety?
As someone who grew up just as social media was being introduced into our lives, I am increasingly fearful of the extent to which young people today are being exposed to harmful content and having their emotions manipulated by tech giants in order to keep them hooked on their platforms.
Recently, Leaders in the Mad Youth Organise campaign had a group discussion about the role that social media has played in our own journeys of madness. I was struck by the similarities in the kind of content we were exposed to and the impact that it had on us. It felt comforting to know I was not alone in having encountered the troublesome Tumblr content that glamourised self-harm and pushed pro-ana or “thinspirational” content.
Like many of us in the room, I had stumbled upon a seemingly never-ending and increasingly extreme rabbit hole of images and posts relating to self-harm. Some of the content I was exposed to even offered tips on what type of blades to use, how to remove blades from everyday objects and ways to ensure a lethal wound. I couldn't tell you exactly how the jump happened from somewhat cute depressed puppets to explicit images of self-harm, but I do know I never created accounts on these platforms to seek out this content.
Instead, I, like many adolescents, was seeking support and community. I used Tumblr as a mode of self-expression, as a way to understand myself and make sense of my experiences. Yet this turned into a sea of content that a 14-year-old with a serious self-harm problem should not have been exposed to alone in their room. That, however, was ten years ago, and I was accessing that content on a desktop computer in my bedroom. I worry greatly about the access young people nowadays have to extremely damaging content 24/7 in their pockets.
In 2020, when I became acutely unwell in hospital while the world was in lockdown, I was again exposed to rabbit holes that pushed my self-harm and suicidality to dangerous extremes. Somehow my phone mirrored the madness within my mind and created an echo chamber for my dangerous, self-destructive thoughts. I remember discussing this with a nurse during a hospital admission, asking for advice on how to get rid of the content that had come to flood my social media.
Being sectioned in lockdown meant all I did was go on my phone, as there was little else to pass the time. But as some sliver of clarity started to appear in my mind, I began to grow fearful of my phone, as it was a minefield of triggering content. At the time I felt embarrassed; these were my social media accounts. Surely I had done something to cause this content to take over? I felt there was something seriously disturbed about me, and this was just more proof of it. My madness and my phone had developed a symbiosis, feeding one another in a joint mission to end my life.
Of course, back then I wasn't aware that my Instagram and TikTok feeds were doing exactly what they were designed to do. This is their business model in action - the algorithm had learnt that I was sick, and so it showed me content that was sick. A mere few interactions, a few seconds too long on the wrong video, and my phone was set to echo the dark path in my mind and show me exactly where it leads. I'm glad I spoke to the nurse that day; she was able to show me how to discourage harmful content from showing up on my feeds. Slowly I began to untangle my madness from my social media, but it took a long time, and I still see things on my accounts that worry me.
I know that these concerns are not just individual, and in fact are echoed on a global and structural scale. Meta is currently being sued by 41 US states for deliberately “harming children and teens [by] cultivating addiction to boost profit”, and six families are in the process of suing TikTok for the app’s role in their children's deaths.
More of us are speaking up, but that comes in the heavy shadow of loss. Young people are losing their lives as a direct result of what they have seen on social media. Big tech giants still aren't doing enough to protect our young people online. A recent investigation by the Molly Rose Foundation into so-called “safer” Instagram teen accounts rated 64% of the accounts' safety features as ineffective or not available. With high-profile lawsuits under way, the spotlight has fallen on the need to hold big tech giants to account for the harm they are causing to public health. The government is beginning to consider a ban on social media for under-16s. However, I worry this falls short of placing responsibility on the corporations who are making misery by design.
The only way we will be able to make safe choices with smart tech is if the tech itself is safe by design. With the rise of AI and the popularity of its use for social and emotional advice, it is more important than ever that we speak out against corporations who profit from our pain. We must have transparency about our technology. After all, social media was, almost ironically, introduced as a tool for connection, not a magnifying glass for insecurity.
If you are a young person whose mental health has been negatively impacted by Big Tech’s business practices, we want you to join our fight. Check out our Mad Youth Organise campaign page for more information and ways to get involved!