Editor’s Note: Kara Alaimo is an associate professor of communication at Fairleigh Dickinson University. Her book “Over the Influence: Why Social Media Is Toxic for Women and Girls — And How We Can Take It Back” was recently published by Alcove Press. Follow her on Instagram, Facebook and X. The opinions expressed in this commentary are her own. Read more opinion on CNN.
New York Gov. Kathy Hochul has just signed two bills into law that ban social media companies from using algorithms to select what content young people see. The SAFE for Kids Act requires social networks to show minors content in chronological order unless they gain permission from their parents to have algorithms control their feeds. And the New York Child Data Protection Act raises the minimum age at which social networks are allowed to collect data on users from 13 to 18 (previous federal privacy protections extended only to age 13).
The idea is smart because algorithms are currently such black boxes. We have no idea how social media companies choose the content they show minors, but we do know they have incentives to serve them posts that are bad for young people individually and for our society. We also have plenty of evidence that algorithms sometimes push incredibly dangerous content to kids. This legislation would make it easier for young people — ideally with help from their parents — to intentionally select and consume healthy content. But it wouldn’t be a panacea, because social media companies could still show them toxic advertisements.
As early Facebook investor Roger McNamee writes in “Zucked: Waking Up to the Facebook Catastrophe,” social networks have incentives to show us content that provokes strong emotions such as outrage, because emotional users spend more time on social media platforms.
Of course, the more time we spend on these platforms, the more money social networks make by showing us ads.
But getting young people angry isn’t just potentially bad for their mental health — it’s also bad for our country. It can help explain how we’ve arrived at what New York Times columnist Frank Bruni calls “the age of grievance” in his new book — our era of extreme partisanship, in which people (of all ages) often focus on their differences and disagreements with one another instead of on how they might arrive at reasonable common ground.
The content that algorithms promote to kids can also be dangerous. One woman I interviewed for my new book told me she developed an eating disorder on Instagram as a teenager: After she posted a picture of herself doing a handstand, the photo was shared by a so-called “fitspo” (“fitness inspiration”) page, which led her to consume more and more toxic “fitspo” content.
Studies have documented how algorithms sometimes serve young people this content. This year, a report by UK researchers showed what happened when they set up TikTok accounts and searched for content that is commonly sought after by young men. Within five days, the amount of misogynistic content promoted to the accounts quadrupled. In response, TikTok told The Guardian the report “does not reflect how real people experience TikTok.”
Similarly, in 2022, researchers at the Center for Countering Digital Hate set up accounts purporting to be 13-year-olds and briefly viewed and liked content about mental health and body image. Within minutes, the accounts were being served videos about eating disorders and suicide. TikTok said the study didn’t accurately reflect the experiences of users, and last month the company announced new measures to avoid promoting dangerous posts about weight loss and dieting.
But the fact remains that we have no idea how algorithms choose what they show kids. Social media companies closely guard how their algorithms are programmed because they consider it proprietary information — it’s central to how they differentiate their apps from one another.
Preventing algorithms from determining what kids see would put children in the driver’s seat. They would see content from the accounts they choose to follow in the order in which it’s posted. While some kids might choose to follow very toxic content, others could select accounts about issues they care about or healthy interests they might later pursue as careers. The trick would be for schools to incorporate curricula on how students can find and follow accounts that serve and empower them. It would also be ideal for parents to help kids find healthy accounts to follow.
Of course, even if social networks can’t use algorithms to determine what they show kids, they would still select which ads young people see — and those ads could still be very toxic. For example, the accounts set up by the Center for Countering Digital Hate were shown ads for things like tummy tuck surgery and weight-loss drinks. So lawmakers should also think about how they can prevent social media companies from showing children ads that are harmful to their physical and mental health.
Still, this legislation will give kids who want to curate healthier feeds a way of doing so. Combined with proper education and parental involvement to help them consciously choose to follow content that isn’t harmful, it will be a helpful step toward making social networks safer for young people.
As a parent, given the choice between letting social media companies drive what my kids see online or letting my kids control more of it, I’d trust my children more — and then support them in making good choices.
This article has been updated to reflect news developments.