A jury said Instagram and YouTube are defective — now what?

Is social media not only bad, but illegally bad? Should tech companies pay for making it this way? According to two American juries – and no shortage of outside observers – the answer to both questions is “yes.”

Earlier this week, two juries – one in New Mexico, one in Los Angeles – found Meta liable for a total of millions of dollars in damages to minors. YouTube was also found liable in Los Angeles, and both companies are appealing their losses. In one sense, these decisions were surprising. Meta and Google operate platforms that broadcast speech and are generally protected in various ways by Section 230 and the First Amendment; it is unusual for a suit to clear these hurdles. In another sense, they seem inevitable. The web of 2026 has become almost synonymous with a few widely disliked profit-driven platforms, and the damage they have done is often tangible – but it is still not certain what this debacle will change, or what the collateral damage might be.

If these decisions stand up on appeal – which is not certain – the direct result would be millions of dollars in damages. Depending on the outcome of several more “bellwether” cases in Los Angeles, a much larger group settlement could follow. Even at this early stage, it’s a victory for the legal theory that social media platforms should be treated like defective products – a strategy designed to avoid Section 230’s shield, but one that often fails in court. “The California case is notably the first time social media has faced a jury staredown and verdict for specific personal injuries,” Carrie Goldberg, an attorney who pursued major early social media liability lawsuits, including an unsuccessful case against Grindr, told The Verge. “This is the beginning of a new era.”

“This is the beginning of a new era.”

For many activists, the broader goal is to make clear that lawsuits will keep coming if companies do not change their business practices. Which practices? In New Mexico, a jury was swayed by arguments that Meta had made misleading statements to users about the safety of its platforms. In LA, the plaintiffs successfully claimed that Instagram and YouTube were designed in ways that fostered social media addiction that harmed a teenage user. Meta and Google (and other nervous companies) may change specific features or be more cautious in their public statements and disclosures. But each case depends on a set of highly specific circumstances, and there is no one-size-fits-all answer about what changes are needed.

Legal blogger and Section 230 expert Eric Goldman sees a clear legal threat to social media services. “These verdicts indicate that juries are willing to impose substantial liability on social media providers based on claims of social media addiction,” Goldman wrote in an email to The Verge after the verdicts. He said the issue is bigger than just juries. “Judges are certainly aware of the controversies surrounding social media,” Goldman said. In the Los Angeles case and other upcoming bellwether trials, “judges have not given social media defendants much of the benefit of the doubt, which is why plaintiffs’ new cases were able to reach trial in the first place.” It’s a situation, he says, that “feels different than it did a decade ago.”

Goldman points out that New York and California have also passed laws banning “addictive” social media feeds for teens — so even if an appeals court reverses the recent rulings, it won’t necessarily turn back the clock.

The best-case outcome of all this is laid out by people like Julia Angwin, who wrote in The New York Times that companies should be pushed to change “toxic” features like infinite scrolling, beauty filters that encourage body dysmorphia, and algorithms that prioritize “shocking and raw” content. The worst-case scenario plays out along the lines of an article by Mike Masnick at Techdirt, who argued that these rulings are devastating to small social networks, which could be sued for letting users post and view First Amendment-protected speech under a vague standard of harm. He noted that the New Mexico case hinges partly on the argument that Meta harmed children by providing end-to-end encryption in private messaging, creating an incentive to turn off a feature that protects users’ privacy – and indeed, Meta turned off end-to-end encryption on Instagram earlier this month.

“Judges haven’t given social media defendants much benefit of the doubt.”

Colorado Law professor Blake Reid is more cautious. “It’s hard to predict right now what’s going to happen,” Reid told The Verge in an interview. On Bluesky, he said companies would look for “cold, calculated” ways to avoid legal liability with the minimum possible disruption, rather than fundamentally rethinking their business models. “There are clearly harms here, and it’s very important that the tort system takes account of those harms in these recent cases,” he told The Verge. “It’s just that what comes after them is less clear to me.”

While Reid sees legal risks in these decisions for smaller platforms with fewer resources, he is not convinced they are more serious than the challenges new entrants already face in a hyper-consolidated online landscape built on massive data collection. “There are things that make it difficult to really do something new in this area that are driven by the market and the surrounding policy,” he said.

Reid, Goldman, and Masnick all warn that there is a distinct possibility the outcome could harm marginalized people who use social media to connect. “There will be even greater efforts to restrict or ban children from social media,” Goldman told The Verge. “This harms many sub-populations of minors, from LGBTQ teens who will be isolated from communities that can help them form their identities to minors on the autism spectrum who can express themselves better online than through face-to-face interactions.”

If platforms like Instagram are inherently harmful and directly comparable to gambling or cigarettes, as critics often suggest, then losing them would be no great loss. But while research shows that social media can be harmful to teens, moderate use is also associated with better well-being. Meanwhile, harmful online content like harassment and eating disorder communities flourished even before recommendation-driven, hyper-optimized modern social media; tinkering with specific algorithmic formulas may have positive effects, but it is unlikely to deliver any deep or lasting fix. The appeal of punishing Meta is obvious – what it will mean for everyone else is less clear.
