It’s Toasted – Y/NOTES

In 1917, American Tobacco launched an advertising campaign for its cigarette brand Lucky Strike built around the slogan "It's Toasted." The campaign was designed to address growing public awareness that cigarettes pose serious health risks. Heat treatment of tobacco during cigarette manufacturing was a common industrial process, by no means unique to Lucky Strike, but by focusing on it and giving it a name, American Tobacco created the perception that Lucky Strike cigarettes were better and less harmful than other brands. This gave smokers an excuse to ignore the dangers of smoking, shifting the conversation from the toxicity of the product to the supposed benefits of its preparation. A fictional account of the slogan's birth was famously dramatized in the pilot episode of Mad Men.

This public relations strategy, which takes a standard industry practice and frames it as a unique, premium benefit to deflect attention from the underlying risks, is one of the key ways social media companies, Meta chief among them, address public criticism of their products and their potential impact on users' mental health, especially among teenagers. It may be an effective public relations move, but in adopting it Meta reinforces the association between its business and the tobacco industry.

Lucky Strike magazine advertisement (1926)

Recent research suggests that social media design features such as infinite scroll and algorithmic feeds may encourage compulsive use and contribute to anxiety, depression, and social comparison. These concerns have been heightened by claims that Meta's internal research identified negative psychological impacts that were not always fully disclosed or acted upon, strengthening public skepticism about platform accountability. A century ago, smoking was normalized; growing evidence of harm, industry knowledge of the risks, and ultimately regulatory action changed public perception of cigarettes. While social media use does not directly equate to nicotine addiction, both follow a pattern in which widespread everyday consumption is later called into question as awareness of long-term health effects grows, especially among younger populations, and corporate incentives come to be seen as out of step with public health. Furthermore, like cigarette manufacturers, social media platforms are now being accused by some of knowingly designing harmful, addictive products and marketing them to vulnerable teenagers. In December 2025, Australia banned social media for children under 16, and several other countries have announced or are considering similar restrictions. Some US states are mandating black-box warning labels with language such as:

“The Surgeon General warns that although some young users may benefit from social media, social media is associated with significant mental health harms and has not been proven safe for young users.”

Last month, Meta and Google's YouTube were found negligent in a case brought by a young woman named Kelly, who claimed that addictive design features in their products caused her mental health problems. Thousands of similar lawsuits against social media companies are pending in US courts, and the verdict will influence the many cases expected to go to trial this year. Meta CEO Mark Zuckerberg testified during the trial, arguing that platform safety features were appropriately implemented and that Meta has invested heavily in youth safety measures. This focus on investment in safety is a common Meta PR strategy, evident on its websites and in its public statements. At a 2024 US Senate hearing, Zuckerberg told senators that 40,000 people work on safety and security at Meta. He apologized to parents who say Instagram contributed to their children's suicide or abuse:

“I’m sorry for what you all have gone through. It’s horrible. No one should have to go through the things that your families have had to go through… That’s why we invest so much and will continue to make industry-leading efforts to ensure that no one has to go through the things that your families have had to go through.”

Mark Zuckerberg may have been sincere in his apology, and Meta is undoubtedly investing in safety measures, which, according to Meta, include automated and human content moderation, age verification, parental supervision features, and awareness campaigns. These measures are not merely cosmetic; they can make social media safer. But the insistence on returning, again and again, to investment in safety measures is the It's Toasted strategy in action.

Lucky Strike newspaper advertisement (1917)

When Meta emphasizes safety measures, it shifts the focus away from the design of its potentially harmful products and onto the process of managing its platforms. Meta responds to public criticism by saying it is making "industry-leading efforts" to create a safe environment for its users. Never mind that most of these safety measures are required by law, such as the EU's Digital Services Act, or are standard industry practice. Never mind that running a social media empire with over 3.5 billion daily active users simply cannot be done without huge investments in content moderation and other policy enforcement. Meta, of course, employs thousands of moderators to keep illegal content from flooding its platforms and alienating advertisers. It is engaging in a distraction tactic: taking something essential to the governance of its platforms and marketing it as a premium feature that makes its products safer.

Meta could work toward redesigning its social media platforms to be less addictive and less potentially harmful. It could eliminate infinite scroll and never-ending content streams in favor of pagination or end-of-feed markers. It could disable video autoplay by default, a feature that can drag users into an endless content loop. It could prioritize chronological feeds over algorithmic ones and, more generally, prioritize relationships over engagement, reducing the visibility of viral, controversial, often polarizing or anxiety-inducing content. It could design interfaces that encourage reflection rather than frictionless, automatic actions such as one-tap shares and quick reactions. It could remove or hide scoring mechanisms such as follower counts, likes, and view counts, which trigger social comparison and create anxiety. Finally, and most importantly, it could begin to move away from its surveillance-based revenue model toward a business model that does not incentivize maximizing user engagement.
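To make the first of these design contrasts concrete, here is a minimal, hypothetical sketch in Python. It is not Meta's actual code; the names (Post, engagement_feed, chronological_feed) are invented for illustration. It contrasts an engagement-ranked, cursor-based infinite feed with a chronological, bounded feed that has an explicit end-of-feed marker.

```python
# Hypothetical illustration only: names and structures are invented,
# not taken from any real platform's codebase.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional, Tuple


@dataclass
class Post:
    id: str
    author: str
    created_at: datetime
    engagement_score: float  # e.g. likes, comments, predicted watch time


def engagement_feed(
    posts: List[Post], cursor: Optional[int] = None, page_size: int = 20
) -> Tuple[List[Post], int]:
    """Infinite, engagement-ranked feed: orders content by whatever is most
    likely to keep the user scrolling, and always returns a next cursor."""
    ranked = sorted(posts, key=lambda p: p.engagement_score, reverse=True)
    start = cursor or 0
    page = ranked[start : start + page_size]
    # There is no real "end": the next cursor always points at more content.
    return page, start + page_size


def chronological_feed(
    posts: List[Post], since: datetime, page_size: int = 20
) -> Tuple[List[Post], bool]:
    """Chronological, bounded feed: only posts newer than the user's last
    visit, in time order, with an explicit end-of-feed signal."""
    fresh = sorted(
        (p for p in posts if p.created_at > since),
        key=lambda p: p.created_at,
        reverse=True,
    )
    page = fresh[:page_size]
    end_of_feed = len(fresh) <= page_size  # "You're all caught up."
    return page, end_of_feed
```

The difference is structural rather than cosmetic: the first design always has a next page to serve, while the second gives the user a natural stopping point once they are caught up.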

Rather than consider meaningful architectural changes that could make its platforms less addictive and less harmful, Meta prefers to focus on reactive safety features, many of them optional, thereby avoiding responsibility for potential harm and shifting it onto its users. By emphasizing parental supervision tools and awareness campaigns, it subtly shifts the burden of protection onto the victim, a maneuver that ignores how social media architecture is deliberately designed to keep users engaged. Meta is taking a page out of Big Tobacco's PR playbook and telling us that it's toasted. In doing so, it places itself in the same category as the tobacco manufacturers, and it is no wonder that lawmakers are responding with the tools once used to fight smoking: massive lawsuits, bans, and mandatory warning labels. That is a very bad place for a technology company to be. Meta must change direction and make meaningful, proactive design changes that actually make its products safer, not just toasted.

May 1, 2026 · market, policy

Continue with this week’s special notes:

  • Altman's gap

    When Sam Altman calls for regulating AI, he is trying to create a regulatory moat, thereby substantially increasing the barriers to entry into the AI market.



