
There’s an important distinction Hitzig draws early in his op-ed: the issue isn’t the ads themselves, but the potential use of the large amounts of sensitive data users have shared with ChatGPT without considering how it could be used to target them or who might eventually get access to it.
He wrote, “For many years, ChatGPT users have created an archive of human candor that has no precedent, because people believed they were talking to something that had no hidden agenda.” He continued, “People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Ads built on that archive create the potential to manipulate users in ways we don’t have the tools to understand, let alone prevent.”
OpenAI has at least acknowledged this concern. In a blog post published earlier this year announcing its experiments with advertising, the company promised it would place a firewall between users’ interactions with ChatGPT and the ads they receive from the chatbot: “We keep your interactions with ChatGPT private from advertisers, and we never sell your data to advertisers.”
Hitzig believes that’s true…for now. But he doesn’t trust the company to maintain that position for long, especially because nothing actually binds it to follow through on the promised privacy. The researcher argued that OpenAI is “building an economic engine that creates strong incentives to override its own rules,” and warned that the company may already be retreating from past principles.
For example, OpenAI has said that it does not optimize ChatGPT to maximize engagement – a metric that would be especially attractive to a company trying to keep people locked in conversations so it can serve them more ads. But a statement is not binding, and it is not clear the company has actually lived up to it. Last year, OpenAI wrestled with sycophancy in its models – ChatGPT became overly flattering to users and, at times, reinforced delusional thinking that may have contributed to “chatbot psychosis” and self-harm. Experts warn that sycophancy is not just a mistake in model tuning, but can be a deliberate way to keep users talking to chatbots.
In a way, OpenAI is increasingly following the Facebook playbook: promise users privacy over their data, then screw them once that data turns out to be valuable enough. Hitzig is trying to get ahead of the train before it picks up too much steam, recommending that OpenAI adopt a model that would actually guarantee protections for users – either creating some kind of real, binding independent oversight or putting the data under the control of a trust with a “legal duty to act in the interests of users.” Either option sounds good on paper, though it’s worth remembering that Meta created the Meta Oversight Board in the first place and then routinely ignored and undermined it.
Unfortunately, Hitzig may also struggle to get people to care. Two decades of social media have bred a sense of privacy nihilism in the general public. Nobody likes ads, but most people aren’t bothered enough by them to do anything. Forrester found that 83% of those surveyed would continue to use the free tier of ChatGPT despite the introduction of advertisements. Anthropic tried to score some points with the public this weekend by taking aim at OpenAI’s decision to put ads in ChatGPT with a high-profile Super Bowl spot of its own, but according to Adweek, the public reaction was more confusion than anything else, with the ad ranking in the bottom 3% of all Super Bowl spots for potential.
Hitzig’s warning is justified, and his concern is genuine. But getting the public to care about its privacy after years of being subservient to the algorithms is a heavy lift.