“This looks like AI.”
This is a phrase that scares me as a writer with an interest in illustration and amateur photography. In a world where generative AI is becoming increasingly adept at imitating human work, people are naturally skeptical when online platforms fail to label even obviously AI-generated content.
This leads me to a conclusion: perhaps we should start labeling human-generated text, images, audio, and video with something akin to the universally recognized Fair Trade logo. Machines aren't motivated to label their work, but creators are certainly at risk of being displaced.
Luckily, I’m not alone in my thinking.
Instagram chief Adam Mosseri suggested in December that it would be “more practical to fingerprint real media than fake media” as AI technology improves in creating content that is visually indistinguishable from content created by creative professionals.
No one can say for sure how much of what we find on the Internet is AI-generated, but according to a recent Reuters Institute survey, there is a widespread perception that news sites, social media platforms and search engine results are full of AI.
The C2PA content credential standard, already used by Meta's platforms, was supposed to handle authentication. But so far, despite receiving widespread industry support, its implementation has been largely ineffective. It turns out that many of the people creating and platforming AI content are motivated to hide its origins because of the clicks, chaos, and money it generates.
A number of solutions have emerged in recent years to help human creators separate their work from the output of AI generators. And like C2PA, they face many challenges to widespread adoption.

Right now, there are plenty of AI-free labeling options to choose from. In total, I count at least 12, all attempting to address the same issue with different eligibility criteria and authentication approaches. Some are industry-specific, such as the Authors Guild’s “Human Written Certification” for books and other written works, and cannot be broadly applied to all types of creative content.
Other solutions like Proudly Human and Not by AI aim to be comprehensive, covering published text, visual art, videography, and music, but the verification processes these services use can be just as questionable as those behind AI-labeling solutions. Some, like Made By Human, work entirely on trust, making badges and labels publicly available for anyone to download and apply to their work without actually establishing provenance. Others, like no-ai-icon, say they inspect submitted works and run them through AI detection services, which can be extremely unreliable.
Most of the services I've checked do it the hard way: having creatives manually show their work process, submitting things like sketches and written drafts to a human auditor. This is extremely laborious, but absent any technical shortcuts, it's the most reliable method we currently have for establishing whether something was created by an actual human.
Another issue is agreeing on what "human-made" means. With AI now incorporated into so many creative tools, and its use encouraged by creative educators, where do you draw the line?
"The problem is going to be one of definition and verification. Does interacting with the LLM before manually executing the idea count as using AI? And how can the maker prove that no AI was involved?" Jonathan Stray, senior scientist at the UC Berkeley Center for Human-Compatible AI, told The Verge. "Other consumer labels, like 'organic', have regulations and agencies that enforce them."
Nina Begus, a lecturer at the UC Berkeley School of Information, says we have already entered the era of hybrid content, which collides with how we define something as authentic.
"Any creative output today can be touched in some way by AI without our being able to prove it," Begus told The Verge. "Authorship is disintegrating in new directions, becoming more technological and more collective. We need to reformulate our criteria for creativity, which were created solely for humans."
One human-made labeling provider, Not by AI, tries to account for this ambiguity. It offers a variety of badges that creators can apply to websites, blogs, art, films, essays, books, podcasts, and more, provided that at least 90 percent of the work was created by an actual human. But this voluntary approach lacks any verification of authenticity.
Other solutions like Proof I Did It rely on blockchain technology to provide a permanent record that anyone can use to look up creators and works that have been verified by the service. By storing verification on the blockchain, creators get a tamper-proof digital certificate attesting that their work was created by a human, which is far more reliable than using software to guess whether a piece of media was AI-generated.
Thomas Baer, an executive director at the University of California's Rady School of Management, says Web3 and blockchain technology can provide a stronger solution by replacing the question "does this look like AI?" with "can this work prove its human history?"
"By issuing 'human-made' tokens to verified creators, the market creates a 'premium tier' of art where authenticity is mathematically guaranteed," Baer told The Verge. Other experts, including Begus, expressed similar sentiments about the potential rise in value of "human and biological creativity" amid the flood of synthetic media.
Despite their flaws, established standards like C2PA provide something that AI-free labeling solutions desperately need: integration. Big names in the tech industry like Adobe, Microsoft, and Google are committed to the standard, and AI providers are implementing it to appease global regulators. That said, when I weigh the various pros and cons between AI labeling efforts and efforts focused on verifying authentic human-generated content, I think the latter are more likely to succeed.
Many creative professionals, even those who do not entirely oppose the use of AI tools, are motivated to differentiate their work from the artificially generated competition that is saturating the industry and threatening their livelihoods. And yes, there are plenty of AI-evangelists on social media platforms who are happy to show off what the technology can achieve, but hesitant to disclose its use when money and influence could be lost.
Take the case of porn actors creating digital clones of themselves that will remain hot and young forever, or AI influencers selling an imaginary life that doesn't exist. Revealing that they are AI could shatter the illusion that people are getting a real human experience. Scammers using AI-generated imagery to sell products online certainly don't want to be outed, and the platforms like Etsy that host them don't seem too concerned either. Similarly, anyone using generative AI to sow discord or cause mischief on social media can only succeed if people believe it's genuine. It's no surprise that AI labeling via C2PA has failed to catch on.
We know that some AI-assisted creators will avoid being transparent, because it's already happening. A notable example is romance author Coral Hart, who told The New York Times he earned six figures last year after producing more than 200 AI-generated novels. None of his books carry labels indicating they were written with AI tools, however; given the "strong stigma" surrounding the technology, he fears disclosure would damage his business.
We can see that disdain in action in how often AI-generated content is described as "slop," regardless of whether the works themselves are visually, aurally, or technically impressive. And this raises the question of how these human-made or AI-free labeling providers will prevent their logos from being misused by those profiting from fraud. Proudly Human CEO Trevor Woods admits this may not be possible.
"Like other certification marks and company logos, we cannot prevent the Proudly Human certification mark from being displayed fraudulently. However, we make it easier for consumers to verify it," Woods told The Verge. "If any bad actors we identify refuse to stop using the label, we will take legal action against them."
If the goal is to achieve a universally recognized and applicable solution, a standard needs to be agreed upon not only by creators and online platforms, but also by global governments and regulatory authorities. In my understanding, those conversations are currently few and far between.
“Proudly Human has occasionally provided information to government and industry associations, but is not involved in formal conversations regarding integrated human origin certification,” Woods said. “The rapid development of AI capabilities and AI-generated content will outpace government and regulatory responses.”
Clearly, there is demand to make it easier for consumers to identify human-made works, so creatives, regulators, and certification agencies will have to decide which approach to rally behind. If a single standard can rise to the recognition of symbols like Fair Trade and Organic, which carry their own concerns but are globally understood to align with a particular ethos, then perhaps we can return to the days of trusting what we see with our own eyes.