I received a link from a co-worker with an attached note:
I have prepared a plan for the project, take a look at it.
Doing a quick scan of the linked document, I was pleased to see that it had some substance. And look, there are tables and step-by-step instructions. Down near the bottom there are even risks and potential mitigations. Someone had Definitely Made A Plan, and it was Definitely In This Document.
Later, over another cup of coffee, I actually read the document, and something stirred in the back of my brain. Doubt crept in. I clicked the “Document History” button at the top right and saw a sparse history: an empty document, then, wham, a fully fleshed-out plan, as if it had gone straight from someone’s mind to the screen, ready to share.
So it was definitely AI. I felt cheated and a little foolish. But why? If this LLM has digested the entirety of human written output, shouldn’t this plan be better than anything any one person could come up with? Perhaps that was exactly the reasoning that went through my co-worker’s mind when they turned to their LLM of choice.
I remember going back to the note, double- and triple-checking that it didn’t mention using AI. If this was their best effort, I would have to write the plan myself to save face.
Regardless of their intentions, I sensed that something subtle had happened: whatever time their AI prompting saved was spent again in validation overhead. The work was simply handed to someone else, in this case me.
Have you been a victim of workslop?
A recent, widely covered article in Harvard Business Review explores this newly coined category of “workslop”: relying on AI to generate work content that passes for real work. The study provides extensive examples of people reaching for AI and, as a direct result, greatly increasing the amount of collective work required.
That increased work is validation, figuring out whether someone actually thought about what they sent you, and it echoes a principle from a completely different domain.
At the core of the cryptographic systems that keep our information private online are mathematical structures that are easy to verify but difficult to compute.
With AI writing, we have reversed this: generation is trivial, validation is expensive. We still read, but we read differently: wary, withholding trust, looking for the tell. The Document History button becomes mandatory due diligence.
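To make that asymmetry concrete, here is a minimal Python sketch using the standard hashlib module; the message and the toy search are made up for illustration. Checking that data matches a published digest is a single cheap operation, while producing data that matches a digest you were never given amounts to a brute-force search.

```python
import hashlib
from itertools import product
from string import ascii_lowercase

# Verification is cheap: one hash computation and a comparison.
def verify(message: bytes, expected_digest: str) -> bool:
    return hashlib.sha256(message).hexdigest() == expected_digest

known_message = b"ship the plan by friday"
published_digest = hashlib.sha256(known_message).hexdigest()
assert verify(known_message, published_digest)  # instant

# The other direction is a search: finding any message that hashes to a
# given digest means trying candidates one by one. Even this toy search
# over 5-letter lowercase strings is ~12 million attempts; a real 256-bit
# preimage search is on the order of 2**256.
def find_preimage(target_digest: str, length: int = 5) -> bytes | None:
    for letters in product(ascii_lowercase, repeat=length):
        candidate = "".join(letters).encode()
        if hashlib.sha256(candidate).hexdigest() == target_digest:
            return candidate
    return None
```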
This is not good at all
Using AI when writing for others is fundamentally about etiquette. It is not polite to share completely AI-generated writing without disclosing its origins. In most cases we are looking for an equitable exchange of ideas. If you know in your heart that you haven’t done the work, you are weakening the social contract between you and your reader.
By handing the work off to AI, you inevitably become passive: an observer of the act of creation, an assistant to the creator.
If you can’t defend what you wrote, do you have any right to share it? This is why most PhD candidates defend their work orally.
Why should I bother reading something you didn’t bother to write?
Avoiding Accountability as a Service
In serious engineering circles, a consensus is forming that developers are accountable for all code they commit and share, regardless of how it was produced.
Other kinds of work occupy different territory: side projects, throwaway code, single-use applications, building something you lack the skills to create otherwise. But if you ship it and people use it, you’ve made an implicit promise: that you can maintain, debug, and extend what you’ve built. If the AI assembled it and you can’t answer basic questions about how it works, you’ve misled users about what they can rely on. The working document and the shipped app both create dependencies: one on your strategic thinking, one on your technical follow-through.
Engineers who have adopted coding assistants to take over the mechanical work of actually typing code into the editor have seen a solid, if modest, increase in productivity.
The same is happening for writers. Unless pressured by unrealistic expectations or deadlines (or, in some cases, pleading ignorance of the risks), professional writers will land on the same standard as software engineers: anything worth writing has to be actually written.1
Writers and other professionals want to do good work and be recognized for it. That pushes us to explore where AI helps that work and to understand where it hinders it. It doesn’t help that we’re figuring all of this out as we go.
Despite the name, transformation, not generation, is where generative AI justifies itself. In journalism, Jason Koebler of 404 Media notes:
YouTube’s transcript feature is an incredible reporting tool that has allowed me to do stories that weren’t possible even a few years ago. YouTube’s built-in translations and subtitles and its transcript tool are some of the only reasons I was able to do this investigation into Indian AI slop creators, letting me get the gist of what was happening in a given video before handing it over to human translators for an accurate translation.
When the team made the commendable decision to translate its critical reporting on ICE into Spanish, they turned to human translators for the added certainty. Plenty of people would have been satisfied with LLM translation. That’s where he draws the line; 404 Media took the higher road of responsible, authentic journalism.
In “Good Bot, Bad Bot”, Paul Ford applauds a proposal to use AI to help academics package and promote their work for non-technical readers.
He notes:
It makes economic sense. Researchers who aren’t affiliated with big companies or large university research labs often have few resources to promote their work. And for the most part, biology postdocs can’t write good posts, at least not in one language, let alone several. AI won’t be as good at promotion as a thoughtful human, but it will probably do a better job of writing funny, emoji-filled social media posts than a scientific assistant who speaks English as her fourth language.
It’s refreshing that Ford acknowledges practical realities. Promotional posts are not the research itself; marketing your paper is not the same as writing it. That’s where Ford draws the line. The economic reality of underfunded academics makes this kind of AI adoption genuinely welcome.
Undisclosed AI is becoming the default assumption. Reading anything is now an act of faith that someone thought about it for longer than it took to generate.
Faced with hunting for the author’s fingerprints in everything we read, will we tire of the guessing game?
Validation today often leads to difficult conversations about the nature of work and effort, authenticity and courtesy. Those conversations are useful to have now.
Thanks to Sarah Moir, Harrison Neuert, and Geoff Storbeck for their invaluable feedback.