
Not long ago, the idea of being a “generalist” in the workplace had a mixed reputation. The stereotype was the “jack of all trades” who dabbled in many subjects but was a “master of none”. And for years, this was more or less true.
Most people did not have access to the expertise needed to perform highly cross-functional work. If you needed a new graphic, you waited for a designer. If you needed to change a contract, you waited for legal. In smaller organizations and startups, this waiting game was usually replaced with inaction or improvisation – often with questionable results.
AI is changing this faster than any technology change I’ve seen. It is allowing people to succeed in tasks beyond their normal area of expertise.
Anthropic found that AI is “enabling engineers to become more full-stack in their work”, meaning they are able to make informed decisions across a wide range of interconnected technologies. The direct result is that tasks once abandoned for lack of time or expertise are now being completed (27% of AI-assisted work, according to Anthropic’s study). This change closely mirrors the effects of previous revolutionary technologies. The invention of the automobile or the computer didn’t just give us free time – it mainly let us do things that couldn’t be done before.
With AI as a guide, anyone can now expand their skills and increase their expertise to achieve more. It fundamentally changes what people can do, who can do it, and how teams work. So what should leaders expect?
Well, not so fast.
AI progress has been incredible, and if 2025 has not fully fulfilled its promise of bringing AI agents to the workforce, there is no reason to doubt that they are on their way. But for now, we are not there yet. If to err is human, then it is foolish to assume AI won’t make mistakes too.
One of the biggest challenges of working with AI is identifying hallucinations. I believe the term was coined not as a pretty way to refer to factual errors, but as a fairly apt way to describe the conviction that AI displays in its incorrect answers. We humans have a clear bias toward trusting confident speakers, which probably explains how many smart people get burned after taking ChatGPT at face value.
And if experts can be fooled by overconfident AI, how can generalists expect to harness the power of AI without making the same mistakes?
Citizen-developer guardrails give way to vibe freedom
It’s tempting to compare today’s AI vibe-working wave to the rise of low-code and no-code tools. No-code tools gave users the freedom to create custom software to suit their needs. However, the comparison doesn’t quite hold. So-called “citizen developers” can only work within the limits that the tool allows. Those strict constraints were limiting, but they had the benefit of protecting users from themselves – preventing anything destructive from happening.
AI removes those limitations almost completely, and with great freedom comes responsibility that most people are unprepared for.
The first phase of ‘vibe freedom’ is one of unbridled optimism, encouraged by sycophantic AI. “You are absolutely right!” The dreaded report that used to take all night now reads better than anything you could have written yourself, and it only took a few minutes. The next phase comes almost as a surprise – something is not quite right. You start to doubt the accuracy of the work, you review it, and then you wonder whether it wouldn’t have been quicker to do it yourself in the first place.
Then comes bargaining and acceptance. You argue with the AI, leading you down a confusing path, but slowly you begin to develop an understanding – a mental model of the AI mind. You learn to confidently recognize what’s wrong, you learn to step back and cross-check, you learn to trust and verify.
The generalist becomes a trust layer
It is a skill that can be learned, and it can only be learned on the job, through regular practice. It does not require deep expertise, but it does require awareness. Curiosity becomes essential, as does the willingness to learn fast, think critically, detect anomalies, and rely on judgment rather than treating AI as infallible.
This is the new job of the generalist: not to be an expert in everything, but to understand the AI brain enough to catch when something is wrong, and turn to a true expert when the stakes are high.
The generalist human becomes a layer of trust, sitting between the AI’s output and the organization’s standards. They decide what passes and what gets a second opinion.
As noted, this only works when the generalist clears a minimum bar of fluency. There’s a big difference between “broadly informed” and “confidently uninformed”. AI makes it easier to bridge that gap.
Impact on teams and hiring
Clearly, experts will not be replaced by AI any time soon. Their work remains critical, and it will evolve to become more strategic.
What AI changes is everything around the edges: roles that seemed important but were hard to staff, tasks stuck in limbo because no expert was available, backlogs created by waiting for highly skilled people to review simple work. Now, a generalist can go a long way on their own, and specialists can focus on the hardest problems.
We are already starting to see the impact on the hiring landscape. Companies are looking to bring on individuals who are comfortable working with AI – people who embrace it and use it to take on projects outside their comfort zone.
Performance expectations will also change. Many leaders are already paying less attention to productivity alone, and more to how effectively one uses AI. We view token usage not as a measure of cost, but as an indicator of AI adoption, and perhaps optimistically, as a proxy for productivity.
Making Vibe Work Viable
- Use AI to enhance work, not eliminate it: Letting AI run unsupervised will get you burned. It requires guidance and oversight.
- Know when to trust and when to verify: Build a mental model of how AI fails so you can make good decisions about the work it produces. When in doubt, or when the stakes are high, seek expert advice.
- Set clear organizational standards: AI thrives on context, and so do humans. Invest in documenting processes, procedures and best practices.
- Keep humans in the loop: AI should not remove oversight. It should make oversight easier.
Without these factors, AI work remains in the “vibe” stage. With them, it becomes something the business can actually rely on.
The return of the generalist
The emerging, AI-empowered generalist is defined by curiosity, adaptability, and the ability to evaluate the work produced by AI. They can perform many kinds of tasks, not because they are experts in each, but because AI gives them access to expert-level knowledge. Most importantly, this new generation of generalists knows when and how to apply their human judgment and critical thinking. That is the real determining factor in turning vibe work into something reliable, sustainable and viable in the long run.
Cedric Savarese is the Founder and CEO of FormAssembly.