I Still Prefer MCP Over Skills

TL;DR: The AI field is pushing “skills” as the new standard for delivering LLM capabilities, but I’m not a fan. Pure knowledge skills are great for teaching an LLM how to use an existing tool. But for giving LLMs real access to services, the Model Context Protocol (MCP) is a far better, more practical architectural choice. We should be building connectors, not just more CLIs.


Maybe it’s an artifact of spending too much time online, but everywhere I look, someone is celebrating the death of the Model Context Protocol in favor of dropping a SKILL.md into their repo.

I’m a very heavy AI user. I use Claude Code, Codex, and Gemini for coding. I rely on ChatGPT, Claude, and Perplexity almost every day to manage everything from my Notion notes to my DEVONthink database and even my email.

And honestly? I don’t like skills.

I hope MCP continues. I really don’t want a future where every service integration requires a dedicated CLI and a Markdown manual.

This is why I think the push for skills as a universal solution is a step backward, and why MCP is still the right architecture.

Receiving user feedback from Kikuyo via Claude and MCP

Claude pulling recent user feedback from Kikuyo via MCP, no CLI required.

What I like about MCP#

The basic philosophy of MCP is simple: it is an API abstraction. The LLM doesn’t need to understand the how; it only needs to know the what. If an LLM wants to interact with DEVONthink, it calls devonthink.do_x() and the MCP server handles the rest.
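As a sketch of that separation: an MCP tool call is plain JSON-RPC on the wire. The envelope shape below follows the MCP specification’s tools/call method, while the tool name do_x and its arguments are purely illustrative; the client only ever names the tool, and the implementation stays on the server.

```python
import json


def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request. The client states *what* it
    wants by tool name; *how* it happens is the server's business."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


# A hypothetical DEVONthink action: no CLI, no local install, just a named tool.
request = build_tool_call(1, "do_x", {"database": "Inbox"})
print(request)
```

Everything DEVONthink-specific lives behind that interface, which is exactly why the client can be a phone, a web app, or a terminal.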

This separation of concerns brings some unbeatable benefits:

  • Zero-Install Remote Usage: For remote MCP servers, you don’t need to install anything locally. You simply point your client to the MCP server URL, and it works.
  • Smooth Updates: When a remote MCP server is updated with new tools or resources, each client immediately gets the latest version. There is no need to push updates, upgrade packages, or reinstall binaries.
  • Saner Authentication: Authentication is handled decently (often with OAuth). Once the client completes the handshake, it can act on the MCP server. You are not forcing the user to manage raw tokens and secrets in plain text.
  • True Portability: My remote MCP servers work from anywhere: my Mac, my phone, the web. It doesn’t matter. I can manage my ideas through the LLM of my choice, wherever a client is available.
  • Sandboxing: Remote MCPs are naturally sandboxed. They expose a controlled interface rather than giving raw execution power to the LLM in your local environment.
  • Smart Discovery: Modern apps (ChatGPT, Claude, etc.) have tool search built in. They discover and load tools only when they are actually needed, saving valuable context window.
  • Frictionless Auto-Update: Even for local setups, an MCP server installed via npx -y or uv can update automatically on each launch.
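As a concrete sketch, this is roughly what that auto-updating setup looks like in a client config (the mcpServers shape used by Claude Desktop). Because the server is launched through npx -y on every start, the latest published package is resolved automatically; the package name here borrows my DEVONthink server as an example:

```json
{
  "mcpServers": {
    "devonthink": {
      "command": "npx",
      "args": ["-y", "mcp-server-devonthink"]
    }
  }
}
```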

Friction with Skills#

Not all skills are the same. A pure knowledge skill (one that teaches the LLM how to format a commit message, write tests a certain way, or use your internal jargon) works really well. Problems start when a skill actually requires a CLI to do something.

My biggest complaint about Skills is the assumption that every environment can or should run arbitrary CLIs.

Most skills require you to install a dedicated CLI. But what if you are not in a local terminal? ChatGPT can’t run a CLI. Neither can the standard web versions of Perplexity or Claude. Unless you’re using a full-blown compute environment (like Perplexity Comet, Claude Cowork, Claude Code, or Codex), any skill that relies on a CLI is dead on arrival.

This gives rise to a bunch of annoying UX and architectural problems:

  • Deployment Friction: CLIs need to be published, versioned, and installed via binaries, npm, uv, etc.
  • Secret Management Nightmare: Where do you keep the API token needed to authenticate? If you’re lucky, the environment has a .env file you can dump plain-text secrets into. Some ephemeral environments self-erase, meaning your CLI works today but forgets your secrets tomorrow.
  • Fragmented Ecosystem: Skills management is currently the Wild West. When a skill updates, you need to reinstall it. Some tools help install skills (npx skills), but that only works in Codex and Claude Code, not in Claude Cowork or the standard Claude web app. Pure knowledge skills work in Claude, but most others don’t. Some tools support a “skills marketplace”, others don’t. Some can install from GitHub, others can’t. You try to install an OpenClaw skill in Claude and it explodes with YAML parsing errors because the metadata fields don’t match.
  • Context Bloat: Using a skill often requires loading the complete SKILL.md into the LLM’s context window, instead of surfacing only the single tool signature that’s needed. It’s like forcing someone to read the entire car owner’s manual when all they want to do is call car.turn_on().

If the instructions for a skill start with “install this CLI first”, you have just added an unnecessary abstraction layer and extra steps. Why not just use a remote MCP instead?
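For reference, the Skills format itself is just Markdown with YAML frontmatter (a name and a description). Here is a sketch of exactly the kind of CLI-first skill I’m complaining about; every name and step in it is hypothetical:

```markdown
---
name: example-service
description: Manage Example Service items from the command line.
---

# Example Service

1. Install the CLI first: npm install -g example-cli
2. Export EXAMPLE_API_TOKEN in your shell before running commands.
3. List items with: example-cli items list --json
```

Every one of those steps assumes a local shell, a package manager, and a plain-text secret, which is precisely what a remote connector avoids.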

Codex loading a skill to understand Phoenix colocated hooks

Codex uses a pure knowledge skill to learn how Phoenix colocated hooks work. No CLI, no MCP, just reference knowledge.

I don’t want skills to become the de facto way to connect LLMs to a service. We could describe an API’s shape in a skill so the LLM can curl it, but how is that better than providing a clean, strongly typed interface through MCP?

In my opinion the ecosystem should look like this:

When to use MCP:
MCP should be the standard for providing an interface that connects LLMs to something: a website, a service, an application. The service itself should dictate the interface it exposes.

  • Take Google Calendar. A gcal CLI is fine; the problem is the skill it takes to teach the LLM to set it up, manage the auth token, and keep it working. A first-party OAuth-backed remote MCP from Google handles all of this at the protocol level and works from any client with zero setup.
  • To control Chrome, the browser should expose an MCP endpoint for stateful control, rather than relying on a janky chrome-cli.
  • To debug with Hopper, a built-in MCP that lets the LLM run step() is infinitely better than a separate hopper-cli.
  • Xcode should ship with a built-in MCP that handles authentication when an LLM connects to a project.
  • Notion should assume mcp.notion.so/mcp is natively available, rather than forcing me to download a notion-cli and manage auth state manually. (They actually ship a remote MCP now, which is absolutely the right call.)

When to use skills:
Skills should be “pure”. They should focus on knowledge and context.

  • Teaching existing tools: I like a .claude/skills folder that teaches LLMs to use tools I already have installed. A skill explaining how to use curl, git, gh, or gcloud makes complete sense. We don’t need a “curl MCP”; we just need to teach the LLM how to build good curl commands. (That said, for managing issues, a dedicated remote GitHub MCP makes much more sense than gh CLI skills.)
  • Standardizing Workflows: Skills are perfect for teaching Claude your business jargon, internal communication style, or organizational structure.
  • Teaching file handling: This is another great example, and it’s what Anthropic does with its PDF skill: it explains how to deal with PDF files and manipulate them with Python.
  • Secret Management Patterns: A skill that tells Claude “use fnox for this repo, and here’s how” just makes sense. Claude loads the skill every time we deal with secrets. That’s much better than building a custom MCP just to call get_secret().
ls -al .claude/skills in the React Router repo

Skills living directly in the repo. The LLM automatically picks them up while working in that project.

Connectors vs Manual#

Shower thought: maybe the terminology is the problem. Skills should just be called LLM_MANUAL.md, and MCPs should be called connectors.

Both have their place.

I already do this for the services I run. Some examples:

  • mcp-server-devonthink: A local MCP server that gives any LLM direct control over DEVONthink. No CLI wrappers, just a clean tool interface.
  • MicroFN: Exposes a remote MCP at mcp.microfn.dev, so any MCP-enabled client can use it out of the box.
  • Kikuyo: Same story, remote MCP at mcp.kikuyo.dev.
  • MCP Nest: Tunnels local MCP servers through the cloud so they can be accessed remotely at mcp.mcpnest.dev/mcp. I built it because I wanted remote access to local MCPs without exposing my machine directly.

I also published skills for MicroFN and Kikuyo, but they cover the CLIs, not MCP. That said, writing this made me realize: a skill that explains how to use an MCP server actually makes a lot of sense. Not to replace MCP, but to give the LLM reference knowledge before it starts calling tools: what the service does, how the tools relate to each other, when to use which one. A knowledge layer on top of the connector layer. That’s the combination I want.

And this is actually a pattern I’m using more and more in practice. When I work with MCP servers, I inevitably run into gotchas and non-obvious patterns: a date format that needs to be YYYY-MM-DD instead of YYYYMMDD, a search function that narrows its results unless you add a parameter, a tool whose name doesn’t match what it does. Instead of rediscovering these every session, I tell Claude to wrap up everything we’ve learned into a skill. The LLM already has the context of our interactions, so it writes the skill with all the quirks, common patterns, and correct assumptions.

Claude packaging lessons learned from a NotePlan MCP session into a reusable skill

After discovering backlink gotchas and date format quirks in the NotePlan MCP, I asked Claude to package everything into a skill. Now every future session starts with that knowledge.

The result is a skill that serves as a cheat sheet for the MCP, not a replacement for it. MCP still handles the actual connection and tool execution. The skill just ensures that the LLM doesn’t waste tokens struggling with the same problems I’ve already solved. It’s the combination of the two that makes the experience truly seamless.
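A sketch of what such a cheat-sheet skill looks like: the frontmatter fields follow the Skills format, the skill name is my own invention, and the bullet points are the gotchas from that NotePlan session.

```markdown
---
name: noteplan-mcp-notes
description: Gotchas and usage patterns for the NotePlan MCP server.
---

# NotePlan MCP: lessons learned

- Dates must be formatted YYYY-MM-DD, never YYYYMMDD.
- Search narrows its results unless you pass the extra parameter.
- Double-check backlink behavior before editing notes in bulk.
```

No install steps, no secrets, no CLI: just the knowledge layer sitting on top of the connector.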

I will also keep maintaining my dotfiles repo full of skills for the processes I use most often, and I will keep dropping .claude/skills folders into my repositories to guide AI behavior.

I just hope the industry doesn’t abandon the Model Context Protocol. The dream of seamless AI integration depends on standardized interfaces, not a fragmented landscape of hacky CLIs. I’m still hoping for official SkyScanner, Booking.com, Trip.com, and Agoda MCPs.

my two cents.


Speaking of remote MCPs: I built MCP Nest specifically for this problem. Many useful MCP servers are local by nature, such as ones for Fastmail, Gmail, or anything else running on your machine. MCP Nest tunnels them through the cloud so they become remotely accessible, usable on all your devices from Claude, ChatGPT, Perplexity, or any MCP-enabled client. If you want your local MCPs to work everywhere without exposing your machine directly, that’s what it’s for.



