> The whole idea of llms.txt was completely useless. These are all silly abstractions that AI doesn't need, because AIs are just as smart as humans: they can just use what already exists, which is APIs.

llms.txt does, in fact, suck, but that's the only true part of that statement. I'm here to rage once again about yet another mindless take on social media. This one is about content optimization.
The short version: you should optimize content for agents, just like you optimize it for people. How you do this is an ever-evolving topic, but there are some common themes:
- ordering of content
- size of content
- depth of the link hierarchy
Frontier models, and the agents built on them, all behave similarly, with similar constraints and optimizations. For example, one thing they are known to do to avoid context bloat is to read only parts of files: the first N lines, or bytes, or characters. They are also known to behave very differently when they are told that information exists somewhere, versus needing to find it themselves. Both of those concerns are why llms.txt was a valuable idea; it was just the wrong implementation.
Today the implementation is simple: content negotiation. When a request comes in with `Accept: text/markdown`, you can confidently assume you have an agent. This is your hook, and from there it's up to you how you customize the response. I'm going to be brief and to the point and give you some examples of how we do this at Sentry.
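As a sketch of what that hook can look like (a hypothetical helper, not Sentry's actual code), a minimal Accept-header check might be:

```python
def wants_markdown(accept_header: str) -> bool:
    """Return True if the client's Accept header asks for text/markdown.

    Hypothetical sketch: a real server would lean on its framework's
    content-negotiation support (q-value ordering, wildcards, etc.).
    """
    for part in accept_header.split(","):
        # Drop any parameters like "; q=0.9" and normalize the media type.
        media_type = part.split(";")[0].strip().lower()
        if media_type == "text/markdown":
            return True
    return False
```

Browsers send `Accept` headers like `text/html,application/xhtml+xml,...`, so a check like this cleanly separates agent traffic from human traffic without any user-agent sniffing.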
## Docs
We’ve spent a lot of time optimizing our docs for agents, for obvious reasons. The primary customizations are mostly simple:
- Serve true Markdown content – massive token savings as well as better accuracy
- Remove things that only make sense in the context of the browser, especially navigation and JavaScript bits
- Optimize certain pages to focus more on link hierarchy – our index, for example, is mostly a sitemap, which is completely different from the non-Markdown version
```shell
$ curl -H "Accept: text/markdown" https://docs.sentry.io/
---
title: "Sentry Documentation"
url: https://docs.sentry.io/
---
# Sentry Documentation
Sentry is a developer-first application monitoring platform that helps you identify and fix issues in real-time. It provides error tracking, performance monitoring, session replay, and more across all major platforms and frameworks.
## Key Features
* **Error Monitoring**: Capture and diagnose errors with full stack traces, breadcrumbs, and context
* **Tracing**: Track requests across services to identify performance bottlenecks
* **Session Replay**: Watch real user sessions to understand what led to errors
* **Profiling**: Identify slow functions and optimize application performance
* **Crons**: Monitor scheduled jobs and detect failures
* **Logs**: Collect and analyze application logs in context
...
```
In our case we actually use MDX to render these, so this involved a fair amount of parser changes and overrides to let key pages render differently. The result: pages that are far more actionable for agents.
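The Markdown responses follow a simple shape: YAML frontmatter (title and URL), then the page body. A hypothetical renderer for that shape (the real pipeline renders MDX with per-page overrides, so this is only an illustration of the output format):

```python
def render_agent_page(title: str, url: str, body: str) -> str:
    """Assemble the frontmatter-plus-body format served to agents.

    Hypothetical sketch of the response shape, not Sentry's renderer.
    """
    return (
        "---\n"
        f'title: "{title}"\n'
        f"url: {url}\n"
        "---\n"
        f"# {title}\n"
        f"{body}\n"
    )
```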
## Sentry
If a headless bot is loading a website, the least useful thing you can serve it is a page that requires authentication. In our case we use the opportunity to tell the agent about the programmatic ways it can access application data (MCP, CLI, API, etc.):
````shell
$ curl -H "Accept: text/markdown" https://sentry.io
# Sentry
You've hit the web UI. It's HTML meant for humans, not machines.
Here's what you actually want:
## MCP Server (recommended)
The fastest way to give your agent structured access to Sentry.
OAuth-authenticated, HTTP streaming, no HTML parsing required.
```json
{
  "mcpServers": {
    "sentry": {
      "url": "https://mcp.sentry.dev/mcp"
    }
  }
}
```
Docs: https://mcp.sentry.dev
## CLI
Query issues and analyze errors from the terminal.
https://cli.sentry.dev
...
````
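One way to wire that up (a sketch assuming a generic handler shape, not Sentry's actual stack): if the request looks like an agent and the route would normally demand a login, return the pointer document instead of bouncing the bot to a login form.

```python
# Hypothetical pointer document; the real one lists MCP, CLI, and API options.
AGENT_POINTER = (
    "# Sentry\n"
    "You've hit the web UI. It's HTML meant for humans, not machines.\n"
    "See the MCP server instead: https://mcp.sentry.dev\n"
)

def handle_request(
    accept_header: str, requires_auth: bool, html_page: str
) -> tuple[int, str, str]:
    """Hypothetical dispatcher returning (status, content_type, body)."""
    if "text/markdown" in accept_header and requires_auth:
        # Don't serve a login wall to a headless agent; point it at the
        # programmatic entry points instead.
        return 200, "text/markdown", AGENT_POINTER
    return 200, "text/html", html_page
```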
## Warden
For projects like Warden, we’ve set it up so that an agent can pull all the content it needs right at bootstrap:

> Help me install warden.sentry.dev
```shell
$ curl -H "Accept: text/markdown" https://warden.sentry.dev
# Warden
> Agents that review your code. Locally or on every PR.
Warden watches over your code by running **skills** against your changes. Skills are prompts that define what to look for: security vulnerabilities, API design issues, performance problems, or anything else you want consistent coverage on.
Skills follow the [agentskills.io](https://agentskills.io) specification. They're markdown files with a prompt that tells the AI what to look for. You can use community skills, write your own, or combine both.
- Docs: https://warden.sentry.dev
- GitHub: https://github.com/getsentry/warden
- npm: https://www.npmjs.com/package/@sentry/warden
## How It Works
Every time you run Warden, it:
1. Identifies what changed (files, hunks, or entire directories)
2. Matches changes against configured triggers
3. Runs the appropriate skills against matching code
4. Reports findings with severity, location, and optional fixes
Warden works in two contexts:
- **Locally** - Review changes before you push, get instant feedback
- **In CI** - Automatically review pull requests, post findings as comments
## Quick Start
...
```
## That’s it
It’s simple and it works. You should do this. You should also pay attention to how patterns with agents are evolving, and update your optimizations as their behavior changes.