The Emperor Has No Clothes: How to Code Claude Code in 200 Lines of Code




Today, AI coding assistants feel like magic. You describe what you want, sometimes in barely coherent English, and they read the files, edit your project, and write functional code.

But the thing is: the core of these tools is not magic. It’s about 200 lines of plain Python.

Let’s build a functional coding agent from scratch.

Mental Models

Before we write any code, let’s understand what’s actually happening when you use a coding agent. It’s essentially a conversation with a powerful LLM that has a toolbox.

  1. You send a message (“Create a new file with a Hello World function”)
  2. The LLM decides it needs a tool and responds with a structured tool call (or multiple tool calls)
  3. Your program executes that tool call locally (actually creates the file)
  4. The outcome is sent back to the LLM
  5. The LLM uses that context to continue or respond

That’s the whole loop. The LLM never actually touches your file system. It just asks for things to happen, and your code makes them happen.
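The loop above can be sketched end to end with fake stand-ins for the real pieces (call_llm and run_tool here are stubs invented for this illustration, not a real API):

```python
# A runnable sketch of the loop with fake stand-ins for the real pieces
# (call_llm and run_tool are stubs invented for this illustration).
def call_llm(conversation):
    # Pretend the model asks for one tool, then answers once it sees a result.
    saw_result = any("tool_result" in m["content"] for m in conversation)
    return "Done!" if saw_result else 'tool: list_files({"path": "."})'

def run_tool(name, args):
    return {"files": ["hello.py"]}  # fake tool output

def agent_turn(conversation, user_message):
    conversation.append({"role": "user", "content": user_message})  # 1. you send a message
    while True:
        reply = call_llm(conversation)              # 2. LLM decides
        if not reply.startswith("tool:"):
            return reply                            # 5. plain response
        name = reply.split("(", 1)[0].removeprefix("tool:").strip()
        result = run_tool(name, {})                 # 3. execute locally
        conversation.append(                        # 4. send outcome back
            {"role": "user", "content": f"tool_result({result})"})

print(agent_turn([], "list my files"))  # → Done!
```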

Three Tools You Need

Our coding agent basically needs three capabilities:

  • Read files, so the LLM can see your code
  • List files, so it can navigate your project
  • Edit files, so it can create and modify code

That’s it. Production agents like Claude Code include more capabilities (grep, bash, web search, and so on), but we will see that three tools are enough to do incredible things.

Setting Up the Scaffolding

We start with basic imports and an API client. I’m using Anthropic here, but the pattern works with any LLM provider:

import inspect
import json
import os

import anthropic
from dotenv import load_dotenv
from pathlib import Path
from typing import Any, Dict, List, Tuple

load_dotenv()

claude_client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

Some terminal colors to make the output readable:

YOU_COLOR = "\u001b[94m"
ASSISTANT_COLOR = "\u001b[93m"
RESET_COLOR = "\u001b[0m"

And a utility to resolve file paths (so file.py becomes /Users/you/project/file.py):

def resolve_abs_path(path_str: str) -> Path:
    """
    file.py -> /Users/you/project/file.py
    """
    path = Path(path_str).expanduser()
    if not path.is_absolute():
        path = (Path.cwd() / path).resolve()
    return path
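For a quick sanity check, here is the helper in action (the function is repeated so the snippet runs standalone; the actual printed path depends on your working directory):

```python
from pathlib import Path

def resolve_abs_path(path_str: str) -> Path:  # copy of the helper above
    path = Path(path_str).expanduser()
    if not path.is_absolute():
        path = (Path.cwd() / path).resolve()
    return path

# Relative paths get anchored at the current working directory...
print(resolve_abs_path("file.py"))       # e.g. /Users/you/project/file.py
# ...while absolute paths pass through unchanged.
print(resolve_abs_path("/tmp/file.py"))  # /tmp/file.py
```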

Implementing the Tools

Note: write detailed docstrings for your tool functions, since the LLM uses them to reason about which tools to call during the conversation. More on this below.

Tool 1: Read File

The simplest tool. Take a filename, return its contents:

def read_file_tool(filename: str) -> Dict[str, Any]:
    """
    Gets the full contents of a user-supplied file.
    :param filename: The name of the file to read.
    :return: The full contents of the file.
    """
    full_path = resolve_abs_path(filename)
    print(full_path)
    with open(str(full_path), "r") as f:
        content = f.read()
    return {
        "path": str(full_path),
        "content": content
    }
We return a dictionary because the LLM needs structured context about what happened.

Tool 2: List Files

Navigate directories by listing their contents:

def list_files_tool(path: str) -> Dict[str, Any]:
    """
    Lists the files in a directory provided by the user.
    :param path: The path to a directory to list files from.
    :return: A list of files in the directory.
    """
    full_path = resolve_abs_path(path)
    all_files = []
    for item in full_path.iterdir():
        all_files.append({
            "filename": item.name,
            "type": "file" if item.is_file() else "dir"
        })
    return {
        "path": str(full_path),
        "files": all_files
    }

Tool 3: Edit File

This is the most complex tool, but it is still straightforward. It handles two cases:

  • Creating a new file (when old_str is empty)
  • Editing existing text (find old_str, replace it with new_str)

def edit_file_tool(path: str, old_str: str, new_str: str) -> Dict[str, Any]:
    """
    Replaces first occurrence of old_str with new_str in file. If old_str is empty,
    create/overwrite file with new_str.
    :param path: The path to the file to edit.
    :param old_str: The string to replace.
    :param new_str: The string to replace with.
    :return: A dictionary with the path to the file and the action taken.
    """
    full_path = resolve_abs_path(path)
    if old_str == "":
        full_path.write_text(new_str, encoding="utf-8")
        return {
            "path": str(full_path),
            "action": "created_file"
        }
    original = full_path.read_text(encoding="utf-8")
    if original.find(old_str) == -1:
        return {
            "path": str(full_path),
            "action": "old_str not found"
        }
    edited = original.replace(old_str, new_str, 1)
    full_path.write_text(edited, encoding="utf-8")
    return {
        "path": str(full_path),
        "action": "edited"
    }

The convention here: an empty old_str means “create this file.” Otherwise, search and replace. Production agents add more sophisticated fallback behavior when the string is not found, but this works.
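To see both modes in action, here is a quick demo against a temp directory (the function is repeated, slightly condensed, so the snippet runs standalone):

```python
import tempfile
from pathlib import Path

def edit_file_tool(path, old_str, new_str):  # condensed copy of the tool above
    full_path = Path(path)
    if old_str == "":
        full_path.write_text(new_str, encoding="utf-8")
        return {"path": str(full_path), "action": "created_file"}
    original = full_path.read_text(encoding="utf-8")
    if old_str not in original:
        return {"path": str(full_path), "action": "old_str not found"}
    full_path.write_text(original.replace(old_str, new_str, 1), encoding="utf-8")
    return {"path": str(full_path), "action": "edited"}

target = Path(tempfile.mkdtemp()) / "demo.py"
print(edit_file_tool(str(target), "", "print('hi')")["action"])  # created_file
print(edit_file_tool(str(target), "hi", "hello")["action"])      # edited
print(target.read_text())                                        # print('hello')
```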

Tool Registry

We need a way to look up tools by name:

TOOL_REGISTRY = {
    "read_file": read_file_tool,
    "list_files": list_files_tool,
    "edit_file": edit_file_tool 
}

Teaching the LLM About Our Tools

LLMs need to know what tools exist and how to call them. We generate this description dynamically from our function signatures and docstrings:

def get_tool_str_representation(tool_name: str) -> str:
    tool = TOOL_REGISTRY[tool_name]
    return f"""
    Name: {tool_name}
    Description: {tool.__doc__}
    Signature: {inspect.signature(tool)}
    """

def get_full_system_prompt():
    tool_str_repr = ""
    for tool_name in TOOL_REGISTRY:
        tool_str_repr += "TOOL\n===" + get_tool_str_representation(tool_name)
        tool_str_repr += f"\n{'='*15}\n"
    return SYSTEM_PROMPT.format(tool_list_repr=tool_str_repr)
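To make the point about docstrings concrete, here is the same introspection applied to a toy function (greet is a made-up example, not one of our three tools):

```python
import inspect

def greet(name: str) -> str:
    """Greets the given person by name."""
    return f"Hello, {name}!"

# The docstring and signature become the tool's documentation for the LLM.
print("Name: greet")
print(f"Description: {greet.__doc__}")           # Greets the given person by name.
print(f"Signature: {inspect.signature(greet)}")  # (name: str) -> str
```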

And the system prompt itself:

SYSTEM_PROMPT = """
You are a coding assistant whose goal is to help the user solve coding tasks.
You have access to a series of tools you can execute. Here are the tools you can execute:

{tool_list_repr}

When you want to use a tool, reply with exactly one line in the format: 'tool: TOOL_NAME({{JSON_ARGS}})' and nothing else.
Use compact single-line JSON with double quotes. After receiving a tool_result(...) message, continue the task.
If no tool is needed, respond normally.
"""

This is the key insight: we are simply telling the LLM “here are your tools, here is the format for calling them.” The LLM figures out when and how to use them.

Parsing Tool Calls

When the LLM responds, we need to check whether it is asking us to run any tools:

def extract_tool_invocations(text: str) -> List[Tuple[str, Dict[str, Any]]]:
    """
    Return list of (tool_name, args) requested in 'tool: name({...})' lines.
    The parser expects single-line, compact JSON in parentheses.
    """
    invocations = []
    for raw_line in text.splitlines():
        line = raw_line.strip()
        if not line.startswith("tool:"):
            continue
        try:
            after = line[len("tool:"):].strip()
            name, rest = after.split("(", 1)
            name = name.strip()
            if not rest.endswith(")"):
                continue
            json_str = rest[:-1].strip()
            args = json.loads(json_str)
            invocations.append((name, args))
        except Exception:
            continue
    return invocations

Simple text parsing: find lines starting with tool:, then extract the function name and JSON arguments.
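For example, given a response that mixes prose with a tool call (the parser is copied here so the snippet runs on its own):

```python
import json
from typing import Any, Dict, List, Tuple

def extract_tool_invocations(text: str) -> List[Tuple[str, Dict[str, Any]]]:
    # (copy of the parser above)
    invocations = []
    for raw_line in text.splitlines():
        line = raw_line.strip()
        if not line.startswith("tool:"):
            continue
        try:
            after = line[len("tool:"):].strip()
            name, rest = after.split("(", 1)
            if not rest.endswith(")"):
                continue
            invocations.append((name.strip(), json.loads(rest[:-1].strip())))
        except Exception:
            continue
    return invocations

reply = 'Let me look at that file first.\ntool: read_file({"filename": "hello.py"})'
print(extract_tool_invocations(reply))  # [('read_file', {'filename': 'hello.py'})]
```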

The LLM Call

A thin wrapper around the API:

def execute_llm_call(conversation: List[Dict[str, str]]):
    system_content = ""
    messages = []
    
    for msg in conversation:
        if msg["role"] == "system":
            system_content = msg["content"]
        else:
            messages.append(msg)
    
    response = claude_client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=2000,
        system=system_content,
        messages=messages
    )
    return response.content[0].text

The Agent Loop

Now we put it all together. This is where the magic happens:

def run_coding_agent_loop():
    print(get_full_system_prompt())
    conversation = [{
        "role": "system",
        "content": get_full_system_prompt()
    }]
    while True:
        try:
            user_input = input(f"{YOU_COLOR}You{RESET_COLOR}: ")
        except (KeyboardInterrupt, EOFError):
            break
        conversation.append({
            "role": "user",
            "content": user_input.strip()
        })
        while True:
            assistant_response = execute_llm_call(conversation)
            tool_invocations = extract_tool_invocations(assistant_response)
            if not tool_invocations:
                print(f"{ASSISTANT_COLOR}Assistant{RESET_COLOR}: {assistant_response}")
                conversation.append({
                    "role": "assistant",
                    "content": assistant_response
                })
                break
            for name, args in tool_invocations:
                tool = TOOL_REGISTRY[name]
                resp = ""
                print(name, args)
                if name == "read_file":
                    resp = tool(args.get("filename", "."))
                elif name == "list_files":
                    resp = tool(args.get("path", "."))
                elif name == "edit_file":
                    resp = tool(args.get("path", "."), 
                                args.get("old_str", ""), 
                                args.get("new_str", ""))
                conversation.append({
                    "role": "user",
                    "content": f"tool_result({json.dumps(resp)})"
                })

Structure:

  1. Outer loop: get user input and add it to the conversation
  2. Inner loop: call the LLM and check for tool invocations

    • If no tools are needed, print the response and break the inner loop
    • If tools are needed, execute them, add results to the conversation, loop again

The inner loop continues until the LLM responds without requesting any tools. This lets the agent chain multiple tool calls (read the file, then edit it, then confirm the edits).

Run It

if __name__ == "__main__":
    run_coding_agent_loop()

Now you can have conversations like this:

You: Create a new file for me called hello.py and implement helloworld in it

The agent calls edit_file with path="hello.py", old_str="", new_str="print('Hello World')"

Assistant: Done! Created hello.py with Hello World implementation.

Or multi-step interactions:

You: Edit hello.py and add a function to multiply two numbers

The agent calls read_file to view the current contents, then calls edit_file to add the function.

Assistant: Added a multiply function to hello.py.

What We Built vs. Production Tools

This is approximately 200 lines. Claude Code adds production features like:

  • Better error handling and fallback behavior
  • Streaming responses for better UX
  • Smarter context management (summarizing long files, etc.)
  • More tools (running commands, searching the codebase, etc.)
  • Approval workflows for destructive actions

But the core loop? It is exactly what we built here. The LLM decides what to do, your code executes it, and the results come back. That is the complete architecture.

Try It Yourself

The entire source is about 200 lines. Swap in your favorite LLM provider, adjust the system prompt, and add more tools as an exercise. You’ll be surprised at how much this simple pattern is capable of.

If you are interested in learning cutting-edge AI software development techniques for professional engineers, check out my online course.


