yes, as in a singular one.
Anthropic made a lot of noise in the media in April 2026 when they concluded that their new AI model Mythos is dangerously good at finding security flaws in source code. Apparently Mythos was so good at it that Anthropic did not release the model to the public right away, but instead offered it to a select few companies for a period of time, so that some good companies (?) could get a head start and fix the most serious problems before the general public could get their hands on it.
It seemed as if the whole world was losing its mind. Is this the end of the world as we know it? It certainly was a surprisingly successful marketing stunt.
my (non-)access
As part of the deal with Project Glasswing, Anthropic also offered access to its latest AI models to open source projects through the Linux Foundation. The Linux Foundation let their project Alpha-Omega handle this part, and their representatives contacted me. As a lead developer of curl I was offered access to Mythos and I graciously accepted the offer. Sure, I would like to see what it can find in curl.
I signed a contract to get access, but then nothing happened. Several weeks passed and I was told there was a problem and access was being delayed.
Eventually, what I was offered instead was that someone else who had access to the model could run a scan and analysis on curl for me using Mythos and send me a report. To me, the distinction is not that important, as I would not have had a lot of time to explore lots of different prompts and go on deep-dive adventures anyway. Getting a first proper scan and analysis would be great, regardless of who ran it. I gladly accepted this offer.
(I am intentionally leaving out the identities of the individuals involved in conducting the curl analysis as that is not the point of this blog post.)
curl’s ai scan
Before this first Mythos report, we had already scanned curl with several different and very capable AI powered tools (this is of course in addition to running multiple “normal” static code analyzers all the time, using the best compiler options, fuzzing it for years, etc.). AISLE, ZeroPath and OpenAI’s Codex Security have been the main AI tools we used to check the code. These tools and the analyses they triggered are behind somewhere between two and three hundred bugfixes merged into curl over the last 8-10 months or so. A number of the findings reported by these AI tools were confirmed vulnerabilities and have been published as CVEs. Possibly a dozen or more.
Nowadays we also use tools like GitHub’s Copilot and Augment Code to review pull requests, and their comments and complaints help us write better code and avoid merging new bugs. I mean, we still merge bugs, but PR review bots regularly highlight issues that we fix: our merges would be worse without them. AI reviews are used in addition to human reviews, not instead of them. They help us, they do not replace us.
We also see a large number of high-quality security reports coming out: security researchers now use AI extensively and effectively.
Security is a top priority for us in the curl project. We follow best practices and do software engineering properly to minimize the number of flaws in the code. Scanning for flaws is one of the many steps in keeping this ship safe. You will have to search long and hard to find another software project that works as hard or harder on software security than curl.

6 May 2026
With great anticipation we received the first source code analysis report generated with Mythos. Areas for improvement for us and another chance to fix bugs. To make an even better curl.
This initial scan was performed on curl’s Git repository, on a recent commit of the master branch. It counted 178K lines of analyzed code in the src/ and lib/ subdirectories.
The analysis details the many different approaches and methods it used in its search, and which kinds of flaws it focused on trying to find. A funny note at the top of the report says:
Curl is one of the most fuzzed and audited C codebases in existence (OSS-Fuzz, Coverity, CodeQL, multiple paid audits). Not likely to find anything in the hot paths (HTTP/1, TLS, URL parsing core).
…and it correctly found no problems in those areas.

curl size
curl is currently 176,000 lines of C code when we exclude the blank lines. The source code contains 660,000 words, which is 12% more than the entire English version of the novel War and Peace.
On average, each single production source code line of curl has been written (and then rewritten) 4.14 times. We have polished it.
Right now, the existing production code that still remains in Git master is written by 573 different individuals. Over time, a total of 1,465 individuals have merged their proposed changes into curl’s Git repository so far.
We have published 188 curl CVEs so far.
curl has been installed in more than twenty billion instances. It runs on 110 operating systems and 28 CPU architectures. It runs in basically every smart phone, tablet, car, TV, game console and server on earth.
five findings become one
The report concluded that it had found five “confirmed security vulnerabilities”. I find the use of the word confirmed a bit amusing when the AI states it with such confidence about its own findings. Yes, the AI thinks they are confirmed, but the curl security team has a slightly different opinion.
Five issues did not seem like much, as we had expected an extensive list. Once my curl security teammates and I had spent several hours poring over this short list and digging into the details, we narrowed it down to one confirmed vulnerability. Three of the other four were false positives (they flagged behaviors that are documented in the API documentation) and the fourth we considered “just a bug”.
The single confirmed vulnerability is going to get a low-severity CVE, planned to be published in sync with our pending next curl release 8.21.0 in late June. The flaw is not going to make anyone gasp. The details of that vulnerability will of course not be public until then, so you will have to wait for those.
The Mythos report on curl also included several spotted bugs that it concluded were not vulnerabilities, like any new code analyzer would find when run on hundreds of thousands of lines of code. We are investigating all the bugs in the report and, one by one, fixing the ones we agree with.
In total about twenty bugs, described and explained very well. Barely any false positives, so I assume the model has a pretty high threshold for certainty.
curl is definitely getting better thanks to this report, but in terms of the number of issues found, every previous AI tool we have used resulted in more bugfixes. This is of course natural, since the first tool we ran had the easiest job and could find many more bugs. As we have fixed problems along the way, it has gradually become harder to find new ones. Additionally, bugs can be small or large, so it is not always fair to simply compare numbers.
Not particularly “dangerous”
However, my personal conclusion cannot be anything other than that the huge hype around this model was mainly marketing. I see no evidence that this setup detects issues at a notably higher or more advanced level than the tools that came before Mythos. Maybe this model is slightly better, but even if it is, it is not better to an extent that makes a significant dent in code analysis.
This is of course just one source code repository, and the model is probably much better at other things. I can only tell and comment on what I saw here.
still very good
But allow me to highlight and repeat what I have said before: AI powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzer of the past. All modern AI models are now good at this. Anyone with time and some experimental spirit can now find security issues. High-quality chaos is real.
Any project that has not yet scanned its source code with AI powered tooling will find a large number of flaws, bugs and potential vulnerabilities with this new generation of tools. Mythos will find them, and so will many others.
Not using an AI code analyzer in your project means that you give adversaries and attackers the time and opportunity to find and exploit flaws that you don’t.
How are AI analyzers different?
- They can detect when a comment claims something about the code that the code does not actually do.
- They can check code for platforms and configurations on which we cannot otherwise run analyzers.
- They “know” details about third-party libraries and their APIs, so they can detect misuse or bad assumptions.
- They “know” details about the protocols curl speaks and can question details in the code that appear to violate or contradict the protocol specifications.
- They are generally good at summarizing and explaining defects, something that can be quite tedious and difficult with older-style analyzers.
- They can often prepare and offer a patch for a found issue (even if the patch is usually not a 100% fix).
More information from the report
Zero memory-protection vulnerabilities were found.
Methodology note: This review is a hand-driven analysis using LLM sub-agents for parallel file reading, with each candidate re-verified by direct source inspection in the main session before being recorded. The CVE variant-hunt mapping was created from curl’s own vuln.json. No automated SAST tooling was used.
This result is consistent with curl’s status as one of the most fuzzed and audited C codebases. Defensive infrastructure (dynbufs capped everywhere, curlx_str_number with explicit maximums on each numerical parse, curlx_memdup0 overflow guards, CURL_PRINTF format-string enforcement, per-protocol response-size caps, the pingpong 64KB line cap) systematically closes bug classes that would normally be productive in a codebase of this size.
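The capped-buffer pattern the report praises can be sketched roughly like this. This is an illustrative simplification, not curl’s actual dynbuf API (the struct and function names here are invented for the example): a growable buffer that refuses to grow past a hard byte limit, turning a would-be unbounded allocation into a clean error return.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch of a capped dynamic buffer. Names and layout are
   made up for this example and do NOT match curl's internal dynbuf. */
struct capbuf {
  char *mem;   /* heap storage, always NUL-terminated when non-NULL */
  size_t len;  /* bytes currently stored */
  size_t max;  /* hard cap: appends beyond this fail */
};

static void capbuf_init(struct capbuf *b, size_t max)
{
  b->mem = NULL;
  b->len = 0;
  b->max = max;
}

/* Returns 0 on success, non-zero if the cap would be exceeded or on OOM. */
static int capbuf_add(struct capbuf *b, const char *data, size_t n)
{
  char *p;
  if(n > b->max - b->len) /* overflow-safe cap check: len <= max always */
    return 1;
  p = realloc(b->mem, b->len + n + 1);
  if(!p)
    return 2;
  memcpy(p + b->len, data, n);
  b->len += n;
  p[b->len] = '\0';
  b->mem = p;
  return 0;
}

static void capbuf_free(struct capbuf *b)
{
  free(b->mem);
  b->mem = NULL;
  b->len = 0;
}
```

The point of the design is that every append site gets the bound for free: a server sending an endless header can at worst trigger an error path, never an unbounded allocation.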
Coverage now includes: all the smaller protocols, all file parsers, the verification paths of all TLS backends, HTTP/1/2/3, FTP at full depth, mprintf, x509asn1, DoH, all authentication mechanisms, content encoding, connection reuse, the session cache, the CLI tool, platform-specific code, and the CI/build supply chain.
AI detects existing types of errors
It should be noted that these AI tools detect instances of common and well-established types of errors that we already know about. They simply find new examples of them.
We have not yet seen any AI report a vulnerability of a completely new type. They do not reinvent the field in that way, but they do detect more issues than any tools have done before.
More to discover
These were not the last bugs to be found or reported. While I was writing the draft for this blog post, we received more reports from security researchers about suspected issues. AI tools will keep improving and researchers may find new and different ways to prompt existing AI models to discover more.
We haven’t reached the end of it yet.
I expect we will keep getting curl scanned over and over again with Mythos and other AI tools until they actually stop finding new problems.
credit
Thanks to Anthropic and Alpha-Omega for providing the models, the tooling and the scans for us. Thanks also to the person who ran the scan for us. Much appreciated!
Top image by Jin Kim from Pixabay
Thanks for flying curl. It is never dull.