One way to do this, reportedly, is through an open source tool called Scrapling, which is designed to bypass anti-bot systems like Cloudflare Turnstile. While Scrapling, which is built in Python, works with many types of AI agents, OpenClaw users seem to be particularly fond of the software. On Monday, viral posts promoting Scrapling as a tool for OpenClaw users began spreading on X. Since its release, Scrapling has been downloaded over 200,000 times.
“No bot detection. No selector maintenance. No Cloudflare nightmares,” reads a viral post this week about the open source tool. “OpenClaw tells Scrapling what to extract. Scrapling handles stealth.”
Cloudflare is not pleased. The company had already blocked previous versions of Scrapling as users of the open source software kept trying to get around its anti-scraping protections. This week, the company was working on a patch for the latest iteration of Scrapling. “We make changes, and then they make changes,” says Dan Knecht, chief technology officer at Cloudflare. He says the company’s wealth of website data and ability to track trends give it an edge.
“We already had indications that they were starting to get a higher capacity to get around us,” Knecht says. “A team of security operations engineers was already working on a new set of interventions.”
Large language models were trained on the Internet’s archives, and that process involved a lot of scraping. In some ways, Scrapling users are following in the footsteps of the original model builders, but on a more personal scale.
Over the past few years, website owners have tried to put additional anti-bot protections in place, either to block software such as Scrapling or to find ways to make money from bots trying to access their sites. In turn, Cloudflare is working overtime to stop increasingly powerful bots from circumventing these protections.
Cloudflare offers its customers additional tools that block AI crawlers, unless the bots pay for access. The company claims that in less than a year it has stopped 416 billion unwanted scraping attempts.
“I didn’t know what I was doing”
As Scrapling gained popularity in recent weeks, crypto enthusiasts took notice, launching a $Scrapling memecoin. Karim Shoyer, who says he is the sole developer of Scrapling, posted about the memecoin on X (those posts have since been removed). After its price skyrocketed for nearly five hours, $Scrapling fell off a cliff as holders sold their stakes. “Bunch of fucking scammers,” read one comment on Pump.Fun, the site that hosts the coin.
“When people created that coin and I backed it, I had no idea what I was doing,” Shoyer says in a direct message with WIRED. “But once I found out, I didn’t want anything to do with it, and the money I took out earlier would go to charity; I wouldn’t get any benefit out of it. Or maybe just leave it to waste.”
In the fallout from the incident, the unofficial GitHub Projects community account, which has more than 300,000 followers on X, removed its posts highlighting Scrapling this week and distanced itself from the project. “We do not endorse, promote, or engage in crypto assets, token offerings, trading activity, or crypto-based fundraising,” it said in a post late Monday.
Crypto efforts aside, most software leaders continue to see agents and autonomous AI tools as the future of the web. Even Cloudflare’s Knecht, whose work involves stopping bots from scraping without consent, wants to build a world where humans and agents alike benefit from online data and the wishes of website owners are respected. “I see a path forward for an Internet that is friendly to both agents and humans,” he says.
This is a version of Will Knight’s AI Lab newsletter. Read previous newsletters here.