Jeffrey Epstein Had a ‘Personal Hacker,’ Informant Claims

As the battle between the United States government and Minnesota continues this week over immigration enforcement operations that have essentially taken over the Twin Cities and other parts of the state, a federal judge delayed a decision and ordered new briefing on whether the Department of Homeland Security is using armed raids to pressure Minnesota into abandoning its sanctuary policies for immigrants.

Meanwhile, within minutes of 37-year-old Alex Pretti being shot dead by a federal immigration officer in Minneapolis last Saturday, Trump administration officials and right-wing influencers had already begun a campaign to smear him, labeling him a “terrorist” and “crazy.”

As part of its surveillance network, Immigration and Customs Enforcement has been using the AI-powered Palantir system since last spring to summarize tips sent to its tip line, according to a recently released Homeland Security document. DHS immigration agents are also using the now-infamous facial recognition app Mobile Fortify to scan the faces of countless people in the US, including many citizens. A new ICE filing, meanwhile, provides insight into how commercial tools, including ad tech and big data analytics, are increasingly being considered by the government for law enforcement and surveillance. And an active-duty military officer broke down federal immigration enforcement operations in Minneapolis and across the US for WIRED, concluding that ICE masquerades as a military force but actually relies on amateurish tactics that would get real soldiers killed.

WIRED this week published extensive details about the inner workings of a scam compound in the Golden Triangle region of Laos, after a human trafficking victim calling herself Red Bull spent months talking to a WIRED reporter and leaking a large trove of internal documents from the compound where she was being held. Red Bull also described her own experiences as a forced laborer at the compound and her attempts to escape.

The technology and tools used to produce deepfake “nudify” sexual imagery are becoming increasingly sophisticated, capable, and easy to use, creating greater risks for the millions of people victimized by those who abuse the technology. Plus, research this week found that the web console of an AI stuffed animal toy from Bondu was almost completely insecure, exposing 50,000 logs of chats with children to anyone with a Gmail account.

And there’s more. Each week, we round up the security and privacy news we haven’t covered in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

An informant told the FBI in 2017 that Jeffrey Epstein had a “personal hacker,” according to a document released by the Justice Department on Friday. The document, which was first reported by TechCrunch, was released as part of a larger trove of material the DOJ is legally required to release related to its investigation of the late sex offender. The document does not provide the identity of the alleged hacker, but does include some details: They were reportedly born in Calabria, a region in southern Italy, and their hacking focused on discovering vulnerabilities in Apple’s iOS mobile operating system, BlackBerry devices, and the Firefox browser. The informant told the FBI that the hacker was “very good at finding vulnerabilities.”

The hacker allegedly developed offensive hacking tools for unknown or unpatched vulnerabilities and sold them to buyers in several countries, including an unnamed Central African government, the UK, and the US. The informant also told the FBI that the hacker sold an exploit to Hezbollah and received “a trunk of cash” in payment. It is unclear whether the informant’s description is accurate or whether the FBI has verified the report.

Viral AI assistant OpenClaw — formerly called Clawbot and then, briefly, Moltbot — has taken Silicon Valley by storm this week. Technologists are handing the assistant control of their digital lives, connecting it to online accounts and letting it complete tasks for them. As WIRED reported, the assistant runs on a personal computer, connects to other AI models, and can be given permission to access your Gmail, Amazon, and many other accounts. “I could basically automate anything. It was magical,” one entrepreneur told WIRED.

That entrepreneur isn’t the only one fascinated by a capable AI assistant. OpenClaw’s creators say the project has been viewed by more than 2 million people in the past week. However, its agentic capabilities come with potential security and privacy trade-offs, starting with the need to grant access to online accounts, that may make it impractical for many people to operate securely. As OpenClaw has grown in popularity, security researchers have identified “hundreds” of instances where users have exposed their systems on the open web, The Register reports. Many had no authentication and granted full access to users’ systems.


