
The decision, written by U.S. District Judge Sara Ellis, took issue with how members of Immigration and Customs Enforcement and other agencies carried out the so-called “Operation Midway Blitz,” which arrested more than 3,300 people and placed more than 600 in ICE custody, and which included repeated violent confrontations with protesters and civilians. Agencies should have documented those incidents in use-of-force reports, but Ellis said there were often discrepancies between what appeared in officers’ body-worn camera footage and what ended up in the written records, leading her to find the reports unreliable.
More troubling still, she said that at least one report was not even written by the official who filed it. According to her footnote, body camera footage revealed that an agent “had asked ChatGPT to compile a narrative for a report based on a brief sentence about an encounter and several images.” The official reportedly presented ChatGPT’s output as the report, despite having given the tool extremely limited information, meaning the rest was likely guesswork.
“To the extent that agents use ChatGPT to create use of force reports, it further undermines their credibility and may make the inaccuracy of these reports apparent when viewed in the light of [body-worn camera] footage,” Ellis wrote in a footnote.
According to the Associated Press, it is unclear whether the Department of Homeland Security has any policy governing the use of generative AI tools to write reports. At the very least, this is far from best practice, given that generative AI will fill in gaps with entirely fabricated information when it has not been given the facts.
DHS has a dedicated page about the use of AI at the agency, and it has deployed its own chatbot to help agents with “day-to-day activities” after testing commercially available chatbots, including ChatGPT. But the footnote gives no indication that the agency’s internal tool was what the officer used; rather, it suggests the person filling out the report went directly to ChatGPT and uploaded the information to complete it.
No wonder one expert told the Associated Press that this is a “worst-case scenario” for the use of AI by law enforcement.