GitHub – daviddao/awful-ai: 😈Awful AI is a curated list to track current scary usages of AI

Dermatology app: Google's dermatology app is less effective for people with darker skin.

Trained on a dataset in which only 3.5 percent of images came from people with darker skin, Google's dermatology app can misclassify conditions in people of color. Google released the app without proper testing, knowing that it may not work for large populations. People unaware of the issue may spend time and money treating a disease they do not have, or believe they need not worry about a disease they do have.
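The danger here is arithmetic as much as ethics: with a heavily skewed dataset, a model can post a high aggregate accuracy while failing the underrepresented group. A minimal sketch (the 3.5 percent share is from the report; the per-group accuracies are hypothetical):

```python
# Overall accuracy hides per-group failure when test data mirrors a
# skewed training distribution. Numbers below are illustrative only,
# except the reported 3.5% darker-skin share.

def overall_accuracy(group_shares, group_accuracies):
    """Weight each group's accuracy by its share of the test set."""
    return sum(s * a for s, a in zip(group_shares, group_accuracies))

shares = [0.965, 0.035]    # lighter-skin, darker-skin (reported split)
accuracies = [0.90, 0.40]  # hypothetical per-group accuracies

print(f"overall: {overall_accuracy(shares, accuracies):.4f}")  # looks fine
print(f"darker-skin group: {accuracies[1]:.2f}")               # actually poor
```

This is why audits report stratified, per-group metrics rather than a single headline number.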

Source: Vice article

AI-based gaydar: AI claimed to identify sexual orientation from facial images.

Artificial intelligence can accurately predict whether people are gay or straight based on photographs of their faces, according to new research that suggests machines may have a much better "gaydar" than humans.

Sources: OSF, The Guardian

Infer the genetic disease from your face: DeepGestalt AI identifies genetic disorders from facial images.

DeepGestalt can accurately identify some rare genetic disorders using a photograph of a patient’s face. This allows payers and employers to potentially analyze facial images and discriminate against individuals who have pre-existing conditions or develop medical complications.

Sources: CNN article, Nature paper

Racist chat bot: Microsoft's Tay became racist after learning from Twitter.

A Microsoft chatbot named Tay spent a day learning from Twitter and started delivering anti-Semitic messages.

Source: Guardian

Racist auto-tag and recognition: Google's and Amazon's image-recognition programs showed racial bias.

A Google image-recognition program labeled the faces of several black people as gorillas. Amazon's Rekognition labeled darker-skinned women as men in 31 percent of cases, while lighter-skinned women were misidentified in only 7 percent of cases. Rekognition also helps the Washington County Sheriff's Office in Oregon speed up identifying suspects from hundreds of thousands of photo records. Zoom's facial recognition, along with many others, has difficulty recognizing black faces.

Sources: The Guardian, ABC News, Wired

Depixelizer: The AI consistently changes Obama's image to a white man.

An algorithm that reconstructs a plausible high-resolution face from a low-resolution input consistently turns a pixelated photo of Obama into a white man, a consequence of bias in its training data.

Source: The Verge

Twitter autocrop: Twitter's image-cropping feature showed bias and discrimination.

Twitter automatically crops uploaded images to generate previews. Users noticed that the crop tended to center on women's chests and to favor white faces over black ones.

Source: Vice

ChatGPT and LLMs: Large language models exhibit worrying biases.

Large language models (LLMs) like ChatGPT acquire worrying biases from the datasets they were trained on. When asked to write a program that would determine "whether or not a person should be tortured," ChatGPT's answer was simple: if they are from North Korea, Syria, or Iran, the answer is yes. While OpenAI actively tries to prevent harmful outputs, users have found ways around its safeguards.

Source: The Intercept

Autograding: The UK's grade-prediction algorithm was biased against poor students.

In the UK, an algorithm used to predict final grades from early-semester performance and historical school data was found to be biased against students from poorer backgrounds.
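A toy sketch of the mechanism (the blending weight and scores are hypothetical, not the actual UK formula): once a school's historical results enter the prediction, two equally strong students diverge based solely on where they study.

```python
# Hypothetical model of the failure mode: blending individual performance
# with school history penalizes strong students at underfunded schools.

def predict_grade(student_score, school_historical_mean, weight=0.6):
    """Blend the student's own score with the school's past average."""
    return weight * student_score + (1 - weight) * school_historical_mean

strong_student = 90
at_well_funded_school = predict_grade(strong_student, school_historical_mean=85)
at_underfunded_school = predict_grade(strong_student, school_historical_mean=55)

print(at_well_funded_school)   # higher grade
print(at_underfunded_school)   # same student, lower grade
```

The student's own work is identical in both calls; only the school prior differs.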

Source: The Verge

Sexist recruitment: AI recruitment tools showed bias against women.

AI-based recruiting tools like HireVue, PredictiveHire, or Amazon's internal software scan various features of job applicants, such as video, voice data, and CVs, to judge whether they are worth hiring. In Amazon's case, the algorithm quickly taught itself to prefer male candidates over female ones, penalizing CVs that included the word "women's," as in "Women's Chess Club Captain." It also reportedly downgraded graduates of two women's colleges.
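The underlying mechanism is proxy learning: trained on historically male-dominated hiring outcomes, a model assigns negative weight to any token correlated with female applicants. A deliberately naive sketch (the CV snippets and counting scheme are invented for illustration, not Amazon's system):

```python
# Toy token scorer: +1 for each appearance in a "hired" CV, -1 for each
# appearance in a "rejected" CV. With biased historical outcomes, the
# token "women's" itself ends up penalized.

from collections import Counter

hired = ["chess club captain", "software engineer", "rowing team"]
rejected = ["women's chess club captain", "women's rowing team"]

def token_weights(hired_docs, rejected_docs):
    """Learn per-token weights from historical hire/reject labels."""
    w = Counter()
    for doc in hired_docs:
        for tok in doc.split():
            w[tok] += 1
    for doc in rejected_docs:
        for tok in doc.split():
            w[tok] -= 1
    return w

weights = token_weights(hired, rejected)
print(weights["women's"])  # negative weight for the token itself
```

Note that nothing about job performance appears anywhere; the model can only echo the bias in its labels.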

Sources: Telegraph, Reuters, Washington Post

Sexist image generation: AI image-generation algorithms showed sexist tendencies.

Researchers have demonstrated that AI-based image-generation algorithms can amplify racist and sexist views. Feed one a picture of a man cropped just below the neck, and 43 percent of the time it will autocomplete him wearing a suit. Feed it a cropped photo of a woman, even a famous one like U.S. Representative Alexandria Ocasio-Cortez, and 53 percent of the time it will autocomplete her wearing a low-cut top or bikini. The top AI-generated image labels applied to men were "official" and "businessman"; for women they were "smile" and "chin".

Sources: Technology Review, Wired

Lensa: The Lensa AI app generates erotic images without consent.

Lensa, a viral AI avatar app, undresses women without their consent. One journalist commented: "Of the 100 avatars I created, 16 were topless, and the other 14 had me dressed in extremely short clothing… I have Asian heritage… My white female colleague got significantly less sexualized images. Another colleague with Chinese heritage got similar results to me, while my male colleagues got astronauts, explorers and inventors." Lensa also allegedly produced nude images from childhood photographs.

Sources: Prisma AI, Technology Review, Wired

Guess gender from a name: Genderify's AI showed bias in gender identification.

Genderify was a biased service that promised to identify someone’s gender by analyzing their name, email address or username with the help of AI. According to Genderify, Meghan Smith is a woman, but Dr. Meghan Smith is a man.
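The reported "Dr. Meghan Smith" failure is what happens when an honorific leaks into the features. A hypothetical reconstruction (not Genderify's actual code) of how such a flip can occur:

```python
# Toy name-to-gender classifier: a spurious honorific rule overrides the
# first-name lookup, so adding "Dr." flips the prediction for the same name.

def guess_gender(name):
    """Naive lookup with a biased honorific shortcut (illustrative only)."""
    first_names = {"meghan": "female", "john": "male"}
    tokens = name.lower().replace(".", "").split()
    if "dr" in tokens:  # "doctor" spuriously learned as a male signal
        return "male"
    for tok in tokens:
        if tok in first_names:
            return first_names[tok]
    return "unknown"

print(guess_gender("Meghan Smith"))      # female
print(guess_gender("Dr. Meghan Smith"))  # male -- same person, new title
```

Any feature that correlates with gender in historical data (titles, job words, domains) can hijack the prediction this way.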

Source: The Verge

GRADE: A PhD-admissions algorithm at UT Austin showed bias.

GRADE, an algorithm used to filter PhD applications at UT Austin, was found to be biased. In some tests the algorithm ignored letters of recommendation and statements of purpose, which typically help applicants who do not have a perfect GPA. After seven years of use, about 80 percent of CS graduate students at UT were men. The university recently decided to phase out the algorithm; the official reason is that it is too difficult to maintain.

Source: Inside Higher Ed

PredPol: PredPol potentially reinforces over-policing of minority neighborhoods.

PredPol, a program for police departments that predicts hotspots where crimes may occur in the future, could potentially be trapped in a feedback loop of over-policing majority black and brown neighborhoods.
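The feedback loop is easy to demonstrate with a toy simulation (all numbers hypothetical): patrols go where crime was recorded, but recorded crime depends on where patrols go, so a small initial disparity compounds even when true crime rates are identical.

```python
# Toy predictive-policing loop: each round, the district with the most
# recorded crime gets extra patrols, which detect extra crime there.

def simulate(recorded, true_rates, rounds=5, detection_boost=0.5):
    """Run the record -> patrol -> record loop for a few rounds."""
    recorded = list(recorded)
    for _ in range(rounds):
        hot = recorded.index(max(recorded))  # most-patrolled district
        for i, rate in enumerate(true_rates):
            boost = 1 + detection_boost if i == hot else 1
            recorded[i] += rate * boost      # patrols inflate detections
    return recorded

# Identical true crime rates; district 0 merely starts with one extra record.
print(simulate(recorded=[11, 10], true_rates=[10, 10]))
```

After five rounds the recorded gap has grown from 1 to 26, and the model's "prediction" looks self-confirming.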

Sources: PredPol, The Marshall Project, Twitter

COMPAS: The COMPAS algorithm shows racial bias in risk assessment.

COMPAS is a risk-assessment algorithm used by the State of Wisconsin to predict the risk of recidivism in law courts. Its manufacturer refuses to disclose the proprietary algorithm; only the final risk-assessment score is known. The algorithm is biased against black defendants (studies have found that COMPAS performs no better than untrained human evaluators).
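The standard audit for this kind of bias, popularized by ProPublica's analysis, compares false-positive rates across groups: among people who did *not* reoffend, how often was each group flagged high-risk? A sketch of the method on made-up records (only the audit technique is real; the data is invented):

```python
# ProPublica-style fairness audit: compare false-positive rates by group.

def false_positive_rate(records, group):
    """FPR = non-reoffenders flagged high-risk / all non-reoffenders."""
    negatives = [r for r in records
                 if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in negatives if r["high_risk"]]
    return len(flagged) / len(negatives)

records = [  # fabricated illustrative data
    {"group": "A", "reoffended": False, "high_risk": True},
    {"group": "A", "reoffended": False, "high_risk": False},
    {"group": "B", "reoffended": False, "high_risk": False},
    {"group": "B", "reoffended": False, "high_risk": False},
    {"group": "B", "reoffended": False, "high_risk": True},
    {"group": "B", "reoffended": False, "high_risk": False},
]

print(false_positive_rate(records, "A"))  # group A flagged more often
print(false_positive_rate(records, "B"))
```

Because the real score is proprietary, outside audits like this, run on outcome records, are the only check available.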

Sources: Equivant, ProPublica, NYT

Guess criminality from your face: An AI program attempts to predict criminality from facial features.

A program that claims to detect, from facial features alone, whether or not someone is a criminal.

Sources: arXiv, Technology Review

Forensic Sketch AI-rtist: Generative AI for forensic sketches may reinforce witness biases.

A generative AI artist that creates "ultra-realistic forensic sketches" from witness descriptions. This is dangerous because generative AI models have been shown to be highly biased, and a photorealistic sketch can give a vague, unreliable description unwarranted weight.

Sources: Twitter, Hugging Face

Homeland Security: Homeland Security's AI aims to predict high-risk travelers.

Homeland Security, with DataRobot, is building a terrorist-prediction algorithm that tries to predict whether a traveler or group of travelers is at high risk by looking at age, home address, destination and/or transit airports, route information (one-way or round trip), length of stay, and luggage information, etc., and comparing them with known examples.

Sources: The Intercept, DataRobot

Atlas: Atlas software flags naturalized Americans for potential citizenship revocation.

Homeland Security’s Atlas software scans millions of immigrant records and can flag naturalized Americans for potentially revoking their citizenship based on secret criteria. In 2019, Atlas processed more than 16 million “screenings” and generated 124,000 “automated potential fraud, public safety and national security identifications”.

Source: The Intercept

iBorderCtrl: AI polygraph tests for EU travelers may show bias.

AI-based polygraph testing for travelers entering the EU (testing phase). Given how many people move across EU borders every day, the number of false positives is likely to be high. Furthermore, facial recognition algorithms suffer from racial bias.
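The false-positive concern is a base-rate problem, and the arithmetic is worth making explicit. With all numbers hypothetical (crossing volume, liar rate, and detector false-alarm rate are illustrative assumptions, not iBorderCtrl specifications):

```python
# Base-rate arithmetic: when nearly all travelers are honest, even a
# modest false-alarm rate flags tens of thousands of innocent people.

def daily_false_positives(travelers, liar_rate, false_alarm_rate):
    """Honest travelers wrongly flagged per day."""
    honest = travelers * (1 - liar_rate)
    return honest * false_alarm_rate

# Assume 700,000 crossings/day, 1 in 1,000 actually lying, and a
# detector that false-alarms on 10% of honest travelers.
fp = daily_false_positives(700_000, liar_rate=0.001, false_alarm_rate=0.10)
print(f"{fp:,.0f} honest travelers flagged per day")
```

Under these assumptions, roughly seventy thousand honest travelers would be flagged every day, dwarfing the 700 actual liars the system is hunting for.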

Sources: European Commission, Gizmodo

Faception: Faception claims to reveal personality traits based on facial features.

Based on facial features, Faception claims it can reveal personality traits such as "an extrovert, a person with a high IQ, a professional poker player or a threat". The company builds classifiers that, without any prior knowledge, sort faces into categories like pedophile, terrorist, white-collar criminal, and bingo player.

Sources: Faception, Faception Classifiers, YouTube

Atrocities against ethnic minorities: Chinese AI algorithms target the Uyghur minority.

A Chinese start-up has created algorithms that allow the government of the People's Republic of China to automatically track Uyghur people. This AI technology culminates in products like Hikvision's AI camera, which is marketed as automatically identifying Uyghurs, one of the world's most persecuted minorities.

Sources: The Guardian, NYT

SyRI: The Dutch AI system SyRI was deemed discriminatory.

The 'Systeem Risico Indicatie' ('System Risk Indication') was an AI-based anti-fraud system used by the Dutch government from 2008 to 2020. It combined large amounts of personal data held by the government to estimate whether an individual was likely to commit fraud. Anyone the system flagged was placed on a special list that could block access to certain government services. SyRI was discriminatory in its decisions and never caught anyone who was later proven to be a fraudster. In February 2020, a Dutch court ruled that the use of SyRI violated human rights.

Sources: NOS, Dutch Court Decision, Amicus Curiae

Inappropriate vaccine-distribution decisions: Stanford's vaccine algorithm favored some hospital employees.

Only 7 of the more than 1,300 frontline hospital residents were prioritized for the first 5,000 doses of the Covid vaccine. Stanford's university hospital blamed a complex rules-based decision algorithm for its uneven vaccine-distribution plan.

Source: Technology Review

Predicting future research impact: AI models may bias scientific-research funding.

The authors claim that machine-learning models can be used to predict the future “impact” of research published in the scientific literature. However, the model may contain institutional biases, and may hinder the progress of creative science and funding if researchers and funders follow its advice.

Source: Nature




