Because its training dataset contained only 3.5 percent of images from people with darker skin, Google’s dermatology app could misclassify skin conditions in people of color. Google released the app without adequate testing, knowing it might not work well for large parts of the population. People unaware of the issue may spend time and money treating a disease they do not have, or wrongly believe they do not need to worry about a disease they do have.
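As a rough illustration of the underlying data problem, the sketch below tallies how images are distributed across skin-tone groups in a training set. The `fitzpatrick_type` grouping, the counts, and the 10% threshold are illustrative assumptions, not Google’s actual schema or policy; only the roughly 3.5% share of darker-skin images comes from the report above.

```python
# Hypothetical representation audit for a dermatology training set.
from collections import Counter

MIN_SHARE = 0.10  # assumed minimum share for a group to count as represented

training_set = (
    [{"fitzpatrick_type": "I-II"} for _ in range(600)]
    + [{"fitzpatrick_type": "III-IV"} for _ in range(365)]
    + [{"fitzpatrick_type": "V-VI"} for _ in range(35)]   # ~3.5% darker skin tones
)

counts = Counter(r["fitzpatrick_type"] for r in training_set)
total = sum(counts.values())
for label, n in sorted(counts.items()):
    share = n / total
    status = "OK" if share >= MIN_SHARE else "UNDER-REPRESENTED"
    print(f"{label}: {share:.1%} ({status})")
```

Running such a check before release would flag the darker skin-tone group long before the model reaches patients.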
Artificial intelligence can accurately predict whether people are gay or straight from photographs of their faces, according to research suggesting that machines may have much better “gaydar” than humans.
DeepGestalt can accurately identify some rare genetic disorders from a photograph of a patient’s face. This could allow payers and employers to analyze facial images and discriminate against individuals with pre-existing conditions or a risk of developing medical complications.
A Microsoft chatbot named Tay spent a day learning from Twitter and started delivering anti-Semitic messages.
A Google image-recognition program labeled the faces of several black people as gorillas. Amazon’s Rekognition labeled darker-skinned women as men in 31 percent of cases, while lighter-skinned women were misidentified in only 7 percent of cases. Rekognition nonetheless helps the Washington County Sheriff’s Office in Oregon speed up identifying suspects from hundreds of thousands of photo records. Zoom’s face detection, like many other systems, has difficulty recognizing black faces.
An algorithm that reconstructs a high-resolution face from a pixelated image consistently turns a pixelated photo of Barack Obama into a white man, a result of bias in its training data.
Twitter automatically crops uploaded images to generate previews. Users noticed that this cropping algorithm tends to focus on women’s chests and to favor white faces over black faces.
Large language models (LLMs) such as ChatGPT acquire worrying biases from the datasets they were trained on: when asked to write a program that would determine “whether or not a person should be tortured,” ChatGPT’s answer was simple: if they are from North Korea, Syria, or Iran, the answer is yes. While OpenAI is actively trying to prevent harmful outputs, users have found ways around these safeguards.
In the UK, an algorithm used to predict final grades from performance earlier in the year and historical data was found to be biased against students from poorer backgrounds.
AI-based recruiting tools such as HireVue, PredictiveHire, and Amazon’s internal software scan various features of job applicants and their CVs, including video or voice data, to decide whether they are worth hiring. In Amazon’s case, the algorithm taught itself to prefer male candidates over female candidates, penalizing CVs that included the word “women’s,” as in “women’s chess club captain.” It also reportedly downgraded graduates of two all-women’s colleges.
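The sketch below illustrates, on invented toy data, how such a proxy can be surfaced: train a simple CV classifier on a historically male-skewed hiring record and inspect the learned weight of each token. The résumés, labels, and model choice are assumptions for illustration only; this has nothing to do with Amazon’s actual system.

```python
# Surfacing gendered proxy features in a toy CV-screening model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "women's chess club captain, software engineer",
    "software engineer, chess club member",
    "graduate of women's college, data analyst",
    "data analyst, hackathon winner",
]
# Labels mimic a historically male-skewed hiring record (1 = hired).
hired = [0, 1, 0, 1]

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# A strongly negative weight on a gendered token such as "women" reveals
# the proxy the model has latched onto.
for token, weight in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{token:12s} {weight:+.2f}")
```

Auditing learned weights (or feature attributions for non-linear models) is one of the few ways such proxies become visible before a tool is deployed.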
Researchers have demonstrated that AI-based image-generation algorithms reproduce racist and sexist views. Feed one a picture of a man cropped just below the neck, and 43% of the time it will autocomplete him wearing a suit. Feed it a cropped photo of a woman, even a famous one like U.S. Representative Alexandria Ocasio-Cortez, and 53% of the time it will autocomplete her wearing a low-cut top or bikini. The top AI-generated image labels applied to men were “official” and “businessman”; for women they were “smile” and “chin”.
Lensa, a viral AI avatar app, undresses women without their consent. One journalist commented: “Of the 100 avatars I created, 16 were topless, and another 14 had me dressed in extremely short clothing… I have Asian heritage… My white female colleague got significantly less sexualized images. Another colleague with Chinese heritage got similar results to me, while my male colleagues got astronauts, explorers and inventors.” Lensa also allegedly produced nude images when given childhood photographs.
Genderify was a biased service that promised to identify someone’s gender by analyzing their name, email address or username with the help of AI. According to Genderify, Meghan Smith is a woman, but Dr. Meghan Smith is a man.
GRADE, an algorithm that filtered PhD applications at UT Austin, was found to be biased. In some tests, the algorithm ignored letters of recommendation and statements of purpose, which typically help applicants who don’t have a perfect GPA. After seven years of use, ‘about 80 percent of CS graduates at UT were men.’ The university recently decided to phase out the algorithm, the official reason being that it is too difficult to maintain.
PredPol, a program for police departments that predicts hotspots where crimes may occur, can become trapped in a feedback loop of over-policing majority black and brown neighborhoods.
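A toy simulation makes the feedback loop concrete: two neighborhoods with identical true crime rates, where patrols are allocated in proportion to past recorded incidents, so the neighborhood that starts with more records keeps accumulating more. All numbers are invented for illustration; this is not PredPol’s actual model.

```python
# Toy simulation of the predictive-policing feedback loop described above.
import random

random.seed(0)
true_rate = {"A": 0.1, "B": 0.1}   # both neighborhoods have identical true crime rates
recorded = {"A": 30, "B": 10}      # but A starts with more historical records

for week in range(20):
    total = sum(recorded.values())
    # Patrols are allocated in proportion to past recorded incidents ...
    patrols = {n: round(100 * recorded[n] / total) for n in recorded}
    # ... and crime is only recorded where officers are present to observe it.
    for n in recorded:
        recorded[n] += sum(random.random() < true_rate[n] for _ in range(patrols[n]))

print(recorded)
# Neighborhood A ends up with far more recorded incidents than B: the record
# mirrors where patrols were sent, not where crime happened, and the loop never corrects.
```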
COMPAS is a risk-assessment algorithm used by the State of Wisconsin to predict the risk of recidivism in courtrooms. Its manufacturer refuses to disclose the proprietary algorithm, so only the final risk-assessment score is known. The algorithm is biased against black defendants (and COMPAS performs no better than untrained human evaluators).
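The kind of audit ProPublica ran on COMPAS can be sketched as comparing false-positive rates across groups among people who did not reoffend. The records below are invented placeholders; the real analysis used thousands of court records, with only the final risk score available, as noted above.

```python
# Minimal group-fairness audit on risk scores: false-positive rate by group.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were nonetheless labeled high risk."""
    non_reoffenders = [r for r in records if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

records = [
    {"group": "black", "high_risk": True,  "reoffended": False},
    {"group": "black", "high_risk": True,  "reoffended": False},
    {"group": "black", "high_risk": False, "reoffended": False},
    {"group": "white", "high_risk": True,  "reoffended": False},
    {"group": "white", "high_risk": False, "reoffended": False},
    {"group": "white", "high_risk": False, "reoffended": False},
    # ... thousands of rows in a real audit
]

for g in ("black", "white"):
    print(g, f"FPR = {false_positive_rate(records, g):.0%}")
```

A disparity in these rates means one group is disproportionately flagged as high risk despite not reoffending, even when overall accuracy looks balanced.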
A program that claims to determine from facial features alone whether or not you are a criminal.
A generative-AI “artist” that creates “ultra-realistic forensic sketches” from witness descriptions. This is dangerous because generative AI models have been shown to be highly biased, and a vague description gets rendered as one specific, realistic-looking face.
Homeland Security, together with DataRobot, is building a terrorist-prediction algorithm that tries to predict whether a traveler or group of travelers poses a high risk by looking at age, home address, destination and/or transit airports, route information (one-way or round trip), length of stay, luggage information, and so on, and comparing these against known examples.
Homeland Security’s Atlas software scans millions of immigrant records and can flag naturalized Americans for potentially revoking their citizenship based on secret criteria. In 2019, Atlas processed more than 16 million “screenings” and generated 124,000 “automated potential fraud, public safety and national security identifications”.
AI-based polygraph testing for travelers entering the EU (in a testing phase). Given how many people cross EU borders every day, even a small error rate would produce a large number of false positives. Furthermore, facial-recognition algorithms suffer from racial bias.
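To make the false-positive concern concrete, here is a back-of-the-envelope calculation. The daily traveler volume, the 95% specificity, and the share of honest travelers are all assumed figures for illustration, not numbers from the actual pilot.

```python
# Base-rate arithmetic for an AI "lie detector" at the border.
travelers_per_day = 700_000      # assumed daily border crossings
specificity = 0.95               # assumed: 5% of honest travelers get flagged
honest_fraction = 0.999          # assume almost all travelers are honest

false_positives_per_day = travelers_per_day * honest_fraction * (1 - specificity)
print(f"Expected false positives per day: {false_positives_per_day:,.0f}")
# ~35,000 innocent travelers flagged daily, even under these optimistic assumptions
```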
Based on facial features alone, Faception claims it can reveal personality traits such as “an extrovert, a person with a high IQ, a professional poker player or a threat”. The company builds models that, with no other information about a person, classify faces into categories such as pedophile, terrorist, white-collar criminal, and bingo player.
A Chinese start-up has created algorithms that allow the government of the People’s Republic of China to automatically track Uyghur people. This technology culminates in products such as Hikvision’s AI camera, marketed as able to automatically identify Uyghurs, one of the world’s most persecuted minorities.
The ‘Systeem Risico Indicatie’ (SyRI), or ‘Risk Identification System’, was an AI-based anti-fraud system used by the Dutch government from 2008 to 2020. The system combined large amounts of personal data held by government agencies to estimate whether an individual was likely to commit fraud. Anyone the system flagged was entered on a list that could block access to certain government services. SyRI was discriminatory in its decisions and never identified anyone who was subsequently proven to have committed fraud. A Dutch court ruled in February 2020 that the use of SyRI violated human rights.
Only 7 of the more than 1,300 frontline hospital residents were prioritized for the first 5,000 doses of the Covid vaccine. University Hospital blamed a complex rules-based decision algorithm for its uneven vaccine distribution plan.
The authors claim that machine-learning models can predict the future “impact” of research published in the scientific literature. However, such a model may encode institutional biases, and it may hinder creative science and skew funding if researchers and funders follow its advice.