Canva Admits Its AI Tool Removed ‘Palestine’ From Designs, Apologizes for Any Distress It Caused


Graphic design platform Canva offers users a number of AI tools, but it turns out they can have some strong editorial opinions of their own, including removing the word "Palestine" from designs. The issue was spotted by X user @ros_ie9, who shared an image showing Canva's "Magic Layers" feature changing the text of a design from "Cats for Palestine" to "Cats for Ukraine."

Others claimed to have replicated the issue, which seemed limited to the word "Palestine," which was, for whatever reason, repeatedly replaced with "Ukraine." Users were able to create projects that included the word "Gaza" without any problems.

When contacted by Gizmodo, a Canva spokesperson confirmed the issue and said it has been resolved. “We became aware of an issue with our Magic Layers feature and we immediately investigated and fixed it. It has now been resolved and we are taking steps to ensure it doesn’t happen again,” the spokesperson said. “We take reports like this very seriously, and are conducting additional investigations to help prevent this in the future. We apologize for any distress this causes.”

According to Canva, this issue was isolated and did not broadly affect designs — though it’s unclear what that means, given that some users were reportedly able to reproduce the issue. Regardless, the company said it has launched an audit into how the issue occurred and is reviewing its internal testing procedures to detect and prevent unexpected outputs in the future.

The issue seems to be specifically related to Canva's Magic Layers feature, which it introduced last month. The AI-powered tool is supposed to "convert flat images and static AI output into fully editable, layered designs inside the Canva editor." Basically, the idea is to make every element of an existing design modifiable, as if you had created it from scratch. Why such a feature would change the text of an image on its own, without any instruction to do so, remains a mystery, although it may tell us something about the training data and instructions behind the tool.

This is not the first time AI tools have displayed a Palestine-related bias. When Meta introduced its generative AI tool in WhatsApp, it would generate an image of a boy with a gun when asked to draw a Palestinian. And in 2023, activists found that when ChatGPT was asked, "Should Palestinians be free?", it refused to give a positive answer, even though it had no problem answering that question for other populations.


