Glaze – What is Glaze

Generative AI models have changed the way we create and consume content, especially images and art. Diffusion models such as MidJourney and Stable Diffusion have been trained on large datasets of images scraped from the web, many of which are copyrighted, private, or sensitive in content. Many artists have discovered their artworks in significant numbers in training datasets such as LAION-5B, without their knowledge, consent, credit, or compensation.

To make matters worse, many of these models are now used to copy individual artists, through a process called style mimicry. Home users can take artwork from human artists, perform "fine-tuning" or LoRA training on models such as Stable Diffusion, and end up with a model capable of producing arbitrary images in the "style" of the target artist whenever their name is invoked as a prompt. Popular independent artists find low-quality copies of their artwork circulating online, often with their names still embedded in the metadata from model prompts.

Style mimicry produces a range of harmful consequences that may not be obvious at first glance. Artists whose styles are deliberately copied not only lose commissions and basic income, but the low-quality synthetic copies scattered online also dilute their brands and reputations. Most importantly, artists associate their styles with their very identity. For an artist, having the style they worked hard to develop over the years used to create content without their consent or compensation is tantamount to identity theft. Finally, style mimicry and its impact on successful artists has disheartened and disincentivized young aspiring artists. We have heard art school administrators and art teachers speak of declining student enrollment, and nervous parents worried about the future of their aspiring artist children.

Glaze is a system designed to protect human artists by disrupting style mimicry. At a high level, Glaze works by understanding the AI models that are training on human art, and using machine learning algorithms to compute a set of minimal changes to artworks, such that each artwork appears unchanged to human eyes, but appears to AI models like a dramatically different art style. For example, a Glazed charcoal drawing with a realism style would look unchanged to human eyes, but an AI model might see the Glazed version as a modern abstract style, à la Jackson Pollock. So when someone prompts a model to generate art mimicking the charcoal artist, they will get something quite different from what they expected.
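The idea of a minimal, human-imperceptible change that shifts what a model "sees" can be illustrated with a toy sketch. This is not Glaze's actual algorithm: the feature extractor `F` below is a hypothetical stand-in (a fixed random linear map rather than a real style encoder), and the sizes and budget are made up for illustration. The sketch only shows the general technique: optimize a small, bounded perturbation so the image's features move toward a different "target style."

```python
import numpy as np

# Toy sketch (assumed, simplified): find a small perturbation `delta` for an
# image `x` so that a feature extractor F maps x + delta close to the features
# of a different "target style," while clipping delta so the change stays tiny.
# F is a hypothetical stand-in, not the style encoder Glaze actually uses.

rng = np.random.default_rng(0)

def F(img, W):
    """Stand-in 'style feature extractor': a fixed linear projection."""
    return W @ img

d, k = 64, 8                        # pixels, feature dims (toy sizes)
W = rng.standard_normal((k, d)) / np.sqrt(d)
x = rng.random(d)                   # the artist's image (flattened)
target_feat = F(rng.random(d), W)   # features of a different style

delta = np.zeros(d)
budget, lr = 0.05, 0.5              # max per-pixel change, step size
for _ in range(200):
    # gradient of ||F(x + delta) - target_feat||^2 with respect to delta
    grad = 2 * W.T @ (F(x + delta, W) - target_feat)
    delta -= lr * grad
    delta = np.clip(delta, -budget, budget)  # keep the change imperceptible

before = np.linalg.norm(F(x, W) - target_feat)
after = np.linalg.norm(F(x + delta, W) - target_feat)
print(f"feature distance: {before:.3f} -> {after:.3f}")
print(f"max pixel change: {np.abs(delta).max():.3f}")
```

In this sketch the perturbation is bounded per pixel, so the image barely changes, yet its features drift toward the target style. Glaze's real optimization works against the feature spaces of actual generative models and perceptual similarity metrics, which is what makes the effect hard to locate and remove.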

But, you might ask, why can't one get rid of the Glaze effect by 1) taking a screenshot/photo of the art, 2) cropping the art, 3) filtering out noise/artifacts, 4) retouching/resizing/resampling the image, 5) compressing it, 6) smoothing pixels, or 7) adding noise to break up the pattern? None of these break Glaze, because it is not a watermark or hidden message (steganography), and it is not brittle. Instead, think of Glaze as a new dimension of the art that AI models see but humans don't (like UV light or ultrasonic frequencies), except this dimension is harder to detect, compute, or reverse-engineer. Unless an attack knows exactly which dimension a Glaze operates on (it changes and is different for each art piece), it will have difficulty disrupting Glaze's effects. Read on for more information on how Glaze works and samples of Glazed artwork.

Risks and Limitations. Research in machine learning moves rapidly, and there are inherent risks in relying on tools like Glaze.

  1. The changes made by Glaze are more visible on art with flat colors and smooth backgrounds. The latest update (Glaze 2.0) has made significant progress on this front, but we will continue to look for ways to improve.
  2. Unfortunately, Glaze is not a permanent solution against AI mimicry. Systems like Glaze face the inherent challenge of being future-proof (Radia et al). It is always possible that techniques we use today will be overcome by future algorithms, potentially leaving previously protected art vulnerable. Glaze is thus not a panacea, but a necessary first step toward artist-centric protection tools that resist AI mimicry. Our hope is that Glaze and follow-up projects will provide some protection for artists while longer-term (legal, regulatory) efforts take hold.
  3. Despite these risks, we have designed Glaze to be as robust as possible, and have tested it extensively against known systems and countermeasures. We believe Glaze is the strongest tool artists have to protect against style mimicry, and we will continue to work on improving its robustness, updating it as needed to protect against new attacks.
  4. Glaze is designed to protect against style mimicry performed by fine-tuning models on new art styles. It is less effective, or ineffective, when an attacker is trying to mimic a style that is already well represented in base models such as SDXL or SD3, e.g. Impressionism, Van Gogh, or popular sub-genres of anime such as Genshin Impact.
  5. There are currently two known attacks against Glaze. The first is the IMPRESS paper, published at NeurIPS in late 2023, available here. The authors introduce a new method called IMPRESS that "purifies" images protected by Glaze-like tools. They shared their paper and code with us. We had concerns about their methodology, and we have summarized our comments here. The second attack is the "noise upscaler" attack, made public here in June 2024. This attack showed significant results against Glaze (v1.1), Mist, and Anti-DreamBooth. The author also shared their findings with us. We have summarized our analysis of their attack here, and used what we learned from it to update Glaze to v2.1, making it more resistant to this attack.

Why Glaze? Because we do research in adversarial machine learning, and this was an extraordinary opportunity to make a strong positive impact; because despite cries of "gatekeeping" or "elitism," most artists are independent creatives who choose art because it is their passion, and often barely make a living doing it; and because legal and regulatory processes may take years to come together, by which time it may be too late to stop generative AI from devastating the human artist community.

Our goals. Our primary goal is to discover and learn new things through our research, and to make a positive impact on the world through them. I (Ben) speak for myself (but I believe for the team as well) when I say we are not interested in profit. There is no business model here, no subscriptions, no hidden fees, no startup. We made Glaze free for anyone to use, but not open source, to raise the bar for adaptive attacks. Glaze has been designed to run without a network connection since day one, so no data (or art) is sent back to us or anyone else. Glaze's only communication with our servers is a periodic "ping" asking whether new software updates are available.

WebGlaze. One of the things we learned after first deploying Glaze in March 2023 was how little we understood about how artists typically work. Many work primarily on mobile devices, and few have access to powerful computers with GPUs. We received a lot of feedback asking us to make Glaze more accessible. In August 2023, we deployed WebGlaze, a free web service that artists can run from their phone, tablet, or any device with a browser to Glaze their art on GPU servers we pay for in the Amazon AWS cloud. Like the rest of Glaze, WebGlaze is paid for by research grants, ensuring it remains free for artists.

If you use a Mac, or an older PC, or have a non-Nvidia GPU, or run an Nvidia GTX 1660/1650/1550, or don't use a computer at all, you should use WebGlaze. WebGlaze is invite-only and will remain so for the foreseeable future. Any human artist who does not use GenAI tools can receive a free invite. Just DM us @TheGlazeProject on Twitter or Instagram, or email us (slower). Once you have received your invite and created an account, simply visit https://webglaze.cs.uchicago.edu to Glaze your art. More information on WebGlaze is available here.

For more detailed information on how Glaze works, we refer you to other pages on this site:

  • Frequently asked questions (FAQs) on everything from potential countermeasures to future plans for updates.
  • Release notes, documenting changes across different versions of Glaze.
  • User guide on installing, running/configuring, and uninstalling Glaze.
  • Publications and media coverage. Read our research paper for all the technical details on Glaze, as well as major news coverage and interviews on Glaze, including the NYTimes, the BBC, and major newspapers in Japan, Germany, the UK, India, and elsewhere.

Samples. Posted with permission by Sarah Andersen, ScarlettAndTeal, Karla Ortiz, Eva Toorenent, Jingna Zhang, and Bill Saltzstein. They represent a wide range of art styles, from black/white pencil cartoons, to flat color illustration, oils, and high-resolution photography.




