
The advent of ChatGPT and other generative AI chatbots ushered in an exciting new era in artificial intelligence (AI). On one hand, AI is a powerful tool for boosting productivity and a catalyst for creativity. On the other, it has proved a double-edged sword, exploiting the original creations of artists and other creators and raising significant moral, ethical, and creative concerns. Artists therefore face an increasingly difficult challenge: generative AI’s unauthorized use and exploitation of their creative work.
Artists now have a new tool, Nightshade, to fight back against generative AI and deter AI companies from using their artwork without permission. Nightshade helps artists reclaim authority over their artistic vision and acts as a shield against AI-driven theft. A recent piece in the MIT Technology Review describes Nightshade as a data-poisoning tool that manipulates AI training data in a way that could seriously damage image-generating AI models. Before uploading their digital artwork, artists can use Nightshade to “add invisible changes” to its pixels. If that work is then scooped into an AI training dataset, it can disrupt the resulting model in unpredictable and chaotic ways. According to MIT Technology Review, this is akin to “poisoning” the enormous image sets used to train AI image generators like DALL-E, Midjourney, and Stable Diffusion, destabilizing their outputs and impairing their “ability to generate useful images.” The result would be absurd outputs, such as cars turning into cows, dogs turning into cats, and so on.
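The article doesn’t spell out Nightshade’s actual optimization, but the general family of technique it describes, small, bounded pixel changes that nudge a model’s internal features toward a different concept, can be sketched roughly. The snippet below is an illustrative sketch only, assuming PyTorch and torchvision are available and using a pretrained ResNet-18 as a stand-in feature extractor; the function name `poison`, the `epsilon` budget, and the file names are hypothetical, and this is not Nightshade’s published method.

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

# Illustrative sketch only: a pretrained ResNet-18 stands in for the feature
# extractor an attacker would target. This is NOT Nightshade's published algorithm,
# just the general "imperceptible perturbation toward another concept" idea.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier head; keep the feature embedding
backbone.eval()

def poison(src_path: str, target_path: str, epsilon: float = 8 / 255, steps: int = 200):
    """Add a perturbation bounded by epsilon so the image's features drift toward
    a different concept (the target image) while the pixels barely change."""
    src = TF.to_tensor(Image.open(src_path).convert("RGB")).unsqueeze(0)
    tgt = TF.to_tensor(Image.open(target_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        tgt_feat = backbone(TF.resize(tgt, [224, 224]))
    delta = torch.zeros_like(src, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        feat = backbone(TF.resize((src + delta).clamp(0, 1), [224, 224]))
        torch.nn.functional.mse_loss(feat, tgt_feat).backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)  # keep the change imperceptible
    return (src + delta).clamp(0, 1).squeeze(0)  # the "shaded" copy to upload

# Hypothetical usage:
# poisoned = poison("my_artwork.png", "cow_photo.png")
```

A real tool would presumably also optimize for perceptual similarity and target the encoders actually used by image generators, but the sketch conveys why the change can be invisible to a person yet misleading to a model trained on it.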
In recent months, artists have filed several lawsuits against AI companies such as OpenAI, Meta, Google, and Stability AI, alleging that their copyrighted content and personal data were scraped without consent or compensation. Ben Zhao, the University of Chicago professor who supervised the Nightshade team, believes the tool could tip the balance of power back toward artists. The MIT Technology Review asked Meta, Google, Stability AI, and OpenAI how they might respond to this development; at the time of writing, none had replied. Zhao’s team has also built another tool, Glaze, which works along similar lines to Nightshade: it subtly alters image pixels in ways invisible to the human eye but that lead machine learning models to interpret the image as something other than its actual content. Nightshade is set to be integrated into Glaze, and artists will be able to choose whether or not to apply the data-poisoning step. The team is also making Nightshade open source so that others can modify it and build their own versions. Training sets for large AI models can contain billions of images, so the more poisoned images that get scraped into a model, the more damage the technique does. And once ingested, poisoned data is exceedingly difficult to eradicate, because tech companies would have to painstakingly locate and delete every corrupted sample.
Because generative AI models are so good at drawing connections between words, the poison also spreads: the attack affects not just the targeted images but tangentially related ones as well. And despite worries that this data-poisoning technique could be used maliciously, it is worth remembering that thousands of poisoned samples would be needed to significantly damage larger, more robust AI models.