clearpathinsight.org
AI in Technology

AI Lab Wages Guerrilla War Against AI Exploitation

November 15, 2024 · 4 Mins Read
Still, it’s “simplistic to think that if you have a real security problem in the wild and you’re trying to design a protection tool, the answer should be either it works perfectly or don’t deploy it,” says Zhao, citing spam filters and firewalls as examples. Defense is a constant cat-and-mouse game. And he thinks most artists are savvy enough to understand the risk.

Offering hope

The fight between creatives and AI companies is fierce. The current paradigm in AI is to build bigger and bigger models, and there is, at least currently, no getting around the fact that they require vast datasets scraped from the internet to train on. Tech companies argue that anything on the public internet is fair game and that it is “impossible” to create advanced AI tools without copyrighted material; many artists argue that tech companies have stolen their intellectual property and violated copyright law, and that they need ways to keep their individual works out of the models – or at least to receive proper credit and compensation for their use.

So far, the creatives aren’t really winning. A number of companies have already replaced designers, editors, and illustrators with AI systems. In one high-profile case, Marvel Studios used AI-generated imagery instead of human-made art in the title sequence of its 2023 TV series Secret Invasion. In another, a radio station fired its human presenters and replaced them with AI. The technology has become a major bone of contention between unions and film, television, and creative studios, most recently leading to a strike by video game artists. There are numerous ongoing lawsuits filed by artists, writers, publishers, and record labels against AI companies. It will likely be years before there is a clear legal resolution. But even a court ruling will not necessarily settle the difficult ethical questions created by generative AI, and future government regulation, if it ever comes to fruition, is unlikely to resolve them either.

That’s why Zhao and Zheng see Glaze and Nightshade as necessary interventions: tools to defend original work, strike back at those who would exploit it, and, at the very least, buy artists some time. Having a perfect solution isn’t really the point. The researchers need to offer something now, because the AI sector moves at breakneck speed, Zheng says, which means companies are ignoring very real harms to humans. “This is probably the first time in our entire technology careers that we’ve seen this much conflict,” she adds.

On a much larger scale, she and Zhao tell me they hope Glaze and Nightshade will eventually have the power to change how AI companies use art and how their products produce it. Training AI models is extremely expensive, and it is extremely laborious for engineers to find and purge poisoned samples from a dataset of billions of images. In theory, if enough Nightshaded images are on the internet and tech companies see their models break as a result, it could push developers to the negotiating table to bargain over licensing and fair compensation.
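The mechanics behind that bet can be illustrated with a deliberately simplified sketch. Nightshade itself perturbs an image’s pixels so that it looks unchanged to humans but teaches a model the wrong concept; the toy Python below fakes a similar effect far more crudely, by flipping the captions on a small fraction of training samples. Everything here (the `poison_captions` helper, the file names, the concept pair) is hypothetical and for illustration only – it is not the Nightshade algorithm.

```python
import random

def poison_captions(dataset, target, decoy, fraction=0.1, seed=0):
    """Toy concept poisoning: swap the caption on a fraction of
    `target`-labelled samples for `decoy`, so a model trained on the
    result starts to conflate the two concepts."""
    rng = random.Random(seed)
    out = []
    for image, caption in dataset:
        if caption == target and rng.random() < fraction:
            out.append((image, decoy))    # poisoned sample
        else:
            out.append((image, caption))  # untouched sample
    return out

# A toy training set: 1,000 images captioned "dog".
data = [(f"img_{i:04d}.png", "dog") for i in range(1000)]
poisoned = poison_captions(data, target="dog", decoy="cat", fraction=0.1)
n_poisoned = sum(1 for _, caption in poisoned if caption == "cat")
```

The point the sketch makes is the defenders’ asymmetry: the poisoner only needs to touch a small fraction of samples, while an engineer trying to clean the dataset must inspect all of them – and in the real attack the poisoned images are visually indistinguishable from clean ones.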

This remains, of course, a big “if.” MIT Technology Review reached out to several AI companies, including Midjourney and Stability AI, which did not respond to requests for comment. An OpenAI spokesperson, meanwhile, did not confirm any details about encountering data poisoning, but said the company takes the safety of its products seriously and is continually improving its safety measures: “We are always working on how we can make our systems more robust against this type of abuse.”

For now, the SAND Lab is moving forward, seeking funding from foundations and nonprofits to keep the project going. They also say there is interest from major companies looking to protect their intellectual property (though they decline to say which ones), and Zhao and Zheng are exploring how the tools could be applied to other industries, such as gaming, video, or music. In the meantime, they plan to keep updating Glaze and Nightshade to be as robust as possible, working closely with the students in their Chicago lab, where, on another wall, hangs Toorenent’s Belladonna. The painting has a heart-shaped note taped to its bottom right corner: “Thank you! You gave us artists hope.”

This story has been updated with the latest download figures for Glaze and Nightshade.
