University of Chicago researchers seek to “poison” AI art generators with Nightshade

image via arstechnica.com

The open source "poison pill" tool (as the University of Chicago's press department calls it) alters images in ways invisible to the human eye but able to corrupt an AI model's training process. Many image synthesis models, with the notable exceptions of those from Adobe and Getty Images, are trained largely on data sets of images scraped from the web without artist permission, including copyrighted material. (OpenAI licenses some of its DALL-E training images from Shutterstock.)
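
The excerpt doesn't describe Nightshade's actual method, but the general idea behind this kind of feature-space poisoning can be sketched with a toy gradient attack: perturb an image within a small pixel budget so a feature extractor "sees" a different concept, while a human sees essentially the original picture. The sketch below is a hypothetical illustration only, using an off-the-shelf ResNet-18 backbone as a stand-in encoder; it is not the researchers' technique, and the function names and parameters are invented for the example.

```python
# Toy illustration of concept-shifting image poisoning (NOT Nightshade's algorithm).
# Assumes torch and torchvision are installed; uses ResNet-18 features as a
# stand-in for the image encoder a real attack would actually target.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()
# Drop the classifier head so the network outputs a feature vector.
features = torch.nn.Sequential(*list(model.children())[:-1])

def poison(image, target_image, eps=4 / 255, steps=50, lr=0.01):
    """Nudge `image` so its features resemble `target_image`'s features,
    while keeping every pixel within +/-eps of the original (imperceptible)."""
    with torch.no_grad():
        target_feat = features(target_image).flatten(1)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        poisoned_feat = features((image + delta).clamp(0, 1)).flatten(1)
        loss = torch.nn.functional.mse_loss(poisoned_feat, target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # enforce the small pixel budget
    return (image + delta).detach().clamp(0, 1)

# Usage (tensors shaped (1, 3, 224, 224), values in [0, 1]):
# poisoned_cat = poison(cat_tensor, dog_tensor)
```

The key design point this sketch demonstrates is the asymmetry such tools exploit: the perturbation is bounded tightly enough to be invisible to people, yet it systematically shifts what a model's encoder extracts, so images scraped into a training set teach the model the wrong association.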

https://arstechnica.com/information-technology/2023/10/university-of-chicago-researchers-seek-to-poison-ai-art-generators-with-nightshade/

Generative AI Is a Disaster, and Companies Don’t Seem to Really Care

image via vice.com

Tech companies continue to insist that AI-generated content is the future as they release more trendy chatbots and image-generating tools. But despite reassurances that these systems will have robust safeguards against misuse, the screenshots speak for themselves.

https://www.vice.com/en/article/88xdez/generative-ai-is-a-disaster-and-companies-dont-seem-to-really-care

Neuralink Human Trials Set to Proceed, Despite Ethics Concerns

image via vice.com

Elon Musk’s Neuralink is set to begin human trials, and the company has announced that it is recruiting applicants to test its controversial brain-computer interface. “The PRIME Study (short for Precise Robotically Implanted Brain-Computer Interface)…aims to evaluate the safety of our implant (N1) and surgical robot (R1) and assess the initial functionality of our BCI for enabling people with paralysis to control external devices with their thoughts,” the company wrote in a blog post on Tuesday.

https://www.vice.com/en/article/wxjp9q/neuralink-monkey-deaths-human-trials