Midjourney's AI Development Sparks Copyright Battle

Amid growing concerns over the ethical use of artists' works in AI training, a significant development has emerged at the intersection of art and technology. A leaked database, reportedly used by the AI company Midjourney to train its text-to-image generator, has drawn widespread criticism and legal scrutiny.

The controversy centers on a list of more than 16,000 artists spanning various periods, styles, genres, and movements. The database is said to include the works of both historical and contemporary figures, from renowned painters and illustrators to contributors to popular media such as Magic: The Gathering. Among the names are modern and contemporary blue-chip artists, as well as commercially successful illustrators for major companies.

Notably, the list includes a diverse array of artists, such as Andy Warhol, Anish Kapoor, Yayoi Kusama, Gerhard Richter, Frida Kahlo, Ellsworth Kelly, Damien Hirst, Amedeo Modigliani, Pablo Picasso, Paul Signac, Norman Rockwell, Paul Cézanne, Banksy, Walt Disney, Vincent van Gogh, and many others. It also features contributors to Magic: The Gathering, among them Hyan Tran, a six-year-old artist who had participated in a charity event.

The disclosure of the list has fueled a class-action lawsuit against Midjourney and other major AI companies, filed by artists who allege that their work was used without consent to train AI models. The lawsuit highlights the legal ambiguities surrounding the use of copyrighted material in AI development.

The case gained further attention when the US Copyright Review Board ruled, in a landmark decision, that an image generated with Midjourney's software could not be copyrighted because of how it was produced. The ruling, which followed the viral success of an AI-generated image that won a prize at a state fair, has intensified debate over the future of artists' rights in the AI era.

In response to these developments, researchers at the University of Chicago have created a digital tool designed to help artists protect their work from unauthorized use in AI training datasets. The tool works by "poisoning" large image sets, degrading the stability and reliability of the text-to-image outputs of models trained on them.

This unfolding saga not only raises critical questions about the ethical implications of AI in the creative industries but also underscores the need for clear legal frameworks to address the intersection of technology, art, and intellectual property rights.


 
