AI Lawsuits: Impacting Development

The swift evolution of AI has given rise to a host of legal and ethical dilemmas. In this blog post, we delve into how the current wave of lawsuits against AI companies is influencing the growth and implementation of AI technologies.

Photo by Rock’n Roll Monkey on Unsplash

The Collision of AI and Copyright Law

One of the most pressing issues in the AI landscape is the use of copyrighted material to train AI models. A recent case that brought this issue to the forefront involves Stability AI, the company behind the image generator Stable Diffusion. Several visual artists discovered that their artwork had been used without their permission to train Stable Diffusion and similar image generators, including Midjourney and DeviantArt’s DreamUp. This led to a class-action lawsuit against Stability AI, Midjourney, and DeviantArt, marking a significant moment in the intersection of AI and copyright law.

Here are some recent copyright lawsuits involving use of AI technology:

  1. Doe 1 et al. v. GitHub et al. (November 2022): This class-action lawsuit was filed by several coders against GitHub, Microsoft, and OpenAI. The lawsuit revolves around GitHub Copilot, an AI tool that converts commands written in plain English into computer code in dozens of programming languages. The plaintiffs allege that GitHub and OpenAI used their materials to develop Copilot and charged users a fee without giving any attribution to the plaintiffs, in violation of the GitHub licenses.
  2. Andersen et al. v. Stability AI et al. (January 13, 2023): Several visual artists filed a class-action lawsuit against the companies behind the image generators Stable Diffusion, Midjourney, and DreamUp. The artists allege that their artwork was used without their permission to train these AI systems.
  3. Getty Images v. Stability AI (February 2023): Getty Images alleges that Stable Diffusion’s use of its images to train models infringes its copyrights. Getty accused Stability of copying millions of its photos without a license and using them to train Stable Diffusion to generate more accurate depictions based on user prompts.
  4. Flora et al. v. Prisma Labs (February 15, 2023): This lawsuit involves Prisma Labs, the developer of the Lensa app. The plaintiffs allege that Prisma failed to meet all of the Illinois Biometric Information Privacy Act (BIPA)’s procedural requirements before collecting Lensa users’ selfies, extracting facial geometry information from those images, and training the model on those geometries to output personalized avatars. **Update: The case was compelled to arbitration. See Flora, et al. v. Prisma Labs, Inc., 2023 WL 5061955 (N.D. Cal. Aug. 8, 2023).**

The Ripple Effects of Lawsuits on AI Development

These lawsuits have far-reaching implications for the development of AI, including:

  1. Legal threats could force creators of AI systems to think more carefully about the data sets they use to train their models, which could lead to more ethical and responsible practices in data collection and usage.
  2. Lawsuits could slow down the adoption of AI technology as companies assess the risks. This could potentially hinder the growth and development of AI.
  3. If these lawsuits prove successful, they could force AI companies to change the way AI is built, trained, and deployed, which could slow the pace of innovation.
  4. Successful lawsuits could also create new ways for artists, authors, and others to be compensated for having their work used as training data for AI models, through a system of licensing and royalties.
  5. If courts and governments decide that AI-made inventions cannot be patented, the implications would be massive. Funders and businesses would have less incentive to pursue research using AI inventors if the return on their investment could be limited. Here again, the effect could be to stifle innovation in the field.

Conclusion

As AI continues to evolve, it’s crucial that legal frameworks adapt to ensure ethical and responsible use of this technology. The current wave of lawsuits against AI firms is testing the boundaries of existing laws and regulations, potentially paving the way for new legal norms in the AI landscape. As we move forward, it’s essential to strike a balance between fostering innovation and protecting the rights of individuals and organizations.