This article appeared on the online news platform Daily Business.

Artificial intelligence (AI) has woven itself into, and transformed, various aspects of our lives, from health care and finance to social media and transportation. As generative AI tools continue to evolve, most recently with the upgraded and rebranded Microsoft Copilot generative AI service (previously Bing Chat), we are all grappling with a number of issues.

One particularly thorny issue is the complex and headline-grabbing intellectual property (IP) landscape that surrounds generative AI, particularly copyright and the balance to be struck between protecting AI developers’ innovations and the existing rights of creators.

Understanding AI and IP

Copyright law protects original works of authorship, such as music, literature, and visual art. However, determining ownership of AI-generated content can be contentious given the apparent lack of human authorship, and questions of IP infringement arise especially when AI systems generate outputs based on large datasets and pre-existing works.

Of course, generative AI systems encompass a broad spectrum of technologies, including machine learning algorithms, natural language processing systems, computer vision applications, and more. These systems need to be trained on vast amounts of data to be effective, making data an essential component of AI innovation.

There is growing concern as to where this source data comes from. As the number of copyright infringement claims continues to rise across various jurisdictions, the recurring theme is that some of this data, including IP-protected content, may have been scraped from the internet without the consent of the relevant rights holders, not only to train the AI model but also to create potentially competing content.

On the one hand, rights holders may claim this is an infringement and, on the other hand, AI developers may maintain that this use is fair and transformative. And of course, it is very difficult for a rights holder to prove infringement.

Generative AI in the Courts: Getty Images v Stability AI

In January 2023, legal proceedings were brought by Getty Images against Stability AI. According to Getty, Stability AI had scraped millions of images from Getty’s library of stock images without consent, and unlawfully used those images to train and develop its image generator, Stable Diffusion.

It is also claimed that the output of Stable Diffusion (synthetic images that can be accessed by users in the UK) infringes intellectual property rights by reproducing substantial parts of works in which copyright subsists and/or by bearing a UK registered trade mark. For example, some Stable Diffusion output has included reproductions of Getty Images’ watermark.

At the end of last year, Stability AI sought to have certain of Getty’s claims struck out on the basis that the Stable Diffusion system was trained solely outside the UK (copyright being a territorial right).

Stability AI’s CEO told the court that the case should not be heard in the UK because no employees there had ever worked on Stable Diffusion. That argument failed because the CEO had made statements to the media that the company had brought software developers to the UK to work on its products, potentially including Stable Diffusion.

So the case rumbles on and is expected to go to trial later this year, when the High Court will also consider various other claims, including whether the importation and use of the pre-trained Stable Diffusion software in the UK amounts to copyright infringement.

The court’s decision is likely to have far-reaching consequences for rights holders and AI developers alike, and may go some way to answering the wider, and potentially troublesome, question of whether AI developers should be held accountable for the actions of AI end users (the repercussions of which would be significant).

Clarity on the relationship between IP law and AI

On the regulatory front, HM Government published an AI White Paper in March last year and undertook to produce, by last summer, a code of practice which would be developed in collaboration with the AI and creative sectors to give clarity on the relationship between IP law and AI.

Silence followed until earlier this month, when the Government published its delayed response to the AI regulation consultation. It confirmed that, although it is committed to supporting both the AI technology and creative sectors, no effective voluntary code could be agreed, perhaps demonstrating the complex challenge that regulating AI poses.

Instead, the Government has stated that further proposals are in the pipeline and that it intends to explore mechanisms for providing greater transparency, so that rights holders can better understand whether content they produce is being used as an input to AI models.

We also know from that response that the Government is in no rush to regulate. So, no further clarity for now; we must wait and see how these issues play out through the court process.

This leaves a great deal of legal uncertainty. Whilst AI is without doubt a helpful tool, it must be used with caution and with awareness of the risks that continue to surround it.