Getty’s UK suit leaves Stable Diffusion mostly intact
The UK High Court ruled that Stability AI's Stable Diffusion model is not an "infringing copy" of copyrighted works under English law, dismissing Getty Images' core copyright and database right claims in the first UK judgment on AI training. The court did find limited trademark infringement where the model generated synthetic versions of Getty's watermarks, leaving Stability liable on that narrower ground. The ruling exposed a jurisdictional gap: training happened outside the UK, and UK law had no good mechanism to reach it.
The Lawsuit
Getty Images filed suit against Stability AI in the UK High Court in January 2023. The claims were broad: primary and secondary copyright infringement, database right infringement, trademark infringement, and passing off. Getty alleged that Stability used millions of Getty's images and associated metadata to train the various versions of Stable Diffusion, and that the model's outputs - some of which resembled Getty content and even reproduced versions of Getty's watermark - infringed Getty's intellectual property rights.
It was the first case in the UK to test whether training an AI model on copyrighted works, and distributing that model, constitutes infringement under English law. Getty also had a parallel lawsuit in the United States (filed in Delaware), but the UK case was expected to set the precedent for how European-style copyright frameworks handle generative AI.
The judgment, handed down on November 4, 2025 by Mrs Justice Joanna Smith DBE, ran to hundreds of pages. Getty lost on nearly every significant claim.
Copyright: Model Weights Are Not Copies
The central copyright question was whether Stable Diffusion, as a set of trained model weights, qualified as an "infringing copy" of Getty's copyrighted works under the Copyright, Designs and Patents Act 1988 (CDPA). If the model counted as an infringing copy, then distributing it in the UK would amount to secondary copyright infringement under Sections 22 and 23 of the CDPA - importing or dealing in infringing copies.
The court ruled that it was not. The judge held that an AI model which does not store or reproduce copyrighted works is not an "infringing copy" under UK law. The model's parameters are mathematical values derived from a training process. They encode statistical patterns and learned features, not copies of individual images. As the judgment put it, the outputs are "purely the product of the patterns and features which they have learnt over time during the training process."
This was a critical distinction. The court acknowledged that an "article" under UK copyright law can be intangible - so model weights downloaded into the UK could, in theory, constitute an article that was "imported." But even accepting that the model weights were an importable article, the weights didn't contain copies of Getty's works. A prior case had found that a RAM chip briefly storing a copy of a copyrighted work was an infringing copy "for a short time," but Stable Diffusion's weights didn't work that way. No individual Getty image could be extracted from the parameters.
Getty had originally claimed infringement across millions of works used in training. When pressed by Stability's lawyers to explain how its output examples had been selected, Getty narrowed its copyright claim to just 17 specific works. The narrowing didn't help: the court found no secondary copyright infringement for any of them.
The Obiter Warning
Mrs Justice Joanna Smith included a significant aside in her judgment. She said that if she were wrong - if Stable Diffusion did constitute an infringing copy - then Stability AI would have been liable, because the company knew or had reason to believe it was infringing copyright.
The evidence for this was damning. Stability's own staff knew that works had been scraped from the web without copyright holders' consent. Internal chat exchanges discussed how to get rid of watermarks in the training dataset. Staff communications acknowledged the potential that the training process was illegal.
None of this affected the actual ruling, because the premise (that the model was an infringing copy) failed. But the judge went out of her way to signal that if the law were different, or if a future court reached a different conclusion on the "infringing copy" question, the knowledge element would already be established by Stability's own internal communications.
The Jurisdictional Gap
Getty's primary copyright infringement claim - that the actual act of training Stable Diffusion on Getty images was infringing - wasn't really adjudicated on its merits. The training happened outside the UK, most likely in the United States using cloud computing infrastructure. UK copyright law governs acts that take place in the UK. Since the training happened elsewhere, the UK court couldn't reach it through a primary infringement claim.
This left Getty pursuing secondary infringement: the distribution and availability of the trained model in the UK. But secondary infringement requires the model to be an "infringing copy," which the court found it wasn't. The result was a jurisdictional gap. The training was done abroad, beyond the reach of UK copyright law. The distribution happened in the UK, but the product being distributed (the model weights) didn't qualify as a copy.
The gap is structural. Any AI company that trains its models outside the UK - which is most of them, since major training runs happen in US or international data centers - can distribute the resulting model in the UK without facing secondary copyright infringement claims, at least under this court's reasoning.
Database Right: Also Dismissed
Getty also claimed that Stable Diffusion infringed its database rights in the Getty Images database. This claim was dismissed as well. Getty had positioned the database right claim as a backup to the copyright claim, and it fell for substantially similar reasons: the model weights didn't constitute an extraction or re-utilisation of the database contents in the way the relevant law requires.
Trademark: The One Getty Won
Where Getty did succeed was on trademark infringement - specifically, the watermarks. Some versions of Stable Diffusion, when generating images, would produce outputs that contained synthetic reproductions of Getty's distinctive watermark. The watermarks were artifacts of the training process: the model had seen enough Getty-watermarked images that it learned to reproduce the watermark pattern as part of its image generation.
The court held that Stability was "using" Getty's trade marks on synthetic image outputs, and that this use was identical to Getty's specifications for its marks (which covered digital imaging services, downloadable digital illustrations and graphics, and digital media). The trademark claims had to be assessed separately for each version of Stability's model, since different model versions had different propensities for generating watermarked outputs.
The court found that all three categories of consumer it examined would see the presence of Getty watermarks on Stable Diffusion outputs as indicating some sort of commercial connection between Stability and Getty - perhaps a licensing arrangement. Less technically sophisticated users of the web-based DreamStudio interface (Stability's consumer-facing product at the time) might go further and assume the outputs actually originated from Getty.
The trademark finding was real, but it was also limited. It applied to specific model versions and access pathways, and only to outputs that actually contained the watermark artifacts. Later model versions that had been cleaned up to remove watermark generation tendencies would presumably not trigger the same infringement. The trademark win gave Getty an injunction against the specific infringing conduct, but it didn't address Getty's fundamental complaint about training on its images.
The Precedent Problem
The Getty ruling left copyright holders in an awkward position. Under the court's reasoning, the act that arguably infringes copyright - training the model - happens outside UK jurisdiction. The product that enters the UK - the trained model - isn't a copy. The outputs that sometimes reproduce protected content are addressed through trademark law (for watermarks) but not through copyright (for the underlying images).
Getty called for legislative reform. The UK government's Data (Use and Access) Act was already under discussion, and the Secretary of State was required to make proposals on several issues, including AI systems developed outside the UK. Whether those proposals would close the gap identified by the Getty ruling remained an open question.
Stability AI won the copyright battle but still faced trademark liability. Their model had literally learned to forge another company's brand mark onto its outputs - a result that was both technically predictable (train on watermarked data, get watermarked outputs) and commercially embarrassing. Internal communications showed staff were aware of the watermark problem during training and discussed removal strategies, which suggests the issue was understood to be a risk well before the lawsuit.
The parallel US case continued separately in Delaware, under a different legal framework where the training act itself could be at issue. The UK ruling didn't bind American courts, but it established the first major common-law precedent on AI training and copyright - and that precedent favored the AI companies on the copyright claim while still finding liability on the narrower trademark question.
For a company that scraped millions of copyrighted images, discussed internally how to remove the copyright holder's watermarks from the training data, acknowledged the potential illegality of their training process in staff communications, and then released a model that sometimes forged the copyright holder's brand mark onto its outputs, the UK ruling was largely a win. Whether that says more about Stability's legal strategy or about the gaps in UK copyright law is left as an exercise for Parliament.