NY Times Sues OpenAI And Microsoft Over Alleged Copyright Infringement

Written by Pronetic

Pronetic is a leading provider of core IT support for ISO 27001, Cyber Essentials and Cyber Essentials Plus compliance.

January 4, 2024

It’s been reported that The New York Times has sued OpenAI and Microsoft, alleging that they used millions of its articles without permission to help train chatbots. 

The First 

It’s understood that the New York Times (NYT) is the first major US media organisation to sue ChatGPT’s creator OpenAI, along with tech giant Microsoft (an OpenAI investor and the creator of Copilot), over copyright issues associated with its works.

Main Allegations 

The crux of the NYT’s argument appears to be that the use of its work to create GenAI tools should come with permission and an agreement that reflects the fair value of the work. It’s also important to note that the NYT relies on digital rather than physical newspaper subscriptions, and now has more than 9 million digital subscribers (the relevance of which will become clear below).

With this in mind, in addition to the central allegation that OpenAI and Microsoft trained AI on its articles without permission (and without payment), the NYT’s other main allegations in the lawsuit include:

– OpenAI and Microsoft may be trying to get a “free-ride on The Times’s massive investment in its journalism” by using it to provide another way to deliver information to readers, i.e. a way around its paywall. For example, the NYT alleges that OpenAI’s and Microsoft’s chatbots gave users near-verbatim excerpts of its articles. The NYT’s legal team have given examples of these, such as restaurant critic Pete Wells’ 2012 review of “Guy’s American Kitchen & Bar”, the restaurant of Guy Fieri (of Diners, Drive-Ins, and Dives fame). The NYT argues that this threatens its high-quality journalism by reducing readers’ perceived need to visit its website, thereby reducing its web traffic and, potentially, its revenue from advertising and from the digital subscriptions that now make up most of its readership.

– Misinformation from OpenAI’s (and Microsoft’s) chatbots, in the form of errors and so-called ‘AI hallucinations’, makes it harder for readers to tell fact from fiction, including when the technology falsely attributes information to the newspaper. The NYT’s legal team cite examples of where this may be the case, such as ChatGPT once falsely attributing two recommendations for office chairs to its Wirecutter product review website.

“Fair Use” And Transformative 

In their defence, OpenAI and Microsoft appear likely to rely mainly on the arguments that the training of AI on the NYT’s content amounts to “fair use” and that the outputs of the chatbots are “transformative.”

Under US law, “fair use” is a doctrine that allows limited use of copyrighted material without permission or payment, especially for purposes like criticism, comment, news reporting, teaching, scholarship, or research. Determining whether a specific use qualifies as fair use, however, involves weighing factors like the purpose and character of the use, including whether it is “transformative”, i.e. whether it adds something new or alters the original work in a significant way (often for a different purpose). OpenAI and Microsoft may therefore argue that training their AI products is transformative, because the AI uses the newspaper content in a way that differs from the original purpose of news reporting or commentary. However, the NYT has already stated that: “There is nothing ‘transformative’ about using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it”. Any evidence of verbatim outputs may also damage the ‘transformative’ argument for OpenAI and Microsoft.

Complicated 

Although these sound like relatively clear arguments on either side, several factors complicate the case. These include:

– The fact that OpenAI has altered its products in response to earlier copyright complaints, making it difficult to decide whether its current outputs are enough to establish liability.

– Many possible questions about the journalistic, financial, and legal implications of generative AI for news organisations. 

– Broader ethical and practical dilemmas facing media companies in the age of AI. 

What Is It Going To Cost? 

Given reports that talks between the three companies to avert the lawsuit have failed to resolve the matter, the NYT is seeking:

– Damages of an as-yet-undisclosed sum, which some say could run to billions of dollars (given that OpenAI is valued at $80 billion and Microsoft has invested $13 billion in OpenAI’s for-profit subsidiary).

– For OpenAI and Microsoft to destroy the chatbot models and training sets that incorporate the NYT’s material.

Many Other Examples 

AI companies like OpenAI now face many legal challenges of a similar nature, most relating to the scraping (automated collection) of online content and data to train AI without compensation. For example:

– A class action lawsuit filed in the Northern District of California accuses OpenAI and Microsoft of scraping personal data from internet users, alleging violations of privacy laws, intellectual property rights, and anti-hacking laws, including the Computer Fraud and Abuse Act (CFAA).

– Google has been accused in a class-action lawsuit of misusing large amounts of personal information and copyrighted material to train its AI systems. This case raises issues about the boundaries of data use and copyright infringement in the context of AI training. 

– A class action against Stability AI, Midjourney, and DeviantArt claims that these companies used copyrighted images to train their AI systems without permission. The key issue in this lawsuit is likely to be whether training AI models with copyrighted content, particularly visual art, constitutes copyright infringement. The challenge lies in proving infringement, as the generated art may not directly resemble the training images. The involvement of the Large-scale Artificial Intelligence Open Network (LAION) in compiling the images used for training adds another layer of complexity to the case.

– Back in February 2023, Getty Images sued Stability AI, alleging that it had copied 12 million images to train its AI model without permission or compensation.

The Actors and Writers Strike 

The recent strike by Hollywood actors and writers is another example of how fears about AI, consent, and copyright, and about AI’s potential to erode the value of people’s work and jeopardise their income, are now of real concern. The strike was primarily focused on concerns regarding the use of AI in the entertainment industry. Writers, represented by the Writers Guild of America, were worried about AI being used to write or complete scripts, potentially affecting their jobs and pay. Actors, under SAG-AFTRA, protested against proposals to use AI to scan and use their likenesses indefinitely without ongoing consent or compensation.

Disputes like this, and the many lawsuits against AI companies, highlight the urgent need for clear policies and regulations on AI’s use, and the fear that AI’s advance is fast outstripping the law’s ability to keep up.

What Does This Mean For Your Business? 

We’re still very much at the beginning of a fast-evolving generative AI revolution, and lawsuits against AI companies like Google, Meta, Microsoft, and OpenAI are now testing the legal limits of gathering training material for AI models from public sources. Cases like these are likely to shape the legal framework around what is permissible in the realm of data-scraping for AI purposes going forward.

The NYT/OpenAI/Microsoft lawsuit and the other examples above therefore demonstrate an evolving legal landscape, as courts grapple with the complexities of AI technology and its implications for copyright, privacy, and data-use laws. Each case will contribute to defining the boundaries and acceptable practices in the use of online content for AI training purposes, and it will be very interesting to see whether arguments like “fair use” are enough to stand up to the pressure from multiple companies and industries. It will also be interesting to see what penalties (if things go the wrong way for OpenAI and others) will be deemed suitable, both in terms of possible compensation and the destruction of whole models and training sets.

For businesses (which can now create their own specialised, tailored chatbots), these major lawsuits should serve as a warning: be very careful about the data used to train chatbots, think through any legal implications, and focus on creating chatbots that are not just effective but also likely to be compliant.


