AI And The Law – Updates From Around The World

In an era dominated by unprecedented technological strides, the rapid development of artificial intelligence (AI) has emerged as a transformative force, reshaping industries and challenging traditional legal frameworks. 

The swift proliferation of AI, alongside heightened accessibility through platforms like OpenAI’s ‘ChatGPT’ and Google’s ‘Bard’, has resulted in unique and unprecedented challenges that require immediate legal responses. In contemplating AI in the legal context, jurisdictions are grappling with the following challenges:

  • the scope and manner of AI regulation; and 
  • the legal considerations surrounding the AI process, such as the intersection between AI and intellectual property (IP) rights.

This article explores these two challenges by considering two of the most recent global legal developments in AI: the European Union’s Artificial Intelligence Act (AI Act), and the New York Times’ copyright lawsuit against OpenAI and Microsoft (AI Lawsuit).

The AI Act and the AI Lawsuit are two of the largest AI developments to date. In this early stage of AI proliferation, all legal AI developments, irrespective of their jurisdiction of origin, will influence how local AI legal frameworks are drafted and will shape the broader global legal response to AI.

Given the widespread and undeniable impact of AI on industries, companies and people’s personal lives, close attention should be given to global AI developments, as these will determine the first limitations on AI systems and define the AI social norms that are likely to be adopted in Australia.

European Union’s Artificial Intelligence Act

On 8 December 2023, the European Union (EU) reached a political agreement on the AI Act.

The AI Act is the first comprehensive legal framework on AI, promoting safety and the fundamental rights of citizens whilst supporting innovation. It addresses the key issues at the heart of the current AI debate, such as the transparency of AI models and the accountability of developers, rules for the safety of foundation models, definitions of high-risk and prohibited applications, and the implementation and enforcement of the rules.

The AI Act provides the first iteration of limitations on AI processes and uses, thus reflecting what society currently considers to be an acceptable scope for AI. The AI Act imposes limitations by placing AI processes and uses into four different categories:  

  1. Prohibited – unacceptable risk;
  2. High risk – permitted subject to compliance with AI requirements and an ex-ante conformity assessment;
  3. Transparency risk – permitted but subject to information/transparency obligations;
  4. Minimal or no risk – permitted with no restrictions; voluntary codes of conduct possible.

Importantly, prohibited AI processes and uses include:

  • social scoring, being the act of observing persons living their daily lives and scoring their behaviours to give them an overall ranking;
  • real-time remote biometric identification, except in relation to specific crimes and/or narrowly defined circumstances (prior authorisation by a judicial or independent administrative authority required);
  • individual predictive policing, being the act of assessing or predicting the risks of a natural person committing a criminal offence;
  • emotion recognition, particularly in workplace and education institutions (unless for medical or safety reasons); and
  • untargeted scraping, particularly of the internet or CCTV footage for facial images to build up or expand databases.

Alongside prohibitions arises the question of enforcement: how best to monitor AI to ensure that prohibited uses and processes are not employed. The AI Act contemplates several enforcement mechanisms, including:

  1. National competent authorities – to supervise high-risk conformity and conduct market surveillance;
  2. AI Office – the European Commission’s internal implementation body, to conduct evaluations, request measures, issue fines and supervise general purpose AI;
  3. European Artificial Intelligence Board – composed of high-level representatives of competent national authorities to advise and assist the European Commission and AI Office;
  4. Advisory forum – varied selection of stakeholders to provide insight and advice;
  5. Scientific panel – to support the implementation and enforcement of regulation as regards general purpose AI models.

As the implementation of the AI Act will take place over the next three years, these enforcement mechanisms will develop and become important global resources for understanding and managing AI development.

There is no doubt that, over the next several years, other jurisdictions will review and adopt aspects of the AI Act into their own AI legislation, and take inspiration and guidance from the work and advice of the various enforcement bodies under the AI Act.

The AI Lawsuit

On 27 December 2023, the New York Times filed a lawsuit against OpenAI and Microsoft alleging copyright infringement.

The New York Times alleges that OpenAI and Microsoft used millions of New York Times articles to train and develop their AI systems (including ChatGPT), and that many AI-generated outputs reproduce verbatim excerpts from New York Times articles without authorisation or proper attribution. The AI Lawsuit is said to be worth millions of US dollars.

The legal action between the New York Times, OpenAI and Microsoft holds significant implications for the ongoing development of AI regulation. It brings to the forefront the urgent need for clear guidelines and legal frameworks governing the use and ownership of AI-generated content. As one of the first cases of its kind, the outcome of the AI Lawsuit is likely to set precedents defining the scope of intellectual property rights in the evolving landscape of artificial intelligence.

Policymakers and industry stakeholders are closely watching, as decisions made in the AI Lawsuit are likely to determine what society considers an acceptable balance between innovation and the protection of intellectual property.

Lavan Comment

In these early stages of AI proliferation, all AI legal developments are worth considering as they will influence the development of comprehensive AI regulations around the world.

Given the prominence of AI and its undeniable impact on all aspects of society moving forward, AI regulations will affect, to varying degrees, all industries, companies and persons. It is therefore prudent to monitor AI developments and gain an early understanding of what AI processes and uses are deemed acceptable at any given point in time. The social attitude towards AI, its uses and processes, will be continually evolving, particularly as more industries and persons are exposed to AI and the benefits (or challenges) it can bring.

If you have any concerns in relation to your liability for your use of AI systems, or AI generally, please contact Iain Freeman, Partner, Litigation and Dispute Resolution Team.

Disclaimer – the information contained in this publication does not constitute legal advice and should not be relied upon as such. You should seek legal advice in relation to any particular matter you may have before relying or acting on this information. The Lavan team are here to assist.