Artificial Intelligence: what is it and should it be regulated?

 

Over the course of the last 50 years, artificial intelligence (“AI”) has developed significantly, with many companies increasingly embedding AI in their products, services, processes and decision-making. As the use of AI has become more widespread, the need for regulatory accountability and guidance has become more apparent than ever. Yet, while AI continues to develop at rapid speed, regulators across jurisdictions are struggling to keep up. One of the sectors where these shortcomings have become glaringly obvious is the art industry, where issues related to intellectual property rights and attribution are on the rise. This article considers the current regulatory framework (or lack thereof) around AI and how to navigate these uncharted waters.

Historical Development of AI

The development of AI can be traced back to 1950, when Alan Turing published “Computing Machinery and Intelligence,” proposing a test for identifying a “thinking” machine and opening the doors to what would become known as AI. This was followed by the development of the first artificial neural network (ANN) by Marvin Minsky and Dean Edmonds. An ANN is a method in AI that teaches computers to process data in a way inspired by the human brain, and it underpins what is now known as deep learning.

The 1980s saw the development of machine learning, in which AI systems were taught to use models and data to make decisions and predictions. However, the lack of computational power and of the ability to solve complex problems created a barrier to AI’s progress at that time.

In the last 15 years, researchers and developers have focused on mimicking the way the human brain processes information, which has led to significant advancements. As a result of increased technological development, large databases and breakthroughs in machine learning (particularly deep learning), there have been advances in speech recognition, computer vision, natural language processing and autonomous vehicles. Today, AI is integrated into everyday life, from virtual personal assistants on our mobile devices to chatbots and autonomous drones.

The United Kingdom’s Approach to AI

The UK Government has taken a positive approach to AI, seeking to encourage AI businesses in the UK with the aim to “make the UK a science and technology superpower by 2030”[1].

In March 2023, the Government set out its current position by publishing a white paper titled "A pro-innovation approach to AI regulation" (“White Paper”), detailing a proposed 'light touch' framework that seeks to balance regulation with promoting responsible AI innovation. By doing so, it takes a principles-based approach rather than proposing legislative oversight.

The Government proposes to work with regulators in each sector to produce sector-specific governing principles, with the burden of cross-sector collaboration falling on the regulatory bodies themselves. Although some bodies already collaborate readily, others have limited knowledge of AI, which poses a significant hurdle to achieving consistency across the board. Whilst there has been no reference to a unified, centralised regulatory body, the Government did identify the need to oversee the regulation of each sector and to facilitate collaboration without duplicating policy creation.

The White Paper highlights the UK’s prioritisation of technological development as an economic stimulus. In doing so, however, it provides little tangible guidance: it does not address practical risks relating to data access, sustainability or the allocation of liability. Indeed, it even refrains from providing a concrete definition of AI, instead broadly describing it as “adaptive” and “autonomous”. These gaps in the framework could easily lead to inconsistencies in the application of the principles.

The consultation under the White Paper closed on 21 June 2023, and within the next few months we should expect a more detailed response and potential guidance on the implementation of the White Paper’s principles and proposed framework. There has been plenty of discussion in various sectors of the need for internationally coordinated approaches, and we hope the Government will take this into consideration to provide more certainty to businesses and users.

The European Union’s Approach to AI

By comparison, the EU has taken a rules-based approach to the governance of AI and is in the final stages of drafting its Artificial Intelligence Act (AI Act). The AI Act is fairly comprehensive: certain practices are banned outright, and those which are not banned will be subject to due diligence measures of varying stringency depending on the level of risk they pose. The AI Act classifies systems on a scale (unacceptable, high, limited and minimal risk) determined by the risk posed to the health and safety or fundamental rights of a person. Systems posing limited risk (such as video games) will escape the most stringent obligations but will still have to adhere to transparency principles. High-risk AI (such as biometric software, job recruitment systems and autonomous vehicles) will trigger rigorous regulation, including in relation to testing, documentation of data quality and accountability on human oversight.

Steep non-compliance penalties will be introduced as part of the legislation, with the latest proposed fines being up to €40 million or 7% of total worldwide annual turnover for the previous financial year, whichever is higher. Unlike the UK, the EU aims to establish a ‘European Artificial Intelligence Board’ to oversee the regulation and implementation of the AI Act and to ensure uniform deployment of its principles.

Art Market Concerns

One of the most vocal sectors regarding AI regulation is the art industry. Artists and creatives are rightly concerned by AI’s effects on intellectual property rights. There are a number of potential issues requiring legal clarity, such as copyright ownership arising from AI generated work or copyright infringement arising from AI’s unauthorised use of original works. The pro-innovative approach taken by the UK Government has already highlighted the potential competing interests between supporting the tech sector versus protecting the rights of artists and creatives.

Under the UK’s Copyright, Designs and Patents Act 1988, a copyright owner has the inherent and exclusive right to use and reproduce their works, with limited exceptions to infringement of those rights. One of the few permitted exceptions allows text and data mining (TDM) of copyrighted works for non-commercial purposes, provided that the user has lawful access to the work (e.g., a licence or subscription). TDM is a key part of the AI process, as text and data are fed into the computer system to grow the machine’s intelligence and knowledge.

In June 2022, the UK Government proposed expanding the TDM exception to commercial purposes in order to drive AI businesses to the UK. This was heavily opposed by the art and creative industries, which criticised the proposal for its failure to protect the integrity and originality of their work, and the proposal was later withdrawn. Whilst that decision brought a collective sigh of relief to the sector, AI-generated content continues to raise legal issues, such as when a work would be infringing and who owns an artwork when similar artworks can be generated from similar or identical AI prompts. AI-related copyright claims are now slowly making their way through the courts but will likely take years to resolve.

Conclusion

It is not clear how intellectual property law will adapt to AI, but the technology is clearly going to transform the ways in which people create and consume art. What is clear is that AI is here to stay, and therefore the regulation of AI is needed. Regulation can provide clear guidance to businesses, which will facilitate technological advancement and increase stakeholders’ trust, whilst improving fairness and protections for consumers. Ultimately, whilst taking a positive approach to AI can be pragmatic, regulators should not turn a blind eye to the challenges and risks inherent in AI and must take an active role in addressing these to safeguard the public.

For more bespoke advice on intellectual property rights and art law generally, please contact our specialist Margherita Barbagallo, Head of Litigation, IP & Art Law by scheduling a discovery call.

[1] https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach#:~:text=This%20white%20paper%20sets%20out,and%20technology%20superpower%20by%202030.



Author

Margherita Barbagallo

Head of Litigation, IP & Art Law

Email - margherita.barbagallo@dragonargent.com

LinkedIn


Co-author

Sara Maghouz

Trainee Solicitor

Email - sara.maghouz@dragonargent.com

LinkedIn


