The Office for Product Safety and Standards (OPSS) published a report on 23 May 2022 which considered the impact of artificial intelligence (AI) on product safety. This issue is also being considered in a number of other jurisdictions (see, for example, the EU’s Proposal for a Regulation laying down harmonised rules on AI).

The report provides a framework for considering the impact of AI consumer products on existing product safety and liability policy. This framework seeks to support the work of policymakers by highlighting the main considerations to be taken into account when evaluating and developing product safety and liability policy for AI consumer products. The report states no timeline for that evaluation and development, but makes clear that work is needed to ensure the UK's product safety and liability regime can deal with developments in AI.

  • Potential negative implications of AI

The report considers the potential negative implications of AI use for the safety of consumer products. In particular:

  1. Complexity – the characteristics of AI (the report identifies mutability, opacity, data needs and autonomy, among others) can translate into errors or challenges for AI systems that have the potential to cause harm. Further, AI products often need to integrate or interoperate with one another, leading to complex supply chains in which many different economic operators are directly or indirectly involved in product development, adding complexity across the product lifecycle.
  2. Machine learning (ML) – ML models can give a product the ability to learn and change its actions on the basis of new data without human oversight, altering the product's characteristics, including its safety features, and resulting in unpredictability.
  3. Robustness and predictability – challenges can arise from the significant amounts of data needed to support AI decision-making and functioning, and from the risk that biases are built into the datasets from which AI learns.
  4. Transparency and explainability – the complexity of AI, and its ML capabilities, can impair the ability to understand the reasons for an error or malfunction.
  5. Fairness and discrimination – if AI relies on biased data to aid its decision-making, its behaviour could differ from individual to individual, leading to discrimination (and possibly discrimination claims).

  • Product safety opportunities brought by AI

The report also considers the ways in which the incorporation of AI systems into manufactured consumer products can be of benefit. More specifically:

  1. Enhanced safety outcomes for consumers – AI-led improvements in design and manufacturing processes, and the use of AI in customer service (e.g. virtual assistants) to answer queries and provide recommendations on safe usage and optimal product performance, can ensure greater safety outcomes for consumers.
  2. Prevention of product safety issues – AI-enabled products can provide real-life insights into product use, giving manufacturers critical information on when a product embedded with AI might need repairs, before any safety issue arises.
  3. Preventing mass recalls – AI can enhance data collection during industrial assembly, enabling the discovery of non-conforming events on a production line, improving inspection, and allowing post-purchase data to be monitored to reduce the likelihood of a future recall.
  4. Protecting consumer safety and privacy – AI can be used to detect, analyse and prevent cyber-attacks.

  • Regulatory challenges resulting from AI-driven consumer products

The report opines that the current legal framework is in many ways insufficient to deal with AI. In particular, there are various shortcomings from a product safety and liability perspective:

  1. Definitions – it is not clear to what extent more complex AI systems fall within the existing definitions of product, producer and placing on the market, or the related concepts of safety, harm, damages and defects. For example, the definition of "product" in the General Product Safety Regulations 2005 (GPSR) neither explicitly includes nor excludes software, leaving the position uncertain.
  2. Placing on the market – the current legislative focus on ensuring compliance at the point at which a product is placed on the market may no longer be sufficient or appropriate where a product has the potential to change autonomously once in the hands of a consumer.
  3. Liability – the lack of transparency and explainability of AI models (arising from the use of algorithms and ML) can impair the ability to understand the reasons for an error or malfunction. Where physical harm is caused, this has implications for assigning liability and may affect the ability of those who have suffered harm to obtain compensation. Further, the possibility of products changing after they are placed on the market, for example through software updates or ML, produces a complex picture of liability that will be difficult to understand or predict.
  4. Types of harm – AI consumer products may pose risks of immaterial harms (e.g. psychological harm or harm to privacy and reputation) or indirect harms arising from cyber security vulnerabilities, neither of which is currently addressed in the GPSR.

  • Future outlook

The report notes that even the hypothetical application of the UK's product liability rules to AI products presents challenges, and that it remains unclear how product safety rules will apply to AI products.

At the moment, there are two core ways in which challenges brought by AI are being addressed:

  1. Standardisation – AI standards could be developed by industry as a tool for self-regulation, allowing industry itself to define the requirements for product development. Standards can promote transparency and trust in the application of these technologies, while supporting communication between all parties involved through uniform terms and concepts.
  2. Industry and non-legislative approaches to tackling AI challenges – professional associations and consortia publish specifications and recommendations on AI, and many of the initiatives to tackle AI-related challenges have been driven by industry, NGOs or consumer groups.

The inevitability of future AI developments is one of the factors driving likely reform at a UK level.

Author

Kate Corby is a partner in Baker McKenzie's Dispute Resolution team in London. Kate has substantial experience of representing clients in complex litigation and arbitration, with a focus on construction and engineering disputes. She also has significant experience in advising on product liability, safety and regulatory compliance. Kate is a member of the firm's EMEA Dispute Resolution Steering Committee and of various of the firm's diversity-related working groups at local and global level. Kate is ranked as a Next Generation Partner in Legal 500 UK, noted for her "strategic thinking" and described as "excellent, smart, focused and very adaptable" and "highly regarded". Kate has also been ranked in Chambers UK as an adviser "who has impressed both clients and peers". Sources say: "She has great business acumen in addition to great legal knowledge. This was a tremendous help in maintaining and improving our relationships with our strategic partners in a very delicate moment."

Author

Jo is a senior associate in Baker McKenzie's Dispute Resolution team in London. Jo advises clients in a wide range of industries on complex commercial disputes and investigations. She also regularly provides specialist product safety and regulatory compliance advice and acts for clients in product liability disputes. One of Jo's other areas of specialism is advising clients on a wide range of regulatory, public and administrative law issues, including judicial review, consultations, freedom of information and public procurement. Jo's practice often involves drawing on crisis management experience to help clients protect their reputations and shareholder value when dealing with urgent, time-pressured issues and/or intense public scrutiny. Jo was ranked as a Next Generation Lawyer in the Legal 500 Product liability: defendant category in 2017. Jo has participated in the UK Government's Working Group on product safety and recalls and has assisted with the development of the Government's training programme for Trading Standards Officers on the new UK Code of Practice for Product Recalls.

Author

Lauren is an associate in Baker McKenzie's Dispute Resolution team in London. Lauren handles a diverse range of matters across the department's practice areas, spanning commercial litigation, investigations and arbitration. During her training contract, Lauren spent three months on secondment in the Dispute Resolution department of the Firm's Hong Kong office, primarily advising clients on compliance and investigations in the Asia-Pacific region.
