Jusletter IT

AI as a violator of patent rights: determining liability

  • Author: Anna Didyk
  • Category of articles: IP-Law
  • Region: EU
  • Field of law: IP-Law
  • Collection: Conference proceedings IRIS 2023
  • DOI: 10.38023/261f037f-a256-4fd6-86a2-5497d6b31fed
  • Citation: Anna Didyk, AI as a violator of patent rights: determining liability, in: Jusletter IT 27 April 2023
Due to rapid technological progress, AI capable of autonomous decision-making is becoming more prevalent. As AI is now capable of self-learning, the legal system needs to address questions not previously considered. One such question is the potential of self-learning AI to develop in a way that infringes IPR, including patents. Such infringement need not be foreseeable by any human involved in making or using the AI, so in some cases there will be no human to provide compensation. Without any laws or standards applicable directly to AI, the current liability rules need to be changed to address this challenge.

Table of contents

  • 1. AI and Patent Infringement
  • 2. Progression of AI – self-learning
  • 3. AI and Multitude of Actors
  • 4. Determining Liability of AI – EU Response
  • 4.1. Liability Proportionate to Training
  • 4.2. Strict Liability of Operator
  • 4.3. AI Liability Directive
  • 5. What next?
  • 6. Conclusion
  • 7. Literature and other sources

1. AI and Patent Infringement

[1]

The rules on liability, as well as the rules governing the protection of intellectual property rights (IPR), have developed with humans in mind. Few expected technological advancement of such a degree that the world would see AI creating results comparable to the inventions of the human mind. Now the world is struggling to tweak the current regulatory scheme to address this rapid technological advancement. New questions arise as AI advances challenge the pre-established order of things, including the rules of the patent system. From the essential question gnawing at the very basics of the patent system (can AI be an inventor?) to practical questions (are the disclosure requirements sufficient with regard to rapid advancements in AI technology?), the ongoing debate highlights that many areas of today's patent law need rethinking.

[2]

One such issue is the question of patent infringement. As AI technology is capable of “learning” (adapting through elaborate learning algorithms), and thus of attempting actions that the person creating the AI did not anticipate,1 there is a risk of patent infringements in which the infringer is the AI rather than a person. But who (or what) should be held liable when AI happens to infringe a patent: the developer, the manufacturer, the owner, the AI itself? AI is unaware that it is infringing a patent (not unlike a person who commits a criminal act while suffering from a mental disorder), and it may perform acts the developer did not account for, unexpectedly infringing a patent while in use after autonomously modifying itself. Who is responsible for the AI’s actions in case of unforeseen patent infringement, and to what degree should (or could) the infringement be attributable to the AI? For example, what if the AI as developed could not infringe a patent, but became capable of doing so after later training or prolonged use by its owner? To what extent should human input even be acknowledged when inventions rely on machine learning and neural networks? As of today, the law does not provide definite answers to these questions. Yet these issues cannot be left unanswered, as the technical advances of AI are accelerating rapidly, requiring an adjustment to the current policy framework.

[3]

This article addresses how best to approach liability for patent infringement arising from the autonomous decision-making of AI, as well as recent relevant developments in the European regulatory framework.

2. Progression of AI – self-learning

[4]

When lawmakers and academics discuss AI, a detailed understanding of how exactly AI works is necessary before one can meaningfully consider the future of the legal rules surrounding it. The European Union (EU) has made some progress by introducing a definition in the proposed AI Act,2 but since the AI Act is not yet in force, it is not unreasonable to believe that the definition might still be updated. In any case, it is necessary to understand exactly what kind of AI is at issue when discussing any particular question. Below, the types of AI relevant to this article are described, together with the ways in which their use could constitute patent infringement.

[5]

Thanks to technological advancement, some AI systems are now capable of solving problems on their own, generating results that a human simply could not reach due to the limited processing power of the human brain. For example, evolutionary computing, used to optimize complex problems by mimicking the process of natural evolution,3 employs algorithms capable of producing remarkable results in R&D. Those results not only equal human-produced solutions but can surpass them, thanks to the systems’ learning capabilities and advantages such as network connectivity (information can be shared instantaneously with other computers) and rapidly increasing processing speed, which make the number of potential discoveries higher than any human could ever dream of matching.4 In a study led by John Koza, a computer scientist working closely with genetic programming (a variant of evolutionary algorithms), a computer applying the principles of evolution to a problem posed by Koza produced human-competitive results that infringed a previously patented invention.5
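To make the mechanism concrete, the following minimal sketch shows the selection-crossover-mutation loop that evolutionary algorithms share. It is purely illustrative and deliberately trivial (it evolves bit strings toward a fixed target), not a reconstruction of Koza's genetic-programming system; all names and parameters are invented for the example. The relevant point is that the designer specifies only a fitness criterion, never the solution itself, so the form of the final result is not dictated by its creator.

```python
# Minimal genetic-algorithm sketch (illustrative only, not Koza's system).
# Evolves bit strings toward a target; the "designer" never specifies the
# solution itself, only the fitness criterion -- which is why the final
# result may take a form its creator did not anticipate.
import random

TARGET = [1] * 20            # hypothetical optimum the algorithm converges to
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 100, 0.01

def fitness(individual):
    # Score = number of bits matching the target.
    return sum(a == b for a, b in zip(individual, TARGET))

def crossover(parent_a, parent_b):
    # Single-point crossover: combine two parents into one child.
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(individual):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in individual]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Truncation selection: keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"best fitness: {fitness(best)}/{len(TARGET)}")
```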

[6]

As another example, in machine learning (a process in which computers arrive at solutions autonomously after being fed certain data), the use of unsupervised algorithms, which involve almost no human involvement in the process, creates a further problem: the results are produced with little or no natural person involved, as they develop without needing any further intervention from the outside.6
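As a minimal illustration of what “unsupervised” means in practice, the sketch below clusters unlabeled data points with the classic k-means procedure. It is a simplified, hypothetical example rather than any particular system discussed in the cited sources: no human tells the algorithm what the groups are or what they mean; it discovers them from the data alone.

```python
# Minimal unsupervised-learning sketch: k-means clustering (illustrative).
# No human labels the data; the algorithm discovers structure on its own.
import random

def kmeans(points, k=2, iterations=20):
    # Start from k randomly chosen points as initial cluster centres.
    centres = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p[0] - centres[i][0]) ** 2
                                                + (p[1] - centres[i][1]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each centre to the mean of its cluster.
        centres = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centres[i] for i, c in enumerate(clusters)]
    return centres

# Two unlabeled blobs of points; nobody tells the algorithm they exist.
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)] \
     + [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(50)]
print(kmeans(data))
```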

[7]

Further, the computational processes involved in training and using neural networks could in some circumstances constitute infringement.7 Neural networks (NNs), or artificial neural networks (ANNs), are part of machine learning and are employed in deep learning algorithms.8 Mimicking the workings of the human brain, NNs consist of several layers through which they produce outputs after being fed data by a human operator.9 If a NN, after training (finding the appropriate weights of the neural connections to fit the set objectives, i.e. to map inputs to outputs in the best possible way),10 produces an output that perfectly mimics the results of an already patented process, that might constitute infringement.11 If a NN was trained on data that were processed using a patented method, the NN could later mimic the results of that method: it would provide results identical to those obtained had the patented method itself been used, raising the question of potential infringement, whether literal or under the doctrine of equivalents should the results produced by the trained NN only come reasonably close to those of the patented method.12 And if a patented method consisted of a single element, and that element were replaced by a NN trained as described above and able to produce similar output, such an action would without doubt constitute infringement.13 Such an infringement might well be unforeseeable by the producer, the trainer and the operator of the NN.
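A minimal sketch of what “training” a NN means may be helpful here: gradient descent repeatedly adjusts the connection weights until the network maps inputs to the desired outputs. The toy example below (a two-layer network learning the XOR function, with invented parameters) is purely illustrative. Its relevance to the infringement discussion is that, after training, only the learned weights remain, so the trained network reproduces a mapping without retaining any record of how the training data were generated or processed.

```python
# Minimal sketch of neural-network "training" (illustrative): gradient
# descent adjusts connection weights until inputs map to desired outputs.
# After training, only the learned weights remain; the network reproduces
# the target mapping without encoding how the training data were produced.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs (XOR)

W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))    # random initial weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: two layers of weighted sums and nonlinearities.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)
    # Backward pass: nudge weights to reduce the output error.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    W1 -= 0.5 * X.T @ d_hid

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # approximately [[0],[1],[1],[0]]
```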

[8]

In the area of patent infringement, this rising autonomy of AI forces us to question the accepted dogmas around the allocation of liability for the actions of computers that can be autonomous to a certain degree.14 If an infringing output is developed and an infringement claim is asserted against a human, what should be done, and what solution would be fair, if the invention concerned was produced without human input and beyond anybody’s control?15

3. AI and Multitude of Actors

[9]

Now that it has been shown how AI can in some cases produce human-competitive results autonomously, it is clear that the attribution of responsibility raises numerous challenges. In the area of patent infringement, who should bear the responsibility if AI autonomously produces an infringing output?

[10]

When thinking about the allocation of liability for actions of autonomous AI that could potentially constitute patent infringement, the following problem is encountered: AI could be developed by one entity, trained by another, and finally operated by a third. Consider the following scenario. A company creates a system designed to predict the quickest path from point A to point B. The system is then sold to another company, which trains it by feeding it the data necessary for it to function properly, such as satellite data and information about the conditions of particular roads.16 Without that information the system could not function, so the trainer’s role is essential. Next, the system is sold to a user, and during use the system calculates a procedure for determining the quickest path between two points, and that procedure is the same as one already claimed by an existing patent.17 If such an end result was not foreseen by any of the actors involved, and each person responsible for developing and training the system did all that is normally expected of them, i.e. exercised all necessary duty of care, the answer to the question of who should be held liable for the infringement is not obvious. The situation grows even more complex when one takes into account the degree to which AI’s ability to learn from experience, and thus develop autonomous and cognitive features, will grow in the future.
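Purely to make the hypothetical tangible, the sketch below shows one well-known quickest-path procedure (Dijkstra's algorithm) that such a navigation system could embody. This is an assumption for illustration only; the scenario in the cited source does not specify the procedure, and a trained system might converge on an equivalent method without anyone explicitly programming it.

```python
# Purely illustrative: one well-known quickest-path procedure (Dijkstra's
# algorithm). The hypothetical system in the scenario might arrive at an
# equivalent procedure through training rather than explicit programming.
import heapq

def quickest_path(graph, start, goal):
    # graph: {node: [(neighbour, travel_time), ...]}
    queue, seen = [(0, start, [start])], set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, t in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (time + t, neighbour, path + [neighbour]))
    return None

# Hypothetical road network with travel times.
roads = {"A": [("B", 4), ("C", 2)], "C": [("B", 1), ("D", 7)],
         "B": [("D", 3)], "D": []}
print(quickest_path(roads, "A", "D"))  # (6, ['A', 'C', 'B', 'D'])
```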

4. Determining Liability of AI – EU Response

[11]

The problems described above have not gone unnoticed by legislators. In recent years, the European Parliament has introduced two resolutions dealing with the new legal issues arising from the rapid evolution of AI. Considering the degree to which AI is used in today’s environment, be it healthcare, the military, education or the recruitment of employees, it was obvious that the current legislative scheme neither provides enough safeguards to ensure the wellbeing and protection of citizens nor offers sufficient means of redress for persons suffering harm from acts driven by or performed by AI.

4.1. Liability Proportionate to Training

[12]

In 2017, the European Parliament (EP) introduced a resolution dealing with civil law rules on robotics.18 Acknowledging the need for a unified European Union approach to the development of regulatory standards for robotics and AI,19 the EP called for, among other things, the implementation of new rules on liability.20 The EP had in mind the autonomy of AI, its ability to make quasi-independent decisions without the outside influence of a human agent, and situations in which the acts and omissions of AI cannot be traced back to a person.21 The EP’s proposed solution to this problem was rather innovative. In cases where it is not feasible to clearly identify the party responsible for making good the damage caused, owing to the AI’s learning abilities and the specific unpredictability they bring to its actions,22 the following approach should apply: “...once the parties bearing the ultimate responsibility have been identified, their liability should be proportional to the actual level of instructions given to the robot and of its degree of autonomy, so that the greater a robot’s learning capability or autonomy, and the longer a robot’s training, the greater the responsibility of its trainer should be...”23, effectively suggesting that liability should be assessed proportionally to the robot’s degree of autonomy.24 Further, the 2017 EP Resolution even mentions the possibility of allocating a specific legal status to robots in the future, i.e. granting electronic personhood to the most sophisticated machines able to behave autonomously, thus making them responsible for the damage they cause.25

[13]

Whilst the principle of proportionality has long been present in intellectual property law, and it is generally accepted that, when applying measures and remedies, the interests of all parties involved must be balanced,26 in this case it seems rather far-fetched that it would always be possible to decipher the actual degree of autonomy involved in the AI’s decision-making process. This holds particularly if the so-called “black box” effect is considered, meaning that an AI algorithm and its inner workings are not always revealed to its users,27 so that it is not possible to correctly assess which part of the AI system made the problematic decision that led to an undesirable output (there is no clear understanding of the AI’s workings). Further, the self-learning capacity of AI, and thus its ability to make decisions independently, like humans do, means that certain AI can be a “black box” even to the person behind its creation.28

[14]

Therefore, such an assessment of liability in cases of autonomous AI “thinking” is not practically possible. The EP itself soon adjusted its position, proposing a different resolution in 2020.

4.2. Strict Liability of Operator

[15]

In its new resolution29 from 2020, the European Parliament proposed a scheme much more detailed than the one suggested in 2017. Mentioning the black box problem and acknowledging again the necessity of updating the liability framework,30 the EP proposed introducing strict liability of the operator for specific cases of AI-caused harm.31 The operator (a person “controlling a risk associated with the AI-system, comparable to an owner of a car”32) of any high-risk AI system should be “strictly liable for any harm or damage that was caused by a physical or virtual activity, device or process driven by that AI-system”33. The EP went even further, stating that “Operators of high-risk AI-systems shall not be able to exonerate themselves from liability by arguing that they acted with due diligence or that the harm or damage was caused by an autonomous activity, device or process driven by their AI-system...”34, thus making it impossible for operators to defend themselves by stating that they exercised all the care humanly possible and that the machine’s action was absolutely unforeseeable.

4.3. AI Liability Directive

[16]

The efforts at the European level to set new rules on liability and AI culminated in the Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive).35 However, the proposed document focuses not on how to address AI performing autonomous actions, but rather on how to relieve the persons who suffer harm from the actions of AI, given that it is much more burdensome for victims of AI actions to prove the damage caused to them by AI and thus bring a successful liability claim.36 By introducing a rebuttable presumption of causality37 and increased transparency obligations38 for the persons supposedly liable for the damage-causing actions of AI, the proposal significantly alleviates the burden on claimants, while ignoring the other challenges arising as AI gains the power to act autonomously in ways whose potential problems its developers or operators cannot foresee ex ante. If accepted and put into force, the AI Liability Directive would indeed help victims: they would only need to prove the fault of the defendant (for example, non-compliance with the obligations set out in the upcoming AI Act), that it is reasonably likely that this fault led to a certain output, or a failure to produce an output, by the AI, and that this output or failure to produce an output led to the damage,39 eliminating the victim’s need to explain in detail exactly how the harm was caused, something the complexity of AI can render impossible. Complemented by a new obligation on the potentially liable person to disclose, where ordered by a court, evidence that can help the claimant, who would otherwise be unable to prove the necessary facts for lack of the documentation relating to the AI concerned, the proposal addresses the plight of victims but unfortunately skims over the issues raised by the self-learning capabilities of AI. It is only proposed that, after the transposition period, it be examined whether there is a need for strict liability rules for claims against operators of AI.40

5. What next?

[17]

For the present moment, the AI Liability Directive marks the culmination of the EU debate on liability in relation to AI. However, as mentioned above, the question posed at the beginning of this article is not addressed in any way by the upcoming legislation. It has been shown beyond doubt that AI is fully capable of infringing patent claims, and that some of the more complex AI systems might develop and learn to such a degree that they produce solutions comparable to humans’, with no actual human involvement. The allocation of liability in these cases remains unclear. And while the issue could be approached from many angles, the strict liability of the operator proposed by the European Parliament in its 2020 Resolution seems to be the only solution offering some degree of usability as well as practicality.

[18]

Perhaps it would indeed be a perfect solution to tweak the final attribution of responsibility for compensation of damages so that liability would be proportional to the AI’s degree of autonomy, the supervision by a human agent, and its training. However, the practical aspects of such an approach must be considered alongside the complexities of AI. The EU is already introducing new rules to ease the burden of proof resting on claimants because of how challenging it is to prove harm that originated with AI, especially considering the black box effect of some AI systems. Establishing the precise proportion of the AI’s degree of autonomy versus supervision and/or training would be more complex still. In this situation, there seems to be no approach to adopt other than strict liability of operators. Strict liability for harm caused by AI has drawbacks, without doubt, and it is debatable whether it is an appropriate or fair allocation of liability. The truth is that holding persons accountable for actions they could neither foresee nor predict always risks hindering innovation, as those persons will try to minimise their risks, especially where AI, a driving force behind many businesses today and an integral part of a multitude of sectors, is concerned. Strict liability for operators would mean that some persons would potentially face enormous risk, especially where SMEs are concerned.

[19]

However, it is true that ubi emolumentum ibi onus: where there is a benefit, there must also be a burden. The actors involved in developing and using AI today, those most likely to face the risks were strict liability implemented for operators, also collect all the benefits relating to it, especially monetary ones. AI cannot be left running operations freely “in the wild” with not a single human actor to take the blame should it develop to such a degree that it arrives at dynamic yet infringing solutions. If no regulation governed the allocation of liability, it is very likely that AI would be used for infringement, as there would be no consequences.41

[20]

It is common across jurisdictions to impose strict liability on persons for the actions of animals, beings over which the relevant persons have no complete control, when those actions result in harm. Legal systems as a rule provide for strict liability for damage caused by animals because animals are considered an uncontrollable risk for which liability must nevertheless be allocated.42 The person who keeps the animal for his or her use and benefits from it is usually held liable, meaning the person who has actual control over the animal, not necessarily the owner, which makes it easy to trace such a person in case of any problem.43 This makes sense because animals cannot be perfectly controlled by humans. Neither can AI. And while the comparison between animals and AI ends there, the similarity, in the sense that each is an entity which cannot itself provide compensation for the damage it causes, is sufficient to substantiate the conclusions of this article.

[21]

If the operators (the persons exercising risk control over AI) are to bear the ultimate responsibility for the actions of AI that produces human-competitive, unforeseeable solutions with no human input, an insurance scheme would also need to be in place to supplement that liability. Such an insurance scheme was likewise part of the 2020 European Parliament proposal.44 The services of an operator should be covered by compulsory insurance, similar to the compulsory insurance of vehicles, so as not to hinder innovation and to maintain trust in the new technology, as progress always comes with risks.

6. Conclusion

[22]

Due to the rapid technological advancement of the past decades, AI has evolved to such a degree that it is now capable of autonomous decision-making. Alongside the numerous benefits that society is deriving from this progress, and the further advantages that will undoubtedly come for humanity, the problem of how to allocate liability for the actions of such autonomous AI must be solved, including the question of who should be held liable for patent infringement and who should be responsible for paying compensation for such infringement, such as lost profits or royalties. Since electronic personhood is not yet on the horizon, a human agent still needs to be identified and traced in order to allocate responsibility.

[23]

It is necessary to find a balance between supporting innovation and protecting citizens’ trust in technological advancement. Without any regulatory standards in place that would apply to AI itself, the current rules on liability will have to search for a human agent. To ensure legal certainty and transparent rules, strict liability for operators, coupled with a compulsory insurance scheme, seems a good answer to this issue for the future. Harm caused by the autonomous activity of AI should be treated similarly to the situations in which strict liability is imposed on persons responsible for animals, as in both cases an object outside of human control can cause serious harm.

7. Literature and other sources

Afori, Orit Frischman, Proportionality – A New Mega Standard in European Copyright Law. International Review of Intellectual Property and Competition Law, Munich, 2014, pp. 889–911, DOI 10.1007/s40319-014-0272-1, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2500232

Bathaee, Yavar, The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law & Technology, 31(2), 2018, pp. 890–938, available at https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf

Dam, Cees van. European Tort Law. 2nd ed. Oxford University Press, Oxford, 2013, 656 pp. ISBN 9780199672264

Eiben, A.E., Smith, J.E., Introduction to Evolutionary Computing. 2nd ed., Springer Berlin, Heidelberg, 2015, 287 pp., DOI 10.1007/978-3-662-44874-8, ISBN 978-3-662-44874-8

European Commission. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM/2021/206 final) (AI Act)

European Parliament. Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))

European Parliament. Civil liability regime for artificial intelligence. European Parliament Resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL))

Foss-Solbrekk, Katarina, Ring, Caoimhe, IPO Artificial Intelligence and Intellectual Property: Call for Views (Patents). Oxford Intellectual Property Research Centre, p. 15, available at https://www.law.ox.ac.uk/sites/default/files/migrated/oiprc-ai-report_2021.pdf

Griva, Anastasia, To “Black Box” or not to “Black Box”? University of Galway Views and Opinions, 2022, available at https://impact.universityofgalway.ie/ai-and-creativity/to-black-box-or-not-to-black-box/

Hill, Alex. What’s the Difference Between Robotics and Artificial Intelligence? Robotiq, 2021, available at https://blog.robotiq.com/whats-the-difference-between-robotics-and-artificial-intelligence

IBM Cloud Education. Neural Networks. IBM, 2020, available at: https://www.ibm.com/cloud/learn/neural-networks

Koza, John R. Human-Competitive Results Produced by Genetic Programming. Genetic Programming and Evolvable Machines, 11, 2010, pp. 251–284, DOI 10.1007/s10710-010-9112-3, available at: https://link.springer.com/article/10.1007/s10710-010-9112-3

Queguiner, Jean-Louis, What Does Training Neural Networks Mean? OVHcloud, 2020, available at https://blog.ovhcloud.com/what-does-training-neural-networks-mean/

Rice, Todd M., Vertinsky, Liza, Thinking About Thinking Machines: Implications of Machine Inventors for Patent Law. Boston University, Journal of Science & Technology Law, 8(2), 2002, pp. 574–613, available at: https://www.bu.edu/law/journals-archive/scitech/volume82/vertinsky%26rice.pdf

Vesala, Juha Tuomas, Ballardini, Rosa Maria, AI and IPR Infringement: A Case Study in Training and Using Neural Networks. In: Ballardini, Rosa Maria, Kuoppämäki, Petri, Pitkänen, Olli (eds.): Regulating Industrial Internet through IPR, Data Protection and Competition Law. Kluwer Law International, Alphen aan den Rijn, 2019, pp. 99–114, available at: http://hdl.handle.net/10138/312572

Watson, Bridget. A Mind of its Own – Direct Infringement by Users of Artificial Intelligence Systems. IDEA: The Intellectual Property Law Review. 2018, 58(1), pp. 65–93, available at: https://www.ipmall.info/sites/default/files/hosted_resources/IDEA/a_mind_of_its_own_direct_infringement_by_users_of_artificial_intelligence_systems_-_watson.pdf

  1. Bathaee, Yavar, The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law & Technology, 31(2), 2018, p. 891, available at https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf.
  2. Art. 3(1) of the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM/2021/206 final) (AI Act).
  3. Eiben, A.E., Smith, J.E., Introduction to Evolutionary Computing. 2nd ed., Springer Berlin, Heidelberg, 2015, pp. 13–14, DOI 10.1007/978-3-662-44874-8, ISBN 978-3-662-44874-8.
  4. Rice, Todd M., Vertinsky, Liza, Thinking About Thinking Machines: Implications of Machine Inventors for Patent Law. Boston University, Journal of Science & Technology Law, 8(2), 2002, pp. 579–583, available at: https://www.bu.edu/law/journals-archive/scitech/volume82/vertinsky%26rice.pdf.
  5. Koza, John R., Human-Competitive Results Produced by Genetic Programming. Genetic Programming and Evolvable Machines, 11, 2010, pp. 265–266, DOI 10.1007/s10710-010-9112-3, available at: https://link.springer.com/article/10.1007/s10710-010-9112-3.
  6. Foss-Solbrekk, Katarina, Ring, Caoimhe, IPO Artificial Intelligence and Intellectual Property: Call for Views (Patents). Oxford Intellectual Property Research Centre, p. 15, available at https://www.law.ox.ac.uk/sites/default/files/migrated/oiprc-ai-report_2021.pdf.
  7. Vesala, Juha Tuomas, Ballardini, Rosa Maria, AI and IPR Infringement: A Case Study in Training and Using Neural Networks. In: Ballardini, Rosa Maria, Kuoppämäki, Petri, Pitkänen, Olli (eds.): Regulating Industrial Internet through IPR, Data Protection and Competition Law. Kluwer Law International, Alphen aan den Rijn, 2019, pp. 102–103, available at: http://hdl.handle.net/10138/312572.
  8. IBM Cloud Education, Neural Networks. IBM, 2020, available at: https://www.ibm.com/cloud/learn/neural-networks.
  9. Ibid.
  10. Queguiner, Jean-Louis, What Does Training Neural Networks Mean? OVHcloud, 2020, available at https://blog.ovhcloud.com/what-does-training-neural-networks-mean/.
  11. Vesala, Ballardini, op. cit. No. 7, pp. 109–113.
  12. Ibid.
  13. Ibid.
  14. Rice, Vertinsky, op. cit. No. 4, pp. 599–602.
  15. Ibid.
  16. Watson, Bridget, A Mind of its Own – Direct Infringement by Users of Artificial Intelligence Systems. IDEA: The Intellectual Property Law Review, 58(1), 2018, pp. 77–79, available at: https://www.ipmall.info/sites/default/files/hosted_resources/IDEA/a_mind_of_its_own_direct_infringement_by_users_of_artificial_intelligence_systems_-_watson.pdf.
  17. Ibid.
  18. European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).
  19. Rec. S of the European Parliament Resolution 2015/2103.
  20. Rec. Z-AI of the European Parliament Resolution 2015/2103.
  21. Ibid.
  22. Ibid.
  23. Art. 26 “Liability” of the European Parliament Resolution 2015/2103.
  24. It is important to note that in the text of the EP Resolution 2015/2103, the definition of AI is meant to also cover robots, which are considered a “manifestation” of AI (see Rec. B of the European Parliament Resolution 2015/2103), even though robotics and AI are separate things, a robot being a physical creation and AI programmed intelligence (the exception being software robots, a separate category) (Hill, Alex, What’s the Difference Between Robotics and Artificial Intelligence? Robotiq, 2021, available at https://blog.robotiq.com/whats-the-difference-between-robotics-and-artificial-intelligence). For the purposes of this article the technical specification in Resolution 2015/2103 is set aside, the focus being the practical concepts relating to the allocation of liability.
  25. Rec. 59 f) of the European Parliament Resolution 2015/2103.
  26. Afori, Orit Frischman, Proportionality – A New Mega Standard in European Copyright Law. International Review of Intellectual Property and Competition Law, Munich, 2014, p. 911, DOI 10.1007/s40319-014-0272-1, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2500232.
  27. Griva, Anastasia, To “Black Box” or not to “Black Box”? University of Galway Views and Opinions, 2022, available at https://impact.universityofgalway.ie/ai-and-creativity/to-black-box-or-not-to-black-box/.
  28. Bathaee, op. cit. No. 1, pp. 891–892.
  29. Civil liability regime for artificial intelligence. European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)).
  30. Rec. 6 of the Resolution 2020/2014.
  31. Rec. 11–13 of the Resolution 2020/2014.
  32. Rec. 10 of the Resolution 2020/2014.
  33. Art. 4(1) of the proposed Regulation of the European Parliament and of the Council on liability for the operation of Artificial Intelligence-systems.
  34. Art. 4(3) of the proposed Regulation of the European Parliament and of the Council on liability for the operation of Artificial Intelligence-systems.
  35. Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) (COM/2022/496 final).
  36. Proposal for AI Liability Directive, pp. 1–2.
  37. Art. 4 of the proposed AI Liability Directive.
  38. Art. 3 of the proposed AI Liability Directive.
  39. Art. 4 of the proposed AI Liability Directive.
  40. Rec. 31 of the Proposal for AI Liability Directive.
  41. Watson, op. cit. No. 16, p. 70.
  42. Dam, Cees van, European Tort Law. 2nd ed., Oxford University Press, Oxford, 2013, pp. 401–404, ISBN 9780199672264.
  43. Ibid.
  44. Art. 23–25 of the Resolution 2020/2014.