Jusletter IT

Regulating Robotics in Europe: a Perplexed View

  • Author: Erica Palmerini
  • Category: Articles
  • Region: Italy
  • Field of law: AI & Law, Robotics
  • Citation: Erica Palmerini, Regulating Robotics in Europe: a Perplexed View, in: Jusletter IT 23 November 2017
This article provides a general overview of the recent regulatory activities that have been launched at the European level with regard to robotics and artificial intelligence. The scope and approach of the EU Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics are discussed in the light of the current debate within the social sciences about scientific and technological advancements in these fields.

Table of contents

  • 1. Introduction
  • 2. Robotics and Regulation: State-of-the-art and the Way Forward
  • 3. A Matter of European Law?
  • 4. The Regulatory Toolbox and the EU Approach
  • 5. What is a Robot? In Search of a Definition
  • 6. (Robotic) Bodies of Law
  • 7. The RoboLaw Project and the «Robotic Exceptionalism» Critique
  • 8. Possible Approaches to Regulating Robotics
  • 8.1. Filling in Legal Gaps
  • 8.2. The Case for a «Horizontal» Approach
  • 8.3. Sector Specific Regulation and the Adequacy of Current Frameworks
  • 8.4. Robotic Activities and Liability for Damages

1.

Introduction ^

[1]
This paper analyses the recent attempt at introducing a piece of legislation in order to regulate robotics at the European level. On 20 January 2015, the Committee on Legal Affairs (JURI) of the European Parliament established a Working Group on legal questions related to the development of Robotics and Artificial Intelligence (AI) in the European Union. Following the analysis undertaken within the working group and a study released by the Scientific Foresight Unit (STOA),1 a Motion for a European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics was approved by the JURI Committee on 12 January 2017. The final text of the Resolution was adopted by the plenary sitting of the European Parliament on 16 February 2017. This document identifies and puts forth several issues that deserve attention by policy makers, and provides an excellent synthesis of the ongoing debate in the social sciences about advancements in the field of robotics.
[2]

While commenting on the main points stressed in the Resolution, this article highlights three basic questions that emerge from the regulatory endeavour: the reasons put forward for a regulatory intervention, the regulatory tools to be deployed, and the possible methodological and substantive approaches that can be applied to a complex domain such as robotics.

2.

Robotics and Regulation: State-of-the-art and the Way Forward ^

[3]
The EU Parliament Resolution adopted on 16 February 20172 is rooted in factual premises, enumerated in detail in the preamble of the document, that elicit a wide array of legal and policy issues. Starting from the initial statement that robotics and artificial intelligence «seem to be poised to unleash a new industrial revolution, which is likely to leave no stratum of society untouched» (Introduction, sub B), the following considerations are meant to demonstrate the importance of intervening in a timely and proactive manner in order to steer current and future developments in these fields.
[4]
The first assertion echoes a recurrent claim that we can find elsewhere within the social sciences’ discussion about robotics and mirrors the deep attention the public is devoting to the phenomenon of emerging technologies. Robotics is described as a «disruptive technology»3 that may «transform lives and work practices» (Introduction, sub E), affect the job market and the levels of employment (Introduction, sub I and J),4 and will eventually have a huge impact on all spheres of society.
[5]
On the one hand, robotic applications seem to encompass many other technologies, such as ICT, nanotechnologies and neuroscience-related technologies, thus coming to embody the most advanced frontier of the concept of converging technologies that is currently debated from a policy perspective in numerous fora.5
[6]
On the other hand, robots are extremely flexible and versatile instruments with great potential in several domains: enhancing the safety of road traffic, thus saving lives, while reducing the environmental impact (autonomous vehicles); dramatically improving the quality of healthcare and patient outcomes (surgical robots and expert systems); helping people with disabilities to recover lost functions thanks to advanced bionic prostheses and exoskeletons; reducing the problems due to population ageing and the shortage of professional caregivers (care and companion robots); and, more generally, performing so-called «3D» (dirty, dull and dangerous) tasks in effective ways (Introduction, sub E and F).
[7]
All these beliefs support the view that a sound legal framework is needed to hasten the development of an advanced market for robotic products and services, by removing the uncertainties and gaps that could act as non-technological roadblocks, and to make it grow according to the values and principles enshrined in the European legal order.
[8]
Together with the enthusiasm for the promises of this emerging field of technology, a twofold concern arises: for the erosion of core values such as dignity, autonomy, privacy, and non-discrimination, which could be undermined by said developments (Introduction, sub H, N and O); and for ensuring the safety of the products available in the European market and the ensuing guarantees for consumers, including appropriate rules for the allocation of liability in case of damage.
[9]
These arguments point to the reasons why an EU-wide intervention is considered appropriate, despite the fact that robotic products can be deployed in different contexts and be devoted to multiple functions, often falling outside the EU’s reach or being entrenched in a bundle of European and national competences.

3.

A Matter of European Law? ^

[10]
As stated in the Resolution (§ 65), the normative ground for a regulatory intervention of the EU Commission would be Article 114 TFEU on the internal market.
[11]
The transnational level of governance would enhance the positive effects of regulation. Ensuring safety rules, common operating standards, and a clear liability framework for robotic products would avoid a fragmented market and enable cross-border activities that entail the use of robotic applications, for instance the circulation of autonomous vehicles within the EU territory (§ 4).
[12]
All the potential of a strategic economic sector could be captured with a timely intervention, also considering that other jurisdictions, such as the US, Japan, China and South Korea, have already undertaken regulatory actions (Introduction, sub R). At the same time, a high level of safety, fair practices for consumers, and predictable requirements allowing enterprises to fine-tune their business plans (Introduction, sub S) would be guaranteed.
[13]
As a matter of fact, European citizens will be consumers of robotic devices. By regulating in an anticipatory manner, policy makers have the opportunity to influence how these technologies will be designed. Effective regulation can create trust in the safety and security of robotic devices, in data protection, and in a fair market, which is very important for the robotics industry to grow and develop.
[14]
The attention devoted to the ethical implications of advancements in the robotic field is also intended to guarantee that these are aligned with the principle of Responsible Research and Innovation (RRI).6 Two main pillars of the notion are ethical acceptability and orientation towards societal needs. Technological advances, together with the economic power of companies and research institutions, are often held responsible for producing knowledge and industrial applications without any concern for the risks posed to democratic values and human rights. On the contrary, the concern for the protection of fundamental rights potentially undermined by technological developments has recently become a characteristic feature of European science-making. Early regulation can ensure that robotic applications are devised and designed in a manner that protects principles such as human dignity, identity, health and privacy, and can favour the emergence and diffusion of those technologies that are thought to positively enhance existing values (Introduction, sub U and V; § 6, 10, 11, 13). Not only do robotic products and applications have to comply with the core ideals embraced in the European context, but those technologies that respond to societal needs should also be fostered in order to achieve normative goals such as equality of opportunities, justice, and solidarity. Technologies that substantially improve the quality of life of European citizens, especially the more deprived and vulnerable, may well be favoured through the adoption of concrete legislative measures that create the right incentives for their development.
[15]
The concern for the protection of fundamental rights relates in particular to robotic applications to be deployed in the fields of human care and companionship and in the context of healthcare as medical aids or enhancers (Introduction, sub O; § 3). On the one hand, the prospect of using robots as carers, in education or in the entertainment of children and the elderly has led to special attention being devoted to the risk of dehumanizing caring practices or prompting emotional responses from vulnerable people (§ 32).7
[16]
On the other hand, the rights of people with disabilities have to be regarded as the prime issue in the development of assistive technologies and medical robots, but applications such as prostheses and other types of bodily implants also require attention for their enhancement potential (§ 36–40). Robotics in fact qualifies as one of the most powerful means to achieve the enhancement of the human being, and the dilemmas that this perspective opens up should be dealt with in connection with the ongoing broader debate about human enhancement.8

4.

The Regulatory Toolbox and the EU Approach ^

[17]
Different options can be explored in terms of the optimal approach to the regulation of robotics and the normative tools that are best suited to this task.9 The position adopted in the EU Resolution is mixed, since the document seems to resort to several strategies, each one with advantages and shortcomings.
[18]
First of all, it rests heavily on hard law: the main goal of the proposal is precisely the enactment of a directive that deals with the key issues highlighted in the Resolution. This type of approach has its limitations when the target of regulation is the domain of science and technology. The main hindrance is the clash between the fast pace of scientific developments and the slow process of lawmaking, which has prompted the devising of proactive solutions to counteract this problem.10 While technology changes and evolves rapidly, regulators could face hurdles in «staying connected».11 The gap between technological innovation and legal change may affect legal certainty and create an ambiguous environment where rights and responsibilities cannot be clearly acknowledged or predicted. Because regulators need to intervene before the technology spreads, generates needs and user behaviours, and thus triggers market demand, they are required to employ instruments other than old-style law. In this vein, the EU Resolution resorts quite heavily to soft law, in the form of codes of conduct for researchers, designers and users. The Annex to the Resolution contains a Charter on robotics, which consists of a code of ethical conduct for robotics engineers, a code for research ethics committees, a licence for designers and a licence for users (see also § 51). In this sense, the document seems to endorse an earlier attempt at self-regulation that emerged within the community of researchers operating in this field. The idea of roboethics was coined by an engineer and roboticist – Gianmarco Veruggio – to underline the need for an ethics that could guide the development of robotic products from the very beginning.12
[19]
The purpose of these instruments is not to address all relevant legal issues, but rather to perform a «complementary function», enabling «the ethical categorization of robotics» and promoting «responsible innovation efforts» in this field (Annex to the Resolution: Recommendations as to the contents of the proposal requested). This explanation is remarkable, since it alleviates the risks inherent in any regulation that stems from a perspective close or internal to the domain to be regulated. First of all, these risks have been phrased in terms of lack of competence: scientists and technologists «are uniquely competent to address scientific/factual issues … [however,] their special competence does not extend to value choices».13 In other words, science and industry experts can assess the feasibility of certain technological goals and identify the means to achieve them, but they cannot decide whether these ends ought to be pursued, the priorities among them, the resources to commit to this endeavour and the limits to respect. These issues «are social and ethical questions to be addressed in the public and political sphere».14
[20]

Another criticism is captured by the concept of «technological solutionism», which points to the fact that certain technological outputs are solutions in search of a problem.15 A purely scientific view may focus on identifying a technological possibility and miss the wider picture, including the search for, or the current availability of, other types of remedies, perhaps less costly, less disruptive of social and professional practices,16 or more valuable in other respects.17

[21]
In order to handle complex technological matters, regulators often resort to technical delegation. This tool makes it possible to regulate fields characterized by a strong technological dimension by combining formal law and technical standards. Standardization is voluntary and is not legally binding unless national governments decide to incorporate the standards into their domestic legislation. However, technical and safety norms and standards, formulated by administrative agencies, non-governmental agencies, technical standard-setting bodies and professional associations, have increasingly been used in many sectors. In the field of robotics, for instance, the International Organization for Standardization (ISO) has released standards for industrial robots and for personal care robots.18
[22]
The devolution of technical rule-making to independent agencies or standard-setting bodies is deemed to ensure the continuous adaptation of norms and a high level of product safety. The EU Resolution includes standardization among the soft-law tools to be used in order to manage robot regulation, and approves in particular the setting up of special technical committees exclusively devoted to developing standards for robotics (§ 22). In fact, the adoption of specific standards could be an efficient solution to address safety issues. The legal regimes currently in place have a very general approach and are not tailored to different kinds of products; in the case of complex and technologically advanced systems, technical standards could complement these regimes without excessively burdening the regulatory process.19
[23]
The governance of the sector could also be entrusted to a European Agency for robotics and artificial intelligence with expertise in the various domains entwined in robotics (§ 15-17). The Agency would be in charge of providing «the technical, ethical and regulatory expertise needed to support the relevant public actors» (§ 16), «monitoring … robotics-based applications, identifying standards for best practice, and … recommending regulatory measures» (§ 17). The establishment of an independent authority echoes the proposal for a US Federal Robotics Commission.20 A newly established regulatory body with a specific mission may have strong expertise in the field, but it may also face the problem of regulatory capture.21 In addition, given the compound nature of robotic research and industry, and the wide variety of robotic applications, the role of the Agency could overlap with that of other entities with advisory and regulatory competences.
[24]
Finally, another instrument in the assorted regulatory toolbox envisioned by the EU Parliament Resolution is regulating by design or by code,22 which entails incorporating legal norms, and compliance with them, into the technology itself (Introduction, sub M and Q). Technology can be used to steer robots’ and users’ behaviour, thanks to rules embedded in the technical set-up.23 In this respect, the claim the Resolution makes that «Asimov’s Laws … cannot be converted into machine code» (Introduction, sub T) is not completely true from a technical point of view. In fact, an entire new field of research – so-called machine ethics – investigates whether and how ethical and legal rules should be included in the algorithm that directs the robot, and what contents these rules should have.24 The discussion is especially flourishing with regard to autonomous vehicles and the ethical dilemmas they may face while circulating in traffic.25 Unlike with human drivers, the problem of devising alternative scenarios and deciding which criteria the car should obey is considered something to be decided in advance, by setting the driving algorithm accordingly.
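The point can be illustrated with a deliberately simplified sketch, not drawn from the Resolution or from any actual driving software: a hypothetical planner for an automated vehicle in which a handful of coded norms act as hard filters on the manoeuvres the system may even consider. The class and constant names are assumptions made for the example; real driving stacks are far more elaborate, and the open-ended ethical dilemmas discussed in the machine-ethics literature are precisely those that resist this kind of simple encoding.

```python
# Illustrative sketch of "regulation by design": legal constraints coded as
# filters applied before any manoeuvre is selected. All names are hypothetical.
from dataclasses import dataclass

SPEED_LIMIT_KMH = 50.0   # assumed urban speed limit
MIN_GAP_METRES = 10.0    # assumed minimum following distance

@dataclass
class Manoeuvre:
    speed_kmh: float     # target speed of the candidate manoeuvre
    gap_metres: float    # resulting distance to the vehicle ahead
    comfort: float       # utility score used to rank the legal options

def is_permitted(m: Manoeuvre) -> bool:
    """A coded norm: manoeuvres that violate it are never made available."""
    return m.speed_kmh <= SPEED_LIMIT_KMH and m.gap_metres >= MIN_GAP_METRES

def choose(candidates: list) -> Manoeuvre:
    # Non-compliance is excluded by design; only legal options are ranked.
    legal = [m for m in candidates if is_permitted(m)]
    return max(legal, key=lambda m: m.comfort, default=None)

if __name__ == "__main__":
    options = [Manoeuvre(65.0, 20.0, 0.9),   # too fast: filtered out
               Manoeuvre(45.0, 12.0, 0.7)]   # compliant: selected
    print(choose(options))
```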
[25]
Both theoretical and practical objections to implementing rules in code have been put forward: on the one hand, the potential de-moralising effect of design-based instruments, which exclude the possibility of non-compliance;26 on the other hand, the unfeasibility of translating all legal rules into technical solutions and the clash between the inflexibility of code and the flexibility and openness to interpretation of the law.27 However, the data protection reform has already enshrined the concept of «privacy by design» (Art. 25, Regulation (EU) 2016/679), which implies that data-protection requirements be embedded in the design of information systems. This rule will have implications for all companies that produce products for the processing of personal data, including robotics companies, obliging producers to ensure compliance in the design and set-up of automatic data processing or filing systems.
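As a concrete, hedged illustration of what «privacy by design» can mean at the level of a robot’s software (the field names, the salt handling and the whole pipeline are hypothetical, not a statement of what Article 25 requires), a telemetry routine might minimise and pseudonymise personal data before anything leaves the device:

```python
# Hypothetical sketch: data minimisation and pseudonymisation applied at the
# point of collection in a home robot's telemetry pipeline.
import hashlib

# Data minimisation: only fields strictly needed for maintenance are uploaded.
ALLOWED_FIELDS = {"battery_level", "error_code", "firmware_version"}

def pseudonymise(user_id: str, salt: bytes) -> str:
    """Replace a direct identifier with a keyed hash; the salt stays on the
    device, so the uploaded record alone does not identify the user."""
    return hashlib.sha256(salt + user_id.encode("utf-8")).hexdigest()

def prepare_upload(raw: dict, salt: bytes) -> dict:
    """Strip everything not on the allow-list and pseudonymise the identifier."""
    record = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
    record["subject"] = pseudonymise(raw["user_id"], salt)
    return record

if __name__ == "__main__":
    raw = {"user_id": "alice",
           "location": "bedroom",          # sensitive, dropped by default
           "battery_level": 0.42,
           "error_code": None,
           "firmware_version": "1.3.7"}
    print(prepare_upload(raw, salt=b"device-specific-secret"))
```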

5.

What is a Robot? In Search of a Definition ^

[26]
A key problem at the heart of the EU proposal for a regulation of robotics is that of providing a definition of robot. The need for «a generally accepted definition of robot and AI» that is «flexible» enough to encompass the great variety of applications dealt with in the document, without «hindering innovation», is stressed at the outset in the Introduction (sub C). The Resolution then points to the importance of «common Union definitions» of all the types of robotic systems that could fall within its domain of application (§ 1).
[27]
Calling for the introduction of definitions of the matter to be regulated – according to an approach that we can find in many pieces of EU legislation – confronts us with the difficulty of reconciling the great variety of robotic applications with a meaningful and encompassing notion.
[28]
Robots can be extremely different from one another, in terms of shape, the material they are made from, mode of functioning and type of control, level of interaction with humans, and the tasks they can perform. The diverse applications range from softbots and expert systems to personal care robots, from industrial robots to robots for search and rescue, from vacuum cleaners to advanced bionic prostheses, from drones to self-driving cars. This variety of forms in which a robotic device can appear reflects the many contexts of use and the numerous activities that can be executed by a robot or with the assistance of a robot.
[29]
Faced with such a wide array of robotic systems, it is difficult to find a definition which is inclusive and suitable, from a legal perspective, for the purpose of identifying the applicable regime; but, as a matter of fact, this very same complexity also raises an issue about the feasibility of regulating robotics as a homogeneous phenomenon. In other words, it is questionable whether a single piece of legislation can address all the relevant legal problems that robotic systems elicit, and such a regulatory endeavour could prove both unnecessary and impractical.

6.

(Robotic) Bodies of Law28 ^

[30]
Notwithstanding the frequent claims about legal gaps and, more generally, the unsuitability of the current legal framework to cover novel robotic applications, these are not developed in a legal vacuum. The existing norms approach robotics from different angles, regulating various aspects such as safety standards, liability for damages, data treatment and protection.
[31]
Because most robots are intrinsically mechanical objects, they fall under the broad definition of machinery set forth by the Machinery Directive in art. 2(a);29 they will therefore have to comply with its provisions and with the national laws implementing the Directive, and be designed and assembled in compliance with the standards and safety measures provided therein.
[32]
In addition, robots that qualify as products intended for the consumer market are subject to the requirements established by the General Product Safety Directive (GPSD).30 However, this directive does not apply when more specific rules governing the safety of certain products exist, since sector-specific legislation prevails over the more general regulation. For instance, machinery such as surgical robots, robotic capsules for diagnostic or therapeutic purposes, cochlear or visual implants, advanced robotic prostheses and exoskeletons, and other brain-computer interfaces qualify as medical devices and are therefore covered by the Medical Devices and Active Implantable Medical Devices Directives (MDD and AIMDD).31 Thus a detailed regulation exists at the European and national level which applies to most new and sophisticated products, such as different types of bodily implants and robotic limbs interfaced with the neural system.
[33]
But a robot is also a machine equipped with sensors to perceive the environment; the raw data collected are used to extract information and plan an action to reach an objective or react to external events; the robot can then operate in the real world through its actuators. In addition, robots can be, and often are, connected devices. Technical connectivity enables robots to be made lighter, cheaper, and less power-consuming, thanks to limited on-board computers, and at the same time improves their functionality. Robots in fact share knowledge, learn from other robots’ experiences, and draw from extensive databases for tasks such as object recognition and navigation planning. Moreover, because robots will increasingly inhabit private spaces or be in close contact with humans in other spaces, they will eventually collect, analyze, store, and potentially share personal data. It follows that robotic systems have to be regarded also against the backdrop of the regulatory schemes on privacy and data protection.32
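As a purely illustrative sketch of the sense-plan-act cycle just described, with a hypothetical cloud service standing in for the shared knowledge bases mentioned above (all class and method names are assumptions made for the example, not an existing robotics API), the control loop of such a connected robot might look as follows:

```python
import time

class Robot:
    """Toy sense-plan-act loop; 'cloud' models an optional remote service
    (cloud robotics) used for heavy tasks such as object recognition."""

    def __init__(self, sensors, actuators, cloud=None):
        self.sensors = sensors        # devices that perceive the environment
        self.actuators = actuators    # devices that act on the environment
        self.cloud = cloud            # optional remote recognition service

    def sense(self) -> dict:
        # Collect raw data from every on-board sensor.
        return {s.name: s.read() for s in self.sensors}

    def plan(self, observation: dict) -> str:
        # Off-load interpretation to the network when connected...
        label = self.cloud.recognise(observation) if self.cloud else "unknown"
        # ...then decide locally on the next action.
        return "avoid" if label == "obstacle" else "advance"

    def act(self, action: str) -> None:
        for actuator in self.actuators:
            actuator.execute(action)

    def run(self, cycles: int = 10, period_s: float = 0.1) -> None:
        for _ in range(cycles):
            self.act(self.plan(self.sense()))
            time.sleep(period_s)
```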
[34]
To the extent that robots cause economic or non-pecuniary losses to others, one or more liability regimes are bound to apply, depending on the type of conduct that determined the event and the causal link with the damage. The important discussion on robots and liability for damages that is taking place at the academic and policy level does not therefore concern a true legal void; on the contrary, it is about whether the current rules are effective and fit the needs of this context of application.
[35]
Robotic applications already fall within the boundaries of diverse legal regimes that qualify them – as products, machinery, medical devices, data collectors – and thus regulate them. However, these regimes may not be exhaustive and therefore may not be able to cover, or properly solve, the new legal issues robots will bring about.

7.

The RoboLaw Project and the «Robotic Exceptionalism» Critique ^

[36]
«RoboLaw» is the fortunate acronym of a European funded research project whose main goal was to investigate all the possible ethical and legal implications of robotics.33 At first, a broad and encompassing approach was necessary, as we were just starting to explore the new domain summed up by the «law and robotics» binomial. Quite soon, however, the research team realised that such an undertaking was useful mainly for descriptive purposes, in order to clarify what types of norms address robotic applications and to pave the way for an in-depth assessment.34 Further research led to the conclusion that any general evaluation is doomed to be unspecific and superficial, while a robust analysis requires restricting the field of enquiry and concentrating on robotic applications that share homogeneous features endowed with legal significance. Therefore, the final deliverable containing policy guidelines addressed to the European Commission was articulated in four main chapters, each focusing on one application – namely surgical robots, bionic prostheses and exoskeletons, autonomous vehicles, and care robots – in an effort to set up a sound and pragmatic regulatory exercise.35
[37]
In fact, regulating robotics cannot mean adopting a single body of laws to address all robotic applications just because they pertain to the same technological domain. Such an endeavour would not escape the famous «law of the horse» critique, which the American judge Frank H. Easterbrook formulated to depict the early attempts at conceiving so-called cyberlaw.36 He notably observed, in the introductory remarks to a conference on that subject, that a law of the cyberworld was needed as much as a law of the horse: horses are commonly negotiated in contractual agreements, can give rise to compensatory obligations if they hurt a bystander with a kick, and have to be cared for with due diligence by a veterinarian. Nevertheless, horses do not deserve special rules, nor can the legal relevance of these examples give rise to a new branch of the law.
[38]
Although robotics has distinctive features, compared to other technologies, which require attention, we should not overindulge in an attitude of robotic exceptionalism.37 There seems to be no need to reshape the entire legal environment in order to accommodate robotics, nor to react to the numerous challenges arising in this field with a comprehensive and homogeneous set of laws covering them all. On the one hand, as already observed, many robotic applications can easily be situated and regulated in the current frameworks; on the other hand, it would be impossible to develop rules that are applicable to all robotic applications indifferently.

8.

Possible Approaches to Regulating Robotics ^

[39]

Seemingly, the objective of the EU Parliament Resolution is that of regulating «robotics» as such. The document is very comprehensive, enumerating many different robotic systems, from autonomous vehicles (§§ 24–29) to drones (§ 30), from care robots (§§ 31–32) to medical robots (§§ 33–35), thus covering almost all the applications at the stage of development today. In this sense, its merit is to draw attention to a strategic technological field and market. This strength, however, could also be a downside, because such a holistic approach ends up being generic, unspecific, and possibly inoperable.

[40]
As a matter of fact, many regimes that apply to robotics are already in place, tackling aspects such as safety, security, liability and the treatment of personal data; thus new regulations are not needed or at least should not start from scratch. Secondly, a single piece of legislation that deals with many or all types of robotic objects and activities simply because they belong to the same technological domain appears to clash with the principle of technological neutrality, and could have a discriminatory effect, either by hampering advancements in this field or by unduly favouring this sector over other technology-centred domains.38 Thirdly, the effort to cover all the different facets of very complex products with a single regulatory action is unrealistic and misplaced.
[41]
What ought to be, then, the right approach to the regulation of robotics? The diverse issues at stake can probably find distinct responses that do not necessarily require enacting new legislation. Specific solutions will have to be formulated either to address an application which does not fall, at the moment, within any existing framework, or to mitigate undesirable effects deriving from the unsuitability of the current rules in certain regards and, possibly, to provide incentives, through regulation, for the development of valuable technologies that may otherwise lag behind.

8.1.

Filling in Legal Gaps ^

[42]
In the domain of robotics for autonomous transport, legal action is certainly necessary, primarily in order to update the notion of motor vehicle adopted in traffic law. Currently, a driverless vehicle is unknown to the legal systems, hence it is difficult even to precisely identify its legal status and regime. The Vienna Convention on road traffic39 seems to indicate that autonomous cars should not circulate on public roads, since it states that «Every moving vehicle or combination of vehicles shall have a driver» (art. 8.1) and that «Every driver shall at all times be able to control his vehicle …» (art. 8.5). A fully automated driving mode would not be consistent with this formulation, which could on the contrary allow highly automated vehicles where a person on board constantly monitors the traffic situation and is able to resume control and override the automatic system.40 At present, standards set by UNECE would also affect the legality of self-driving vehicles, whenever they require a feature that involves a human driver (e.g. that brakes are activated by muscular energy) or that indirectly rules out automatic control.41 The traffic laws of many EU countries contain similar definitions of the vehicles that are allowed to circulate on public roads, provided they respect all other requirements set by the law; where they do not provide a notion of vehicle, they simply assume that it is always driven by a person and regulate it accordingly.42,43
[43]
These kinds of rules prevent the free circulation of driverless vehicles. It follows that legal intervention is needed, firstly, to permit the circulation of self-driving cars and, secondly, to adjust the present regime of road traffic to the features of the new technology. In fact, fully autonomous vehicles would require an adaptation of the law, in terms of safety requirements and rules of traffic that such systems could not possibly fulfill.

8.2.

The Case for a «Horizontal» Approach ^

[44]
Some issues that robotic products raise could be dealt with horizontally. More precisely, a horizontal approach is preferable when the target of regulation is a technological feature that is shared by different types of emerging technologies, as happens with technical connectivity.
[45]
Very often robots will be connected devices, and a flow of information will take place between the robotic system, digital networks and the producer or seller. Data flows are generated in many ways: by remote software updates, telematic services agreements, and cloud robotics. Outsourcing part of the robot’s data processing to remote servers renders the system more efficient, while allowing the internet to be exploited for computation and for processing vast amounts of data. For instance, automated vehicles will rely on detailed and up-to-date navigation services; integrate vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) communication technologies to transmit basic safety information, such as warnings to drivers concerning impending crashes; and be outfitted with in-vehicle «infotainment» systems, which provide both information and entertainment services. These systems will increase the collection and use of vehicle data, especially geo-location data, generated and transmitted to and from motor vehicles.
[46]
Robotic medical devices can also have wireless capabilities, which are commonly used to send data to the patient’s physician.
[47]
More generally, robots will be able to collect data and to contextualise it on a scale never attained before. This information can be exploited in a number of ways, especially in influencing and steering consumers’ choices: robots that operate in the consumer market will make it possible to offer new kinds of services that filter choices, interact sympathetically, and basically reduce complexity.
[48]
In addition, robotic technologies can be deployed in physical environments and obtain information on personal interests and habits by interacting with or simply observing people, with no need for consumers to connect to the internet or be reached through their phones.
[49]
Technical connectivity, however, is a feature that robots share with other smart devices and connected objects that belong to the sphere of the Internet of Things. Since robots are extremely versatile and ubiquitous, they simply amplify an issue that is already at stake. Because the demand for regulation depends on a particular technological feature that spans the market of different products, a horizontal approach is to be preferred over a sector-specific intervention.44 In addition, such an approach should connect this issue to the EU regulatory strategies on the digital economy45 and the free flow of data.46 These actions have at their core the concepts of fair practices,47 interoperability, and cybersecurity, which appear extremely relevant for a domain with such great potential in terms of intrusiveness and penetration.
[50]
A similar discourse holds true for the regulation of privacy and data protection. Personal robots, as well as other smart devices that are portable, can be worn or can be located inside the home, where they will sense and record the environment, and possibly share that information with third parties (for instance, service providers) or store it in the cloud in order to carry out different types of functions.48 Privacy concerns also arise with regard to drones, which can be equipped with high-resolution cameras, thermo-cameras for night vision and microphones, can be very small, and can fly almost unseen in otherwise inaccessible sites.49
[51]
A wide-reaching and detailed regulatory scheme already exists that represents a solid framework for all activities in which the collection and treatment of personal data take place.50 It is important to stress that this Regulation entails embedding privacy (and security) safeguards in the design of products and network architectures, and the concept of «privacy by design»51 is meaningful especially for all types of systems that perform data processing in an automated fashion, without human mediation.

8.3.

Sector Specific Regulation and the Adequacy of Current Frameworks ^

[52]
As already observed, many robotic applications can easily be situated and regulated within the current frameworks; however, these regimes are not exhaustive and may not cover, or not adequately solve, the new legal issues robots will bring about. For instance, healthcare robotics falls within the medical devices regulatory scheme. This regulation, however, is extremely broad in scope and lacks the specificity needed to meet diverse levels of technical complexity and risk. The two directives currently in place cover a wide range of devices, and the resulting rules are not tailored to particularly dangerous or ethically sensitive devices such as body implants or brain implants. As a consequence, various types of robotic applications remain poorly regulated.52
[53]
The most important issues concern the under-regulation of the clinical investigation phase; devices aimed at enhancement, which could fall outside the scope of this regulatory scheme; and the cybersecurity risks that connected devices present. Firstly, BCIs, advanced prosthetics, and other neuro-robotic technologies are still largely at a research stage. Existing regulations are very basic and simply recall general principles on human experimentation, which are not always suitable for innovative products and systems. This regulatory gap is regarded as one of the main hindrances to the flow of new devices along the innovation trajectory.53
[54]
Secondly, the medical devices regulation does not seem to address the additional risks, in terms of safety and security, which are determined by the use of robotic systems, sometimes even implanted in the body, with data processing capabilities, real-time communication with external sources and direct connection to the web. Hacking attempts on software-controlled, internet-connected medical devices are a contingency that the current medical device regulation regime does not consider, despite the fact that the vulnerability of ICT devices to external interference is regarded as one of the most pressing legal issues within this field.54 Thirdly, the MDD and AIMDD only concern devices that are meant for diagnostic or therapeutic purposes. Robotic applications aimed at human enhancement therefore do not fall within the MDD regime, although they appear to be very similar to devices already in use in clinical environments for the treatment of different types of conditions and diseases, and present the same features and risks.
[55]
Regulation (EU) 2017/745, which will start to apply in 2020, seems to have eased some of the problems outlined above, by strengthening controls and procedures for high-risk and implanted devices and by improving the rules on the clinical investigation phase.
[56]
Another robotic application that is considered a «truly transformational technology» is unmanned aerial systems (UAS).55 Drones are a new type of aerial vehicle, thus falling within the remit of regulatory bodies such as the national aviation authorities and the European Aviation Safety Agency (EASA). Regulatory efforts have been undertaken both at the national and at the European level, in order to facilitate, through common rules, the development of products and economic activities in this area, while guaranteeing a high level of safety.56 UASs represent a homogeneous class of products, and regulation is most appropriately directed at adapting the existing regimes of aviation safety in order to encompass the new products. Technical boards are dealing with different problems, each starting from its specific perspective and competence; the main actors in the process are the International Civil Aviation Organization (ICAO),57 the European Aviation Safety Agency (EASA), which has recently promoted prototype legislation,58 and the Joint Authorities for Rulemaking on Unmanned Systems (JARUS). Conversely, «horizontal» issues such as privacy and data protection are dealt with in cooperation with the competent regulatory bodies.59
[57]
The field of autonomous vehicles is also characterized by specific features that make it a distinctive target for regulators, completely detached from other robotic applications and instead linked to the regimes of vehicle certification (so-called EU type-approval)60 and road traffic. Both at the political61 and the technical level,62 many actors are involved in the task of issuing regulations that can facilitate the advent of semi-autonomous and self-driving vehicles.
[58]
A meaningful regulatory approach needs to focus on this specific robotic application, which shares almost no legally relevant traits with other robotic technologies; on the contrary, the domain of autonomous cars can be traced back to transport law. The legislation that deals with car transport and road circulation will have to accommodate the new prototypes and thus be adapted in the relevant parts. In addition, this sector is significantly affected by the regulatory developments in the area of connectivity. The expression «connected and automated driving», which is frequently used in policy documents on the topic, reveals the importance of equipping vehicles with information and communication technologies, both to improve their performance and to offer supplementary services to the users. Such an understanding of the complementarity between automation technologies and ICT explains the regulatory strategy that matches the two domains in a wider and more complex programme.63 In turn, this programme is horizontally linked with the goals of creating a reliable, technology-neutral and full-coverage network pursued within the Digital Single Market Strategy.64

8.4.

Robotic Activities and Liability for Damages ^

[59]
The introduction of alternative regulatory models is a plausible option whenever a problem cannot be dealt with effectively within the existing frameworks. A good candidate for this approach seems to be the issue of liability for damages caused by robots, and especially by autonomous robots.65
[60]
Robots can be extremely complex machines; depending on the actions the robot must perform, the programme that guides it can be very elaborate, and this adds to the sophisticated machinery needed to assemble it. Unexpected behaviour can emerge from the interactions between the components or with the environment. In addition, robots are conceived to be multipurpose machines without fully predefined tasks. This characteristic means that robots are extremely versatile, making it hard to identify the different uses robots could be put to and the various contexts in which they will be deployed.
[61]
Since most robotic applications are still in the experimental phase, data about their performance and real-world behaviour are missing,66 and it is precisely such data that are key to assessing the risks and types of damage inherent in the deployment of these systems.
[62]
In turn, these factual uncertainties, which are related to the novelty and complexity of robotic applications, affect the possibility of establishing a causal link between the robot’s action and the damage it may have provoked. More precisely, it could be difficult to trace the harmful event back to the precise reason that triggered it, and thus to identify a responsible person among the multiple actors involved in the production and distribution chain for robotic applications.67 The insurability of liability towards third parties would also be affected, since the uncertainties surrounding the risks entwined in the activities performed by robots prevent a reliable estimation and, consequently, the pricing of the insurance.
[63]
Another source of complexity stems from the presence and potential overlap of different liability regimes that could apply to the same incident. In the context of growing automation, we could expect a shift from the liability of the owner (natural person or organization) to defective product liability claims. This shift could be observed in different areas, such as robotic surgery and the circulation of vehicles, as the human contribution to a certain activity lessens while the robotic system’s control increases.
[64]
Calling upon product liability appears quite obvious, but this solution has its drawbacks too. First of all, it requires that the product is defective, which entails that it has a manufacturing or a design defect. However, it may often be hard to identify what went wrong and why, and therefore to single out a specific failure. Due to the technological complexity of the device, the victim would also face difficulties in showing the defectiveness of the design and the existence of a causal nexus between the defect and the damage. On the side of the producer, the same uncertainty and the lack of data about the risks of innovative and sophisticated products may undermine the rationale on which product liability rests, that is, the ability to assess those risks and insure against them. In addition, a systematic shift from the liability of owners or users to the liability of producers could have a «technology chilling effect», thus delaying or hampering the emergence of innovative products.
[65]
With autonomous robots, which act without human supervision, have self-learning capacities and can display emergent behaviour, the challenges to the current liability rules could deepen. Broadly speaking, the rationale underpinning these rules is the «control» that someone has over his own behaviour, over the things he produces and sells, owns or uses, or over other persons for whose actions he is deemed responsible. However, such control cannot always be exerted, due to the robots’ peculiar features. More precisely, the autonomy and emergent behaviour robots display are deemed to conflict with the usual criteria that serve to allocate liability.68
[66]
Alternative solutions are therefore envisioned that take this problem into account and, more generally, define a plausible framework in which the damages and injuries caused by a robot can be compensated without generating an over-deterrence effect that would prevent valuable technologies from becoming marketable products.
[67]
In an attempt not to hinder the development of the robotics industry with an aggravated liability exposure – real or perceived – in case of accidents involving a robotic product, the EU Resolution mentions the possibility of limiting the liability of manufacturers, as well as of programmers, owners or users, who contribute to a compensation fund for the victims (§ 59.c). This solution, while it weighs the need to support emerging robotic industries against fears of liability-related costs, does not amount to adopting an immunity for manufacturers and sellers. Such an approach was proposed as a compromise between the need to foster innovation and the need to incentivize safety.69 However, unless this idea is supplemented by other compensatory remedies, it would just subsidize innovation by leaving the victims to bear the costs. The EU Resolution draws a more comprehensive proposal, which combines liability caps with a compulsory insurance scheme (§ 59.a) or a compensation fund (§ 59.b), taking into account the goal of compensating the victims as well as that of enabling the developers of robotic products to anticipate and internalize the costs associated with their activities.
[68]

Another possible way of dealing with accountability issues is attributing legal personhood to the robot and endowing it with assets in order to compensate for damages or fulfil other obligations (§ 59.f). How this financial basis is funded would reflect the role that different players have in the market for robotics (producers, programmers, owners, even the State when robots serve qualified social needs).

[69]
Building on the issue of liability for damages brought about by robots, a more general discourse on robots as legal subjects is developed: «electronic personhood»70 is considered a plausible approach for embodied robots (or software agents) that display a certain degree of autonomy and interact with people. In fact, the thinking underlying this regulatory suggestion hints at the agency of the robot, «an entity that seems to be capable of expressing embryonic but growing levels of autonomy and subjectivity».71
[70]
However, without prejudice to the contested issue of robots’ autonomy in technological and philosophical terms, it is important to stress that at the present moment the legal discourse conceptualizes robots as subjects in a purely functional perspective. Far from being acknowledged as agents in a strong sense, robots could be endowed with legal personality if this proved to be an efficient solution to fully compensate the victims of accidents.

 

Erica Palmerini is Associate Professor of Private Law at the Scuola Superiore Sant’Anna in Pisa. Coordinator of the European project RoboLaw, funded within the Seventh Framework Program (2012–2014), she is currently continuing her research at the intersection of law and emerging technologies.

 

 

This article was originally prepared on the occasion of the conference «L’intelligence artificielle et le droit», organised by the Centre de recherche Information, Droit et Société (CRIDS) at the University of Namur on 20 October 2017. A preliminary version, with the title «Towards a Robotics Law at the EU level?», has been published in the volume L’intelligence artificielle et le droit, edited by Hervé Jacquemin and Alexandre de Streel, Larcier, Bruxelles, 2017. The Author wishes to thank Hervé Jacquemin for the invitation to the conference and for including this study in the proceedings, as well as for the insightful presentations and contributions given therein.

  1. 1 EPRS Scientific Foresight Unit (STOA), Ethical Aspects of Cyber-Physical Systems, Brussels, June 2016 http://www.europarl.europa.eu/RegData/etudes/STUD/2016/563501/EPRS_STU%282016%29563501_EN.pdf (all websites last accessed on 31 October 2017).
  2. 2 European Parliament, Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103 (INL)), P8_TA-PROV(2017)0051.
  3. 3 James Manyika/Michael Chui/Jacques Bughin/Richard Dobbs/Peter Bisson/Alex Marrs, Disruptive technologies. Advances that will transform life, business, and the global economy, McKinsey Global Institute, 2013, www.mckinsey.com/insights/business-technology/disruptive_technologies. See also UK Robotics and Autonomous Systems Special Interest Group, RAS 2020. Robotic and autonomous systems, July 2014, https://connect.innovateuk.org/documents/2903012/16074728/RAS%20UK%20Strategy.
  4. 4 The impact of robotics and automation on the job market, which is not further addressed in this paper, seems to be one of the most significant worries among European citizens. For a recent survey on the attitudes towards robots see TNS Opinion & Social, Special Eurobarometer 427 «Autonomous Systems» Report, June 2015, http://ec.europa.eu/commfrontoffice/publicopinion/archives/ebs/ebs_427_en.pdf.
  5. 5 See Mihail C. Roco/William Sims Bainbridge (eds.), Converging technologies for improving human performance: Nanotechnology, biotechnology, information technology and cognitive science, Arlington (VA), National Science Foundation (NSF), Department of Commerce (DOC), 2002; Alfred Nordmann, Converging Technologies: Shaping the Future of Our Societies, Report from the High Level Expert Group on Foresighting the New Technology Wave, European Commission, Luxemburg, 2004; Rinie Van Est/Dirk Stemerding/Virgil Rerimassie/Mirjam Schuijff/Jelte Timmer/Frans Brom, De BIO à la convergence NBIC. De la pratique médicale à la vie quotidienne, Rapport écrit pour le Conseil de l’Europe, Comité de Bioéthique, Institut Rathenau, La Haye, 2014.
  6. 6 For a general overview of the notion of RRI in the context of the European policies in support of science and technology, see European Commission, Options for Strengthening Responsible Research and Innovation, Luxembourg, 2013. For an analysis of the penetration of RRI canons in the field of robotics see Bert-Jaap Koops/Erica Palmerini/Pericle Salvini, Responsible Innovation and Robotics, in: René von Schomberg (ed.), Handbook – Responsible Innovation: A Global Resource, forthcoming with Edward Elgar Publishing.
  7. 7 See Commission de réflexion sur l’Éthique de la Recherche en sciences et technologies du Numérique d’Allistene (CERNA), Éthique de la recherche en robotique, 2014, 35 ff.; Matthias Scheutz, The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots, in: Patrick Lin/Keith Abney/George A. Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics, Cambridge (MA), 2012, 205 ff.; Amanda Sharkey, Robots and human dignity: a consideration of the effects of robot care on the dignity of older people, in Ethics and Information Technology, 2014, 16, 63–75; Yorick Wilks (ed.), Close Engagements with Artificial Companions. Key social, psychological, ethical and design issues, Amsterdam-Philadelphia, 2010.
  8. 8 Indeed, the discussion is already developing not only on a theoretical level, but it has invested political institutions and ethics committees: see, for instance, British Medical Association, Boosting your brainpower: ethical aspects of cognitive enhancement, London, 2007; Danish Council of Ethics, Medical Enhancement. English Summary, Copenhagen, 2011, available at http://www.etiskraad.dk/~/media/Etisk-Raad/en/Publications/Medical-enhancement-2011.pdf?la=da; President’s Council on Bioethics, Beyond Therapy. Biotechnology and the Pursuit of Happiness, 2003; Theo Boer/Richard Fischer (eds.), Human Enhancement. Scientific, Ethical and Theological Aspects from a European Perspective, Church and Society Commission of the Conference of European Churches (CEC), Strasbourg, 2013; Nordmann (note 5); Christopher Coenen/Mirjam Schuijff/Martinijntje Smits/Pim Klaassen/Leonhard Hennen/Michael Rader/Gregor Wolbring, Human Enhancement Science and Technology Options Assessment on Human Enhancement, 2009, https://www.itas.kit.edu/downloads/etag_coua09a.pdf.
  9. 9 See, in general terms, Ronald Leenes/Erica Palmerini/Bert-Jaap Koops/Andrea Bertolini/Pericle Salvini/Federica Lucivero, Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues, in (2017) 9(1) Law, Innovation & Technology, 1–44.
  10. 10 Instead of a «hard law» approach, legal systems are steered towards adopting «prospective and homeostatic» instruments, capable of adapting themselves to a changing landscape, which cannot be managed through statutory law: for an overview see, for instance, Stefano Rodotà, Diritto, scienza, tecnologia: modelli e scelte di regolamentazione, in: Giovanni Comandé/Giulio Ponzanelli (eds.), Scienza e diritto nel prisma del diritto comparato, Torino, 2004, 397 ff.
  11. 11 The concept of «regulatory connection» and its three phases − «getting connected», «staying connected» and «dealing with disconnection» − are explained and thoroughly discussed in Roger Brownsword/Morag Goodwin, Law and the Technologies of the Twenty-First Century. Texts and Materials, Cambridge-New York, 2012, 63 ff., 371 ff.
  12. 12 Gianmarco Veruggio, The birth of roboethics, ICRA 2005, IEEE International Conference on Robotics and Automation, http://www.roboethics.org/icra2005/veruggio.pdf. See also Gianmarco Veruggio/Fiorella Operto, Roboethics: Social and Ethical Implications of Robotics, in: Bruno Siciliano/Oussama Khatib (eds.), Handbook of Robotics, 2016, Springer, 2135 ff.
  13. 13 David L. Bazelon, Coping With Technology Through the Legal Process, 62 Cornell Law Review 817 (1977), 826 f.
  14. 14 Ronald Sandler, GM Food and Nanotechnology, in: Bert Gordijn/Anthony Mark Cutter (eds.), In Pursuit of Nanoethics, Dordrecht, 2014, 46 f.
  15. 15 For instance, a radical critique of the internet and ICT technologies condemns precisely this tendency: Evgeny Morozov, To Save Everything, Click Here. Technology, Solutionism and the Urge to Fix Problems That Don’t Exist, New York, 2013.
  16. 16 Another risk that sociologists have addressed is reverse adaptation. This expression, taken from biological sciences, means that technology shapes human activities. Once the technology is introduced, social practices change in order to accommodate its own operative requirements and suit its limitations. Take for instance the case of robotic surgery; the new technology leads to changes in clinical and surgical practices needed in order to accommodate the new technique, from over prescription due to hospitals having to use the expensive machines they have bought; to surgeons becoming deskilled because of the use of robotic tools.
  17. 17 The debate about companion robots and the ethical and social meaning of care highlights this problem: cfr. Mark Coeckelbergh, Artificial agents, good care, and modernity, in Theoretical Medicine and Bioethics, 2015(36), 265–277; Amanda Sharkey/Noel Sharkey, Granny and the robots: Ethical issues in robot care for the elderly, in: Ethics and Information Technology, 2012, 14 (1), 27–40.
  18. 18 ISO 10218-1/2, Robots and robotic devices. Safety requirements for industrial robots (Part 1: Robots; Part 2: Robot Systems and Integration), Geneva, 2011; ISO 13482, Robots and robotic devices – Safety requirements for service robots – Personal care robot, Geneva, 2014.
  19. 19 Andrea Bertolini/Erica Palmerini, Regulating robotics: A challenge for Europe, in: Workshop for the JURI Committee of the European Parliament on «Upcoming issues of EU law», Compilation of in-depth analyses, Session II, Brussels, 2014, 94–129, http://www.europarl.europa.eu/studies, at 119 ff.
  20. 20 Ryan Calo, The Case for a Federal Robotics Commission, Center for Technology Innovation at Brookings, September 2014.
  21. 21 Michael E. Levine/Jennifer L. Forrence, Regulatory Capture, Public Interest, and the Public Agenda: Toward a Synthesis, Journal of Law, Economics, & Organization, vol. 6 (1990), 167–198. For a recent discussion of the problem, see Daniel Carpenter/David A. Moss (eds.), Preventing Capture: Special Interest Influence and How to Limit It, New York, 2014; David Freeman Engstrom, Corralling Capture, Harvard Journal of Law & Public Policy, vol. 36 (2013), 31 ff.
  22. Besides the seminal work of Lawrence Lessig, Code and Other Laws of Cyberspace, New York, 1999, and its second edition Code: version 2.0, New York, 2006; see Karen Yeung, Towards an Understanding of Regulation By Design, in: Roger Brownsword/Karen Yeung (eds.), Regulating Technologies, Oxford, 2008, 79 ff.; Bert-Jaap Koops, Criteria for Normative Technology: The Acceptability of «Code as law» in Light of Democratic and Constitutional Values, ibidem, 157 ff.; and the proceedings of the Symposium Technology: Transforming the Regulatory Endeavour (Berkeley, 3 March 2011), published in (2011) 26 Berkeley Technology Law Journal 1315.
  23. Ronald Leenes/Federica Lucivero, Laws on Robots, Laws by Robots, Laws in Robots: Regulating Robot Behaviour by Design, (2014) 6(2) Law, Innovation and Technology, 194–222.
  24. Michael Anderson/Susan Leigh Anderson (eds.), Machine Ethics, Cambridge, 2011; David J. Gunkel, The Machine Question. Critical Perspectives on AI, Robots, and Ethics, Cambridge (Mass.), 2012; Wendell Wallach/Colin Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford-New York, 2009; Bertram F. Malle/Matthias Scheutz/Joseph L. Austerweil, Networks of Social and Moral Norms in Human and Robot Agents, International Conference on Robot Ethics, Lisbon, 2015; Bertram F. Malle/Matthias Scheutz, Moral Competence in Social Robots, IEEE International Symposium on Ethics in Engineering, Science, and Technology, Chicago (IL), 2014.
  25. Jean-François Bonnefon/Azim Shariff/Iyad Rahwan, The social dilemma of autonomous vehicles, Science, vol. 352 (2016), 1573–1576. See also the report released by the Ethics Commission appointed by the German Federal Minister of Transport and Digital Infrastructure, Automated and Connected Driving, June 2017.
  26. Roger Brownsword, Lost in Translation: Legality, Regulatory Margins, and Technological Management, (2011) 26 Berkeley Technology Law Journal 1321–1365, at 1323 f.; Roger Brownsword, In the year 2061: from law to technological management, (2015) 7(1) Law, Innovation and Technology, 1–51.
  27. Bert-Jaap Koops/Ronald Leenes, Privacy regulation cannot be hardcoded. A critical comment on the «privacy by design» provision in data-protection law, (2014) 28(2) International Review of Law, Computers & Technology, 159–171.
  28. Paraphrasing the title of the book by Alan Hyde, Bodies of Law, Princeton, 1997.
  29. Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC, OJ L 157/24.
  30. Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety, OJ L 11/4.
  31. Council Directive 93/42/EEC of 14 June 1993 concerning medical devices, OJ L 169/1; Council Directive 90/385/EEC of 20 June 1990 on the approximation of the laws of the Member States relating to active implantable medical devices, OJ L 189/17. On 5 April 2017 a new regulation was adopted, which will apply after a transitional period of three years: Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, OJ L 117/1.
  32. Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ L 281/31; Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119/1.
  33. The research project RoboLaw – Regulating Emerging Robotic Technologies: Robotics Facing Law and Ethics was funded within the Seventh Framework Programme (G.A. n. 289092), 2012–2014, www.robolaw.eu.
  34. The results of this analysis are reported in Ronald Leenes et al., Deliverable D3.1 – Inventory of current state of robolaw.
  35. Erica Palmerini et al., Deliverable D6.2 – Guidelines on regulating robotics. A thorough synthesis of the results is also presented in Bertolini/Palmerini (note 19).
  36. Cf. the academic dispute between Frank H. Easterbrook, Cyberspace and the Law of the Horse, University of Chicago Legal Forum, 1996, 207, and Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harvard Law Review 501 (1999).
  37. Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 California Law Review 513 (2015), esp. 553 ff., discusses robotic exceptionalism.
  38. Here we take the principle of technological neutrality to mean that regulation should not discriminate against certain technologies, but also that it should not create favourable conditions exclusively for a given technology, unless there are serious reasons to do so and these have been carefully assessed. For a discussion of the principle of technology neutrality and its multiple meanings see Bert-Jaap Koops, Should ICT Regulation be Technology-Neutral?, in: Bert-Jaap Koops/Miriam Lips/Corien Prins/Maurice Schellekens (eds.), Starting Points for ICT Regulation. Deconstructing Prevalent Policy One-Liners, The Hague, 2006, 77–108; Chris Reed, Taking Sides on Technology Neutrality, (2007) 4(3) SCRIPT-ed, 263–284.
  39. Convention on Road Traffic, Vienna, 8 November 1968.
  40. See the amendment to Article 8 introduced by UNECE, which entered into force on 23 March 2016: Inland Transport Committee – Working Party on Road Traffic Safety, Report of the Sixty-eighth Session of the Working Party on Road Traffic Safety, 17 April 2014.
  41. For some examples, see Nikolaus Lang/Antonella Mei-Pochtler/Michael Rüßmann/Jan-Hinnerk Mohr, Revolution Versus Regulation. The Make-Or-Break Questions About Autonomous Vehicles, The Boston Consulting Group, 2015, 20.
  42. See, for example, Article 46 of the Italian Traffic Code, which defines «vehicles» as «all machines of any type that circulate on the roads driven by a human».
  43. On the contrary, a detailed analysis of both federal legislation and US state laws concluded that the circulation of self-driving cars is lawful in the USA, because they would fall within the notion of vehicle adopted by the relevant laws, which at the same time do not explicitly require that a driver be in constant control of the driving activity and constantly aware of traffic conditions: Bryant Walker Smith, Automated Vehicles Are Probably Legal in the United States, (2014) 1 Texas A&M Law Review 411. However, several statutes have been enacted in order to experimentally allow the circulation of automated vehicles in a few states (e.g. Nevada, California, Florida) and to regulate aspects such as safety requirements, insurance coverage, special plates, notification of crashes to the competent authorities, and restrictions on circulation.
  44. Again the concept of technological neutrality is at stake, although in a different meaning: that is, regulation should focus on the effects and not on the (technological) means, this being particularly relevant when the purpose of regulation is to protect fundamental rights and consumers’ rights: Koops (note 38).
  45. European Commission, Communication «A Digital Single Market Strategy for Europe», COM(2015) 192 final, Brussels, 6 May 2015; European Commission, Communication on the Mid-Term Review on the implementation of the Digital Single Market Strategy. A Connected Digital Single Market for All, COM(2017) 228 final, 10 May 2017.
  46. European Commission, Towards a data-driven economy, COM(2014) 442 final, Brussels, 2 July 2014; European Commission, Communication «Building a European Data Economy», COM(2017) 9 final, Brussels, 10 January 2017; European Commission, Commission Staff Working Document on the free flow of data and emerging issues of the European data economy accompanying the document Communication Building a European data economy, SWD(2017) 2 final, Brussels, 10 January 2017.
  47. About cloud robotics and the legal challenges for consumer law in the US setting, see Andrew A. Proia/Drew Simshaw/Kris Hauser, Consumer Cloud Robotics and the Fair Information Practice Principles: Recognizing the Challenges and Opportunities Ahead, (2015) 16 Minnesota Journal of Law, Science & Technology 145.
  48. Margot E. Kaminski, Robots in the Home: What Will We Have Agreed to?, (2015) 51 Idaho Law Review 661; Ryan Calo, Robots and Privacy, in: Lin/Abney/Bekey (note 7).
  49. «They represent the cold, technological embodiment of observation», according to Ryan Calo, The Drone as Privacy Catalyst, (2011) 64 Stanford Law Review Online, 29–33, at 33.
  50. Koops (note 38) explains precisely that «all privacy rules are included in a general privacy statute and not in various technology-based, sector-specific laws». However, the actual application of the European data protection regime to robots and the IoT could prove difficult. See Chris Holder/Vikram Khurana/Faye Harrison/Louisa Jacobs, Robotics and law: Key legal and regulatory implications of the robotics age (Part I of II), Computer Law & Security Review 32 (2016), 391 ff.
  51. See especially Article 25 of Regulation (EU) 2016/679.
  52. For a general overview, see Erica Palmerini, A legal perspective on bodily implants for therapy and enhancement, (2015) 29(2–3) International Review of Law, Computers & Technology, 226–244.
  53. Maurits Butter et al., Robotics for Healthcare. Final Report, 3 October 2008.
  54. Bert-Jaap Koops/Mark N. Gasson, Attacking Human Implants: A New Generation of Cybercrime, (2013) 5(2) Law, Innovation and Technology 248; Benjamin Wittes/Jane Chong, Our Cyborg Future. Law and Policy Implications, The Brookings Institution, September 2014, https://www.brookings.edu/research/our-cyborg-future-law-and-policy-implications/; Stephen S. Wu/Marc Goodman, Neural Devices Will Change Humankind: What Legal Issues Will Follow?, (2012) 8 The SciTech Lawyer 3.
  55. Riga Declaration on remotely piloted aircraft (drones) «Framing the future of aviation», Riga, 6 March 2015.
  56. The European Commission has set up an RPAS Steering Group, which in June 2013 released a Roadmap for the Integration of Civil Remotely Piloted Aircraft Systems into the European Aviation System, http://www.sesarju.eu/sites/default/files/European-RPAS-Roadmap_Annex-1_130620.pdf. See also European Commission, Communication «A new era for aviation: Opening the aviation market to the civil use of remotely piloted aircraft systems in a safe and sustainable manner», COM(2014) 207 final.
  57. An Unmanned Aircraft Systems Study Group (UASSG) has been set up, in charge of developing standards and recommendations.
  58. EASA, «Prototype» Commission Regulation on Unmanned Aircraft Operations, 22 August 2016. See also EASA, Concept of Operations for Drones: A Risk Based Approach to Regulation of Unmanned Aircraft, May 2015; Technical Opinion, Introduction of a Regulatory Framework for the Operation of Unmanned Aircraft, 18 December 2015.
  59. See Article 29 Data Protection Working Party, Opinion 01/2015 on Privacy and Data Protection Issues relating to the Utilization of Drones, 01673/15/EN, 2015; European Parliament, Directorate-General for Internal Policies, Privacy and Data Protection Implications of the Civil Use of Drones. In-depth Analysis for the LIBE Committee, 2015.
  60. Directive 2007/46/EC of the European Parliament and of the Council of 5 September 2007 establishing a framework for the approval of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles (Framework Directive).
  61. Declaration of Amsterdam of 14 and 15 April 2016 on cooperation in the field of connected and automated driving.
  62. With the European Commission Decision of 19 October 2015 setting up the High Level Group on the Competitiveness and Sustainable Growth of the Automotive Industry in the European Union (C(2015) 6943 final), a working group (GEAR 2030) was established, in charge of developing an action plan to steer the advancements in the field of connected and automated driving.
  63. European Commission, Communication «A European strategy on Cooperative Intelligent Transport Systems, a milestone towards cooperative, connected and automated mobility», COM(2016) 766 final, 30 November 2016, launched the Cooperative Intelligent Transport Systems (C-ITS) Strategy. The strategy builds on the work of the C-ITS Deployment Platform, conceived as a cooperative framework including national authorities, C-ITS stakeholders and the Commission, whose Final Report was released in January 2016: https://ec.europa.eu/transport/sites/transport/files/themes/its/doc/c-its-platform-final-report-january-2016.pdf.
  64. European Commission, Communication «Connectivity for a Competitive Digital Single Market – Towards a European Gigabit Society», COM(2016) 587 final, 14 September 2016.
  65. For a recent discussion of the problem, see Andrea Bertolini, Insurance and Risk Management for Robotic Devices: Identifying the Problems, in: Global Jurist, 2016, 291 ff.; Erica Palmerini/Andrea Bertolini, Liability and Risk Management in Robotics, in: R. Schulze/D. Staudenmayer (eds.), Digital Revolution: Challenges for Contract Law in Practice, Baden-Baden, 2016, 225–259.
  66. The EU Resolution stresses the importance of creating testing sites, where experiments with robots can be carried out in real-life scenarios (§ 23).
  67. The potential causes of failure would be spread across the entire process. For an example of the numerous potential wrongdoers, see Guido Noto La Diega, Clouds of Things: Data Protection and Consumer Law at the Intersection of Cloud Computing and the Internet of Things in the United Kingdom, (2016) 9(1) Journal of Law & Economic Regulation, 79 ff.; Claudio Arusio et al., The Law of Service Robots. Ricognizione dell'assetto normativo rilevante nell'ambito della robotica di servizio: stato dell'arte e prime raccomandazioni di policy in una prospettiva multidisciplinare, Nexa Center for Internet and Society, 2015, 14 f.
  68. Andreas Matthias, The responsibility gap: Ascribing responsibility for the actions of learning automata, (2004) 6 Ethics and Information Technology 175.
  69. Ryan Calo, Open Robotics, (2011) 70 Maryland Law Review 571.
  70. This proposal was first formulated in the context of the Coordination Action euRobotics (2010–2012), funded by the European Commission under FP7: Christophe Leroux et al., Suggestions for a green paper on legal issues in robotics. Contribution to Deliverable D3.2.1 on ELS issues in robotics, 31 December 2012. For cautious speculation on the topic see also Elettra Stradella/Pericle Salvini/Alberto Pirni/Calogero Maria Oddo/Erica Palmerini, Robot Companions as Case-Scenario for Assessing the «Subjectivity» of Autonomous Agents. Some Philosophical and Legal Remarks, in: Proceedings of the 1st Workshop on Rights and Duties of Autonomous Agents (RDA2), 2012, 29 f.
  71. Leroux et al. (note 70), 57.