1. Introduction
2. Primary and Secondary Rules of Law
2.1. The content of the primary rules and their goals
2.2. Liability and burdens of proof
2.3. Rules of change
A way to tackle the social dilemmas of AVs can be illustrated with the Federal Automated Vehicles Policy adopted by the U.S. Department of Transportation in 2016. On the one hand, we may appreciate the overall legislative goal of the policy, that is, the principle of «implementation neutrality», which does not intend to favour any one of the possible applications of AVs. This may bring about a beneficial competition among scientists, businesses, and legal systems, in line with Justice Brandeis’s doctrine of experimental federalism, as espoused in New State Ice Co. v. Liebmann.17 On the other hand, the Federal Automated Vehicles Policy casts light on a basic difference among the rules of the legal system, that is, between the primary rules that aim to govern social and individual behaviour, and the secondary rules of change, namely, the rules of the law that create, modify, or suppress the primary rules of the system.18 The example of the Federal Automated Vehicles Policy thus also sheds light on this facet of the legal fabric, namely, how the secondary rules of the system help us understand what kind of primary rules we may finally opt for. This approach seems especially wise when coping with certain threats of current developments in AI and robotics. Should AVs run over pedestrians, or sacrifice «themselves and their passenger to save the pedestrians»?19
3. Political Decisions
3.1. Competing standards
3.2. A matter of tolerance
The result is that tolerance, rather than justice, provides the fundamental virtue of social institutions that instructs us how to design the rules that regulate human behaviour through the design of AVs. An open attitude towards people whose opinions may differ from one’s own is, after all, the virtue on which any reasonable compromise ultimately relies. This approach is especially fruitful when dealing with the risks of unpredictability and loss of control that have increasingly been associated with AI and robotics research in recent times.32 The wave of extremely detailed regulations and prohibitions on the use of drones by the Italian Civil Aviation Authority, i.e. «ENAC», illustrates this deadlock.33 The paradox stressed in the field of web security decades ago could be extended, with a pinch of salt, to the Italian regulation on the use of UAVs, so that the only legal drone would be «one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards – and even then I have my doubts.»34
4. Re-engineering of the Law
4.1. Techno-regulation
4.2. The aims of design
What such examples of techno-regulation suggest is that we should pay attention to the different aims that working out the shape of objects, the form of products and processes, or the structure of spaces and places can have in the case of AVs.38 The modalities of design may in fact aim to encourage a change in social behaviour, to decrease the impact of harm-generating conduct, or to prevent such harm-generating conduct from occurring at all. As an illustration of the first kind of design mechanism, there are different ways in which the design of AVs can encourage humans to change their behaviour, e.g. through incentives based on trust via reputation systems, or on trade (e.g. services in return).39 As an example of the second modality of design, consider the introduction of airbags to reduce the impact of harm-generating conduct, and current efforts on security measures for IT systems and user-friendly interfaces that do not impinge on individual autonomy, any more than traditional airbags affect how people drive.40 Finally, as an instance of total prevention, consider current projects on cars able to stop or limit their own speed according to the driver’s health conditions and the inputs of the surrounding environment. Although the purpose is to guard people’s wellbeing against all harm, this is the most critical aim of techno-regulation, since the intent is to prevent any allegedly harm-generating behaviour from occurring through the use of self-enforcing technologies.
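To make the logic of this third, self-enforcing modality concrete, consider a minimal sketch in Python that caps a driver’s requested speed as a function of alertness and visibility. All names, thresholds, and scaling rules here are hypothetical illustrations of the idea of total prevention, not an actual automotive system:

```python
from dataclasses import dataclass


@dataclass
class DriverState:
    alertness: float  # 0.0 (incapacitated) to 1.0 (fully alert)


@dataclass
class Environment:
    posted_limit_kmh: float
    visibility: float  # 0.0 (none) to 1.0 (clear)


def enforced_speed_kmh(requested_kmh: float,
                       driver: DriverState,
                       env: Environment) -> float:
    """Return the speed the vehicle will actually allow.

    The rule is self-enforcing: the driver's request is capped rather
    than merely flagged, so speeding cannot occur in the first place.
    """
    # A driver below the (hypothetical) alertness floor triggers a full stop.
    if driver.alertness < 0.2:
        return 0.0
    # Scale the posted limit down with poor visibility or low alertness.
    cap = env.posted_limit_kmh * min(driver.alertness, env.visibility)
    return min(requested_kmh, cap)


if __name__ == "__main__":
    driver = DriverState(alertness=0.5)
    env = Environment(posted_limit_kmh=100.0, visibility=0.8)
    # The driver requests 120 km/h; the system allows at most 50.
    print(enforced_speed_kmh(120.0, driver, env))  # -> 50.0
```

The design choice worth noticing is that the cap operates inside the control loop itself: the rule is not announced and then enforced ex post, but built into what the vehicle can physically do.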
4.3. Changing the nature of cities and their environment
5. Conclusion
The second type of open problem for legal AVs regards matters of design, and how the aim of preventing harm-generating behaviour from occurring through the use of self-enforcing technologies may affect some tenets of the rule of law. For some scholars, what is crucially imperiled is «the public understanding of law with its application eliminating a useful interface between the law’s terms and its application.»49 Others refer to «a rich body of scholarship concerning the theory and practice of «traditional» rule-based regulation [that] bears witness to the impossibility of designing regulatory standards in the form of legal rules that will hit their target with perfect accuracy.»50 Still others affirm that «the idea of encoding legal norms at the start of information processing systems is at odds with the dynamic and fluid nature of many legal norms, which need a breathing space that is typically not something that can be embedded in software.»51 The problem is clear and real:52 the more AV technology advances, the more its legal issues will concern techno-regulation and design. Since the devil is in the detail, we can predict that an increasing number of such cases will concern experts in techno-regulation and legal design in the near future. Let us thus address this new generation of problems on the design of AVs and their corresponding informational environment in accordance with the second lesson learned in this paper. We should be ready for several agreements and «reasonable compromises» in the foreseeable future regarding standards for IoT, AI systems, and robotics, which will require an open attitude towards people whose opinions may differ from one’s own. Most threats and risks brought on by the use of self-enforcing technologies, e.g. the modelling of social conduct and paternalism, can be conceived as illustrations of a typically intolerant approach that hinders responsible scientific research and innovation.
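By way of illustration only, consider how rigid an encoded norm becomes once it is self-enforcing. The sketch below hardcodes a hypothetical no-fly rule of the kind found in drone regulations; the rule, radius, and coordinates are invented for the example. A human authority could grant a waiver, but the encoded rule admits no such breathing space:

```python
import math

# Hypothetical hardcoded norm: no flight within 5 km of an airport.
NO_FLY_RADIUS_KM = 5.0


def distance_km(a: tuple, b: tuple) -> float:
    """Rough planar distance between two (lat, lon) points, in km."""
    dlat = (a[0] - b[0]) * 111.0  # ~111 km per degree of latitude
    dlon = (a[1] - b[1]) * 111.0 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)


def takeoff_permitted(drone: tuple, airport: tuple) -> bool:
    # Self-enforcing: if this returns False, the firmware simply refuses
    # to start the motors. No official can weigh an exception (an
    # emergency, an authorised test): the norm is frozen into code
    # at design time.
    return distance_km(drone, airport) >= NO_FLY_RADIUS_KM


if __name__ == "__main__":
    airport = (45.2008, 7.6497)  # roughly Turin-Caselle airport
    drone = (45.2100, 7.6500)    # about one kilometre away
    print(takeoff_permitted(drone, airport))  # -> False
```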
Ugo Pagallo (University of Turin), Professor of Jurisprudence, Law School, University of Torino, Lungo Dora Siena 100 A, 10153 Torino, Italy. ugo.pagallo@unito.it.
- 1 The notion of autonomy has of course sparked a hot debate that, however, we can leave aside in this context. Suffice it to mention work by Michael J. Wooldridge/Nicholas R. Jennings, Agent Theories, Architectures, and Languages: A Survey. In: M. Wooldridge and N. R. Jennings (eds.), Intelligent Agents, pp. 1–22, Springer, Berlin 1995; Stan Franklin/Art Graesser, Is it an Agent, or just a Program? A Taxonomy for Autonomous Agents. In: J. P. Müller, M. J. Wooldridge and N. R. Jennings (eds.), Intelligent Agents III, Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages, pp. 21–35, Springer, Berlin 1997; Colin Allen/Gary Varner/Jason Zinser, Prolegomena to Any Future Artificial Moral Agent, Journal of Experimental and Theoretical Artificial Intelligence, 2000, 12: 251–261; and, Luciano Floridi/Jeff Sanders, On the Morality of Artificial Agents, Minds and Machines, 2004, 14(3): 349–379.
- 2 See, e.g., Jason R. Wilson/Matthias Scheutz, A Model of Empathy to Shape Trolley Problem Moral Judgements, The Sixth International Conference on Affective Computing and Intelligent Interaction, ACII 2015.
- 3 Jean-François Bonnefon/Azim Shariff/Iyad Rahwan, The Social Dilemma of Autonomous Vehicles, Science, June 2016, 352(6293): 1573–1576.
- 4 Selmer Bringsjord/Joshua Taylor, The Divine-Command Approach to Robot Ethics. In P. Lin, K. Abney and G. A. Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics, pp. 85–108, MIT Press, Cambridge, Mass. 2014.
- 5 See respectively John Horty, Agency and Deontic Logic, Oxford University Press, New York 2001; and Yuko Murakami, Utilitarian Deontic Logic. In: R. Schmidt et al. (eds.), Proceedings of the Fifth International Conference on Advances in Modal Logic, pp. 288–302. AiML, Manchester UK 2004.
- 6 As developed by Michael Anderson and Susan Leigh Anderson, Ethical Healthcare Agents. In M. Sordo et al. (eds.), Advanced Computational Intelligence Paradigms in Healthcare, pp. 233–257, Springer, Berlin 2008.
- 7 Roderick Chisholm, Practical Reason and the Logic of Requirement. In: S. Koerner (ed.), Practical Reason, pp. 1–17, Basil Blackwell, Oxford 1974; Philip Quinn, Divine Commands and Moral Requirements, Oxford University Press, New York 1978.
- 8 Clarence Irving Lewis/Cooper Harold Langford, Symbolic Logic, Dover, New York 1959.
- 9 The temporal reference of the text is given by legal work on the creation of the first special zone for robotics between 2002 and 2003 in Japan. We return to this approach to the normative challenges of AVs below in Section 2.
- 10 An introduction in Azim Eskandarian (ed.), Handbook of Intelligent Vehicles, Springer, London 2012.
- 11 See Tom M. Gasser, Legal Issues of Driver Assistance Systems and Autonomous Driving. In Handbook, see above previous note, at 1520 ff.; Gary E. Marchant/Rachel A. Lindor, The Coming Collision between Autonomous Vehicles and the Liability System, Santa Clara Law Review, 2012, 52, at 1321 ff.; Ugo Pagallo, Guns, Ships, and Chauffeurs: The Civilian Use of UV Technology and its Impact on Legal Systems. Journal of Law, Information and Science, 2012, (21)2: 224–233; Jack Boeglin, The Costs of Self-driving Cars, Yale Journal of Law and Technology, 2015, 17, at 171 ff.; and, Melinda Florina Lohmann, Liability Issues Concerning Self-Driving Vehicles, European Journal of Risk Regulation, 2016, 7(2): 335–340.
- 12 See Ronald Leenes/Federica Lucivero, Laws on Robots, Laws by Robots, Laws in Robots: Regulating Robot Behaviour by Design, Law, Innovation and Technology, 2014, 6(2): 193–220.
- 13 There has been an intensive debate on this issue, especially in the field of business law. Suffice it to mention Jean-François Lerouge, The Use of Electronic Agents Questioned under Contractual Law: Suggested Solutions on a European and American Level, The John Marshall Journal of Computer and Information Law, 2000, 18: 403; Emily Mary Weitzenboeck, Electronic Agents and the Formation of Contracts, International Journal of Law and Information Technology, 2001, 9(3): 204–234; Anthony J. Bellia, Contracting with Electronic Agents, Emory Law Journal, 2001, 50: 1047–1092; and Giovanni Sartor, Cognitive Automata and the Law: Electronic Contracting and the Intentionality of Software Agents, Artificial Intelligence and Law, 2009, 17(4): 253–290. As to the analysis of the levels of risk in the field of AVs see the U.S. Department of Transportation’s Federal Automated Vehicles Policy, September 2016, at p. 21 ff.
- 14 See Bert-Jaap Koops, Should ICT Regulation Be Technology-neutral? In: B-J. Koops et al. (eds.), Starting Points for ICT Regulation: Deconstructing Prevalent Policy One-liners, pp. 77–108, The Hague, TMC Asser 2006; and, Chris Reed, Making Laws for Cyberspace, Oxford University Press, Oxford 2012.
- 15 For the sake of conciseness, the analysis takes into account the primary rules of the US legal system. A comparison with further legal systems and their rules on liability and burdens of proof in Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts, Springer, Dordrecht 2013.
- 16 On traditional traffic planning, see the revolutionary approach of Hans Monderman, as presented in Tom Vanderbilt, Traffic: Why We Drive the Way We Do, Knopf, New York 2008.
- 17 The case is in 285 U.S. 262 (1932).
- 18 For the distinction between primary and secondary legal rules see Herbert L. A. Hart, The Concept of Law, Clarendon, Oxford 1961. In this context, we can leave aside such secondary rules, as the rules of recognition and of adjudication, just to focus on the rules of change.
- 19 See above n. 3, on the social dilemma of AVs.
- 20 Further details in Ugo Pagallo, Robots in the Cloud with Privacy: A New Threat to Data Protection? Computer Law & Security Review, 2013, 29(5): 501–508.
- 21 Yueh-Hsuan Weng/Yusuke Sugahara/Kenji Hashimoto/Atsuo Takanishi, Intersection of «Tokku» Special Zone, Robots, and the Law: A Case Study on Legal Impacts to Humanoid Robots, International Journal of Social Robotics, 2015, 7(5): 841–857 (p. 850 in text).
- 22 See the previous note.
- 23 See above n. 13.
- 24 See Ugo Pagallo/Massimo Durante, The Philosophy of Law in an Information Society. In: L. Floridi (ed.), The Routledge Handbook of Philosophy of Information, pp. 396–407, Routledge, Oxon & New York 2016.
- 25 This is of course the thesis of Dworkin and his followers, e.g. Ronald Dworkin, A Matter of Principle, Oxford University Press, Oxford 1985. In a nutshell, according to this stance, a morally coherent narrative should grasp the law in such a way that, given the nature of the legal question and the story and background of the issue, scholars can attain the answer that best justifies or achieves the integrity of the law.
- 26 Privacy issues represent one of the thorniest legal challenges of AV technology. See Ugo Pagallo, Teaching «Consumer Robots» Respect for Informational Privacy: A Legal Stance on HRI. In: D. Coleman (ed.), Human-Robot Interactions. Principles, Technologies and Challenges, pp. 35–55, Nova, New York 2015.
- 27 See Lawrence Busch, Standards: Recipes for Reality, MIT Press, Cambridge, Mass. 2011.
- 28 See Massimo Durante, What Is the Model of Trust for Multi-agent Systems? Whether or Not E-Trust Applies to Autonomous Agents, Knowledge, Technology & Policy, 2010, 23(3-4): 347–366.
- 29 The field of informational warfare is another good example: see Ugo Pagallo, Cyber Force and the Role of Sovereign States in Informational Warfare, Philosophy & Technology, 2015, 28(3): 407–425.
- 30 E.g. Brendan Gogarty/Meredith Hagger, The Laws of Man over Vehicle Unmanned: the Legal Response to Robotic Revolution on Sea, Land and Air, Journal of Law, Information and Science, 2008, 19: 73–145.
- 31 See for example the remarks of a study sponsored by the EU Commission, i.e. RoboLaw, Guidelines on Regulating Robotics. EU Project on Regulating Emerging Robotic Technologies in Europe: Robotics facing Law and Ethics, September 22, 2014.
- 32 Suffice it to mention Harold Abelson et al., Keys Under Doormats: Mandating Insecurity by Requiring Government Access to All Data and Communications. MIT Computer Science and AI Laboratory Technical Report. 6 July 2015; The Future of Life Institute, An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence, 2015, at http://futureoflife.org/ai-open-letter/ (last visit: 18 October 2016); IEEE Standards Association, The Global Initiative for Ethical Considerations in the Design of Autonomous Systems, forthcoming (2017).
- 33 See Ugo Pagallo, Even Angels Need the Rules: On AI, Roboethics, and the Law. In: G. A. Kaminka et al. (eds.), ECAI Proceedings, pp. 209–215, IOS Press, Amsterdam 2016.
- 34 See the introduction of Simson Garfinkel/Gene Spafford, Web Security and Commerce. O’Reilly, Sebastopol, CA 1997.
- 35 The aim of the meta-rules of «procedural regularity» is to determine whether a decision process is fair, adequate, or correct. See Joshua A. Kroll/Joanna Huey/Solon Barocas/Edward W. Felten/Joel R. Reidenberg/David G. Robinson/Harlan Yu, Accountable Algorithms, University of Pennsylvania Law Review, 165 (forthcoming 2017).
- 36 The reference is Luciano Floridi, The Ethics of Information, Oxford University Press, Oxford 2013.
- 37 See also Luciano Floridi (ed.), The Onlife Manifesto, Springer, Dordrecht 2015.
- 38 See Ugo Pagallo, Designing Data Protection Safeguards Ethically, Information, 2011, 2(2): 247–265.
- 39 In addition, we can think about security systems that slow down the speed of users that do not help, say, the process of informational file-sharing. See Andrea Glorioso/Ugo Pagallo/Giancarlo Ruffo, The Social Impact of P2P Systems. In: X. Shen, H. Yu, J. Buford and M. Akon (eds.), Handbook of Peer-to-Peer Networking, pp. 47–70, Springer, Heidelberg 2010.
- 40 See Ugo Pagallo, Online Security and the Protection of Civil Rights: A Legal Overview, Philosophy & Technology, 2013, 26(4): 381–395.
- 41 We return to this problem of control below, in the conclusion of this paper: the opening book of this debate is Lawrence Lessig’s Code and Other Laws of Cyberspace, Basic Books, New York 1999.
- 42 See above note 24.
- 43 The reference is of course a classic text like Hans Kelsen, Pure Theory of Law, trans. B. L. Paulson and S. L. Paulson. Clarendon, Oxford 2002 (first ed. 1934).
- 44 See work mentioned above in notes 11, 13, and 29.
- 45 Steve Jobs, Thoughts on Music (2007), at http://www.apple.com/hotnews/thoughtsonmusic/ (last visit: 24 September 2016), p. 3.
- 46 See work mentioned above in notes 24 and 28.
- 47 See e.g. Bruce Bimber, The Politics of Expertise in Congress: The Rise and Fall of the Office of Technology Assessment, SUNY Press, New York 1996. On the technological impact assessments that will be necessary, see David Dunkerley/Peter Glasner (eds.), Building Bridges between Science, Society and Policy: Technology Assessment – Methods and Impacts, Springer, Dordrecht 2004. In more general terms, Mireille Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology, Elgar, Cheltenham 2015.
- 48 European Parliament’s Committee on Legal Affairs, doc. 2015/2103(INL), at n. 14.
- 49 Jonathan Zittrain, Perfect Enforcement on Tomorrow’s Internet. In: R. Brownsword and K. Yeung (eds.), Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes, pp. 125–156, Hart, London 2007.
- 50 Karen Yeung, Towards an Understanding of Regulation by Design. In: R. Brownsword and K. Yeung (eds.), Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes, p. 167, Hart, London 2007.
- 51 In Bert-Jaap Koops/Ronald Leenes, Privacy Regulation Cannot Be Hardcoded: A Critical Comment on the «Privacy by Design» Provision in Data Protection Law, International Review of Law, Computers & Technology, 2014, 28: 159–171.
- 52 This has been a leit motiv of my recent research. In addition to work mentioned above in notes 20 and 26, see Ugo Pagallo, On the Principle of Privacy by Design and its Limits: Technology, Ethics, and the Rule of Law. In: S. Gutwirth, R. Leenes, P. De Hert and Yves Poullet (eds.), European Data Protection: In Good Health?, pp. 331–346. Springer, Dordrecht 2012; Ugo Pagallo, Cracking down on Autonomy: Three Challenges to Design in IT Law, Ethics and Information Technology, 2012, 14(4): 319–328; up to Ugo Pagallo, The Impact of Domestic Robots on Privacy and Data Protection, and the Troubles with Legal Regulation by Design. In: S. Gutwirth, R. Leenes e P. de Hert (eds.), Data Protection on the Move. Current Developments in ICT and Privacy/Data Protection, pp. 387–410, Springer, Dordrecht 2016.