1. Introduction
2. Initial Objections
Our rejoinder is two-fold: first, that the impetus towards a human rights regime against technology in no way implies a relegation of the existing system of human rights protections against the State; and second, that human rights laws are but one of an array of «Swiss cheese» obstacles against the occurrence of violations1. Our articulation of a human rights regime against technology is envisaged to reinforce the spirit of the movement and to ensure continued protection against new types of powerful threats. Thus, even if weak artificial intelligence merely strengthens the hand of the State in relation to the individual, the homeostatic equilibrium between State power and human rights would be upset, because the contemporary legal constellation is predicated upon the continued existence and efficacy of complementary restrictions on the exercise of power, and it is precisely these restrictions that the technology erodes.
3. Three Structural Obstacles
Even if the challenge posed by weak artificial intelligence to the protection of human rights is conceded, additional obstacles need to be cleared before the path towards developing human rights protections against technology can be explored. The first is the tendency to compartmentalise concerns according to the sphere of activity or the nature of the impugned right2. This tendency towards isolated consideration leads to a fragmented understanding of the true nature of the problem as a whole. The possibility that a larger structural shift is taking place is hidden because only an incomplete portrait has been painted, and this truncated understanding militates against the progressive development of human rights protections.
The second is that contemporary human rights methodologies are extremely effective at illuminating certain enumerated types of harm caused by the State and its agents to identified individual victims within jurisdictional boundaries. The efficiency of this mechanism, however, risks leaving unrecognised those harms that fall outside of this formula3. This essentially amounts to distinguishing between legitimate and illegitimate forms of harm, and it strips technologically induced harms of the stigma of human rights abuse. Not only does this render ineffective the avenues of remedy and redress against power wielded through technological means, but it also excludes an increasingly powerful agency from the purview of review and responsibility. As discussed below, this is an issue inherent in human rights law itself.
Finally, there is the monopolising tendency of human rights law, which crowds out other perspectives on the pertinent issues4. The consequence of this hegemonising pressure is that any claim to defend human values must be couched within its logic and language to be successful. This obstacle imposes significant constraints on the possibility of deviating from the dominant human rights model, despite the cardinal features of that model being themselves part of the problem in the first place.
4. The Interface between Contemporary Human Rights and Emerging AI
The problems inherent in the contemporary configuration of human rights law that curtail its effectiveness in relation to what might be termed technological power (as opposed to State power) are threefold. First, there is the content of existing human rights law: the substantive rights have largely evolved in relation to, and against, State power, as borne out in the experiential theory of human rights5. This suggests that technological wrongs are required before appropriate technologically-oriented rights can emerge in reaction. More problematic, however, is that the substance of existing rights is not aligned with the challenges posed by AI: the freedoms of speech and assembly, for example, are tailored to resist State repression but may not fully overlap with the concerns raised by emerging technologies. Second, human rights law is ossified within the State-orientated approach, which renders it oblivious to all other power dynamics that potentially impact human beings and challenge the very concept of the human individual. Not only does this overlook the first order challenges raised by AI directly, but it also gives rise to complex second order problems where, for example, corporations deploy artificial intelligence, thereby erecting two interlocking jurisdictional barriers to traditional human rights claims. Third, the legal focus upon isolated, direct causal relationships means that dispersed or distributed origins of harm and indirect or tangential effects cannot be recognised within this framework6. As the impact of AI is likely to arise cumulatively, its disparate effects will only be discernible through a broadened perspective, and technological harms will otherwise fail to be recognised.
5. The Need for a Human Rights Regime Oriented Against AI Power
6. Advantages of a Human Rights Regime Oriented Towards AI Challenges
We propose devising a convergent human rights regime directed specifically against technological power, manifested in this case by robotics and AI, which can be asserted where situations fall into the responsibility gap7. Building a complementary human rights regime holds forth the benefit of balancing responsibilities and calibrating capacities: unilateral thrusts of human responsibility behind robotic systems risk scapegoating human beings8, or exposing them as «moral crumple zones», where the human in a robotic system bears the responsibility for the failure of the broader system9.
Third, despite the inherent and inalienable nature of human rights propounded in international law (Universal Declaration of Human Rights), the contemporary human rights regime is essentially relational. In other words, the theory of human rights as integral to the individual is incongruous with practical human rights protections, which allow these rights to be asserted only against a narrowly construed set of actors, namely the State and its agents. In this context, a human rights regime devised against robotics will be more faithful to the intrinsic nature of human rights, orientated towards protecting the human being against certain types of infringement irrespective of the nature or character of the source of the violation.
7. Responsibility, Control and Relationality
Given the increasing risk of leaving the human being under the loop when developing robotics and artificial intelligence, the key concern is that of control. One of the main reasons people feel threatened when confronted with robotics and artificially intelligent creations is that they have only limited possibilities to control such technologies. From the perspective of individual users, the lack of control is due to various factors: limited understanding of how a given system is made and how it works; the design of systems, which often limits the possibility of external intervention; and the increasing degree of autonomy with which different systems and their functions are endowed. The role of the individual is mainly that of a consumer who can use different products and services. This includes adapting a system to his or her preferences to a varying degree, which may give the illusion, but not the actuality, of control over a system. When analysed from the perspective of system designers, a reason for concern is that, as systems become increasingly autonomous, intelligent and capable of learning, no one controls their conduct and the corresponding consequences. This is part of a broader socio-cultural context in which neither experts nor institutions are in a position to define and control the various risks that have emerged in contemporary societies. And yet, «society more than ever relies and insists on security and control»10. We therefore face a significant degree of complexity as well as contradictory trends: on the one hand, we assign an increasing degree of autonomy to robotic systems and AI because they seem to be more efficient than humans in certain respects, and more reliable, for example in warfare11; on the other hand, the lack of, or only limited, control is exactly the reason for concern. The control issue is also directly related to the question of responsibility, including in the context of robotics: «a person can be held responsible for something only if that person has control over it»12. Part of such thinking is the assumption that a person can be held accountable for a given artefact to the extent that he or she can foresee the related risks and consequences. Thus, predictability is of crucial importance for legal approaches to liability. At the same time, foreseeing outcomes and risks is increasingly difficult for autonomous and learning robots and AI13; hence the lack of control and «a responsibility gap»14.
As discussed above, there are different types of responsibility. The underlying assumption in this work is that responsibility is relational in nature. While relational responsibility has sometimes been addressed in terms of a relation between people and events or consequences15, other approaches, such as symbolic interactionism, emphasise the constructive nature of responsibility, where «the assessment of responsibility always includes a process of negotiation»16. In other words, the assignment of responsibility is a matter of negotiation among interactants17 rather than a matter of the mere application of rules and norms18. This is related to the fact that responsibility implies being responsible both «for something» and «to someone». The latter requires not only acknowledging the entity to which an act of responsibility is directed, but also addressing that entity as an actor actively engaged in the process of responsibility assignment. In other words, responsibility implies responding to others rather than merely reacting to a given person, event or consequence. From this perspective, responsibility requires a degree of interaction and reciprocity, where all actors have a sufficient degree of autonomy and capability to enter into interaction and the related process of negotiation of meanings (mutual engagement is also relevant for rights, where «[r]ights can be viewed as instituting and fostering relationships of reciprocity and interdependence»19). In line with such thinking, responsibility may be assumed (accepted) rather than only assigned (imposed), just as rights need to be respected rather than only prescribed. This leads us to another key component of the concept of responsibility, namely its conceptualisation as an ability. While responsibility may also be defined as a virtue20, we argue here that it is a process rather than an attribute. This is why one may learn to be responsible, rather than simply be responsible (a difference clearly visible between children and adults), just as one may learn to respond and interact socially with others, and to negotiate socially constructed meanings.
8. Conclusions
A different way of approaching this issue is by appealing to James Reason's «Swiss cheese» model21: the need to establish regulatory redundancy is clear if catastrophic regulatory failure is to be avoided. Our proposal to complement the responsible robotics project aims, embryonic and imperfect as it is, to duplicate the critical functions of the regulatory system in order to increase its reliability. We argue here that human rights should be developed in a way that protects humans against the outcomes of robotics and artificial intelligence by strengthening the very notion of the human being as well as of human value. How to achieve such a goal remains an open question, and the aim of this paper is to begin that discussion.
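The redundancy logic of Reason's model can be illustrated with a minimal sketch in Python. It is purely illustrative: it assumes that each regulatory «slice» (human rights law, tort liability, technical standards, and so on) fails independently, and the per-layer failure probabilities are hypothetical numbers chosen for the example, not empirical estimates.

    from functools import reduce

    # Reason's «Swiss cheese» model: a harm reaches the individual only if it
    # passes through a hole in every regulatory slice. The hole probabilities
    # below are hypothetical, for illustration only.

    def breakthrough_probability(hole_probabilities):
        # Probability that a hazard passes every layer, assuming the layers
        # fail independently: the product of the per-layer probabilities.
        return reduce(lambda acc, p: acc * p, hole_probabilities, 1.0)

    print(breakthrough_probability([0.10]))              # one layer: 0.1
    print(breakthrough_probability([0.10, 0.10, 0.10]))  # three layers: ~0.001

The independence assumption is the model's idealisation: where the layers share a common blind spot, such as the State-centrism discussed above, their holes align and the benefit of redundancy collapses. That is precisely why a complementary regime oriented against technological power adds a genuinely independent slice rather than another copy of the same one.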
9. Acknowledgements
Hin-Yan Liu, Associate Professor, University of Copenhagen, Centre for International Law, Conflict and Crisis, Faculty of Law, Karen Blixens Plads 16, 2300 Copenhagen, DK; hin-yan.liu@jur.ku.dk.
Karolina Zawieska, Researcher, Industrial Research Institute for Automation and Measurements PIAP, Fundamental Research Team, Al. Jerozolimskie 202, 02-486 Warsaw, PL; kzawieska@piap.pl.
- 1 James Reason, Human error: models and management, British Medical Journal, 2000, 320(7237), pp. 768–770.
- 2 David Kennedy, The Dark Sides of Virtue: Reassessing International Humanitarianism, Princeton University Press, 2005.
- 3 Scott Veitch, Law and Irresponsibility: On the Legitimation of Human Suffering, Routledge Cavendish, Oxford, 2007.
- 4 Kennedy 2005 (note 2).
- 5 Alan Dershowitz, Rights from Wrongs: A Secular Theory of the Origins of Rights, Basic Books, 2009.
- 6 Tracy Isaacs / Richard Vernon, Accountability for Collective Wrongdoing, Cambridge University Press, 2011; André Nollkaemper / Harmen van der Wilt (eds.), System Criminality in International Law, Cambridge University Press, 2009.
- 7 Andreas Matthias, The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata, Ethics and Information Technology, 2004, 6(3), pp. 175–183.
- 8 Hin-Yan Liu, Refining Responsibility: Differentiating Two Types of Responsibility Issues Raised by Autonomous Weapons Systems. In: Nehal Bhuta / Susanne Beck / Robin Geiß / Hin-Yan Liu / Claus Kreß (eds.), Autonomous Weapons Systems: Law, Ethics, Policy, Cambridge University Press, 2016, pp. 325–344.
- 9 Madeleine Clare Elish, Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction, WeRobot 2016 Working Paper, University of Miami, 2016.
- 10 Ulrich Beck, Living in the World Risk Society, Hobhouse Memorial Public Lecture, London School of Economics, 15 February 2006, Economy and Society, 2006, 35(3), pp. 329–345, p. 335.
- 11 Ronald C. Arkin, Governing Lethal Behavior in Autonomous Robots, CRC Press, 2009.
- 12 Dante Marino / Guglielmo Tamburrini, Learning robots and human responsibility, International Review of Information Ethics, 2006, 6(12), pp. 46–51, p. 49.
- 13 Peter M. Asaro, The Liability Problem for Autonomous Artificial Agents, AAAI, 2016.
- 14 Marino / Tamburrini 2006 (note 12).
- 15 Ronald Dworkin, Justice for Hedgehogs, Harvard University Press, 2011, p. 102.
- 16 Thomas J. Scheff, Being Mentally Ill: A Sociological Theory, Third Edition, Transaction Publishers, 2009, p. 116.
- 17 Marsha D. Walton, Negotiation of responsibility: Judgments of blameworthiness in a natural setting, Developmental Psychology, 1985, 21(4), pp. 725–736.
- 18 Howard S. Becker / Michal M. McCall (eds.), Symbolic Interaction and Cultural Studies, University of Chicago Press, 2009, p. 133.
- 19 Jill Marshall, Human Rights Law and Personal Identity, Routledge, 2014, p. 79.
- 20 Dworkin 2011 (note 15), p. 102.
- 21 Reason 2000 (note 1).