Jusletter IT

Robo Sapiens and Data Protection

Legal Consequences of Human-Robot Integration, from Human-Robot Interaction to Robot-Human Integration

  • Author: Robert van den Hoven van Genderen
  • Category: Articles
  • Region: Netherlands
  • Field of law: AI & Law, Robotics, Data Protection
  • Citation: Robert van den Hoven van Genderen, Robo Sapiens and Data Protection, in: Jusletter IT 24 May 2018
AI and robotics will inescapably be part of our society. These systems and artificial entities will process unimaginable amounts of our personal data. The GDPR requirements for transparency and explicit consent will vanish in the black hole of self-learning algorithms. If AI is integrated in human functioning on a physical as well as an intellectual level, it will be very difficult to enact credible legal rules for the protection of personal data. It may therefore be advisable to create separate rules for human(-like) anthropomorphic entities and human enhancement. Next to data protection by design and a sui generis law on legal personhood, we had better create non-discrimination rules to avoid a separation between enhanced and non-enhanced humans!

Table of contents

  • 1. Introduction
  • 2. Human Robot Connection
  • 3. Personal Data Protection and Human AI Integration
  • 3.1. The Applicability of the GDPR
  • 3.2. Explicit Consent
  • 4. Internet Integrated Data Processing: Co-options
  • 4.1. Co-operation
  • 4.2. Co-existence between Humans and AI
  • 4.3. Wearables, Toys and Virtual Reality
  • 4.4. Co-integration
  • 5. Conclusion

«Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect.»

Arthur C. Clarke1

«Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‹intelligence explosion›, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.»

Irving John Good2

1.

Introduction ^

[1]

Is there a clear separation between humans and machines if the machines are ultraintelligent? Is there a fundamental difference between artificial and natural persons? From a physical as well as a legal perspective there is a clear difference. Artificial persons are constructed by humans, on the basis of a legal framework, to perform tasks for society. Artificial legal persons have to be represented by natural persons to perform their function for economic, social, governmental or cultural purposes, although the legal effects revert to the artificial legal person. In terms of earnings, profits, losses or even convictions under criminal law they have an equal status. They also recognize each other in a legal sense: there is trust and acceptance by both parties. The numerous legal activities and transactions between them that take place every day involve equally numerous transfers of data, including personal data. These transfers are regulated by a (European) framework of personal data regulations developed for the protection of natural persons.

[2]

When we think of artificial persons in a physical sense, we think of robots with an anthropomorphic appearance: robots that look, react and move like humans. Although such robots have not yet reached the same level of consciousness as human beings, the moment of singularity, when the artificially intelligent robot moves beyond human intelligence, does not seem to be far away.3

[3]

If AI is integrated in our lives, what will be the result for natural and legal persons? Can we then still easily discern the differences between legal, natural and artificial persons? Narrow AI will not create problems: narrow AI systems do not possess intelligence of their own with which to think, learn or solve problems, but rely on algorithms designed to perform a limited, defined set of tasks. True AI, in the sense in which general AI is characterized, is designed to solve any problem by using knowledge and skills to acquire skills in new domains, learning directly from user interaction and new situations.

[4]

It is inescapable for human thinking to translate AI into human concepts. We therefore declare true AI to be comparable with the human mind on all levels. The practical application of this concept is the passing of the Turing test: the indistinguishability of AI (computer) and human intelligence.4

[5]

An important characteristic of AI technology is the rapid dissemination of data. The development of quantum computing techniques will further enhance the speed of algorithms. Already today, algorithms determine which messages we do or do not see. This may bring about an unconscious shift in the balance of power between technology and man, because the technology makes choices for people without their being aware of it. Without stepping onto the thin ice of theories of free will, it is clear that smart information technologies are already disruptive to our independence in decision-making on several levels. Gradually, AI will affect our daily life more and more. Decisions will be a combination of AI-generated and human-generated processes. Will it still be useful to distinguish between personal (human) data processing and the (unknown) combinations of AI processing of those and other data in order to protect our privacy? Will it still be relevant which regulations apply? This paper examines the present and future robustness of a regulatory framework such as the European General Data Protection Regulation.

2.

Human Robot Connection ^

[6]

Industrial robots are widely used in all kinds of production processes, in the automotive industry and in robot-aided design, but also in the medical environment, such as the «da Vinci» surgical system.5 All these robots and semi-AI-aided systems are controlled by humans and used as instruments. The actions and the processing of data are the responsibility of the user.

[7]

For now and the near future, the interaction between AI appliances, robots, artificial legal persons and natural persons will largely take place on a cooperative level, whereby, in a legal sense, the robot has a subordinate or instrumental role.

[8]

The beginning integration of robotic and AI appliances will largely consist of supportive applications: devices such as ever smarter smartphones connected to our body to analyse «simple» physical processes, such as activity tracking, heart rate and blood pressure measured by Fitbit-like devices, but also more integrated artificial limbs, organs or exoskeleton appliances.

[9]

We further integrate artificial, more or less intelligent appliances into our lives, using all kinds of gadgets to measure our health, organize our agendas, follow our whereabouts or control our nutrition; also for more medical purposes, such as diabetic insulin pumps, appliances to control blood sanitizing, kidney control and of course pacemakers. Recently, sensitive «bionic eyes» have come on the market that combine a camera with a sophisticated retinal implant. The camera uses a small microchip to process what it sees, and then wirelessly sends that data to the retinal implant, which has 60 electrodes in it that provide information to the optic nerve – which is what discerns light, movement, and shapes.6 The developers are already working on the next step: «We could tomorrow allow our patients to see outside the visible spectrum by using a different input device than a visible spectrum camera.»7 The development of brain-computer interfaces is already in full swing; these so-called brain gates are being improved by several commercial and non-commercial institutions, such as Neuralink, initiated by Elon Musk.

[10]

All these connections and interconnections between the brain and AI or the internet, as demonstrated at the University of the Witwatersrand in South Africa, present a picture that raises several relevant ethical issues, opportunities, and (legal) risks. Where is all the sensitive data being processed, and what person is generating this data? Is it a natural person or some AI device? Who will be the owner of the intellectual property if works are generated by AI enhancement of the brain: the developer of the device or the natural person in whom this device is integrated?

[11]

One of the most intriguing legal questions is: «What happens with all the personal and sensitive data that is being processed by all these AI applications and future robots that are working for natural and legal persons in present and, certainly, future society?» Will the existing set of laws and regulations be sufficient to protect our personal data? Privacy, data protection and physical integrity will be structurally influenced by the pervasive integration of artificial intelligence (AI) and robotics. Can we find ways to control this development, or do we just have to live with the disintegration of privacy as we know it? Will the new rules on data protection in the GDPR suffice to protect our personal data, or are these processes impossible to regulate in the AI era? How vulnerable is AI where the processing of our personal data is concerned? Who or what will have access to these data? Do we give up our privacy if we increasingly share our personal information with other parties, be they artificial or (semi-)natural? As stated by Horst Eidenmueller, robot regulation must be robot- and context-specific – should this also apply to the processing of personal data, even where there is human-robot integration?

3.

Personal Data Protection and Human AI Integration ^

[12]

The protection of privacy and personal data is regulated in an extensive set of international treaties, such as Article 8 of the European Convention on Human Rights (ECHR), the fundamental rights in the Charter of Fundamental Rights of the European Union (European Charter) and the General Data Protection Regulation (GDPR), which will be the standard for the protection of our personal data in Europe. The question is whether those instruments are sufficiently equipped to protect personal data – our data in our personal bubble – considering the technological developments towards integrating these data in all kinds of processing in everyday life. The questions surrounding explicit and non-explicit permission are already a problem when personal data is used for broadly defined purposes. Also relevant are the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108, 1981), Recommendation No. R (87) 15 Regulating the Use of Personal Data in the Police Sector,8 the Law Enforcement Directive (EU) 2016/680 and the E-Privacy Directive 2002/58/EC.9 All kinds of personal data will be processed by, or in combination with, drones, robo-cops, surveillance systems, and household, social and adult-entertainment robots.

3.1.

The Applicability of the GDPR ^

[13]

These robots and AI-driven appliances will process, profile or transfer the personal data of natural persons for a whole spectrum of purposes. Will it still be possible to apply these regulations to such a broad spectrum of processing? The Regulation attempts to avoid non-application to unknown processing techniques through recital 15: «In order to prevent creating a serious risk of circumvention, the protection of natural persons should be technologically neutral and should not depend on the techniques used.» The further reference to «automated processing» in this recital should also encompass any other technique of processing. But would this also apply to bio-chemical processes, such as those in the human mind, which could be considered a viable way of processing where artificially intelligent entities and systems are concerned? Could we consider the creation of bio-genetic algorithms a form of automation?

[14]

We are already confronted with the question whether the processing of personal data by a social or entertainment robot falls under the «personal use exception», even if these data are used for distribution within groups.

[15]

If, for instance, we look at the applicability of Article 22 of the GDPR concerning automated individual decision-making, including profiling, the influence of the data subject on the process seems somewhat overrated:

 

«1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.»

[16]

It will be difficult to apply this article to AI appliances that control physical processes and are connected to the internet so as to be linked to specialists (robots) for analysing and possibly regulating blood flow or the functioning of an insulin pump. The profiling done by AI appliances can have legal effects if it is used to make legal or practical decisions, such as categorizing people or fining them in the context of police tasks.

[17]

In a comment, the Article 29 Working Party stated: «The GDPR recognises that automated decision-making, including profiling, can have serious consequences for individuals.» However, the GDPR does not define «legal» or «similarly significant» effects.

[18]

Still, the activities of robots and AI applications can easily be classified as profiling according to Article 4(4) of the GDPR:

 

«(4) ‹profiling› means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements;»
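Read literally, this definition is broad enough to cover even a few lines of scoring logic, since any automated evaluation of personal aspects qualifies. A minimal, purely illustrative sketch (all field names and thresholds are invented, not taken from any real system):

```python
# Illustrative only: a trivial automated evaluation of personal aspects.
# Under Article 4(4) GDPR even logic this simple arguably counts as
# «profiling», since it evaluates health and behaviour automatically.

def profile_subject(record: dict) -> dict:
    """Derive a crude 'health risk' label from personal data."""
    score = 0
    if record.get("resting_heart_rate", 0) > 90:
        score += 1
    if record.get("daily_steps", 10_000) < 3_000:
        score += 1
    if record.get("smoker", False):
        score += 2
    return {"subject": record["id"], "risk": "high" if score >= 2 else "low"}

subject = {"id": "anon-001", "resting_heart_rate": 95,
           "daily_steps": 2_500, "smoker": False}
print(profile_subject(subject))
```

The legal question is then whether the data subject ever sees, or has consented to, rules of this kind.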

[19]

This will happen all the time, even when no explicit permission has been received for the specific action. Social and other robots will process this (sensitive) data. The question is whether the processing falls outside the scope of the Regulation because it concerns the personal or household sphere, as stated in recital 18:

 

«This Regulation does not apply to the processing of personal data by a natural person in the course of a purely personal or household activity and thus with no connection to a professional or commercial activity. Personal or household activities could include correspondence and the holding of addresses, or social networking and online activity undertaken within the context of such activities.»

[20]

The last sentence of this recital, however, places the responsibility on the parties that make such «personal or household» processing possible, although it is evidently aimed at social networks:

 

«However, this Regulation applies to controllers or processors which provide the means for processing personal data for such personal or household activities.»

[21]

One could also extend this applicability to the manufacturer of the AI system or robot as the processor of the data.

3.2.

Explicit Consent ^

[22]

And will it under all circumstances be possible to give consent «by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data relating to him or her, such as by a written statement, including by electronic means, or an oral statement», as stated in recital 32 and Article 4(11) GDPR?10
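The cumulative conditions of Article 4(11) – freely given, specific, informed, unambiguous, expressed by a clear affirmative act – can be pictured as fields a controller would have to record in order to demonstrate consent under Article 7(1). A purely illustrative sketch (the field names and values are invented; the GDPR prescribes no data format):

```python
from dataclasses import dataclass, asdict

# Illustrative only: the GDPR prescribes no format for consent records,
# but Article 7(1) requires the controller to be able to demonstrate consent.
@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str          # «specific»: one purpose per record
    informed_via: str     # how the subject was informed
    affirmative_act: str  # the «clear affirmative act»
    freely_given: bool

record = ConsentRecord(
    subject_id="anon-001",
    purpose="share heart-rate data with treating physician",
    informed_via="in-app notice v3",
    affirmative_act="tapped 'I agree' on 2018-05-01",
    freely_given=True,
)
print(asdict(record)["purpose"])
```

One record per purpose keeps the consent «specific»; bundling many purposes into a single record is exactly what the Regulation tries to prevent.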

[23]

Will consent be implied if you acquire a robot to assist you in your daily life, which will connect to several third parties to support the tasks that are relevant for its functioning and therefore relevant for you? Article 9 GDPR, entitled «Processing of special categories of personal data», after setting forth the general rule, specifies that

 

«1. Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited,»

 

and identifies in paragraph 2 a few exceptions to that prohibition, which include – at letter (e) – cases regarding processing that «relates to personal data which are manifestly made public by the data subject».

[24]

Will the aforementioned activity imply that no further consent is needed? What about the responsibility of the social networks and third parties referred to above under «personal and household processing»? All «automatic» profiling and subsequent decisions should fall within the required «consent» of the data subject. As stated by the Article 29 Working Party:

 

«Even if a decision-making process does not have an effect on people’s legal rights it could still fall within the scope of Article 22 if it produces an effect that is equivalent or similarly significant in its impact.»11

[25]

In other words, even where no legal (statutory or contractual) rights or obligations are specifically affected, the data subjects could still be impacted sufficiently to require the protections under this provision. The GDPR introduces the word «similarly» (not present in Article 15 of Directive 95/46/EC12) to the phrase «significantly affects». This suggests that the threshold for significance must be similar, whether or not the decision has a legal effect.

[26]

One could state that all decisions made solely by AI systems with practical or legal effect on the life of natural persons are prohibited, except for those made with the specific consent of the subject and, of course, the exceptions of Article 23 GDPR concerning national security, etc.13

4.

Internet Integrated Data Processing: Co-options ^

[27]

We live in an era in which we already make extensive use of semi-intelligent systems and robotics, albeit that most processes are still subject to human oversight. But we are confronted with more and more options to leave control to «automated» systems. As is commonly known, the automotive industry is providing our vehicles with ever more options to hand over human control to semi-intelligent autonomous systems. As natural persons, we may still decide whether we activate systems such as distance keeping, lane tracking and autonomous speed control. Airplanes are largely controlled by computers; level flight and even descent and landing can be handled by the automatic pilot, but can still be interrupted by the human pilot. In today’s society those systems or robots are still (partly) controlled by natural persons. However, there is an undeniable trend towards the use of self-thinking and self-acting systems. Natural persons, too, are controlled in their professional activities in comparable ways by other natural persons or (artificial) legal persons. AI applications will be found in all kinds of fields: hosting, social and physical support, care robots in a physical and social sense, sex robots, industrial robots, medical robots, surveillance robots, military robots, drones, etc. In the medical sector, molecular nano-robots of chemical or organic origin are deployed. The relevant point is that we, as humans, are still in a position to take over control – or at least we think we still have that option.

4.1.

Co-operation ^

[28]

Clearly, no industry operates without the support of robots in the production process, be it automotive factories, agriculture or any specific way of creating products. Even houses are developed and built with the use of 3D printers, and the creative industry also uses 3D-printing processes to create works of art. The development of the algorithms and the use of the computing mechanisms and the printers themselves are still for a vast part a human activity, and in the manufacturing of the products human control is still present, albeit often as a last resort, as in the large factories producing automobiles, where hardly any humans are to be found on the «work floor». Robots and robotic appliances are used as instruments, but the autonomy of these appliances is developing. Control mechanisms are increasingly based on AI algorithms that detect faults or improve the manufacturing process without a watching human eye.

[29]

In the medical «industry» there is a visible development towards more autonomy and acceptance of the supremacy of AI appliances. In the diagnosis of health problems and diseases, for instance, we can see increasing use of AI. Some examples of these developments are given by Abby Norman.14

[30]

Researchers at the John Radcliffe Hospital in Oxford, England, developed an AI diagnostics system that is more accurate than doctors at diagnosing heart disease, at least 80 percent of the time. At Harvard University, researchers created a «smart» microscope that can detect potentially lethal blood infections much faster and more reliably than any human specialist can. In Japan, an AI system can identify colon cancer with more certainty than «normal» methods.15 In the assistance of elderly and handicapped people, too, there are promising developments, such as the iCub AnDy project of the Italian Institute of Technology in Genoa (an EU project) to create a more or less empathic assistant.16 These developments are «state of the art» at this moment and will be of a rather instrumental character, but the processing of such «sensitive» data will still be subject to the GDPR.

[31]

In this spectrum of activities, though, it will be almost impossible to reveal the system of processing to the data subject – a task as described in recital 83 GDPR, «In order to maintain security and to prevent processing in infringement of this Regulation, the controller or processor should evaluate the risks inherent in the processing and implement measures to mitigate those risks» – if there is no clear insight into the working of the algorithm and no transparent account of how data is being processed.

4.2.

Co-existence between Humans and AI ^

[32]

Co-existence can start in the VR format, though it can have a sensational impact, as one could see in the female character in the film «Her». But even in today’s less sci-fi era we are connected to virtual assistants such as Siri and Alexa. One of the more recent developments is the launch of the AI chatbot app Replika, which functions as your best friend and learns about you as you «feed» it with, of course, your personal and most sensitive data. You can give the app a gender and a name, and it will always have a unique relation with its «owner».17 This latest chatbot is just one in a series that started with Eliza, first created in 1966 by Joseph Weizenbaum of the Massachusetts Institute of Technology (MIT). Although Weizenbaum was the first to create an «AI» chatbot, he also stated that even if it became possible to build truly intelligent machines (which he doubted), mechanistic reasoning should never be a substitute for human decision-making, which includes not just logical deduction but emotional and ethical subtleties that could not be understood or duplicated by those applications.18 Chatbots are now widespread on all kinds of dating sites, are even able to create fake news, and can also be used with criminal intent, as fake bots impersonating bank chat services to acquire your financial or other personal data.
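Eliza’s core technique was not understanding but keyword-and-template pattern matching. A compact sketch in the same spirit (the rules below are invented examples, not Weizenbaum’s original DOCTOR script):

```python
import re

# Eliza-style pattern matching: each rule maps a regular expression to a
# canned reflection. No model of meaning exists anywhere in the program.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]

def eliza_reply(utterance: str) -> str:
    """Return the first matching template, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no rule fires

print(eliza_reply("I am worried about my privacy"))
```

Nothing in the program models meaning; as Weizenbaum himself stressed, the apparent empathy is an artefact of the templates.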

4.3.

Wearables, Toys and Virtual Reality ^

[33]

A newly developed technology could further revolutionize the future of virtual reality (VR). The innovative electronic skin, known as e-skin, is a soft, bendable, wearable technology that allows the user to manipulate objects that exist only in the virtual world.19 This skin can interact with magnets to activate different applications.20 The e-skin is a thin film that can be worn on the hand and manipulated to interact with a nearby magnet. Depending on the angle of the wearer’s hand, the voltage will vary, and different instructions can be derived from this to activate commands and functions. Examples of the use of this technology include turning off virtual light switches and typing on virtual keyboards – both with tangible results.21
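The mechanism described – the hand’s angle modulates a voltage, which is then mapped to commands – can be pictured as a simple threshold table. A hypothetical sketch (the voltage ranges and command names are invented; the cited source publishes no such parameters):

```python
# Hypothetical mapping from an e-skin voltage reading to a VR command.
# All thresholds are invented for illustration; a real device would
# calibrate them per wearer and per sensor.
def decode_gesture(voltage_mv: float) -> str:
    if voltage_mv < 10:
        return "idle"
    if voltage_mv < 40:
        return "toggle_light"  # e.g. a virtual light switch
    if voltage_mv < 80:
        return "press_key"     # e.g. a virtual keyboard stroke
    return "grab_object"

print(decode_gesture(25.0))
```

A real implementation would replace the fixed thresholds with a calibration step, but the principle of deriving discrete commands from a continuous signal is the same.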

[34]

The connection to the internet, and probably to manufacturers and service providers, can create problems, as is the case with the growing number of Internet of Things appliances. There are already examples of the use of personal data by still semi-intelligent products without the knowledge or consent of the user. The first case concerns the doll «Cayla», which can have «smart» communication with its child user. The German Federal Network Agency (Bundesnetzagentur) told parents to destroy this doll, because its smart technology could be used to reveal personal data by connecting to the internet via Bluetooth. The doll responded to children’s queries by using a concealed internal microphone, and this mechanism apparently violated German privacy law. The controversy over Cayla highlighted the privacy perils of a world where toys can be connected to the internet, and where a child may confide private secrets to a «doll» that records what the child says and does.22 Even if a toy company has no intention of violating privacy, the internet connection could serve as a tempting target for hackers or ambitious marketers.

[35]

Another case is possibly even more sensitive. A company used the personal information of users of the «We-Vibe Rave» vibrator, accessible via Bluetooth and Wi-Fi over the internet. The clearly sensitive information was sent back to the manufacturer without the permission of the user; this was considered unlawful, and the company was ordered to pay $ 3.75 million to the plaintiffs.23

[36]

Nike, too, was warned by the Dutch privacy authority concerning Nike’s «intelligent» running shoes. Data about physical activities were transmitted to the user’s smartphone or watch. These appliances were communicating not just with the user but also with the manufacturer, and the sensitive information was processed and stored by Nike, contrary to privacy regulations. The processing of health data entails risks, including the risk of discrimination based on an individual’s presumed or actual health condition; it therefore constitutes processing of special, sensitive personal data.24 Recent wearables will also give audible feedback to improve physical performance or promote a healthier lifestyle.25

4.4.

Co-integration ^

[37]

The (legal) nightmare options will surface with the next step: the (partial) integration of AI applications in the human body and mind. As long as the human body, mind or nervous system is steering and controlling artificial limbs, the risk is not that big in a legal and ethical sense. Superficial applications on the body, such as non-integrated e-skin or smart clothing or shoes, are still not permanently connected to the human physique. Until now, only non-creative implants have been applied, such as artificial limbs, pacemakers and insulin pumps. Because of the connection to the internet via Bluetooth or Wi-Fi, the risks of misuse and invasion of privacy are increasing. In August 2017, almost half a million pacemakers were «recalled» by the US FDA because of the risk that they were «hackable».26 According to AFP, thousands of people in Sweden have had microchips based on Near Field Communication (NFC) technology inserted under their skin to replace keys, credit cards and train tickets.27 This NFC technology is not «hack-free» if it is not protected by strong encryption.
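Why an unprotected chip is not «hack-free» can be shown in a few lines: a tag that merely stores a static identifier can be read and cloned, whereas a payload carrying a message authentication code cannot be forged without the issuer’s key. A standard-library sketch (the key and payload are invented; this illustrates authentication rather than full encryption):

```python
import hashlib
import hmac

SECRET_KEY = b"issuer-held secret"  # invented; never stored on the tag itself

def sign_payload(payload: bytes) -> bytes:
    """What an issuer would write to the tag: the payload plus a MAC."""
    return payload + hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()

def verify(tag_contents: bytes) -> bool:
    """Reader-side check: recompute the MAC and compare in constant time."""
    payload, mac = tag_contents[:-32], tag_contents[-32:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

genuine = sign_payload(b"door:42;user:anon-001")
forged = b"door:42;user:intruder" + b"\x00" * 32  # attacker lacks the key
print(verify(genuine), verify(forged))
```

An under-skin chip that holds only a static, unauthenticated identifier offers no such barrier to cloning.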

[38]

Furthermore, the sci-fi concept of brain implants is becoming reality; the «Neuromancer» concept as presented by William Gibson could come true. Several institutes around the world are working on brain-connected appliances. Neurotechnology is no longer a sci-fi concept: brain-computer interfaces (BCIs) are here to stay. There are several experimental applications that stimulate or suppress neurons through computer implants in order to cure, or at least alleviate, Parkinson’s disease. Deep brain stimulation (DBS) is already used to treat a common movement disorder called essential tremor. According to an article in the Economist, though, some of the treated patients suffer a sense of alienation and complain of feeling like a robot.28

[39]

Elon Musk, who just shot a Tesla into space to avoid traffic rules, also develops neurotechnology in Neuralink, a company researching methods to upload and download thoughts – probably with an app to connect directly to your next Tesla. Ultimately, Neuralink wants to change how we interact with devices by linking our brains to the machines we interact with most often: cars, mobile devices, and even smart items in our smart homes. DBS could be used to extend deep learning to the human brain, skipping the outmoded education system of schools and universities.

5.

Conclusion ^

[40]

In the report on AI of the European Economic and Social Committee (EESC), it is noted that there are downsides to AI integration in society: as with every disruptive technology, AI entails risks and complex policy challenges in areas such as safety and monitoring, socio-economic aspects, ethics and privacy, reliability, etc.29 The threats are connected to privacy in the sense that they also concern ethical questions about the impact of autonomous (self-teaching) AI on personal integrity, autonomy, dignity, independence, equality, safety and freedom of choice. It will be very difficult, and probably impossible, to control the processing of personal data by self-learning algorithms. The European Commission has emphasized precisely that, on the basis of the resolution of the European Parliament and the GDPR, it is of great importance to have insight into the operation of the algorithm. In such cases, data subjects are entitled to meaningful information about the logic involved in the decision.30

[41]

The question is how one will ensure that fundamental norms, values and human rights remain respected and safeguarded. Transparency – the ability to understand, monitor and certify the operation, actions and decisions of AI systems, also retrospectively – will likewise become a problem. The comprehensibility, monitorability and accountability of the decision-making process of an AI system are crucial in this regard. If AI is integrated in human functioning on a physical as well as an intellectual level, it will be very difficult to enact credible legal rules for the protection of personal data. It may be advisable to create separate rules for human(-like) anthropomorphic entities and human enhancement. Next to data protection by design and a sui generis law on legal personhood, it is also very important to create non-discrimination rules to avoid a separation between enhanced and non-enhanced humans. Once there is integration between the AI application and the human brain, it will certainly be almost impossible to regulate any part of the activities, let alone to enforce such rules if they are to be separated from the functioning of the natural person as such. But neither would it be acceptable for a dichotomy to exist in society between enhanced beings, a kind of robo sapiens, and «normal» Homo sapiens.

 

Dr. Robert van den Hoven van Genderen, LLM. Msc., is Director of the Centre for Law and Internet of the VU University of Amsterdam and partner at Switchlegal lawyers.

  1. 1 Arthur C. Clarke, 2010: Odyssee Two, 1984.
  2. 2 Irving John Good, Speculations Concerning the First Ultraintelligent Machine, Advances in Computers, vol. 6, 1966, pp. 31–88.
  3. 3 Ray Kurzweil predicted: 2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the «Singularity» which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created. Dom Galeon/Christianna Reedy, Kurzweil Claims That the Singularity Will Happen by 2045, futurism.com, 5 October 2017 (https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045/; all websites last visited on 16 May 2018).
  4. 4 See e.g. Stuart Russel/Peter Norvig, Artificial Intelligence, A Modern Approach, 2003, pp. 2–10.
  5. 5 https://www.intuitivesurgical.com/.
  6. 6 Ibidem Brian Mech, VP of Business Development of Second Sight, cited from Zoltan Istvan, Bionic Eyes Can Already Restore Vision, Soon They'll Make It Superhuman, gizmodo.com, 12 December 2014 (https://gizmodo.com/bionic-eyes-can-already-restore-vision-soon-theyll-mak-1669758713).
  7. 7 Ibidem.
  8. 8 http://ec.europa.eu/justice/data-protection/law/files/coe-fra-rpt-2670-en-471.pdf.
  9. 9 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA; and the Proposal 2017/0003(COD) concerning the respect for private life and the protection of personal data in electronic communications.
  10. 10 «Consent» of the data subject means any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her (Article 4(11) GDPR).
  11. 11 WP 29 Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, wp251rev.01, p. 10.
  12. 12 Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ L 281/31 of 23 November 1995.
  13. 13 When such a restriction respects the essence of the fundamental rights and freedoms and is a necessary and proportionate measure in a democratic society to safeguard: (a) national security; (b) defence; (c) public security; (d) the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; (e) other important objectives of general public interest of the Union or of a Member State, in particular an important economic or financial interest of the Union or of a Member State, including monetary, budgetary and taxation matters, public health and social security; (f) the protection of judicial independence and judicial proceedings; (g) the prevention, investigation, detection and prosecution of breaches of ethics for regulated professions; (h) a monitoring, inspection or regulatory function connected, even occasionally, to the exercise of official authority in the cases referred to in points (a) to (e) and (g); (i) the protection of the data subject or the rights and freedoms of others; (j) the enforcement of civil law claims.
  14. 14 Abby Norman, Your Future Doctor May Not be Human. This is the Rise of AI in Medicine, futurism.com, 31 January 2018 (https://futurism.com/ai-medicine-doctor/).
  15. 15 Ibidem: the AI-assisted tool was trained on a series of 100’000 images garnered from 25’000 slides treated with dye to make the bacteria more visible. The AI system can already sort those bacteria with a 95 percent accuracy rate. A study from Showa University in Yokohama, Japan revealed that a new computer-aided endoscopic system can reveal signs of potentially cancerous growths in the colon with 94 percent sensitivity, 79 percent specificity, and 86 percent accuracy.
  16. 16 «The robot has several sensors to understand how the human is moving,» he explains. «The presence of the human being is detected, first, by sight. Secondly, during the interaction, the robot is able to sense contact with the human being through his ‹skin›. Then, to allow the robot to be aware of the human’s actions, it needs to be equipped with sensors.» https://ec.europa.eu/digital-single-market/en/news/robot-icub-could-help-older-people-live-alone-home-longer.
  17. 17 https://replika.ai/.
  18. 18 This typewriter mode of instruction made natural language conversation between man and computer possible, giving the sense of human answers; see: Joseph Weizenbaum, ELIZA – A Computer Program For the Study of Natural Language Communication Between Man and Machine, Communications of the ACM Volume 9, Number 1 (January 1966), pp. 36–45 (https://www.csee.umbc.edu/courses/331/papers/eliza.html).
  19. 19 Joseph Libunao, New Self-healing Flexible Sensors Could Make «Electronic Skin» a Reality, futurism.com, 20 November 2015 (https://futurism.com/new-self-healing-flexible-sensors-could-make-electronic-skin-a-reality/).
  20. 20 Gilbert Santiago Cañón Bermúdez et al., Magnetosensitive e-skins with directional perception for augmented reality, ScienceAdvances 19 January 2018, Vol. 4, no. 1 (http://advances.sciencemag.org/content/4/1/eaao2623).
  21. 21 Angela Chen, This electronic skin lets you manipulate objects without touching them, theverge.com, 19 January 2018 (https://www.theverge.com/2018/1/19/16909126/eskin-vr-wearable-sensor).
  22. 22 Feliz Solomon, Germany is Telling Parents to Destroy Dolls that Might be Spying on Their Children, Fortune, 20 February 2017 (http://fortune.com/2017/02/20/germany-cayla-doll-privacy-surveillance/), also in: Rob van den Hoven van Genderen, Privacy and Data Protection in the Age of Pervasive Technologies in AI and Robotics, EDPL no 3, 2017.
  23. 23 N.P. v. Standard Innovation (US), Corp., d/b/a WE-VIBE [2016]. The United States District Court for the Northern District of Illinois Eastern Division, Case No 1:16-cv-8655. Also in: van den Hoven van Genderen (note 24).
  24. 24 Selection from DPA Investigation Nike+ Running App, Public version, 2 November 2015 (https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/01_conclusions_dpa_investigation_nike_running_app.pdf). Also in: van den Hoven van Genderen (note 24).
  25. 25 For example Tune running shoe by Kinematix.
  26. 26 The FDA says that the vulnerability allows an unauthorised user to access a device using commercially available equipment and reprogram it. The hackers could then deliberately run the battery flat, or conduct «administration of inappropriate pacing». Both could, in the worst case, result in the death of an affected patient. Alex Hern, Hacking risk leads to recall of 500,000 pacemakers due to patient death fears, The Guardian, 31 August 2017 (https://www.theguardian.com/technology/2017/aug/31/hacking-risk-recall-pacemakers-patient-death-fears-fda-firmware-update).
  27. 27 Camille Bas-Wohlert, Microchips get under the skin of technophile Swedes, Yahoo, 13 May 2018 (https://www.yahoo.com/news/microchips-under-skin-technophile-swedes-033147071.html?guccounter=2).
  28. 28 Grey matter, red tape. In search of serendipity. How obstacles to workable brain-computer interfaces may be overcome, The Economist, 6 January 2018.
  29. 29 European Economic and Social Committee, Artificial intelligence – The consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society, 31 May 2017 (http://www.eesc.europa.eu/en/our-work/opinions-information-reports/opinions/artificial-intelligence), p. 5.
  30. 30 Based on the GDPR provisions on automated processing, including profiling (Articles 13, 14, 15 and 22), as referred to in: Artificial Intelligence for Europe, 25 April 2018, COM(2018) 237 final.