Jusletter IT

AI and Emotional data between the Scylla and Charybdis of European Regulation

  • Authors: Robert van den Hoven van Genderen / Rosa Ballardini
  • Category of articles: AI & Law
  • Region: EU
  • Field of law: AI & Law
  • Collection: Conference proceedings IRIS 2024
  • DOI: 10.38023/aa115617-978f-4d5a-93a7-5b9f16005cf0
  • Citation: Robert van den Hoven van Genderen / Rosa Ballardini, AI and Emotional data between the Scylla and Charybdis of European Regulation, in: Jusletter IT 15 February 2024
AI is spreading beyond linguistic models such as ChatGPT into all other veins of human activity: education, health and social services, traffic and science, military and surveillance; none of these escapes AI's influence. These applications will influence our functioning and emotional behaviour. Although we do not doubt that regulating new technologies with such a strong effect on so many societal processes is necessary, we have to be careful that the urge to avoid possible but unknown risks does not chill the further development and use of AI. Research on the effects of AI certainly has to centre on its influence on human life and on fundamental requirements and rights, but investments in and legal protection of AI-related innovations are also indispensable. This is certainly the case when AI uses emotional data derived from, or influencing, natural persons and life experiences. Investments in and legal protection of inventions in this field through patenting might be challenged by several legal provisions in the EU, such as the GDPR and the draft AI Act. Also, the new European Data Strategy, including the Data Act and the Data Governance Act, could endanger further investment and creativity for new AI applications because of its data-sharing requirements. This article analyses the opportunities and risks of regulating AI for emotional data processing.

Table of contents

  • 1. Introduction
  • 2. Emotions and AI
  • 3. Investments and Patentability of Emotional AI
  • 4. Acceptability of Emotional AI Inventions
  • 4.1. The undeniable complexity of Privacy
  • 4.2. Is transparency a blessing or a threat to Emotion AI applications? EU Strategy for Data Governance
  • 4.3. The draft AI Act and Emotional Inventions
  • 5. Does fear and risk avoidance rule the EU AI legal framework? Positioning Innovations in the Emotional AI Domain
  • 6. Conclusions: avoiding the Scylla and Charybdis

1. Introduction

[1]

Artificial Intelligence (AI) is increasingly creeping into the capillaries of society: not just linguistic models such as ChatGPT, but also applications in all other veins of human activity. Education, health and social services, traffic and science, military and surveillance: none of these escapes AI's influence. These applications will influence our functioning and emotional behaviour. Although we do not doubt that regulating new technologies with such a strong effect on so many societal processes is necessary, we have to be careful that the urge to avoid possible but unknown risks does not chill the further development and use of AI. Research on the effects of AI certainly has to centre on its influence on human life and on fundamental requirements and rights, but investments in and legal protection of AI-related innovations are also indispensable. This is certainly the case when AI uses emotional data derived from, or influencing, natural persons and life experiences. Investments in and legal protection of inventions in this field might be challenged by several legal provisions in the EU, such as the GDPR and the draft AI Act. Also, the new European Data Strategy, including the Data Act and the Data Governance Act, could endanger further investment and creativity for new AI applications because of its data-sharing requirements. Transparency is a good principle in a data-driven society, but not if it endangers trade secrets, IPR and security. The legal process of patenting new inventions in the field of processing emotional data by AI systems has to be scrutinized and put to good use for an AI-driven society. This article touches upon the ethical and legal acceptance of AI and emotional data, with a keen eye on the necessary ethical, legal and financial investment for the wellbeing of society.

2. Emotions and AI

[2]

“Feelings” and “emotions” are often considered synonyms. It could be argued that there is no difference between these terms and that separating them would be artificial.1 According to the American Psychological Association (APA), emotion is defined as “a complex reaction pattern, involving experiential, behavioural and physiological elements.” Emotions are how individuals deal with matters or situations they find personally significant. Emotional experiences have three components: a subjective experience, a physiological response and a behavioural or expressive response.

[3]

AI2 technologies will bring the use of emotional data to a multiverse of applications. AI will process emotions for different services in two ways: through AI-driven innovations that are able to collect, analyse and understand emotional data, which will be used in different services and applications, and through innovations that can “create”, “elicit” or “transform” emotions by making a person “feel” certain emotions or by influencing their emotional state and behaviour. Certainly, those applications can improve the wellbeing of people and of society as a whole.

[4]

An often-used example is the alert function in (luxury) cars that monitors eye movement and facial expression to prevent drivers falling asleep.3 These alert and security functions based on emotional expressions could also serve other purposes, e.g. detecting fatigue in groups or aggression in football stadiums, but also directing the public in shopping malls or in public places such as airports and train stations. Applying these technologies in professional working environments such as hospitals could likewise improve patients’ peace of mind. Emotional data,4 collected from facial expressions, speech tone, physiological measurements and other sources, give insight into a person’s emotional state. The use of AI makes it possible to analyse and interpret these data at a scale and speed that was previously unimaginable. Seemingly simple applications such as Alexa can capture and respond to users’ perceived emotions based on their voice.5

[5]

The use of emotion data will be valuable for commercial companies as well as for governmental organisations and social and medical institutions. For instance, governments could use emotion data to avoid unsatisfactory policy decisions, but also to influence the population into accepting proposals. Companies can use these insights to improve customer service, personalized experiences, and products and services, but could also use the data to pursue commercial goals, steer client behaviour and increase profits. AI-powered systems using various types of remote sensors and intelligent camera technologies are being developed to collect large amounts of emotional and behavioural data, analyse them, understand their meaning, and determine what types of reactions the system should produce to trigger certain (desirable) emotional states. An example is the German company audEERING, active since 2013 in audio intelligence analysis (machine learning, deep learning, transfer learning, feature extraction, etc.) for emotional and social Artificial Intelligence and intelligent audio analysis projects.6 In the US, the company Eyeris has likewise been developing and producing vision AI systems for human behaviour understanding since 2013, applying face analytics, 2D body tracking, action recognition and activity prediction; its vision AI is used in today’s commercial applications such as automotive and social robotics.7 The processing of all kinds of human-generated information in driving situations by Tesla cars is also partly based on emotional reactions.8 Moreover, VR headsets make use of biometrics and emotion data. An example is the multimodal affective dataset VREED (VR Eyes: Emotions Dataset), in which emotions were triggered using immersive 360° video-based virtual environments (360-VEs) delivered via a Virtual Reality (VR) headset, and behavioural (eye tracking) and physiological signals (electrocardiogram (ECG) and galvanic skin response (GSR)) were collected.9

[6]

But what is the source of this ”new gold” spun by the wizards of AI? Indeed, the other side of the coin is that using all those rather sensitive personal data requires strict protection of the fundamental rights connected to privacy and autonomy, primarily the right to be let alone and the independence and autonomy of natural persons. As such, it will also be an ethical issue, next to the legal requirements, whether these highly sensitive data processed by AI systems will be accepted by society.10

3. Investments and Patentability of Emotional AI

[7]

Clearly, inventions on emotional AI systems require considerable investment, and thus legal incentives such as intellectual property rights ‒ primarily patents ‒ are crucial. At the same time, it is also important to secure a level of certainty in terms of the extent of the legal and ethical acceptability of such innovations (for instance, with regard to their exploitability) as well as their societal acceptance.

[8]

First, the question is whether and to what extent emotional AI inventions stand a chance of being deemed protectable in the current European patent law system. If one looks at the patentability criteria in the European Patent Convention (EPC), it can be argued that the most challenging requirements for emo-AI inventions to meet are the requirement of invention, or patentable subject matter, and the inventive step requirement, while the other patentability criteria, such as novelty, industrial applicability and disclosure, are perhaps less controversial.11 Here, however, we only discuss issues of morality and ordre public.

[9]

In terms of the “invention”, or patentable subject matter, requirement, Article 52(1) EPC states that ‘European patents shall be granted for any inventions, in all fields of technology, provided that they are new, involve an inventive step and are susceptible of industrial application’. Moreover, according to Article 52(2)(3) EPC, ‘discoveries, scientific theories and mathematical methods, aesthetic creations, schemes, rules and methods for performing mental acts, playing games or doing business, and programs for computers, presentations of information as such’ are excluded from patentability ‒ they are not inventions, as they are abstract ideas and/or fundamental concepts that should be available to everyone. In this regard, the consideration in terms of emotional AI is a rather general one that relates to whether AI can (always) be considered patentable. Indeed, this is not only a question relevant for emo-AI but for AI in general, as the by now abundant literature has already elaborated.

[10]

Instead, more relevant in the context of emo-AI and the requirement of invention is the question of the so-called “exclusions” from patentability under Article 53 EPC, which stipulates that: “European patents shall not be granted in respect of (a) inventions the commercial exploitation of which would be contrary to “ordre public” or morality; [...] (b) plant or animal varieties or essentially biological processes for the production of plants or animals; [...] (c) methods for treatment of the human or animal body by surgery or therapy and diagnostic methods practiced on the human or animal body; [...].”

[11]

The rationale of this provision is based on socio-economic considerations, such as the view that allowing the patenting of these types of inventions would run counter to widely accepted (European) moral values.12 Indeed, the sensitive nature of emotion-processing AI technologies might raise several ethical and legal constraints, on the basis of which such inventions could be challenged under the patent system as being “contrary to ordre public or morality” according to Article 53 EPC. The morality of these inventions could also be questioned in the light of other key provisions (primarily the GDPR and the draft AI Act) that, although external to the patent system, might have a clear influence on defining the concept of “European morality”, due to the sensitive nature of the bio-data that fuels these technological applications. Indeed, it could be considered morally (and also legally) problematic that these inventions diminish the autonomy of natural persons, let alone that they enable the control of feelings and emotions by third parties. This of course depends on the sensitivity of those data. However, if something is considered personal data, its use might be blocked at least by the GDPR and the draft AI Act.

[12]

Indeed, it is fair to acknowledge that the concept of “morality” in European patent law has not been much elaborated upon. The few existing cases tend to conceive of morality as “related to the belief that some behaviour is right and acceptable whereas other behaviour is wrong, this belief being founded on the totality of the accepted norms which are deeply rooted in a particular culture” (BoA decision T 0356/93 (21.2.1995)). This rather vague conception has been left as it stands in the EPC and has, in practice, been invoked very seldom, mostly in the field of genetics and biotechnologies, for instance the cloning or modification of animals (for example, T 0315/03, 6.6.2004) or of plants (T 0356/93, 21.2.1995). To our knowledge, there is to date no BoA decision in which the concept of ordre public or morality has been discussed in the context of inventions involving emotions ‒ or other psychological effects or psychology-related features of an invention. Nor does the EPO case law collection discuss any types of inventions other than those mentioned above.13

[13]

Therefore, it remains an open question whether inventions involving emotions ‒ and especially inventions intended to elicit emotions ‒ could be patentable in view of the moral evaluation. As previously mentioned, where inventions make use of such personal data as emotions, objections could be raised against their patentability on morality grounds based on fundamental rights arguments, especially legal provisions such as the GDPR and the proposed AIA. Moreover, other legislation outside the patent system could also affect the patentability of such inventions. For example, “subliminal techniques” in audiovisual commercial communication are prohibited in the EU.14 As is relatively well known, subliminal stimuli are those that operate below the level of conscious awareness yet produce psychological effects in people.15 This description could fit emotion data and would pose a problem both for putting such AI inventions on the market and for their patentability.

[14]

In sum, all this could at least suggest that inventions intended to produce or elicit an emotional response would not be patentable as such both due to ‘ordre public’ or morality concerns and due to the constraints mentioned in other European regulations.

4. Acceptability of Emotional AI Inventions

[15]

Because the processing of emotional data is undeniably strongly connected to human life, it is no surprise that fundamental rights norms play an important role in the acceptability of emotional AI innovations. First and foremost, it is important to point to the protection of the rights enshrined in the Charter of Fundamental Rights of the European Union (CFR),16 especially Article 1, according to which ‘Human dignity is inviolable. It must be respected and protected’, and Article 3 on the right to the integrity of the person.17 In particular, for the purpose of this article, it is important to consider the rules concerning personal data protection and the processing of those data by AI applications, namely the GDPR and the AI Act.

4.1. The undeniable complexity of Privacy

[16]

There is no universally accepted definition of privacy but, broadly speaking, privacy is the right to be let alone, or the freedom from interference or intrusion. Information privacy is the right to have some control over how personal information is collected and used, also described as “personal information sovereignty”.18 One of the defining features of applying AI to personal data is the emergence of new products and services, such as smart wearables and even body devices and apps, that leverage algorithm-based AI systems. These new developments create challenges for regulators and other policymakers, in particular in the context of privacy.19 These challenges, however, can also bring new opportunities. For instance, will it be possible, in an increasingly complex framework for using these personal data, to assure data subjects of their information sovereignty over their own data? Crucially, personal data should be treated as an extension of the individual’s own personality right, granting protection to their personal development.20 As the European Data Protection Supervisor (EDPS) Ethics Advisory Group stated in its report of 2018: ‘Direct encounters between persons in the digital world are increasingly replaced by remote algorithmic profiling’.21 The transcription of behaviours and propensities is neither neutral nor exhaustive. The question is whether the digital representation of persons may expose them to new forms of vulnerability and harm. Data protection is not a technical or legalistic matter. It is a profoundly human one.

[17]

The leading principle of the GDPR is found in Recital 4, according to which: “The processing of personal data should be designed to serve mankind. The right to the protection of personal data is not an absolute right; it must be considered in relation to its function in society and be balanced against other fundamental rights, in accordance with the principle of proportionality.”

[18]

This recital is in line with the ongoing debate over the assertion that modern technology should improve the lives, privacy and security of individuals, not undermine fundamental rights. One of the more difficult requirements to be met under the GDPR is that personal data must be processed “transparently” and with the explicit consent of the data subject. Article 6 of the GDPR describes the options available for processing personal data without the express consent of the data subject. Under points (d) and (e) of its first paragraph, this article offers several possibilities by listing grounds for processing without the consent of the data subject, namely the vital interests of the data subject or the public interest. Paragraph 3 provides that such processing without consent must be based on (a) Union law or (b) the national law to which the controller is subject.

[19]

Emotional data will fall under the GDPR if those data are considered personal data as defined in Article 4: “(1) “personal data” means any information relating to an identified or identifiable natural person (“data subject”); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”

[20]

Importantly, emotional data (for instance, collected or used in “emotional inventions”) can be considered sensitive (special category) data, as defined in Article 4: “(14) “biometric data” means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data; (15) “data concerning health” means personal data related to the physical or mental health of a natural person, including the provision of health care services, which reveal information about his or her health status.”

[21]

If emotional data are considered sensitive data, the special regime of Article 9 GDPR applies: the processing of genetic data and of biometric data for the purpose of uniquely identifying a natural person is prohibited, except with the consent of the data subject or on other legitimate grounds listed in that article, such as protecting the data subject’s vital interests or reasons of national (security) interest.

[22]

Additionally, the requirement of transparency in Article 12 GDPR can pose a problem for emotional AI inventions. Indeed, the data subject has to be informed about the data processing, its purpose and possible extensions, in a concise, transparent, intelligible and easily accessible form, using clear and plain language. All the information rights of the data subject are specified in other Articles of the GDPR: access, rectification, control, storage information, withdrawal of consent, complaints procedure, and so on. On top of that, explaining the system, insofar as that is possible at all, could pose problems concerning trade secrets.

[23]

Lastly, Article 22 GDPR states that: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

[24]

In sum, all data that are considered identifiable to a natural person will fall under the reign of the GDPR. The more advanced the technology, the greater the chance that seemingly neutral data can be considered identifiable. It is questionable whether the draft ePrivacy Regulation, which has been on the table for a long time (since 2017), will take technological developments into account and strive for technological neutrality.22 European privacy rules thus pose several challenges to consider for any invention that processes emotional data and, in the worst-case scenario, such inventions (even if patented) may never be put into use or exploited legitimately.

4.2. Is transparency a blessing or a threat to Emotion AI applications? EU Strategy for Data Governance

[25]

Acknowledging the opportunities but also the high risks and challenges related to the use and processing of data, certainly with the use of large language models and generative AI applications, the European Union launched the European data strategy in 2020 to remove existing barriers and create a single European market for data. A driving principle of the data strategy is creating an appropriate balance between protection, regulation and innovation so as to allow data to flow freely within the EU and across sectors, in accordance with the ‘free movement of data’, which is one of the five pillars of the European internal market.23

[26]

The ‘Data Acts’, however, do not exist in a vacuum. On the contrary, they strategically complement the already existing EU legal framework for data governance: the GDPR, the Free Flow of Non-Personal Data Regulation, the Open Data Directive, as well as the Database Directive and the Platform-to-Business Regulation. The Free Flow of Non-Personal Data Regulation aims to remove obstacles to the free movement of non-personal data between EU countries and IT systems in Europe by ensuring that every organisation can store and process data anywhere in the EU, by ensuring the availability of data for regulatory control, and by introducing codes of conduct to facilitate switching data between cloud services, in order to tackle the power of the tech giants and to give users the opportunity for data transfer and data sovereignty. As a rule, however, trade secrets must be protected and may only be disclosed if the data holder and the user “take all necessary measures prior to the disclosure” to preserve confidentiality (Art. 5 of the draft Data Act). Access may be refused only if the data holder, as a “trade secret holder”, can demonstrate and duly substantiate, on a case-by-case basis, that it is “highly likely to suffer serious economic damage” from the disclosure (Art. 4(3b) of the draft Data Act).24

[27]

It does not take much imagination to see that creators and investors would hesitate to put their energy into emotion-generating or emotion-processing AI if investments and intellectual property rights are vulnerable because of transparency and data-sharing requirements.

4.3. The draft AI Act and Emotional Inventions

[28]

The Regulation of the European Parliament and of the Council on harmonised rules on Artificial Intelligence (Artificial Intelligence Act or AI Act (AIA))25 is intended to regulate AI on the basis of risk assessment. The AIA can be classified as a form of preventive or proactive (as opposed to reactive) law, meaning that its approach is based on an ex ante, rather than an ex post, view.26

[29]

Its scope covers all AI providers and users, including those outside the EU. The orientation is a human-centric approach, in the sense that all development and use of AI-related applications should be guided by (human) value-oriented principles. This is believed to enhance and promote the protection of the rights covered by the European CFR, especially human dignity, democracy, respect for human rights and the rule of law. The AIA follows a risk-based approach and establishes obligations for providers and users depending on the level of risk the AI can generate, divided into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (for instance, classifying people based on their social behaviour, socio-economic status, or personal characteristics and emotions). Although there might be some “lighter” AI applications, under the latest version of Article 5 emotion-processing AI inventions in public places will be categorized as prohibited or at least high-risk, even if used in legitimate circumstances.27

[30]

In the context of emotional AI inventions, the AIA raises doubts about both the exploitation and the patentability of products and services considered as “remote biometric identification systems”, defined in the Act as: “an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified”.28

[31]

The risk assessment requirements for these types of applications are rather severe and, in several cases, such applications are not even allowed, on the ground that the product (system) causes or is likely to cause physical or psychological harm to the person concerned or to another person. Although the prohibition is not specifically directed at emotional data, but rather at biometric and other characteristics data, there is an exception: the prohibition does not apply to AI systems intended to be used for approved therapeutic purposes on the basis of the specific informed consent of the individuals who are exposed to them or, where applicable, of their legal guardian. This exception could very well be extended to more AI applications that process emotional data.

5. Does fear and risk avoidance rule the EU AI legal framework? Positioning Innovations in the Emotional AI Domain

[32]

The title of this article suggested that the development of emotion-generating AI, as well as emotion data processing AI, could be destroyed by the monstrous (too strict) regulation in the field of data protection or by the uncertain whirlpool of EU AI regulation and data strategy, combined with the not too incentivizing legal environment that the current European IPR system offers emo-AI inventions. Probably it is not that bad. As we noted at the beginning, our aim was first to shed light on reflection points for companies and inventors to consider in decision-making related to investments in and incentives for emotional AI-related innovations. In this regard, our analysis shows that, although European patent law does not directly forbid or exclude “emotion-related inventions” from the domain of patentable subject matter, it might be difficult for such inventions to meet key patentability requirements such as those related to morality and ordre public. In fact, these types of “sensitive” inventions might be considered as being against the current standard of European “morality” and/or ordre public and thus might be deemed un-patentable in accordance with Article 53 EPC. This interpretation seems to be supported by current trends in European regulation of AI, such as the GDPR but also ‒ and especially ‒ the draft AIA. With the continuing development of AI, hardly any data will escape being considered sensitive personal data under the GDPR regime. This will hamper the further processing of data because of the requirements of transparency and explicit consent. Moreover, under the draft AIA, systems with an unacceptable level of risk to people’s safety and fundamental rights would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (for instance, classifying people based on their social behaviour, socio-economic status, or personal characteristics). On top of that, in the latest version of the AIA (dated 15 October 2023), the EP proposes a full ban on Artificial Intelligence for biometric surveillance, emotion recognition and predictive policing. Only in some therapeutic medical cases is the use of emotional data considered acceptable. These regulations place considerable importance on human-centric approaches and on respect for fundamental human rights such as the right to be let alone and freedom from interference or intrusion. Indeed, it could be argued that all these principles form the current crux of the concept of morality (and, by implication, immorality) in terms of AI developments and use in Europe and that, accordingly, inventions that do not respect these concepts can be considered un-patentable as immoral. This situation creates several uncertainties for the development of emotional AI innovations, and even the protection of inventions through other tools such as trade secrecy, used to secure returns on investments, could be endangered by the transparency and data-sharing requirements of the European data strategy.

[33]

As mentioned, the AIA follows a preventive approach to law, based on an ex ante rather than an ex post view.29 Notwithstanding the good intentions, and the certainly great potential, of preventive/proactive approaches in law, their functioning depends on the premise that they are used to stimulate positive actions.30

[34]

Such an approach, it is claimed, could be beneficial to the improvement of healthy, trustful and more sustainable legal relationships amongst parties.31 However, this approach should not be used to limit positive developments or to penalize future actions that do not yet even exist. Indeed, this would run counter to both the rule of law and legal certainty. Further, there are also many AI systems ‒ for instance in insurance, or comfort-enhancing AI ‒ that could use emo-data in a positive sense. In many cases, inventions that use emo-data could provide an increase in comfort and security for individuals and society as a whole. Regulations should provide stimulation for enhancing comfort and security, also from the perspective of fundamental rights in personal life, as a basis for human-centric rules of law.

6. Conclusions: avoiding the Scylla and Charybdis

[35]

To avoid being consumed by regulatory mythological monsters, the EU had better provide for a calm sea in which emotional AI inventions can land safely and be successful. Regulations should be oriented towards increasing people’s comfort and well-being instead of prohibiting positive services by AI processing of emotional data on the basis of (insufficient) knowledge of negative effects. Notwithstanding this premise, our analysis shows that the current European legal landscape follows a rather cautious ‒ to say the least ‒ approach in terms of promoting developments and stimulating exploitation of emotional AI innovations. On the one hand, the European patent system ‒ one of the more important legal pillars when it comes to directing R&D and investments in innovation ‒ albeit not preventing emotional AI inventions from being patented, is not particularly incentivizing them either. A similar approach can be observed when we look at the current legal landscape regulating the exploitation of emotional AI inventions, especially in relation to privacy issues and the GDPR, where several challenges, limitations and, at times, even impediments are in place. This makes it difficult for innovators operating in the various streams of this field to find freedom to operate, security for the heavy investments required, and sufficient returns and revenues. Nor does the future look brighter, with the currently pending draft AIA leaning towards a preventive law approach that seems to limit possibilities rather than promote opportunities. Considering that innovations ‒ in general and in the field of emotional AI especially ‒ are also known to be key to improving welfare, progress and life on earth, one could even question whether the claim that the AIA follows a human-centric approach holds true if it does not give human development a chance. The EP amendment of Art. 5(d) AIA, forbidding the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces, will block a world of positive applications for human wellbeing, physical as well as mental. Rewarding positive results in increasing the comfort and well-being of people in all layers of society will have a multiplier effect; concentrating on pre-emptive controls and even the interdiction of services and systems that process emotional data will not. A better approach would be to avoid shipwreck before arriving in a harbour of well-developed inventions. There are ample national and European regulations that will prevent inventions with negative effects, in the sense of (product) liability, privacy and human rights treaties and even criminal law. It is a risk for lawmakers to develop ex ante regulation of as yet unknown AI applications: it chills the further development of inventions that would increase the well-being of natural persons and society and, in a legal sense, hampers legal certainty.

  1. Damasio, A.R. Emotions and Feelings: A Neurobiological Perspective. In: Manstead A.S.R./Frijda N./Fischer A. (Eds) Feelings and Emotions: The Amsterdam Symposium. Studies in Emotion and Social Interaction. Cambridge: Cambridge University Press, 49-57, doi:10.1017/CBO9780511806582.004 (2004).
  2. For the sake of explanation, this article relies on the latest OECD definition of 23 November 2023, which will be integrated in Article 3 of the EU AI Act, according to which: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.
  3. https://www.driverknowledgetests.com/resources/what-are-sleepiness-or-fatigue-sensors-in-vehicles/.
  4. Although no official definition of “emotional data” currently exists in EU legislation, in this article we use our own definition of emotion data and emotional data processing, as follows: ‘emotion data is considered as data representing the emotional, psychological or physical status of natural persons by identifying and processing their (facial) expressions, movements, behaviour or other physical, physiological or mental characteristics’.
  5. Ayça A./Burkhard S./Lachlan U., How do you solve a problem like Alexa? In: Jusletter IT 30 March 2023, https://jusletter-it.weblaw.ch/en/issues/2023/30-maerz-2023/how-do-you-solve-a-p_f3b19c54f7.html__ONCE&login=false (2023).
  6. http://www.audeering.com.
  7. https://www.eyeris.ai/.
  8. See https://spectrum.ieee.org/tesla-autopilot-data-scope.
  9. Gupta, K./Lazarevic, J./Suen Pai, Y./Billinghurst, M. Towards VR Personalized Emotion Recognition, 26th ACM Symposium on Virtual Reality Software and Technology, VRST 2020 Virtual, Online 1-4 November 2021, 1-3.
  10. This was pointed out in the Irish DPA ruling of 31 December 2022 on behavioural advertising. After the EDPB binding dispute decision (Binding Decision 5/2022 on the dispute submitted by the Irish SA regarding WhatsApp Ireland Limited (Art. 65 GDPR)), EDPB Chair Andrea Jelinek said: “The EDPB binding decisions clarify that Meta unlawfully processed personal data for behavioural advertising. Such advertising is not necessary for the performance of an alleged contract with Facebook and Instagram users. These decisions may also have an important impact on other platforms that have behavioural ads at the centre of their business model”. The EDPB found in both cases that Meta IE lacked a legal basis for this processing and therefore unlawfully processed these data. As a consequence, the EDPB instructed the IE DPA to amend the finding in its draft decisions and to include an infringement of Art. 6(1) GDPR.
  11. For basics on patentability rules in the European system see, e.g., Matthews, D./Torremans, P. European Patent Law, Berlin-Boston, De Gruyter (2023).
  12. WIPO, SCP/15/3, Annex I.
  13. Case Law of the EPO Boards of Appeal, Part I, Chapter B, item 2.2.2(b). See also Ballardini, R.M./van den Hoven van Genderen, R./Nokelainen, T. Legal Incentives for Innovations in the Emotional AI Domain: A Carrot and Stick Approach? (forthcoming in 2024).
  14. See Audiovisual Media Services Directive (EU) 2018/1808, Art. 9, para 1(b).
  15. McConnell, J.V./Cutler, R.L./McNeil, E.B. Subliminal stimulation: An overview. In: American Psychologist, Vol. 13, No. 5, 229–242, https://doi.org/10.1037/h0042953 (1958).
  16. Charter of Fundamental Rights of the European Union, Official Journal of the European Union C83, vol. 53, European Union, 2010, 380.
  17. According to Art. 3 CFR: ‘1. Everyone has the right to respect for his or her physical and mental integrity. 2. In the fields of medicine and biology, the following must be respected in particular: (a) the free and informed consent of the person concerned, according to the procedures laid down by law; [...] (c) the prohibition of the reproductive cloning of human beings’. These two areas in particular are implicated by the use of emotional data.
  18. IAPP, ‘What does Privacy mean?’, available at: https://iapp.org/about/what-is-privacy/ (last accessed 4.8.2023). See also van den Hoven van Genderen, R. Privacy Limitation Clauses: Trojan Horses Under the Disguise of Democracy, Wolters Kluwer, Alphen aan den Rijn (2016), on the concept of informational self-determination as created by the German Constitutional Court, the “Bundesverfassungsgericht”, in 1983. Privacy may entail a right to a lack of disclosure of personal information but at the very least also contains a right to selective disclosure of personal information (ch 1, at 5).
  19. In the context of healthcare, see, e.g., Corrales Compagnucci, M./Fenwick, M./Haapio, H./Minssen, T./Vermeulen, E.P.M. ‘Technology-Driven Disruption of Healthcare & “UI Layer” Privacy-by-Design’. In: Corrales Compagnucci, M./Wilson, M.L./Fenwick, M./Forgó, N./Bärnighausen, T. (eds), AI in eHealth: Human Autonomy, Data Governance & Privacy in Healthcare. CUP, Cambridge (2022).
  20. Van der Sloot, B., ‘Privacy as Personality Right: Why the ECtHR’s Focus on Ulterior Interests Might Prove Indispensable in the Age of “Big Data”’. In: Utrecht Journal of International and European Law, Vol. 31, No. 80, 25-50 (2015).
  21. Burgess, P.J./Floridi, L./Pols, A./van den Hoven, J. Towards a digital ethics, EDPS Ethics Advisory Group Report 2018, 11, available at: https://edps.europa.eu/sites/default/files/publication/18-01-25_eag_report_en.pdf (last accessed 4.8.2023) (2018).
  22. COM(2017) 10 final, 2017/0003 (COD), Proposal for a Regulation of the European Parliament and of the Council concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications).
  23. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/european-data-strategy_en.
  24. See more extensively: Ballardini, R./van den Hoven van Genderen, R., https://fromlaplandwithlaw.blogspot.com/.
  25. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts COM/2021/206 final, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206. At the time of writing (December 2023) the amended text of the AIA has been accepted by the European Parliament and is being negotiated with the European Council and Commission. The Member States in the Council will have to adapt this text to their doubts and convictions.
  26. Brown, L.M. Preventive Law, Prentice-Hall, New York (1950).
  27. See also recitals 35 and 36 on education and the workplace. In the trilogue text of 26 November the following applications were banned under Article 5: “Recognising the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit: biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race); untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases; emotion recognition in the workplace and educational institutions; social scoring based on social behaviour or personal characteristics; AI systems that manipulate human behaviour to circumvent their free will; AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation)”.
  28. AIA, Art. 3.
  29. Ibid.
  30. Berger-Walliser, G./Østergaard, K., Proactive Law in a Business Environment, 1st edn, Tilst, DJØF Publishing, Jurist- og Økonomforbundets Forlag, 16 (2012).
  31. Siedel, G.J./Haapio, H. Using Proactive Law for Competitive Advantage (August 1, 2010). Ross School of Business Paper No. 1148, available at SSRN: https://ssrn.com/abstract=1664561 or http://dx.doi.org/10.2139/ssrn.1664561 (2010).