Jusletter IT

Trust in Automated Vehicles from a Sociological Perspective

  • Authors: Thomas Zenkl / Martin Griesbacher
  • Category of articles: Autonomous Driving
  • Category: Articles
  • Region: Austria
  • Field of law: Applications
  • Collection: Conference proceedings IRIS 2020
  • DOI: 10.38023/962185e1-c563-4193-a56a-c73280c86461
  • Citation: Thomas Zenkl / Martin Griesbacher, Trust in Automated Vehicles from a Sociological Perspective, in: Jusletter IT 27 May 2020
Contemporary human factors research is often based on individual perceptions and neglects the effects of connected and automated mobility (SAE Level 3 and above) on stakeholders outside the relationship between end-users and manufacturers. Within Project VERDI, we seek to broaden this view by investigating a societally grounded understanding of trust and trustworthiness towards highly automated vehicles. We argue that it is necessary to include the perspectives of the variety of stakeholders potentially affected by the ongoing automation of individual mobility in order to investigate how trust is mediated and shaped through society.

Table of Contents

  • 1. Introduction
  • 2. Public Opinions about Automated Vehicles
  • 3. Trust in Automated Vehicles
  • 4. Conclusion
  • 5. References

1.

Introduction¹

[1]

Trust is not only considered a central concept for the interaction between humans, but also for the adequate use of automated systems: inappropriate trust («over-trust») can lead to a system’s misuse or abuse, too little trust («under-trust») to its rejection, i.e. non-use. It plays a leading role in determining people’s willingness to rely on automated systems – just as it does in interpersonal relationships [Hoff/Bashir 2015, 3]. A prominent view in the domain of psychology is that the expected benefits of highly automated driving systems, defined by the SAE standards and referred to here as SAE Level 3 (e.g. lane assist functions and the limited possibility for drivers to take their hands off the steering wheel) [SAE 2018], can only manifest if users encounter automation with a degree of «calibrated trust», thus knowing the limits of the system’s functions and in which situations it is appropriate to trust it [Muir 1994]. But beyond individual users placing «trust» in their automated devices, a technical innovation with the potential to alter society’s perception of personal mobility and to put human lives in danger cannot be analysed from an individualistic point of view alone; it must be considered as embedded within manifold societal and media discourses that establish, form and influence (dis)trust towards it. For our purpose, it is therefore crucial to see the complex interlacing of both collective and individual perceptions in the formation of trust towards automation: the (objectively) most (un)reliable system might resist an individual’s proper calibration of (dis)trust because of collectively made assumptions (e.g. about the manufacturing brand, or widespread criticism of similar technologies) that frame the process of «trusting» prior to any personal assessment. On the other hand, these societal attributions form through the aggregation of individual experiences, advertising, legislation and media coverage, all of which contribute to discourses on the topic and thereby establish the framework in which the actor’s calibration of trust takes place. We want to acknowledge the importance of psychological analysis in human factors research and in the formation of trust towards automation, yet advocate for a stronger focus on the social parameters that, in our understanding, initially frame the possibilities of «trust». Such an analysis not only focusses on public opinions, identifying them as a key barrier to, or enabler of, the future adoption of automated vehicle (AV) technologies [Cunningham et al. 2018]; it also guides our attention towards the manifold stakeholders in road traffic affected by this innovation [Penmetsa et al. 2019; Reig et al. 2018].

[2]

This article combines an overview of current empirical findings on automated vehicles (AVs) with the most important sociological considerations on the concept of trust, and furthermore explores how trust towards an automated device can be framed.

2.

Public Opinions about Automated Vehicles

[3]

Perceived problems with (and concerns over) a new technology significantly influence the amount of trust that people are willing to place in it. If concerns about certain technologies are high, people tend to avoid them rather than seek ways of using them more securely [Riek/Abramova/Böhme 2017]. Trust is created in the light of expectations, which raises the importance of taking the public’s considerations on technological development seriously in order to analyse the societal formation of trust and its alteration over time [Bordum 2004]. This is especially relevant for automation that bears the potential of harming oneself or others at any time of use. With the introduction of AVs, experts and industry players promise manifold benefits, including increased road safety, less traffic congestion and greater comfort than manual driving [Underwood 2014; Sommer 2013]. Nevertheless, there still seem to be strong reservations about fully handing over control and putting one’s life completely in the hands of an AV [Schoettle/Sivak 2015; Payre et al. 2014; Wolf 2015]. Potential users would furthermore like the ability to regain control over their car at any time and switch back to manual driving, even if the automation were capable of driving completely autonomously [Cunningham et al. 2018]. This indicates distrust, or concerns over potentially losing the joy of driving. It must be mentioned that the majority of contemporary research on (highly) automated driving is focused on fully autonomous driving (SAE Level 5), neglecting the gradual process of automation. This refers to the ambiguity of unknown deployment paths and the ongoing debate as to whether AVs implementing SAE Level 3 will even prove to be practical [Fagnant/Kockelman 2015; Underwood 2014] or whether instead only fully automated cars will succeed on the market. Given the special requirements imposed on a «fall-back ready user»², these considerations are crucial for analysing the driver’s inevitable reconfiguration of their own role and relationship to driving, and therefore their formation of trust towards AVs (e.g. the loss of driving competence due to the decreasing time spent practising driving and, simultaneously, the increasing demand to quickly take over control in difficult driving situations).

[4]

Apart from the loss of autonomy and the adaptation of the «driver’s» role towards becoming a mere «user» or passenger of an automated device, the topic of road safety creates ambiguous expectations, revealing a largely positive attitude towards the deployment of AVs, but with some reservations. On the one hand, AVs are expected to raise road safety by eliminating the human factor, which is prone to lapses of concentration, attentiveness and reaction, and to distraction; on the other hand, users do not seem to fully trust the machine either, or at least identify other emerging issues (hacking, malfunction, surveillance), leading to a shift in the areas of concern. With ongoing automation, the safety concerns of the progressively disempowered driver are substituted by reservations towards the automated systems now in charge [Hulse et al. 2018]. Further concerns arise around the topics of security [Schoettle/Sivak 2014; Kyriakidis et al. 2015; Bansal/Kockelman 2018], legal liability [Howard/Dai 2014; Kyriakidis et al. 2015] and privacy [Kaur/Rampersad 2018; Kyriakidis et al. 2015; Cunningham et al. 2018].

[5]

But the expectations and concerns expressed by potential future users, targeted by the majority of research on AVs, should not be confused with the general opinion of the wider public; a technical innovation like automated driving is going to affect all groups of road users that have to interact with it. If the implementation of AVs is not restricted to special isolated lanes or streets, the perceptions of passengers, pedestrians, cyclists and other (manual) drivers must be considered as well. The sociological contribution to this realm must be to highlight these opinions and situate them within social constraints, opening two perspectives:

  • The central role of public discourses, mediated and influenced through day-to-day experiences, advertising and media coverage, is often neglected in favour of a potential buyer’s perspective. In order to emphasise that «trust» is negotiated and formed within a wider public, and not merely by users, all affected groups need to be included in the analysis. Only a few studies attempt such a broadening of view [Hulse et al. 2018; Deb et al. 2017; Penmetsa et al. 2019]; other research expands the user’s perspective by asking whether respondents would feel comfortable if close relatives interacted with AVs [Cunningham et al. 2018; Deb et al. 2017].
  • Opinions and attitudes differ strongly along parameters of social structure. Men were found to be more comfortable with the idea of riding in an AV than women, and younger people were found to be more receptive than older people [e.g. European Commission 2015; Nees 2016; Payre et al. 2014]. Regional differences, social status, personal innovativeness, experience and risk-taking were also found to be relevant social factors [Bansal/Kockelman 2018; Howard/Dai 2014; Kyriakidis et al. 2015]. Explaining these influences is a necessary step towards a holistic analysis of trust in automation.
[6]

These findings need to frame sociology’s theoretical considerations, as they not only offer further explanations of complex processes and contribute to a discussion currently focused merely on psychological factors, but also highlight the importance of a holistic analysis that includes social constraints as well as the perspectives of all stakeholders involved in road use.

3.

Trust in Automated Vehicles

[7]

While many studies research public acceptance of AVs, it is crucial to realize the conceptual difference between acceptance and trust; acceptance can be habitual or may result merely from the lack of a better option, even in the absence of trust (e.g. mediated through social pressure) [Bordum 2004]. The public’s acceptance of AVs might therefore indicate a certain level of trust, but should not be confused with the perception of AVs’ trustworthiness. Levels of concern can be high in some areas where the level of acceptance is equally high [Reichmann/Griesbacher 2018].

[8]

Sociology offers a broad variety of approaches for analysing trust towards people, groups, families, neighbours, organizations or even societies, which all share a notion of «trust» as something conceived not as an isolated property of individuals, but as an attribute of social relationships produced in processes of collective attribution. However, traditional literature in both sociology and psychology does not discuss technologies as objects of trust [Corritore et al. 2003, 739]. By focussing on social relationships rather than personal opinions, trust can be conceptualized as the mutual faithfulness on which all social systems ultimately depend [Lewis/Weigert 1985]. From the perspective of systems theory, trust shapes expectations in social relationships and serves the function of reducing complexity so that a social system can function properly [Luhmann et al. 2017]; it is something that we ascribe in a relationship between us and something else given in a situation [Bordum 2004]; it is therefore not a relationship itself, but a property of relations and a «facilitator of interactions among the members of a system» [Taddeo 2017, 1]. Since there is no need for trust without risk, the relationship referred to must contain an element of uncertainty or vulnerability. Emerging trust is therefore the «optimistic acceptance of a vulnerable situation which is based on positive expectations of the intentions of the trusted individual or institution» [Hall et al. 2001, 615].

[9]

In this understanding, trust is the result of an assessment of a trustee’s (someone that is trusted) trustworthiness by a trustor (someone that trusts). This assessment is defined as «the set of beliefs that the trustor holds about the potential trustee’s abilities, and the probabilities she assigns to those beliefs» [Taddeo 2010, 6] and reflects the trustor’s experience with the trustee’s previous performance, as well as beliefs and expectations mediated by society. Society therefore provides the initial framework of what, in what way, and for what tasks something can be trusted – especially outside of relationships with longer personal histories (e.g. between family members or close friends). Social constraints thus precede the cognitive processes of calibrating trust and shape the initial level of perceived trustworthiness and trust towards something; a collective assessment influences the personal one. Accordingly, it must be acknowledged that trust might differ not only interpersonally or due to demographic factors, but also interculturally. The evident differences in trust towards AVs prior to their introduction that manifest in some studies [e.g. Kyriakidis et al. 2015] could be explained with Floridi’s distinction of mature and immature information societies «in terms of their members’ unreflective and implicit expectations» [Floridi 2016, 3] to rely (or not) on digital or automated technologies. Maturing information societies are specified as hybrid systems of both human and artificial agents that have transformed their mutual expectations into a widespread and resilient form of trust [Taddeo 2017, 3]. These considerations also offer an answer to the question as to whether it is even legitimate to talk about «trust» or «trustworthiness» in the case of machines or automation. Because of a machine’s lack of moral agency (and therefore a lack of intention to harm), it has been argued that technologies cannot be conceived of as «trustworthy» and that the notion of trust is a mere assessment of their previous performance (reliability) [Corritore et al. 2003, 739]. However, the focus on trust processes offers a more complex and realistic approach towards humans’ adoption of, and interaction with, AVs. While end-users can experience reliability predominantly through the observation of visible behaviour (e.g. steering in the course of lane assist functions), AVs involve a broad spectrum of non-visible processes (e.g. the handling of data, decision processes through algorithms, observation of the driver). The concept of trust, however, always involves an element of uncertainty, expressed by the issue of the trustee’s non-observable behaviour and therefore the necessity of trust for engaging in an interaction. This is especially relevant when considering other road users who, in contrast to users with their possibilities to obtain information on reliability and failures from the vehicle, lack such a source of information. Despite not being moral agents, AVs are perceived and responded to as social agents through their immediate social presence (e.g. the possibility to interact with them) [Corritore et al. 2003, 745]. Additionally, and contrary to the concept of mere reliability, trust emphasizes the aforementioned aspects of uncertainty and vulnerability as key elements of the interaction between humans and machines in traffic situations.
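
Taddeo’s probabilistic definition lends itself to a compact formal illustration. What follows is a minimal sketch in our own notation, not a model taken from Taddeo’s work: the symbols p_i, u_i, τ and θ are assumptions introduced here purely for clarity.

\[
\mathrm{Trustworthiness}(\tau) \;=\; \sum_{i=1}^{n} p_i \, u_i(\tau), \qquad \text{trust for task } \tau \iff \mathrm{Trustworthiness}(\tau) \ge \theta
\]

Here the trustor holds beliefs b_1, …, b_n about the trustee’s abilities, p_i is the subjective probability assigned to belief b_i, u_i(τ) expresses how strongly the corresponding ability supports the task τ, and θ is the threshold above which the trustor accepts the vulnerability of relying on the trustee. On this reading, «calibrated trust» [Muir 1994] amounts to keeping the p_i close to the system’s actual performance, while the societal framing discussed above sets the initial values of the p_i and of θ before any personal assessment takes place.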

[10]

This notion of expanding trust towards artificial agents refers to the ongoing discussion about e-trust [Taddeo 2010], cyber trust [Etzioni 2019] and on-line trust [Corritore et al. 2003], that is, concepts of trust specifically developed in digital contexts and/or involving artificial agents or hybrid systems³ [Taddeo/Floridi 2011; Taddeo 2017]. Even though this discussion concerns «trust», rather than mere «reliability», towards AVs, human-automation trust should not be confused with interpersonal trust. Despite similar factors (situation-specific attitudes relevant in uncertain situations), several qualitative distinctions characterize the relationship between humans and automation. These differences manifest in reverse progressions in the process of trust formation⁴, as well as in the differing attributes⁵ on which the trusting relationship depends [Hoff/Bashir 2015, 11].

[11]

However, it could be argued that the trusting relationship an end-user engages in when operating an AV does not address the vehicle itself, but the brand, the company, and therefore the engineers and designers that created the automation. Even though research shows varying levels of trust mediated through brand associations [Carlson et al. 2014], with higher levels of perceived trustworthiness attributed to well-known brands [Reig et al. 2018], the AV’s immediate physical presence and the user’s direct interaction with it (and not with remote engineers) suggest the formation of a relationship towards the automation that is affected by the brand (and the societal expectations associated with it). This follows findings that people enter relationships with computers, websites and other new media (rather than with the companies running them) because of their «social presence», that is, technology appearing as a social actor [Corritore et al. 2003, 739f].

[12]

Another interesting approach to analysing trust, especially in traffic, was brought up by the theoretical strand of symbolic interactionism, which focusses on the societal negotiation of norms and conventions on the basis of the meaning ascribed to objects and actions (symbols), finally resulting in what can be perceived as social order. Traffic rules and the consequent traffic order provide an obvious example [Goffman 1966]. «Conformity to rules is supplemented with by-passings, secret deviations, excusable infractions, flagrant violations and the like» [Goffman 1971, x, cited in Conley 2012, 222], «so a social order is a product of both rules – the traffic code – and the manoeuvrings of actors within and beyond its constraints» [Conley 2012, 222]. Traffic can therefore be considered a diffuse social occasion in which mobile actors come into each other’s presence as part of «a wider social affair, undertaking, or event ... [that] provides the structuring social context in which many situations and their gatherings are likely to form, dissolve, and re-form» [Goffman 1963, 18, cited in Conley 2012, 222]. Social interactions between motorists are therefore not only an integral part of the traffic situation, in which individuals confront one another, define one another, and negotiate with one another in coordinating their movement; this theoretical approach also highlights the importance of driver interaction for the establishment of mutual trust [Dannefer 1977]. Any collision is likely to result in serious to fatal injuries and damage, indicating the great degree of trust that is implicit in automobile traffic. Going beyond symbolic interactionism’s traditional assumption that trust emerges from social interaction [Gawley 2007, 48], it has been pointed out that this specific form, implicit trust, is not a result of a driver’s (or other road user’s) individual experience, but an integral part of modern society’s socialisation. Because road users are typically unaware of the risk involved in driving, this implicit form of trust in traffic only ever becomes manifest when it is disrupted (e.g. by an accident or when passengers perceive their driver as incompetent) [Dannefer 1977, 37]. For the case of AVs, three new perspectives emerge:

[13]

Firstly, it will have to be investigated whether and how implicit trust in different traffic situations is affected or even disrupted by automated driving. The sight of another driver reading a book while driving on a highway could be disturbing for other drivers, leading to a recalibration of their trust towards the motorist and her vehicle that results in behavioural changes (e.g. not daring to overtake, or trying to get ahead of the perceived «threat» as quickly as possible).

[14]

Secondly, the question of how the implicit societal trust in traffic is shaped and affected in public discourse needs to be highlighted. If social order is based on a frame of reference that is encoded into symbols (objects and acts) of shared meaning (mediated through discourse and manifested in public opinions), the investigation must ask how these (cultural) codes are affirmed, reproduced and altered, and finally how societal discourse can shape the calibration of trust towards AVs.

[15]

Thirdly, the theoretical approach of symbolic interactionism guides our attention towards the interaction between different actors, bringing into focus the inevitable recalibration of interactions caused by the introduction of AVs. By identifying the generic social dimension inherent in the driving situation [Dannefer 1977], not just the driver’s interaction with AVs and driving assistance systems, but also other road users’ interactions with an AV could influence road safety, raising the question of how (and by what means) AVs can and must symbolically interact with others (e.g. with warning lights or sounds).

4.

Conclusion

[16]

Despite its origins in human interaction, trust is a useful concept for research on people’s attitudes towards automated and autonomous systems. Trust towards automation is a complex phenomenon, with a variety of disciplines offering different approaches and explanations. With the considerations made above, we want to promote a perspective that, without neglecting psychology’s important findings on trust formation and calibration, focusses on the societal constraints and conditions that frame, shape, and determine people’s potential to put trust in automation and to assess the trustworthiness of automated vehicles.

[17]

We argue that it would be insufficient to reduce humans’ relationships to machines to a mere assessment of the latter’s previous performance (reliability); as recent research has shown, people tend to project human abilities and properties onto automated devices. Despite their lack of moral agency, machines are often perceived as social agents through their immediate social presence. Even though it could be argued that the absence of self-interest would suggest assessing them in terms of their reliability, for human actors the uncertainty and vulnerability in road traffic situations is not changed by the introduction of AVs. The functioning of road traffic still relies on the formation and maintenance of implicit trust. Despite apparent qualitative differences from trust in interpersonal relationships, we argue that processes of trust formation can be mediated digitally and can therefore occur towards technologies such as machines, websites, distributed services or artificial agents («e-trust», «on-line trust», «cyber trust»).

[18]

By framing trust from a sociological standpoint, we want to broaden the view towards an understanding of trust that is societal, and therefore inclusive of often-neglected stakeholders in road traffic: passengers, cyclists, pedestrians, etc. Because everyone participating in road traffic will potentially be affected by the introduction of AVs, it is necessary to include this diverse set of actors’ opinions in any analysis. Additionally, when we seek to understand the adoption of any technology, we need to consider the societal negotiation of a technological invention’s potential trustworthiness in public discourses. Public opinions, expressed as expectations and concerns towards a future deployment of AVs on public roads, must therefore be central when empirically investigating trust. In short, if people are not willing to accept a new technology, or meet it with strong reservations, they are likely to avoid, reject, misuse or abuse it, thereby hindering broader adoption despite potential advantages (e.g. increased road safety).

[19]

This shift of focus away from individual users and towards a holistic perspective rooted in society opens up further aspects of the societal debate on AVs: the potential for social inclusion by allowing excluded groups to participate in individual traffic might be outweighed by AVs’ impact on the reorganization of the labour market; and the prospect of a decrease in crashes and emissions might not justify the necessity of extensively monitoring the user and permanently surveilling traffic (and everyone participating in it), which could shift public discourse towards negative perceptions of AVs.

[20]

By identifying the generic social dimension inherent to the driving situation, road users’ interactions with an AV, besides those of the driver, have come into focus, raising the question of how (and by what means) AVs can and must symbolically interact with others (e.g. warning lights or sounds). Any collision is likely to result in serious to fatal injuries and damage, indicating the great degree of trust that is implicit in automobile traffic. As an integral part of modern society’s socialisation, this implicit and mediated level of trust only becomes evident when suddenly disrupted. Research into AVs’ trustworthiness must therefore consider the effect of automated driving on implicit trust in traffic, the shaping and re-negotiation of implicit trust by the introduction of AVs and, finally, the new forms of interaction between road users and artificial agents.

[21]

The path of AV deployment on public streets is still open, as is the question of whether standards like SAE Level 3 will ever prove to be practical. However, trust towards AVs and the trustworthiness of automated devices will always be part of societal debate and therefore negotiated collectively, framing expectations, concerns, and the individual’s potential to trust. With our research we want to contribute to an interdisciplinary approach to analysing trust towards automation and advocate a holistic perspective that, in addition to considering all stakeholders involved, takes into account the manifold societal discourses that shape the public’s expectations and concerns.

5.

References

Bansal, Prateek/Kockelman, Kara M., Are we ready to embrace connected and self-driving vehicles? A case study of Texans, Transportation, 2018, Volume 45, Issue 2, pp. 641–675. DOI: 10.1007/s11116-016-9745-z.

 

Bordum, Anders, Trust as a critical concept, Center of Market Economics, Copenhagen Business School, Frederiksberg 2004.

 

Carlson, Michelle S./Desai, Munjal/Drury, Jill L./Kwak, Hyangshim/Yanco, Holly A., Identifying Factors that Influence Trust in Automated Cars and Medical Diagnosis Systems, AAAI Spring Symposium: The Intersection of Robust Intelligence and Trust in Autonomous Systems, 2014, pp. 20–27.

 

Cunningham, Mitchell/Ledger, Selena/Regan, Michael, A survey of public opinion on automated vehicles in Australia and New Zealand, 28th ARRB International Conference – Next Generation Connectivity, Brisbane 2018.

 

Conley, Jim, A Sociology of Traffic: Driving, Cycling, Walking. In: Vannini, Phillip (Ed.), Technologies of Mobility in the Americas, Peter Lang, Oxford & Bern 2012, pp. 219–236.

 

Corritore, Cynthia L./Kracher, Beverly/Wiedenbeck, Susan, On-line trust: concepts, evolving themes, a model, International Journal of Human-Computer Studies, 2003, Volume 58, pp. 737–758.

 

Dannefer, W. Dale, Driving and Symbolic Interaction, Sociological Inquiry, 1977, Volume 47, Issue 1, pp. 33–38.

 

Deb, Shuchisnigdha/Strawderman, Lesley/Carruth, Daniel W./Dubien, Janice/Smith, Brian/Garrison, Teena M., Development and validation of a questionnaire to assess pedestrian receptivity toward fully autonomous vehicles, Transportation Research Part C: Emerging Technologies, 2017, Volume 84, pp. 178–195.

 

Etzioni, Amitai, Cyber Trust, Journal of Business Ethics, 2019, Volume 156, Issue 1, pp. 1–13.

 

European Commission, Eurobarometer 427, Autonomous Systems, European Commission, 2015.

 

Fagnant, Daniel J./Kockelman, Kara, Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations, Transportation Research Part A: Policy and Practice, 2015, Volume 77, pp. 167–181.

 

Floridi, Luciano, Mature Information Societies – a Matter of Expectations, Philosophy & Technology, 2016, Volume 29, Issue 1, pp. 1–4.

 

Gawley, Tim, Revisiting Trust in Symbolic Interaction: Presentations of Trust, Qualitative Sociology Review, 2007, Volume 3, Issue 2, pp. 46–63.

 

Goffman, Erving, Behaviour in public places, The Free Press, New York 1966.

 

Goffman, Erving, Relations in public, Microstudies of the public order, Harper & Row, New York 1971.

 

Hall, Mark A./Dugan, Elizabeth/Zheng, Beiyao/Mishra, Aneil K., Trust in Physicians and Medical Institutions: What Is It, Can It Be Measured, and Does It Matter?, The Milbank Quarterly, 2001, Volume 79, Issue 4, pp. 613–639.

 

Howard, Daniel/Dai, Danielle, Public Perceptions of Self-Driving Cars: The Case of Berkeley, https://www.ocf.berkeley.edu/~djhoward/reports/Report%20-%20Public%20Perceptions%20of%20Self%20Driving%20Cars.pdf (accessed on 31 July 2019), 2014.

 

Hoff, Kevin/Bashir, Masooda, Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust, Human Factors, 2015, Volume 57, Issue 3, pp. 407–434.

 

Hulse, Lynn M./Xie, Hui/Galea, Edwin R., Perceptions of autonomous vehicles: Relationships with road users, risk, gender and age, Safety Science, 2018, Volume 102, pp. 1–13.

 

Kaur, Kanwaldeep/Rampersad, Giselle, Trust in driverless cars: Investigating key factors influencing the adoption of driverless cars, Journal of Engineering and Technology Management, 2018, Volume 48, pp. 87–96.

 

Kyriakidis, Miltos/Happee, Riender/de Winter, Joost C. F., Public opinion on automated driving: Results of an international questionnaire among 5000 respondents, Transportation Research Part F: Traffic Psychology and Behaviour, 2015, Volume 32, pp. 127–140.

 

Lee, John/Moray, Neville, Trust, control strategies and allocation of function in human-machine systems, Ergonomics, 1992, Volume 35, Issue 10, pp. 1243–1270.

 

Lewis, J. David/Weigert, Andrew, Trust as a Social Reality, Social Forces, 1985, Volume 63, Issue 4, pp. 967–985.

 

Luhmann, Niklas/King, Michael/Morgner, Christian/Davies, Howard/Raffan, John/Rooney, Kathryn, Trust and Power, Polity Press, Newark 2017.

 

Mayer, Roger C./Davis, James H./Schoorman, F. David, An Integrative Model of Organizational Trust, Academy of Management Review, 1995, Volume 20, Issue 3, pp. 709–734.

 

Muir, Bonnie M., Trust in automation: Part I, Theoretical issues in the study of trust and human intervention in automated systems, Ergonomics, 1994, Volume 37, Issue 11, pp. 1905–1922.

 

Nees, Michael A., Acceptance of Self-driving Cars, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2016, Volume 60, Issue 1, pp. 1449–1453.

 

Payre, William/Cestac, Julien/Delhomme, Patricia, Intention to use a fully automated car: Attitudes and a priori acceptability, Transportation Research Part F: Traffic Psychology and Behaviour, 2014, Volume 27, pp. 252–263.

 

Penmetsa, Praveena/Adanu, Emmanuel Kofi/Wood, Dustin/Wang, Teng/Jones, Steven L., Perceptions and expectations of autonomous vehicles – A snapshot of vulnerable road user opinion, Technological Forecasting and Social Change, 2019, Volume 143, pp. 9–13.

 

Reichmann, Stefan/Griesbacher, Martin, Deliverable D3.1 of TRUESSEC.eu, Public Perceptions of Cybercrime and Cyber Security in the EU, https://truessec.eu/content/deliverable-31-public-perceptions-cybercrime-and-cybersecurity-eu (accessed on 10 November 2019), 2018.

 

Reig, Samantha/Norman, Selena/Morales, Cecilia G./Das, Samadrita/Steinfeld, Aaron/Forlizzi, Jodi, A Field Study of Pedestrians and Autonomous Vehicles, Proceedings of the 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, 2018, pp. 198–209.

 

Riek, Markus/Abramova, Svetlana/Böhme, Rainer, Analyzing Persistent Impact of Cybercrime on the Societal Level: Evidence for Individual Security Behavior, Thirty-Eighth International Conference on Information Systems, South Korea, 2017.

 

SAE, Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, SAE, Warrendale 2018.

 

Schoettle, Brandon/Sivak, Michael, A Survey of Public Opinion about Autonomous and Self-Driving Vehicles in the U.S., the U.K., and Australia, University of Michigan, Transportation Research Institute, Michigan 2014.

 

Schoettle, Brandon/Sivak, Michael, Motorists’ Preferences for Different Levels of Vehicle Automation, University of Michigan, Transportation Research Institute, Michigan 2015.

 

Sommer, Klaus, Continental Mobilitätsstudie 2013, http://cdn.pressebox.de/a/1bee22ce89b60e0a/attachments/0633780.attachment/filename/Continental_Mobilit%C3%A4tsstudie_2013_Ergebnisse_DE.pdf (accessed on 2 October 2019), 2013.

 

Taddeo, Mariarosaria, Modelling trust in artificial agents, a first step toward the analysis of e-trust, Minds and Machines, 2010, Volume 20, Issue 2, pp. 243–257.

 

Taddeo, Mariarosaria, Defining Trust and E-Trust, International Journal of Technology and Human Interaction, 2009, Volume 5, Issue 2, pp. 23–35.

 

Taddeo, Mariarosaria/Floridi, Luciano, The case of e-trust, Ethics and Information Technology, 2011, Volume 13, Issue 1, pp. 1–3.

 

Taddeo, Mariarosaria, Trusting Digital Technologies Correctly, Minds and Machines, 2017, Volume 27, Issue 4, pp. 565–568.

 

Underwood, Steven E., Automated Vehicles Forecast, Automated Vehicle Symposium, https://path.berkeley.edu/sites/default/files/pre_12.2014_automatedvehiclessymp.pdf (accessed on 25 October 2019), 2014.

 

Wolf, Ingo, Wechselwirkung Mensch und autonomer Agent. In: Maurer, Markus/Gerdes, J. Christian/Lenz, Barbara/Winner, Hermann (Eds.), Autonomes Fahren, Springer, Berlin Heidelberg 2015, pp. 103–125.

  1. This article presents research from the project VERDI («Vertrauen in Digitalisierung am Beispiel von Systemen zum [teil-]autonomen Fahren und Fahrassistenzsystemen»). VERDI aims to investigate trust and trustworthiness towards SAE Level 3 from a multidisciplinary perspective, including law, ethics, psychology and sociology, and is funded by the Zukunftsfonds Steiermark.
  2. In this mode, a driver, or more precisely a fall-back ready user, is «considered to be receptive to a request to intervene and/or to an evident vehicle system failure, whether or not the ADS (automated driving system, author’s note) issues a request to intervene as a result of such vehicle system failure» [SAE 2018, 14].
  3. «E-trust occurs in environments where direct and physical contacts do not take place, where moral and social pressures can be differently perceived, and where interactions are mediated by digital devices» [Taddeo 2011, 7].
  4. Contrary to interpersonal trust, which progresses from a cautious start towards faith or benevolence, human-automation trust often progresses in reverse order, with a high level of initial trust in a «perfect» machine that is prone to dissolve rapidly after automation failure [Hoff/Bashir 2015, 11f].
  5. Interpersonal trust can be based on ability, integrity or benevolence [Mayer et al. 1995], whereas trust towards automation is defined by performance, process and purpose [Lee/Moray 1992].