1. Trust and the Digital Single Market
As early as 20 years ago, Bons et al. [1998] observed that trust, or rather a lack thereof, is one of the main reasons for consumers (and companies) not to engage in online commerce [Castelfranchi/Tan 2001, xvii f.; see also e.g. Fahrenholtz/Bartelt 2001]. In online commerce, «problems of trust are magnified, because agents reach out far beyond their familiar trade environments» [Falcone/Castelfranchi 2001, 55], sometimes crossing national and cultural borders. This problem has gained significance in recent years, in particular as it pertains to the EU’s digital single market strategy1, a priority for the current European Commission. The DSM strategy aims at establishing a marketplace where «the free movement of persons, services and capital is ensured and where the individuals and businesses can seamlessly access and exercise online activities under conditions of fair competition, and a high level of consumer and personal data protection».2 It therefore calls for a reinforcement of trust in, and security of, digital services and the handling of personal data.

However, it can be difficult for consumers and businesses to assess the trustworthiness of the products and services they use. Indeed, only 22% of European citizens fully trust digital products and services, and 72% of Internet users are concerned about disclosing personal data online.3 Yet in digitized societies and digitized social institutions, «only perceived reliability counts and determines confidence» [Castelfranchi/Tan 2001, xxiv]. In other words, the reliability of socio-technical systems such as online marketplaces is of little use if it goes unnoticed by (potential) users. Perceptions are often based not on objective empirical data, but on vague social attitudes. Consequently, one very successful way to induce trust has been to demonstrate (or at least to claim) that other users already trust some technology [Castelfranchi/Tan 2001, xxiv]. The TRUESSEC.eu project consortium therefore considers increased transparency a viable solution [Griesbacher/Stelzer/Staudegger 2017]: What is needed is a way to assess the trustworthiness of digital products that does not presuppose expert knowledge on the part of the user.
In the last two decades, an increasing body of literature has focused on e-trust [Taddeo 2009; 2010], i.e. «trust specifically developed in digital contexts and/or involving artificial agents» [Taddeo/Floridi 2011, 1]; see, for example, [Gefen et al. 2008; Grodzinsky et al. 2011; Maadi et al. 2016; Pieters 2011; Spiekermann 2016]. The topics addressed span a wide range of issues, from trust in artificial agents to trust in online environments to more general questions of digitally mediated social interaction. Despite the vast literature that has developed around trust [see e.g. Hardin 2006; Lagerspetz 1998; Seligman 1997; Shapiro 1987; Sztompka 2003 for sociological accounts], the phenomenon itself remains elusive and difficult to operationalize [Lyon et al. 2012; Möllering 2006, 1]. How trust (and the rule of law and respect for human rights) can enhance the DSM remains the fundamental question for the TRUESSEC.eu consortium. Many sociologists [Luhmann 1979, 2001; Barber 1983; Lewis/Weigert 1985a, 1985b; Möllering 2001, 2006] have argued that trust serves a fundamental function for societies, as it lies at the core of social interaction. This can only be appreciated by adopting a holistic perspective [Lewis/Weigert 1985b]; i.e., trust is not only essential for social interaction, but is itself irreducibly social [Lewis/Weigert 2012].
2.1. Classic and Recent Positions
Like Luhmann [1979] and Giddens [1996], Simmel observes that trust serves the reduction of complexity. This reduction gained particular importance in modern industrial societies, where individuals were increasingly forced to rely on expert knowledge they did not understand. Reliance on experts (doctors, lawyers, engineers) is necessary to make informed decisions, but is only possible when individuals trust these experts. Many accounts of modernity [e.g. Giddens 1996] thus stress the social dimensions of trust: The uncertainties and risks modernity entails necessitate a belief in the good intentions of strangers. These views model trust as a kind of behavior that obtains just in those situations where the trustor has little or no reason to believe that the trustee will fulfill his/her expectations [Baier 2001].
Barber [1983] attempts one of the first systematic expositions devoted exclusively to trust by distinguishing three possible instances of trust: (1) trust in the continuity of natural and social orders, (2) trust in actors' technical competence, and (3) trust with respect to moral responsibilities (i.e. of putting others' interests ahead of one’s own). Barber [1983] takes a functional and cognitive stance and describes trust as the expectation that there is a general moral order as well as specific norms of competence and responsibility. However, as Luhmann [2001] points out, such an analysis leaves aside the mechanisms by which trust is established despite the risk of disappointment.
According to the rational choice account of trust [Coleman 1990], action is a rational choice made on the basis of its expected consequences [Nickel et al. 2010, 431]. In the rational choice perspective, for someone to be trustworthy simply means to exhibit predictable behavior of the desired type [Nickel et al. 2010, 432]. Such a model makes trustworthiness indistinguishable from reliability, and it cannot explain the emotional aspects of (betrayed) trust, such as feelings of anger or betrayal. This is one reason for assuming that trust involves specific motivations on the part of the trusted person [ibid.], such as taking the trustor’s interests into account [e.g. Hardin 2001, 2006].
2.2. Trust, Risk, and the Contingency of Social Interaction
Hardin [2001] observed that in deterministic settings, there is no place for trust. In the same vein, Gambetta [1988] and Luhmann [1979] claim that trust involves a moment of uncertainty concerning the behavior of others. For Luhmann [2001], trust is a solution to problems of risk. In pre-modern times, religion served this function; modernity has replaced it with the (shared) acknowledgement that unintended consequences can follow from intentional behavior, i.e. risk. This observation of risk is central to modernity: Moderns experience risk as a consequence of (individual and collective) decisions [e.g. Beck 2017]; trust is granted at the trustor’s risk. For Luhmann [1979], trust solves the same problems at the system level as at the (inter)personal level [Lewis/Weigert 1985b, 462], namely, to deal with an uncertain and complex future and to help sustain relatively stable expectations.
2.3. Trust in the Social System
2.4. Trust as the a priori of Social Interaction
The previous discussion has shown that trust is regarded by many (classic) sociologists as the irreducible foundation of the holistic, meaningful social world with which human beings interact. Sociologically, then, «trust is conceptualized as a reciprocal orientation and interpretive assumption that is shared, has the social relationship itself as the object, and is symbolized through intentional action» [Lewis/Weigert 1985b, 456]. Contrast this with a psychological notion of trust: «trust resides in individuals, focuses on some social object, and is enacted in behavior» [ibid., 456]. However, like shame, guilt, or loyalty, trust is irreducibly social at bottom [Lewis/Weigert 2012, 26]. Regarding trust as a form of association4 opens up the possibility of understanding all psychological and interactional content of trust relationships on the basis of sociological principles [ibid.]. Sociological trust is also irreducible to individual, atomistic trusting behavior. Instead, «it concerns the social order as a moral order» [ibid.], as this form of association cannot be created or destroyed by individuals [Barber 1983; Luhmann 1979].

In the «post-modern condition» [Lyotard 1984], the idea of a coherent social order gradually gives way to a loss of normative coherence (e.g. of beliefs in god or in a general sense of progress). Seligman [1997] observes that the transition from traditional (pre-modern) to post-modern societies implies that trust becomes more essential as the significance of ascribed status-roles declines; the need to solve global problems (e.g. global warming, world poverty) in an increasingly cosmopolitan context [Beck 2017] lends credence to the post-modern predicament, as solutions «require cooperation among nations that are often distrustful and at times bitter rivals» [Lewis/Weigert 2012, 28]. The implications for a post-modern trust culture might be severe, as trust in the continuity of the social order [e.g. Barber 1983] or in the accountability of institutional actors [Sztompka 2003; Beck 2017] seems no longer an option in the post-modern condition.
3.1. Trust as Indispensable for Digitized Social Interaction
As the previous discussion has shown, trusting behavior is indispensable for social interaction, though trust itself is not reducible to trusting behavior. The sociologically interesting questions concern the way(s) trust is actually established. Recently, the notion of e-trust was introduced to describe the form of trust «specifically developed in digital contexts and/or involving artificial agents» [Taddeo/Floridi 2011, 1]. It involves three dimensions [ibid.]: trust in technologies, trust in other users, and trust in technology providers, to which Spiekermann [2016, 114] adds a fourth kind: trust in the engineers who built the technologies in question. For the most part, e-trust is defined as cognitive/rational with behavioral components [e.g. in Golbeck 2009, 3], thus disregarding the irreducibly social dimension of trust.
Trust is normally understood as a relationship between a trustor and a trustee in which the trustor depends on the trustee to fulfill his/her expectations [Taddeo/Floridi 2011, 1]. Part of the reason for such a cognitive emphasis lies in the fact that accounts grounding trust in either motivations [e.g. Hardin 2006] or morality [e.g. Nickel 2009] seem inapplicable to artifacts [Nickel et al. 2010]. However, some, such as Sztompka [2003], postulate a continuum from trust in others' rationality to trust in others' moral integrity; both involve a bet on future actions. Betting on moral virtues (that others will take my interests into account) is riskier than merely betting on their basic rationality. An even more demanding expectation would have others place my interests before their own [see e.g. Barber 1983].
According to Taddeo [2010], both (analogue) trust and (digital) e-trust ultimately rest on an (objective) assessment of the trustee’s trustworthiness by the trustor. This assessment is defined as «the set of beliefs that the trustor holds about the potential trustee’s abilities, and the probabilities she assigns to those beliefs» [Taddeo 2010, 247]. An assessment of trustworthiness includes objective criteria such as previous experience with the trustee’s performance. This feature of trustworthiness can be implemented in artificial agents as well, insofar as an artificial agent can be programmed to calculate the ratio of successful actions by the trustee to the total number of its actions [Taddeo 2010].
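To make this ratio-based assessment concrete, the following minimal sketch shows how it could be implemented in an artificial agent; the class and method names are hypothetical illustrations, not part of Taddeo’s [2010] formal model:

```python
class TrustAssessment:
    """Minimal sketch of a ratio-based trustworthiness assessment:
    the trustee's track record is reduced to the ratio of successful
    actions to the total number of observed actions."""

    def __init__(self) -> None:
        self.successes = 0
        self.total = 0

    def record(self, success: bool) -> None:
        # Each observed performance of the trustee updates the record.
        self.total += 1
        if success:
            self.successes += 1

    def trustworthiness(self) -> float:
        # Ratio of successful to total observed actions; with no track
        # record yet, 0.0 is returned by (arbitrary) convention.
        return self.successes / self.total if self.total else 0.0
```

Note that the sketch operates on recorded past performances only, mirroring the point made below that trustworthiness is assessed on the basis of an agent’s history rather than on iterated interaction.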
Modelling trustworthiness in terms of thresholds allows defining trustworthiness as the probability that the trustor will gain by the trustee’s performance (and, conversely, the risk that the latter will not act as expected). In this sense, trustworthiness acts as a guarantee for the trustor that the trustee will act as expected/desired. In Taddeo’s [2010, 248] analysis, trustworthiness is not a general property of a trustee (human or artificial agent), as it does not refer to its general reliability. Instead, trustworthiness means a trustee’s dependability in performing a given task. However, as e.g. Nickel et al. [2010] remark, dependability alone may not be enough, as trust depends on a trustee’s ability to take the trustor’s interests into account. An agent’s trustworthiness is assessed based on that agent’s past performances, and not on iterated interaction (cf. the added «since …» that forms part of many brand names). What can be vested in technologies is not trust, strictly speaking, but rather reliance [Spiekermann 2016, 114].
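Read this way, the threshold model can be illustrated as a simple decision rule on top of the assessment sketched above; the function name and the threshold value are assumptions made for illustration only:

```python
def decide_to_trust(assessment: TrustAssessment, threshold: float = 0.8) -> bool:
    # Hypothetical decision rule: the trustor delegates a task to the
    # trustee only if the estimated trustworthiness meets or exceeds a
    # chosen threshold. The threshold encodes the trustor's risk
    # tolerance; 0.8 is an arbitrary illustrative value.
    return assessment.trustworthiness() >= threshold
```

As noted above, such a rule captures dependability at best; it says nothing about whether the trustee takes the trustor’s interests into account [Nickel et al. 2010].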
3.2. Confidence in versus Reliance on Technology
There is, then, a difference between relying on digital technologies because we have confidence in them and relying on them because we trust them [Spiekermann 2016, 113]. Philosophers such as McLeod [2015] emphasize that trust only obtains when the commitment invested in another party stems from the right reasons, namely care for the other’s interests combined with moral integrity ([Nickel et al. 2010] make a similar point). Luhmann [2001] saw that trust needs to be distinguished from confidence: trust refers to an active decision by the trustor to delegate some aspect of importance to the trustee [Spiekermann 2016, 114; Grodzinsky et al. 2011].
The notion of trust is important in digitized contexts because it forms the bedrock of a theory of agency. Agents (which in the digital realm include other users as well as engineers, institutional agents, and possibly artificial agents) are delegated to take care of some task without direct supervision by the user; this entails that the delegating agent should trust them in order to rely on them [Castelfranchi/Tan 2001, xxii]. Trust becomes all the more important when it comes to cyber-physical systems and other forms of artificial intelligence (AI).

The fundamental question is whether trust is affected by environmental features in such a way that it can only really occur offline, or whether it is affected by (or reducible to) features of agents. Only the second possibility entails true e-trust [Taddeo 2009; see Grodzinsky et al. 2011 for an in-depth development of the second view]. Nissenbaum [2001] argues that the absence of corporeality in digital environments constitutes an obstacle to the occurrence of trust, and that notions other than e-trust should therefore be sought. These opposing views are held irrespective of competing accounts of the phenomenon in question (e.g. whether trust is based on ethical norms, as in [Castelfranchi/Tan 1998], or obtains where the trustor does not have reason to believe that her interests will be taken into account [Baier 2001]). To make matters worse, digitally mediated social interaction means that it is frequently impossible to assess the trustee’s emotional and psychological status [Taddeo 2010]. Unfortunately, these accounts have nothing to say about the effects of trust and e-trust on behavior and on social systems [Taddeo 2010]. Research on the management of trust and e-trust, on the other hand, seeks to understand how the two emerge and how their level can be objectively assessed. Insofar as this strand of research rests on accounts of trust/e-trust, it is also limited by them [Taddeo 2010].
4. Conclusion: Trust and Distrust in the Digital Single Market
From the previous discussion it can be argued that there are at least four ways sociology can engage with phenomena of e-trust: It can help to identify the fundamental aspects of e-trust, it can shed light on the relation between trust and e-trust, it can describe and theorize the broader societal factors conducive to e-trust, and it can reflect on the extent to which artificial agents can be objects of e-trust [Grodzinsky et al. 2011]. Theorizing e-trust can benefit from sociological insights, as these reveal trust as the irreducible foundation of social interaction [Lewis/Weigert 2012, 25]. The holistic approach [Lewis/Weigert 1985b, 456] holds that trust cannot be reduced to reliance. Instead, it reveals trust as an a priori condition of all forms of social interaction (however mediated), because trust of necessity refers back to principles of morality, mutual interests, and social norms. If what we currently observe is indeed a «cultural tragedy» (Simmel) concerning trust, «most difficult to achieve precisely at a time when it is most urgently needed» [Lewis/Weigert 2012, 28], then theories of e-trust will need to pay much closer attention to the deeply social nature of trust or they will fail to describe their target phenomenon.
Observing the holistic unity of the phenomenon can help to unify accounts of trust and e-trust. If this is true, then at least three of the four types of e-trust introduced above are deeply social in nature: trust in other users of technologies, trust in technology developers, and trust in institutions/organizations that use digital technologies. Trust in the digital single market involves all three kinds of trust, plus (possibly) a fourth kind, i.e. trust in the technologies that sustain it. If trust is an essential feature of social reality, as the social a priori view holds, it is insufficient to conceive of e-trust as (merely) psychological or (merely) behavioral. Whether the characteristics of online environments and social interactions satisfy the minimal requirements for the emergence of trust is at present unclear; a positive answer would presuppose (at least) the following two conditions [Seligman 1997]: a shared cultural background (which seems difficult to attain, as online social interaction often spans national and cultural borders) and some form of certainty about the trustee’s identity [Taddeo/Floridi 2011, 1]. Both conditions will be further explored in the remainder of the TRUESSEC.eu project.
5. References
Beck, Ulrich, Die Metamorphose der Welt, Suhrkamp, Berlin 2017.
Bons, R. W. H./Lee, R. M./Wagenaar, R. W., Obstacles for the development of open electronic commerce, International Journal of Electronic Commerce, volume 2, issue 3, 1998, pp. 61–83.
Castelfranchi, Cristiano/Tan, Yao-Hua, Introduction: Why Trust and Deception are Essential for Virtual Societies. In: Castelfranchi, Cristiano/Tan, Yao-Hua (Eds.), Trust and Deception in Virtual Societies, Springer, Dordrecht 2001, pp. xi–xxviii.
Coleman, J. S., Foundations of Social Theory, Harvard University Press, Cambridge, MA 1990.
Eisenstadt, Shmuel N./Roniger, L., Patrons, Clients, and Friends. Interpersonal relations and the structure of trust in society, Cambridge University Press, Cambridge 1984.
Fahrenholtz, Dietrich/Bartelt, Andreas, Towards a Sociological View of Trust in Computer Science. In: Schoop, M./Walczuch, R. (Eds.), Proceedings of the Eighth Research Symposium on Emerging Electronic Markets (RSEEM 01), Maastricht 2001.
Gambetta, Diego, Can we trust trust? In: Gambetta, Diego (Ed.), Trust: Making and Breaking Cooperative Relations, Basil Blackwell, Oxford 1988, pp. 213–238.
Gefen, David/Benbasat, Izak/Pavlou, Paul A., A Research Agenda for Trust in Online Environments, Journal of Management Information Systems, volume 24, issue 4, 2008, pp. 275–286.
Giddens, Anthony, The Consequences of Modernity, Polity Press, Cambridge 1996.
Griesbacher, Martin/Stelzer, Harald/Staudegger, Elisabeth, SSH in ICT Using the Example of TRUESSEC.EU. In: Schweighofer/Kummer/Hötzendorfer/Sorge (Eds.), Trends und Communities der Rechtsinformatik, Tagungsband des 20. Internationalen Rechtsinformatik Symposions IRIS 2017, OCG Verlag, Wien 2017, pp. 469–473.
Grodzinsky, F. S./Miller, K. W./Wolf, M. J., Developing artificial agents worthy of trust: «Would you buy a car from this artificial agent?», Ethics and Information Technology, volume 13, 2011, pp. 17–27.
Hardin, Russell, Trust, Polity, Cambridge 2006.
Hardin, Russell, Conceptions and Explanations of Trust. In: Cook, Karen S. (Ed.), Trust in Society, Russell Sage Foundation, New York 2001, pp. 3–39.
Lagerspetz, Olli, Trust. The Tacit Demand, Springer, Dordrecht, 1998.
Lewis, J. David/Weigert, Andrew J., The Social Dynamics of Trust: Theoretical and Empirical Research, Social Forces, volume 91, issue 1, 2012, pp. 25–31.
Lewis, J. David/Weigert, Andrew J., Trust as a Social Reality, Social Forces, volume 63, issue 4, 1985a, pp. 967–985.
Lewis, J. David/Weigert, Andrew J., Social Atomism, Holism, and Trust, The Sociological Quarterly, volume 26, issue 4, 1985b, pp. 455–471.
Luhmann, Niklas, Trust and Power. Two Works by Niklas Luhmann. With Introduction by Gianfranco Poggi, Wiley, Chichester 1979.
Luhmann, Niklas, Vertrautheit, Zuversicht, Vertrauen. Probleme und Alternativen. In: Hartmann, Martin/Offe, Claus (Eds.), Vertrauen. Die Grundlage des sozialen Zusammenhalts, Campus, Frankfurt/New York 2001, pp. 143–160.
Lyon, Fergus/Möllering, Guido/Saunders, Mark N. K., Introduction: the variety of methods for the multi-faceted phenomenon of trust. In: Lyon, Fergus/Möllering, Guido/Saunders, Mark N. K. (Eds.), Handbook of Research Methods on Trust, Elgar, Cheltenham/Northampton, MA 2012, pp. 1–15.
Lyotard, Jean-François, The Postmodern Condition. A Report on Knowledge, University of Minnesota Press, Minneapolis 1984.
McLeod, Carolyn, Trust. In: Zalta, Edward N. (Ed.), Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/trust/, 2015.
Möllering, Guido, The Nature of Trust: From Georg Simmel to a Theory of Expectation, Interpretation, and Suspension, Sociology, volume 35, issue 2, 2001, pp. 403–420.
Möllering, Guido, Trust: Reason, Routine, Reflexivity, Elsevier, London 2006.
Nickel, Philip J./Franssen, Maarten/Kroes, Peter, Can We Make Sense of the Notion of Trustworthy Technology?, Knowledge, Technology, and Policy, volume 23, 2010, pp. 429–444.
Pieters, Wolter, Explanation and Trust: What to Tell the User in Security and AI, Ethics and Information Technology, volume 13, 2011, pp. 53–64.
Seligman, Adam B., The Problem of Trust, Princeton University Press, Princeton 1997.
Shapiro, Susan P., The Social Control of Impersonal Trust, American Journal of Sociology, volume 93, 1987, pp. 623–658.
Spiekermann, Sarah, Ethical IT Innovation. A Value-Based System Design Approach, CRC Press, London, New York 2016.
Sztompka, Piotr, Trust. A Sociological Theory, Cambridge University Press, Cambridge 2003.
Taddeo, Mariarosaria, Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust, Minds & Machines, volume 20, 2010, pp. 243–257.
Taddeo, Mariarosaria/Floridi, Luciano, The Case for e-Trust, Ethics and Information Technology, volume 13, 2011, pp. 1–3.
Urban Walker, Margaret, Moral Repair: Reconstructing Moral Relations after Wrongdoing, Cambridge University Press, Cambridge 2006.
- 1 See https://ec.europa.eu/digital-single-market/en/policies/shaping-digital-single-market (all websites last accessed on 4 January 2018).
- 2 Ibid.
- 3 For further details see Special Eurobarometer 423 (Cyber Security), http://ec.europa.eu/public_opinion/archives/ebs/ebs_423_en.pdf, Special Eurobarometer 431 (Data Protection), http://ec.europa.eu/public_opinion/archives/ebs/ebs_431_en.pdf and Special Eurobarometer 432 (Europeans' Attitudes towards Security), http://ec.europa.eu/public_opinion/archives/ebs/ebs_432_en.pdf.
- 4 I use «association» here to refer to what Simmel and others called «Vergesellschaftung» to emphasize the processual nature of society.