Jusletter IT

TRUESSEC.eu – European Values and the Digital Single Market from a Sociological Perspective

  • Author: Stefan Reichmann
  • Category: Articles
  • Region: Austria
  • Field of law: Security and Law
  • Collection: Conference proceedings IRIS 2018
  • Citation: Stefan Reichmann, TRUESSEC.eu – European Values and the Digital Single Market from a Sociological Perspective, in: Jusletter IT 22 February 2018
At IRIS 2017, Griesbacher/Staudegger/Stelzer introduced the Horizon 2020 project TRUESSEC.eu («SSH in ICT Using the Example of TRUESSEC.eu»). This year, preliminary project findings will be elaborated from a multidisciplinary perspective. One of the central claims of the project consortium concerns the role of trust in digital products and services: Trust in ICT products and services can be reinforced by increased transparency. This article reviews conceptions of trust and e-trust with the aim of outlining trust as the social a priori of ICT use. It starts by briefly introducing classical accounts of trust found in sociology. These accounts are then related and compared to the more recent theoretical innovation of e-trust (i.e. trust in digital environments). The article concludes by sketching a way in which recognizing the social nature of the phenomenon may improve the understanding of trust in digital contexts.

Table of contents

  • 1. Trust and the Digital Single Market
  • 2. The Sociology of Trust
  • 2.1. Classic and Recent Positions
  • 2.2. Trust, Risk, and the Contingency of Social Interaction
  • 2.3. Trust in the Social System
  • 2.4. Trust as the a priori of Social Interaction
  • 3. e-Trust: Extending Trust Relationships into the Digital Realm
  • 3.1. Trust as Indispensable for Digitized Social Interaction
  • 3.2. Confidence in versus Reliance on Technology
  • 4. Conclusion: Trust and Distrust in the Digital Single Market
  • 5. References

1. Trust and the Digital Single Market

[1]

As early as 20 years ago, Bons et al. [1998] observed that trust, or rather a lack thereof, is one of the main reasons for consumers (and companies) not to engage in online commerce [Castelfranchi/Tan 2001, xvii f.; see also e.g. Fahrenholtz/Bartelt 2001]. In online commerce, «problems of trust are magnified, because agents reach out far beyond their familiar trade environments» [Falcone/Castelfranchi 2001, 55], sometimes crossing national and cultural borders. This problem has gained significance in recent years, in particular as it pertains to the EU’s digital single market strategy1, a priority for the current European Commission. The DSM strategy aims at establishing a marketplace where «the free movement of persons, services and capital is ensured and where the individuals and businesses can seamlessly access and exercise online activities under conditions of fair competition, and a high level of consumer and personal data protection».2 It therefore calls for a reinforcement of trust in and security of digital services and the handling of personal data. However, it can be difficult for consumers and businesses to assess the trustworthiness of the products and services they use. Indeed, only 22% of European citizens fully trust digital products and services, and 72% of Internet users are concerned about disclosing personal data online.3 Yet in digitized societies and digitized social institutions, «only perceived reliability counts and determines confidence» [Castelfranchi/Tan 2001, xxiv]. In other words, the reliability of socio-technical systems such as online marketplaces is of little use if it goes unnoticed by (potential) users. Perceptions are often based not on objective empirical data, but on vague social attitudes. Consequently, one very successful way to induce trust has been to demonstrate (or at least to claim) that other users already trust some technology [Castelfranchi/Tan 2001, xxiv]. The TRUESSEC.eu project consortium therefore considers increased transparency a viable solution [Griesbacher/Stelzer/Staudegger 2017]: what is needed is a way to assess the trustworthiness of digital products that does not presuppose expert knowledge on the part of the user.

[2]

In the last two decades, a growing body of literature has focused on e-trust [Taddeo 2009; 2010], i.e. «trust specifically developed in digital contexts and/or involving artificial agents» [Taddeo/Floridi 2011, 1]; see, for example, [Gefen et al. 2008; Grodzinsky et al. 2011; Maadi et al. 2016; Pieters 2011; Spiekermann 2016]. The topics addressed span a range of issues, from trust in artificial agents to trust in online environments to more general questions of digitally mediated social interaction. Despite the vast literature [see e.g. Hardin 2006; Lagerspetz 1998; Seligman 1997; Shapiro 1987; Sztompka 2003 for sociological accounts] that has developed around trust, the phenomenon itself remains elusive and difficult to operationalize [Lyon et al. 2012; Möllering 2006, 1]. How trust (together with the rule of law and respect for human rights) can enhance the DSM remains the fundamental question for the TRUESSEC.eu consortium. Many sociologists [Luhmann 1979, 2001; Barber 1983; Lewis/Weigert 1985a, 1985b; Möllering 2001, 2006] have argued that trust serves a fundamental function for societies, as it lies at the core of social interaction. This can only be appreciated by adopting a holistic perspective [Lewis/Weigert 1985b]; i.e., trust is not only essential for social interaction, but is itself irreducibly social [Lewis/Weigert 2012].

[3]
The article begins with an exposition of the social nature of trust and goes on to compare this result with recent discussions of trust-in-digital-products/services (e-trust). The contribution provides an assessment of sociological positions on the phenomenon of trust and relates them to recent findings on digitally mediated trust. It argues that a successful implementation of the DSM strategy should start with an appreciation of the deeply social nature of trust.

2. The Sociology of Trust

2.1. Classic and Recent Positions

[4]
Walker [2006] observes that in many situations, humans interact on the basis of what she terms «default trust», i.e. trust as a shortcut when deciding to cooperate with others. Default trust rests on an individual’s assumption that one knows what to expect from whom, what counts as normal behavior, etc. This echoes Émile Durkheim’s (1858–1917) observation, made in his critique of social contract theories (which argue, roughly, that societies are based on a sort of contract between all past, present and future members), that individualistic accounts of society have difficulty explaining contract formation. The contract as a social form is not, Durkheim argues, rooted in individual negotiations, but is rather a social institution supported by social control mechanisms (among them law). These non-contractual aspects of contracts are based on trust [Lewis/Weigert 1985b, 457]. As ethnomethodologists have made clear, social interaction always involves an unspoken «et cetera assumption». Were contract partners to distrust each other, a contract would either be so complicated as to be unworkable, or it would (literally) never be completed.
[5]
Another early writer who explicitly theorized trust was the German sociologist Georg Simmel (1858–1918) [e.g. Möllering 2001; 2006]. Simmel observed that trust, unlike e.g. love and hate, is fundamentally social (as opposed to individualistic): there is such a thing as unrequited love, but no such thing as unrequited trust [Lewis/Weigert 1985b; Möllering 2001]. In its logical (generalized) form, trust consists of what Simmel termed «faithfulness» (Treue), to Simmel the only human feeling that is sociological in form. The mere occurrence of social interaction builds faithfulness, Simmel noted, because participating in social interaction inevitably creates bonds with others. This general connectedness, called metaphysical trust, grounds all other social relations (which is exactly what individualistic accounts of trust ignore, according to Durkheim). For Simmel, the cognitive aspects of trust were inextricably linked with its emotional and behavioral aspects [Lewis/Weigert 1985b, 456 ff.].
[6]
Simmel distinguished between metaphysical trust and existential trust. Existential trust, i.e. trust in a particular other [Lewis/Weigert 1985b, 458], is based on perceived trustworthiness. It is epistemically located between complete knowledge and complete ignorance about the other. In that sense, existential trust is a background assumption of everyday encounters, where it would be impossible to administer a check for trustworthiness. Simmel also explicitly relates trust to power relations: whoever holds trust holds power (and vice versa: those who hold power also exercise it, and those subject to their power must trust the powerful) [Lewis/Weigert 1985b, 459].
[7]

Like Luhmann [1979] and Giddens [1996], Simmel observes that trust serves the reduction of complexity. This reduction gained particular importance in modern industrial societies, where individuals were increasingly forced to rely on expert knowledge they did not understand. Such reliance on experts (doctors, lawyers, engineers) is necessary to make informed decisions, but is only possible when individuals trust these experts. Many accounts of modernity [e.g. Giddens 1996] thus stress the social dimensions of trust: the uncertainties and risks modernity entails necessitate a belief in the good intentions of strangers. These views model trust as a kind of behavior that obtains precisely in those situations where the trustor has little or no reason to doubt that the trustee will fulfill his/her expectations [Baier 2001].

[8]

Barber [1983] attempts one of the first dedicated, systematic expositions of trust by distinguishing three possible forms of trust: (1) trust in the continuity of natural and social orders, (2) trust in actors' technical competence, and (3) trust with respect to moral responsibilities (i.e. of putting others' interests ahead of one’s own). Barber [1983] takes a functional and cognitive stance and describes trust as the expectation that there is a general moral order as well as specific norms of competence and responsibility. However, as Luhmann [2001] points out, such an analysis leaves aside the mechanisms by which trust is established despite the risk of disappointment.

[9]

According to the rational choice account of trust [Coleman 1990], action is a rational choice made on the basis of its expected consequences [Nickel et al. 2010, 431]. In the rational choice perspective, for someone to be trustworthy simply means to exhibit predictable behavior of the desired type [Nickel et al. 2010, 432]. Such a model makes trustworthiness indistinguishable from reliability, and it cannot explain the emotional aspects of (betrayed) trust, such as feelings of anger or betrayal. This is the reason for assuming that trust involves specific motivations on the part of the trusted person [ibid.], such as taking the trustor’s interests into account [e.g. Hardin 2001, 2006].

2.2. Trust, Risk, and the Contingency of Social Interaction

[10]

Hardin [2001] notes that in deterministic settings there is no place for trust. In the same vein, Gambetta [1988] and Luhmann [1979] claim that trust involves a moment of uncertainty concerning the behavior of others. For Luhmann [2001], trust is a solution to problems of risk. In pre-modern times, religion served the same function; modernity has replaced it with the acknowledgement of unintended consequences of intentional behavior, i.e. risk. Modernity can thus be characterized, among other things, by the (shared) assumption that unexpected results can follow from intentional behavior. This observation of risk is central to modernity: moderns experience risk as a consequence of (individual and collective) decisions [e.g. Beck 2017]; trust is granted at the trustor’s risk. For Luhmann [1979], trust solves the same problems at the system level as at the (inter)personal level [Lewis/Weigert 1985b, 462], namely to deal with an uncertain and complex future and to help sustain relatively stable expectations.

[11]
Luhmann [2001] criticizes sociological accounts of trust for having long been, where they were undertaken at all, oblivious to the conceptual intricacies of the phenomenon, subsuming under the rubric «trust» such diverse phenomena as positive/negative attitudes towards political elites (…), alienation (…), participation, meaning and solidarity [e.g. in Eisenstadt/Roniger 1984]. Eisenstadt/Roniger proposed that trust presupposes functioning social institutions and can obtain only under certain conditions; these results cannot simply be transposed to more complex societies. However, as Luhmann (ibid.) points out, this is just a reformulation of well-known theses concerning Gemeinschaft (community, the form of association4 typical of pre-modern societies) and Gesellschaft (society, the form of association typical of modern societies).

2.3. Trust in the Social System

[12]
Sztompka [2003, 119 ff.] employs the term «trust culture» for the system of rules (norms and values) that provides normative obligations to be trustworthy and to trust. One locus of both kinds of obligations is social roles, which may include a moral imperative to trust others; there are also more diffuse rules demanding general trustworthiness, e.g. from professionals. He identifies five conditions conducive to a trust culture: normative coherence (i.e. a clear idea of what people will and should do), stability of the social order, transparency of the social organization (i.e. availability of information about its functioning, efficiency, achievements and failures), which assures people of what they can expect (this also pertains to socio-technical systems, see below), familiarity of the (natural, social, technological) life-world, and accountability of other people and institutions.
[13]
Luhmann’s insight that interpersonal trust is founded in sociological system trust [Gambetta 1988] helps to explain why social exchange is more than and different from economic exchange [Lewis/Weigert 1985b, 469]. As interpersonal trust is more open and general than trust in social institutions (e.g. the government, the judicial system etc.), the two cannot be the same. The important difference between interpersonal trust and institutional trust is that the former comes naturally in relationships [Lewis/Weigert 1985b, 469]. Indeed, modern societies rely heavily on system trust, e.g. trust in the functioning of bureaucratic and legal sanctions, as opposed to simpler forms of social order where interpersonal trust plays a bigger role. This is because the differentiation of social domains (e.g. through the division of labor) pushes the emphasis towards system trust. Interpersonal trust rests on a different foundation than system trust: whereas the former is based on an emotional bond between the trusting parties, the latter lacks emotional involvement [Luhmann 1979, 66 ff.] and rests instead on the assumption that «everything is normal». What Simmel calls «metaphysical trust», Luhmann refers to as «trust in trust», i.e. the mutually shared conviction that all others will uphold trust in the system [Luhmann 1979].
[14]
According to Lewis/Weigert [1985a, 976], «[t]rust begins where prediction ends». Hardin [2001] observes that trust presupposes contingency. For this reason, Luhmann distinguishes trust from confidence [Luhmann 2001, 147] to accommodate the observation that expectations might be frustrated, but that life without expectations of contingent events would be impossible. As expectations have no functional equivalent, their absence would lead to complete uncertainty. Luhmann uses the term «confidence» for situations where alternatives are not considered, whereas «trust» refers to confidence in other people’s actions. If confidence is disappointed, it is the world’s fault; if trust is disappointed, it was wrongfully invested. Trust is only relevant where possible losses exceed expected gains [Luhmann 2001, 148]. Trust and confidence stand in a complex interplay. Trust is self-referential, and risks are contingent on actions and decisions: according to Luhmann, trust rests on a circular relationship between action and risk, where action is determined relative to some future risk while said risk simultaneously inheres in the action in question. Risk is then a form of self-reference of actions; it refers to what remains uncontrollable relative to what can be controlled [Luhmann 2001, 152]. Pieters [2011, 56] argues that confidence is replaced by trust once alternatives are considered: confidence can be vested in the electricity supply, but choosing a particular supplier to the exclusion of its competitors demands a decision and hence trust.

2.4. Trust as the a priori of Social Interaction

[15]

The previous discussion has shown that many (classic) sociologists regard trust as the irreducible foundation of the holistic, meaningful social world with which human beings interact. Sociologically, then, «trust is conceptualized as a reciprocal orientation and interpretive assumption that is shared, has the social relationship itself as the object, and is symbolized through intentional action» [Lewis/Weigert 1985b, 456]. Contrast this with a psychological notion of trust: «trust resides in individuals, focuses on some social object, and is enacted in behavior» [ibid., 456]. However, like shame, guilt, or loyalty, trust is irreducibly social underneath [Lewis/Weigert 2012, 26]. Regarding trust as a form of association opens up the possibility that all psychological and interactional content of trust relationships must be understood on the basis of sociological principles [ibid.]. Sociological trust is also irreducible to individual, atomistic trusting behavior. Instead, «it concerns the social order as a moral order» (ibid.), as the form of association cannot be created or destroyed by individuals [Barber 1983; Luhmann 1979]. In the «post-modern condition» [Lyotard 1984], the idea of a coherent social order gradually gives way to a loss of normative coherence (e.g. of belief in God or a general sense of progress). Seligman [1997] observes that the transition from traditional (pre-modern) to post-modern societies implies that trust becomes more essential as the significance of ascribed status-roles declines; the need to solve global problems (e.g. global warming, world poverty) in an increasingly cosmopolitical context [Beck 2017] lends credence to the post-modern predicament, as solutions «require cooperation among nations that are often distrustful and at times bitter rivals» [Lewis/Weigert 2012, 28]. The implications for a post-modern trust culture might be detrimental, as neither trust in the continuity of the social order [e.g. Barber 1983] nor the accountability of institutional actors [Sztompka 2003; Beck 2017] seems to remain an option in the post-modern condition.

3. e-Trust: Extending Trust Relationships into the Digital Realm

3.1. Trust as Indispensable for Digitized Social Interaction

[16]

As the previous discussion has shown, trusting behavior is indispensable for social interaction, though trust itself is not reducible to trusting behavior. The sociologically interesting questions concern the way(s) trust is actually established. Recently, the notion of e-trust was introduced to describe the form of trust «specifically developed in digital contexts and/or involving artificial agents» [Taddeo/Floridi 2011, 1]. It involves three dimensions [ibid.]: trust in technologies, trust in other users, and trust in technology providers, to which Spiekermann [2016, 114] adds a fourth kind: trust in the engineers who built the technologies in question. For the most part, e-trust is defined as cognitive/rational with behavioral components [e.g. in Golbeck 2009, 3], thus disregarding the irreducibly social dimension of trust.

[17]

Trust is normally understood as a relationship between a trustor and a trustee, where the trustor depends on the trustee to fulfill his/her expectations [Taddeo/Floridi 2011, 1]. Part of the reason for such a cognitive emphasis lies in the fact that accounts grounding trust in either motivations [e.g. Hardin 2006] or morality [e.g. Nickel 2009] seem inapplicable to artifacts [Nickel et al. 2010]. However, some, such as Sztompka [2003], postulate a continuum from trust in others' rationality to trust in others' moral integrity; both involve a bet on future actions. Betting on moral virtues (that others will take my interests into account) is riskier than merely betting on their basic rationality. An even more demanding expectation would have others place my interests before their own [see e.g. Barber 1983].

[18]
With the emergence of trust demands in digital contexts (i.e. the increasing need to trust digital products/services), new issues (both theoretical and practical) arise [Fahrenholtz/Bartelt 2001]. The most recent debates focus on two issues: the definition of e-trust (largely left to social scientists and philosophers) and the management of e-trust (largely left to the domain of ICT developers) [Taddeo 2010]. Much of the literature on e-trust thus looks at the characteristics of online environments and asks whether these environments fulfil the conditions for the emergence of trust. Two answers predominate: a positive one [e.g. Grodzinsky et al. 2011] and a negative one [e.g. Nissenbaum 2001]. However, while this research recognizes that trust somehow relates to social interaction, it seldom acknowledges the social nature of trust itself.
[19]

According to Taddeo [2010], both (analogue) trust and (digital) e-trust ultimately rest on an (objective) assessment of the trustee’s trustworthiness by the trustor. This assessment is defined as «the set of beliefs that the trustor holds about the potential trustee’s abilities, and the probabilities she assigns to those beliefs» [Taddeo 2010, 247]. An assessment of trustworthiness includes objective criteria such as previous experience with the trustee’s performance. This feature of trustworthiness can be implemented in artificial agents as well, insofar as an artificial agent can be programmed to calculate the ratio of the trustee’s successful actions to the total number of its actions [Taddeo 2010].
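
To make this concrete, the following minimal Python sketch shows how such a ratio-based record of past performance might look; all names (PerformanceRecord, record, trustworthiness) are illustrative assumptions, not part of Taddeo’s [2010] account or the TRUESSEC.eu project:

    from dataclasses import dataclass

    @dataclass
    class PerformanceRecord:
        """Running record of a trustee's past actions (illustrative sketch)."""
        successes: int = 0  # actions in which the trustee performed as expected
        total: int = 0      # all observed actions by the trustee

        def record(self, success: bool) -> None:
            """Log one observed action by the trustee."""
            self.total += 1
            if success:
                self.successes += 1

        def trustworthiness(self) -> float:
            """Ratio of successful actions to all observed actions.

            Returning 0.0 with no history is a conservative convention of
            this sketch, not part of Taddeo's account.
            """
            return self.successes / self.total if self.total else 0.0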

[20]

Modelling trustworthiness in terms of thresholds allows trustworthiness to be defined as the probability that the trustor will gain by the trustee’s performance (against the risk that the latter will not act as expected). In this sense, trustworthiness acts as a guarantee for the trustor that the trustee will act as expected/desired. In Taddeo’s [2010, 248] analysis, trustworthiness is not a general property of a trustee (human or artificial agent), as it does not refer to its general reliability. Instead, trustworthiness means a trustee’s dependability to perform a given task. However, as e.g. Nickel et al. [2010] remark, dependability alone may not be enough, as trust depends on a trustee’s ability to take the trustor’s interests into account. An agent’s trustworthiness is assessed on the basis of that agent’s past performances, not on iterated interaction (cf. the added «since …» that forms part of many brand names). What can be vested in technologies is not trust, strictly speaking, but rather reliance [Spiekermann 2016, 114].
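
Continuing the sketch above, a threshold can turn the assessed ratio into a delegation decision; the function name and the 0.8 threshold are arbitrary illustrations, not figures from the literature:

    def decide_to_delegate(record: PerformanceRecord, threshold: float = 0.8) -> bool:
        """Delegate the task only if assessed trustworthiness clears the threshold.

        The gap between the threshold and 1.0 corresponds to the residual risk
        the trustor accepts: the trustee may still fail to act as expected.
        """
        return record.trustworthiness() >= threshold

    # Example: 8 successes in 10 observed actions clears a 0.8 threshold.
    history = PerformanceRecord(successes=8, total=10)
    assert decide_to_delegate(history)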

3.2. Confidence in versus Reliance on Technology

[21]

There is, then, a difference between relying on digital technologies because we have confidence in them and relying on technologies because we trust them [Spiekermann 2016, 113]. Philosophers such as McLeod [2015] emphasize that trust only obtains when the commitment invested in another party comes from the right reasons, namely care for the other’s interests combined with moral integrity (Nickel et al. [2010] make a similar point). Luhmann [2001] saw that trust needs to be distinguished from confidence, where trust refers to an active decision by the trustor to delegate some matter of importance to the trustee [Spiekermann 2016, 114; Grodzinsky et al. 2011].

[22]

The notion of trust is important in digitized contexts because it forms the bedrock of a theory of agency. Agents (which in the digital realm include other users as well as engineers, institutional agents, and possibly artificial agents) are delegated to take care of some task without direct supervision by the user; this entails that the delegating agent should trust them in order to rely on them [Castelfranchi/Tan 2001, xxii]. Trust becomes all the more important when it comes to cyber-physical systems and other forms of artificial intelligence (AI). The fundamental question is whether trust is affected by environmental features in such a way that it can only really occur offline, or whether trust is affected by/reducible to features of agents. Only the second possibility entails true e-trust [Taddeo 2009; see Grodzinsky et al. 2011 for an in-depth development of the second view]. Nissenbaum [2001] argues that the absence of corporeality in digital environments constitutes an obstacle to the occurrence of trust, and that notions other than e-trust should hence be sought. Opposing views are held irrespective of competing accounts of the phenomenon in question (e.g. whether trust is based on ethical norms [as in Castelfranchi/Tan 1998] or obtains where the trustor has no reason to believe that her interests will not be taken into account [Baier 2001]). To make matters worse, digitally mediated social interaction means that it is frequently impossible to assess the trustee’s emotional and psychological status [Taddeo 2010]. Unfortunately, these accounts have nothing to say about the effects of trust and e-trust on behavior and on social systems [Taddeo 2010]. Research on the management of trust and e-trust, on the other hand, seeks to understand how the two emerge and how their level can be objectively assessed. Insofar as this strand of research rests on accounts of trust/e-trust, it is also limited by them [Taddeo 2010].

4. Conclusion: Trust and Distrust in the Digital Single Market

[23]

From the previous discussion it can be argued that there are at least four ways in which sociology can engage with phenomena of e-trust: it can help to identify the fundamental aspects of e-trust, it can shed light on the relation between trust and e-trust, it can describe and theorize broader societal factors conducive to e-trust, and it can reflect on the extent to which artificial agents can be objects of e-trust [Grodzinsky et al. 2011]. Theorizing e-trust can benefit from sociological insights, as these reveal trust as the irreducible foundation of social interaction [Lewis/Weigert 2012, 25]. The holistic approach [Lewis/Weigert 1985b, 456] holds that trust cannot be reduced to reliance. Instead, it reveals trust as an a priori condition for all forms of social interaction (however mediated), because trust of necessity refers back to principles of morality, mutual interests, and social norms. If what we currently observe is indeed a «cultural tragedy» (Simmel) concerning trust, «most difficult to achieve precisely at a time when it is most urgently needed» [Lewis/Weigert 2012, 28], then theories of e-trust will need to pay much closer attention to the deeply social nature of trust or they will fail to describe their target phenomenon.

[24]

Observing the holistic unity of the phenomenon can help to unify accounts of trust and e-trust. If this is true, then at least three of the four types of e-trust introduced above are deeply social in nature, i.e. trust in other users of technologies, trust in technology developers, and trust in institutions/organizations that use digital technologies. Trust in the digital single market involves all three kinds of trust, plus (possibly) a fourth kind, i.e. trust in the technologies that sustain it. If trust is an essential feature of social reality, as the social a priori view holds, it is insufficient to conceive of e-trust as (merely) psychological or (merely) behavioral. Whether the characteristics of online environments and social interactions satisfy the minimal requirements for the emergence of trust is at present unclear; a positive answer would presuppose (at least) the following two conditions [Seligman 1997]: a shared cultural background (which seems difficult to attain, as online social interaction often spans national and cultural borders) and some form of certainty about the trustee’s identity [Taddeo/Floridi 2011, 1]. Both possibilities will be further explored in the remainder of the TRUESSEC.eu project.

5. References

Barber, Bernard, The Logic and Limits of Trust, Rutgers University Press, New Brunswick/New Jersey 1983.

Beck, Ulrich, Die Metamorphose der Welt, Suhrkamp, Berlin 2017.

Bons, R. W. H./Lee, R. M./Wagenaar, R. W., Obstacles for the development of open electronic commerce, International Journal of Electronic Commerce, volume 2, issue 3, 1998, pp. 61–83.

Castelfranchi, Cristiano/Tan, Yao-Hua, Introduction: Why Trust and Deception are Essential for Virtual Societies. In: Castelfranchi, Cristiano/Tan, Yao-Hua (Eds.), Trust and Deception in Virtual Societies, Springer, Dordrecht 2001, pp. xi–xxviii.

Coleman, James S., Foundations of Social Theory, Harvard University Press, Cambridge, MA 1990.

Eisenstadt, Shmuel N./Roniger, L., Patrons, Clients, and Friends. Interpersonal relations and the structure of trust in society, Cambridge University Press, Cambridge 1984.

Fahrenholtz, Dietrich/Bartelt, Andreas, Towards a Sociological View of Trust in Computer Science. In: Schoop, M./Walczuch, R. (Eds.), Proceedings of the Eighth Research Symposium on Emerging Electronic Markets (RSEEM 01), Maastricht 2001.

Gambetta, Diego, Can we trust trust? In: Gambetta, Diego (Ed.), Trust: Making and Breaking Cooperative Relations, Basil Blackwell, Oxford 1988, pp. 213–238.

Gefen, David/Benbasat, Izak/Pavlou, Paul A., A Research Agenda for Trust in Online Environments, Journal of Management Information Systems, volume 24, issue 4, 2008, pp. 275–286.

Giddens, Anthony, The Consequences of Modernity, Polity Press, Cambridge 1996.

Griesbacher, Martin/Stelzer, Harald/Staudegger, Elisabeth, SSH in ICT Using the Example of TRUESSEC.EU. In: Schweighofer/Kummer/Hötzendorfer/Sorge (Eds.), Trends und Communities der Rechtsinformatik, Tagungsband des 20. Internationalen Rechtsinformatik Symposions IRIS 2017, OCG Verlag, Wien 2017, pp. 469–473.

Grodzinsky, F. S./Miller, K. W./Wolf, M. J., Developing artificial agents worthy of trust: «Would you buy a car from this artificial agent?», Ethics and Information Technology, volume 13, 2011, pp. 17–27.

Hardin, Russell, Trust, Polity, Cambridge 2006.

Hardin, Russell, Conceptions and Explanations of Trust. In: Cook, Karen S. (Ed.), Trust in Society, Russell Sage Foundation, New York 2001, pp. 3–39.

Lagerspetz, Olli, Trust. The Tacit Demand, Springer, Dordrecht, 1998.

Lewis, J. David/Weigert, Andrew J., The Social Dynamics of Trust: Theoretical and Empirical Research, Social Forces, volume 91, issue 1, 2012, pp. 25–31.

Lewis, J. David/Weigert, Andrew J., Trust as a Social Reality, Social Forces, volume 63, issue 4, 1985a, pp. 967–985.

Lewis, J. David/Weigert, Andrew J., Social Atomism, Holism, and Trust, The Sociological Quarterly, volume 26, issue 4, 1985b, pp. 455–471.

Luhmann, Niklas, Trust and Power. Two Works by Niklas Luhmann. With Introduction by Gianfranco Poggi, Wiley, Chichester 1979.

Luhmann, Niklas, Vertrautheit, Zuversicht, Vertrauen. Probleme und Alternativen. In: Hartmann, Martin/Offe, Claus (Eds.), Vertrauen. Die Grundlage des sozialen Zusammenhalts, Campus, Frankfurt/New York 2001, pp. 143–160.

Lyon, Fergus/Möllering, Guido/Saunders, Mark N. K., Introduction: the variety of methods for the multi-faceted phenomenon of trust. In: Lyon, Fergus/Möllering, Guido/Saunders, Mark N. K. (Eds.), Handbook of Research Methods on Trust, Elgar, Cheltenham/Northampton, MA 2012, pp. 1–15.

Lyotard, Jean-François, The Postmodern Condition. A Report on Knowledge, University of Minnesota Press, Minneapolis 1984.

McLeod, Carolyn, Trust. In: Zalta, Edward N. (Ed.), Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/trust/, 2015.

Möllering, Guido, The Nature of Trust: From Georg Simmel to a Theory of Expectation, Interpretation, and Suspension, Sociology, volume 35, issue 2, 2001, pp. 403–420.

Möllering, Guido, Trust: Reason, Routine, Reflexivity, Elsevier, London 2006.

Nickel, Philip J./Franssen, Maarten/Kroes, Peter, Can We Make Sense of the Notion of Trustworthy Technology?, Knowledge, Technology, and Policy, volume 23, 2010, pp. 429–444.

Pieters, Wolter, Explanation and Trust: What to Tell the User in Security and AI, Ethics and Information Technology, volume 13, 2011, pp. 53–64.

Seligman, Adam B., The Problem of Trust, Princeton University Press, Princeton 1997.

Shapiro, Susan P., The Social Control of Impersonal Trust, American Journal of Sociology, volume 93, 1987, pp. 623–658.

Spiekermann, Sarah, Ethical IT Innovation. A Value-Based System Design Approach, CRC Press, London, New York 2016.

Sztompka, Piotr, Trust. A Sociological Theory, Cambridge University Press, Cambridge 2003.

Taddeo, Mariarosaria, Modelling Trust in Artificial Agents: A First Step Toward the Analysis of e-Trust, Minds & Machines, volume 20, 2010, pp. 243–257.

Taddeo, Mariarosaria/Floridi, Luciano, The Case for e-Trust, Ethics and Information Technology, volume 13, 2011, pp. 1–3.

Urban Walker, Margaret, Moral Repair: Reconstructing Moral Relations after Wrongdoing, Cambridge University Press, Cambridge 2006.

  1. See https://ec.europa.eu/digital-single-market/en/policies/shaping-digital-single-market (all websites last accessed on 4 January 2018).
  2. Ibid.
  3. For further details see Special Eurobarometer 423 (Cyber Security), http://ec.europa.eu/public_opinion/archives/ebs/ebs_423_en.pdf, Special Eurobarometer 431 (Data Protection), http://ec.europa.eu/public_opinion/archives/ebs/ebs_431_en.pdf and Special Eurobarometer 432 (Europeans’ Attitudes towards Security), http://ec.europa.eu/public_opinion/archives/ebs/ebs_432_en.pdf.
  4. I use «association» here to refer to what Simmel and others called «Vergesellschaftung» to emphasize the processual nature of society.