1. Introduction
[1]
In his 1996 Declaration of the Independence of Cyberspace John Perry Barlow called upon the «Governments of the Industrial World, you weary giants of flesh and steel[,]» to leave cyberspace «alone». «You are not welcome among us», he wrote, «[y]ou have no sovereignty where we gather».1 This was empirically untrue already in 1996; it is even more so today. States can exercise sovereignty over situations emerging in cyberspace and have consistently done so, albeit with varying degrees of situational and technological awareness, legal sense and sensibility.
[2]
What has been missing, though, is a coherent legal framework for cyberspace or, more to the point, for the Information Society.2 As the development of an extensive set of rules to regulate the Information Society lies outside the scope of this contribution, I will focus on the questions to be asked in all regulatory attempts. While some authors have identified principles governing «Internet law»,3 I propose going back one step further. In essence, I argue that all attempts at designing the rules governing the Information Society are preceded by three levels, or dimensions, of questions: a legal-philosophical level (Should we regulate?), a legal-practical level (Who should regulate?), and a legal-technological level (How should we regulate?). The answers to these questions are, as all norms are, socio-economically conditioned and depend upon the politico-philosophical approach followed by the norm-creator and its situation-sense. This does not, however, infringe upon the basic validity of the three-level approach, which will be presented in section 2.
[3]
Section 3 will then use the example of combating hate speech to analyze which lessons can be learned from mapping normative endeavours against the three-stage approach. To the question of whether there is a material standard, one grand, universal, unifying principle on which the normative development of the Information Society should be based, the concluding section 4 will reply with a qualified no. Proposing instead a formal standard – the functional approach – as the golden rule for weaving more closely the web of norms governing the Information Society, this contribution will nevertheless end by extolling the importance of safeguarding human dignity in the Information Society, as in all societies.
2. The normative design of the Information Society: three preliminary questions
[4]
Any normative endeavour in the Information Society is preceded by three important preliminary questions. These three questions – each to be identified with a different level, stage or dimension of regulation – allow for different philosophical, normative, and technological approaches.
[5]
First, a legal-philosophical question has to be asked: Should we do something, or should we do nothing and leave things as they are, i.e. unregulated? Does a set of patterns need regulation or not? In the early stages of technological development this question will usually be answered in the negative. It is always easier for the norm-creator not to regulate. As technologies are mainstreamed, however, and situations with legal ramifications arise, the need to regulate will grow and the normative ‘call’ will become stronger. Determining the point in time when rules need to be formulated is fraught with difficulty. Introducing too strict a rule too early might hamper technological development; not ruling at all might lead to sectoral anarchy, though in today’s thoroughly legalized societies law usually finds an answer through, for example, broad interpretation techniques. Should the legal-philosophical question be answered in the positive, that is, should regulation be deemed necessary, the next step on the regulatory ladder needs to be taken.
[6]
Second, a legal-practical question needs to be asked: Who should regulate? Do we as a society trust in the self-regulatory powers of the stakeholders, or should governments as the traditional rule-making authority come into play? Different ideological approaches to the relative roles of governments and the private sector may determine the answer to this question, as may different legal traditions and country-specific historical experiences, cultural patterns or religious convictions. This explains why any general assessment of the inherent advantage of auto-normative versus hetero-normative rule-making is problematic, and why international organizations with powers regarding the regulation of the Information Society have had a hard time developing rules that satisfy the normative expectations of all member states.
[7]
Once it is clear that a rule needs to be laid down and the future author of that rule has been identified, the third question, which can be characterized as legal-technological, needs to be answered: How should we regulate? Which normative instruments and technological means of implementation should be used to render a newly minted rule and its implementation effective and efficient? Economically speaking, the ‘costs’ of disobeying need to be raised until they are higher than the possible advantage gained by breaking the rule. The challenge with each new rule, especially in the information and communication technology sector, is that society’s sense of what is right and wrong may not have developed sufficiently to define the rule-breaker as a social outcast (the breach of intellectual property rights through the online sharing of mp3 files is a case in point). The more socially corrosive a regulated behaviour is, and the more clearly it is recognized as such, the easier it will be to raise these costs.
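This cost-benefit intuition can be stated more explicitly. What follows is a standard rational-choice (deterrence) sketch, not drawn from any of the regulatory instruments discussed here; the symbols p, S and B are introduced purely for illustration. A calculating actor will comply only where the expected cost of disobeying at least equals the expected gain:

```latex
% Illustrative deterrence inequality (a sketch, not part of the original argument):
%   p ... probability that a breach is detected and sanctioned
%   S ... severity of the sanction imposed
%   B ... benefit expected from breaking the rule
p \cdot S \;\geq\; B
```

On this reading, the legal-technological question becomes which mix of instruments raises p (detection and enforcement capacity) or S (severity of sanctions) most effectively without overshooting the regulatory aim.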
[8]
Before I look at the application of the three-level approach to combating hate speech, one caveat needs to be mentioned: further study should be devoted to the question of in which forums the three questions should be asked and answered. The societal aggregation of opinion and its articulation through political channels in an integrative discursive process would be the ideal of democratic decision-making. But this ideal may still mask an important problem. If we argue that the traditional institutionalized discursive structures should be employed to formulate answers, we might influence the answers to these questions, as these structures are traditionally state-dominated. Thus, an ideally disinterested discussion of the first question might be overtaken by a positive answer to the second question – bureaucracies arrogating as much power as they can and regulatory authorities regulating as much as they are allowed to. We need to take great care to frame the debate on the three levels of questions in a way amenable to finding answers that are predetermined as little as possible by existing power structures in societies and shaped as much as possible by intrinsic normative demands.
3. Case study: regulating online hate speech (or not)
[9]
Evidence of the basic validity of my three-level pre-normative model can be found, I argue, in all attempts at regulating behavioural patterns in the Information Society. More often than not, however, the questions are either not asked explicitly or answered wrongly. I will use the case of regulating hate speech to illustrate this point, though I caution that space does not allow for more than a cursory look at this challenging subject.4
[10]
On the first level, we have to ask whether societies should fight online hate speech or allow it. A country that, for reasons of its historical development or cultural or religious traditions, takes a more pro-active approach to combating hate speech will provide for laws criminalizing it online and offline. Countries such as Germany, Austria, Switzerland and France, for instance, do not allow certain Neo-Nazi slogans to be promoted online and criminalize the denial of the Holocaust or, in the case of Switzerland and France, also of certain other genocides.
[11]
Secondly, states need to ask themselves whether to combat hate speech through laws or through self-regulation. Who should set the rules: Internet Service Providers, relying on unofficial but widely circulated blacklists established by non-governmental organizations, or states through their systems of criminal law? With online hate speech, examples of both approaches can be found. Some manifestations of hate speech – such as the online sale of Nazi paraphernalia – are penalized through national criminal law. With regard to hate speech not falling under criminal law, self-regulation by providers (and social media services) is relied upon.
[12]
The third level of regulatory challenges relates to the legal-technological approach undertaken in the fight against hate speech. Regulatory attempts have materialized as censorship, the blocking of websites or the filtering of certain terms. Which of these attempts, we have to ask, is effective and yet still in keeping with the human rights obligations of states and of non-state actors, including Internet Service Providers?
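To make the bluntness of the last of these instruments tangible, the following is a deliberately minimal sketch of term-based filtering. It is not the implementation of any actual provider; the blocklist, the placeholder terms and the function name are hypothetical and chosen for illustration only.

```python
# Minimal, hypothetical sketch of term-based filtering as a legal-technological
# instrument. The blocklist and its placeholder entries are invented for
# illustration; real providers use far more elaborate (and still imperfect) systems.

BLOCKED_TERMS = {"example_slur", "example_slogan"}  # hypothetical placeholders


def is_blocked(text: str) -> bool:
    """Return True if the text contains any blocked term (case-insensitive substring match)."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


# Over-blocking: reporting or scholarship that merely quotes a slogan is caught.
print(is_blocked("A news report quoting the example_slogan for documentation"))  # True

# Under-blocking: a trivial misspelling slips through the filter.
print(is_blocked("ex4mple_slogan repeated with intent to incite"))  # False
```

Even this toy example shows why pure term-matching sits uneasily with the human rights obligations just mentioned: it over-blocks legitimate speech (reporting, research, counter-speech) while under-blocking easily disguised hate speech, which is why the proportionality of blocking and filtering measures must be assessed case by case.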
4. Assessing success
[13]
While all normative endeavours in the Information Society can be analyzed in light of the three regulatory dimensions described above, it is important to note that these dimensions do not abstractly prejudge the quality of the normative attempt. Rather, the example of regulating hate speech has shown that the choices made in the three dimensions of regulation are based on diverging approaches to regulation in online environments.
[14]
None of these normative choices (regulation vs. non-regulation; statal regulation vs. self-regulation; hard intervention vs. soft intervention) is – again, in the abstract – better suited than the others to overcoming the challenges of regulating the Information Society. Rather, it is essential to keep in mind the goal of all regulation and to allow one’s normative choices to be guided by a functional approach.
[15]
The aim of all regulation in the Information Society, and of all law in general, is the protection of the human being. Laws do not protect states as states. They protect them as vehicles for the security of the individual, their human security.5
[16]
A functional approach will allow us to question entrenched regulatory views arguing, for example, that only state regulation will succeed and that self-regulation is bound to serve only the interests of selected stakeholders. The integration of all stakeholders is essential for discovering, in the pre-normative phase, the challenges that regulatory attempts need to overcome and the regulatory demand they set out to answer. The multi-stakeholder approach, to which the international community is firmly committed,6 must therefore continue to hold sway.
[17]
Since 1996, when John P. Barlow declared the «global social space we are building to be naturally independent of the tyrannies you [states] seek to impose on us»,7 his declaration of independence has become gradually less true. By 2011, the Internet is neither anomic nor anarchic, though elements of both states exist in sub-regimes and for limited periods of time. Generally, however, legal lacunae and hurdles to effective enforcement do not equal anomie or anarchy. When Barlow discounted the states’ «moral right to rule us [inhabitants of cyberspace]», he was wrong again.8 A moral right to rule does exist (and even a duty), but only insofar as states aim to safeguard human dignity.
[18]
The Information Society has not yet reached a stage where there are too many laws. Arguably, there are too few. Those that exist are sometimes misapplied or cannot be enforced effectively and efficiently. The conclusion is evident: we need better laws. The three-level approach to regulation and the analysis of normative attempts in light of the functional approach may be useful in guiding the way.
Matthias C. Kettemann, Research and Teaching Fellow, University of Graz, Institute of International Law and International Relations, Universitätsstraße 15/A4, 8010 Graz, AT, matthias.kettemann@uni-graz.at, www.uni-graz.at/vrewww
- 1 Barlow, J.P., A Declaration of the Independence of Cyberspace, Davos, 8 February 1996, http://www.actlab.utexas.edu/~captain/cyber.decl.indep.html, accessed: 11 January 2011.
- 2 For an introduction into the different legal regimes touched by the evolution of the Information Society, see Benedek, W., Bauer, V., Kettemann, M.C. (eds.), Internet Governance and the Information Society. Global Perspectives and European Dimensions, Eleven, Utrecht 2008.
- 3 Uerpmann-Wittzack, R., Principles of International Internet Law, 11 German Law Journal 1245-1263 (2010), http://www.germanlawjournal.com/index.php?pageID=11&artID=12, accessed: 11 January 2011.
- 4 For a broader study of hate speech online, see, e.g., Barnett, B., Untangling the web of hate: are online «hate sites» deserving of First Amendment protection?, Cambria, Amherst 2008, and Spinello, R.A., Cyberethics: morality and law in cyberspace, Jones & Bartlett, Sudbury, MA 2006, pp. 71.
- 5 See, for an introduction into this conceptual approach, e.g. Kettemann, M.C., Harmonizing International Constitutional Law and Security: the Contribution of the Concept of Human Security, in Eberhard, H., Lachmayer, K., Ribarov, G., Thallinger, G. (eds.), Constitutional Limits to Security. Proceedings of the 4th Vienna Workshop on International Constitutional Law, Nomos, Vienna/Baden-Baden 2009, pp. 109-134.
- 6 World Summit on the Information Society, Tunis Commitment, WSIS-05/TUNIS/DOC/7-E, 18 November 2005, http://www.itu.int/wsis/docs2/tunis/off/7.html, accessed: 11 January 2011.
- 7 See note 1.
- 8 Id.