Jusletter IT

Evaluation Management in e-Participation

  • Author: Michael Sachs
  • Category: Articles
  • Region: Austria
  • Field of law: E-Democracy
  • Collection: Conference proceedings IRIS 2018
  • Citation: Michael Sachs, Evaluation Management in e-Participation, in: Jusletter IT 22 February 2018
This paper provides an overview of e-participation and discusses the evaluation of e-participation. Based on this overview, the paper presents a process for the creation and execution of custom-made evaluations that suit the needs of individual e-participation projects. The described process for custom-made evaluation management primarily addresses practitioners and e-participation projects where structured and reliable project evaluation is required despite limited resources.

Table of contents

  • 1. Introduction
  • 2. E-Participation
  • 2.1. Success Factors
  • 2.2. Evaluation
  • 2.3. Frameworks and criteria
  • 2.4. The problem of ex post evaluation
  • 2.5. Methods for evaluation
  • 2.6. The evaluators
  • 3. Methodological approach of this paper
  • 3.1. Process for creating custom-made evaluation management
  • 3.2. Example
  • 4. Conclusion
  • 5. References

1.

Introduction ^

[1]
In the early 2000s, e-democracy looked promising to scholars of democratic systems, as it appeared to have the potential to sustainably transform not only the democratic system but society as a whole. The agendas of relevant international and governing bodies are committed to promoting an open, collaborative and transparent democracy based on the integration of information and communication technologies into democratic processes [The White House 2009]. Citizens shall become more informed, more engaged and more satisfied with political decisions and their public administrations [Ministers eGovernment 2009]. E-participation is an important aspect of providing better public services as governments make better use of data sources and become gradually more open, transparent and co-creative [Toots et al. 2017].
[2]
Academia, public administration and political activists worked together in numerous pilot projects to test the application of e-democracy tools in realistic settings. Innovative technologies were developed and existing technologies were mashed up to serve the needs of the participatory processes defined within the different projects. As a result, pilot projects generated lists of recommendations, lessons learned and best practices, enabling the community to learn from each other [Parycek et al. 2014, Scherer/Wimmer 2012].
[3]
In real-life applications, e-democracy could not deliver what it promised [Prosser 2012], and e-participation projects often did not meet initiators’ expectations [Große 2013], as hardly any pilot research projects managed to continue sustainably. While early projects were technology driven, recent pilot projects have focused on user engagement in the democratic decision-making process. In light of some disappointing results, it is worth asking whether the expectations of e-democracy were too high and whether citizens or decision makers really wish for a shift from a representative to a more participatory form of democracy.

2.

E-Participation ^

[4]

The rise of social media and web 2.0 facilitated new forms of interaction and collaboration between citizens and governments, collectively labelled e-participation. While general elections in democracies are part of participatory decision-making [Krimmer 2002], this paper uses the term e-participation as applied by Macintosh [2004], who contrasts the new technological opportunities for including citizens in the decision-making process within a deliberative democracy with the digital transformation of elections as in e-voting. Thus, e-participation comprises variations of citizen engagement and can be categorised according to different schemas.

[5]
Macintosh [2004] refers to three levels of participation: enabling; engaging; and empowering. Based on DEMO-net deliverables, Wimmer [2007] refers to four levels of engagement: eInforming; eConsulting; eCollaboration; and eEmpowering. The Austrian working group on e-democracy described a step model comprising four steps, ranging from the least to the most intense participatory action [Parycek 2008]: information; consultation; cooperation; and co-decision. Recent work added the level of decision to the Austrian model to include legally binding decisions, such as e-referenda and e-elections, in the participatory process [Schossböck et al. 2016].
[6]
Estonia builds a compelling case for the sustainable success of internet voting [Krimmer 2016], but the country has not transferred this success story from e-voting to e-participation and e-democracy [Toots et al. 2016]. Successful large-scale examples of sustainable deliberative e-participation projects are difficult to find. Even the European Citizens’ Initiative (ECI), which should have become a strong direct democratic tool for citizens, lacks acceptance and success stories despite its continuously increasing ease of use for citizens – in contrast to the hurdles that remain in place for initiators of the ECI [ECAS 2014]. The implementation of e-participation seems most promising at the local level, as initiatives such as participatory budgeting may foster a more participatory culture and make e-participation more popular [Krenjova/Raudla 2013].
[7]
E-participation can be embedded into decision-making processes on all levels of governance in many ways. The concept of the policy cycle exists in various representations [Janssen/Helbig 2015, Müller 2010] and e-participation can be integrated at several stages of the policy cycle. Even though evaluation is an explicit stage in all variations of the policy cycle, it is important to point out that evaluation and feedback processes can be incorporated into almost any stage. Parycek/Höchtl/Ginner [2014] show the feedback and evaluation potentials within the policy cycle in the context of open data.

2.1.

Success Factors ^

[8]
E-participation is a complex topic and new projects can benefit greatly from insights gained by evaluating the successes and failures of other projects. Evaluation fosters precise and objective analysis of the actual outcomes and success factors of e-participation, and it facilitates learning from mistakes as an organised community [Aichholzer/Westholm 2009]. The definition of success criteria provides useful information for those who plan to implement and execute e-participation projects.
[9]
Despite the lack of large-scale sustainable e-participation projects, success factors in e-participation have been defined extensively. Panopoulou/Tambouris/Tarabanis [2014] conducted a study among practitioners on possible success factors in e-participation, drawing on feedback from 40 different e-participation initiatives at all levels of governance in twelve European countries. Panopoulou/Tambouris/Tarabanis [2014] indicate seven success factors, some of which can be divided into more specific sub-categories: commitment by the government; usability; combining different channels (on- and offline); thorough communication and promotion; security and privacy; organisation and management; and topic complexity and quality of participation. Similar success factors have been defined in the works of other scholars [e.g. Kubicek/Lippa/Koop 2011].
[10]
While general agreement on the most relevant success factors for e-participation can be found in the literature, it is more challenging to define normative values of success in e-participation, as they strongly relate to the purpose of the participation process. In some cases, high participation rates might be relevant for legitimising a decision, while in other cases a small participation group might lead to the best collaboratively created solutions for specific problems. Similar approaches to purpose-oriented process design can be found in public sector innovation processes [Edelmann/Höchtl/Sachs 2012].
[11]
Concrete success values are hardly ever defined in the literature, as comparative evaluation models often code the evaluation results of other projects to make them comparable. Consequently, the lessons learned from comparative evaluation can be of limited use for individual projects, although they remain relevant to the community.

2.2.

Evaluation ^

[12]
The importance of high-quality e-participation evaluation frameworks has often been stated. The following quote can be found in many papers on evaluation frameworks [Kubicek/Aichholzer 2016, Loukis/Xenakis/Charalabidis 2010, Macintosh/Whyte 2008], as it highlights the need to understand the mechanisms that lead to successful e-participation projects: «[T]here is a striking imbalance between the amount of time, money and energy that governments in OECD countries invest in engaging citizens and civil society in public decision making and the amount of attention they pay to evaluating the effectiveness and impact of such efforts.» [OECD 2005, 10] The OECD report states that thorough evaluation concepts and procedures were often neither planned nor implemented. This might be rooted in the complexity of e-participation evaluation, as costs both in terms of time and money can increase exponentially with complexity.
[13]
Several evaluation frameworks draw on a multi-level approach for an integral analysis of e-participation initiatives [e.g. Macintosh/Whyte 2008], and recent research addresses the limits of generalised frameworks. Kubicek/Aichholzer [2016] do not recommend the application of general frameworks, as evaluations must consider the project-specific contexts. Overall, evaluation in e-participation projects does not differ greatly from the evaluation of more traditional (offline) participation projects, but the integration of information and communication technologies allows additional methods for evaluation and monitoring that may be automated to a certain extent.

2.3.

Frameworks and criteria ^

[14]
Scholars have developed frameworks for the analysis of e-participation, and many of these frameworks provide excellent guidelines for conducting an analysis of e-participation initiatives. As hinted at previously, the context of e-participation is crucial when setting up an evaluation process [Kubicek/Aichholzer 2016]. Most evaluation frameworks incorporate ideas from existing frameworks, and many projects employ custom-made solutions for their evaluation needs. The VoicE-project [Scherer/Wimmer 2010] applied the DEMO-net framework described by Macintosh/Whyte [2008], which focuses on three perspectives: the project perspective; the socio-technical perspective; and the democratic perspective. Scherer/Wimmer [2010] analysed the following 16 criteria in their evaluation of the VoicE-project. Project perspective: engaging with a wider audience; obtaining better-informed opinions; scope of deliberation; effectiveness; feedback; process quality; and sustainability. Socio-technical perspective: social acceptability; usefulness; and usability. Democratic perspective: representation; engagement; transparency; conflict and consensus; political equality; and community control.
[15]
The OurSpace-project conducted an evaluation that adapted the model of Macintosh/Whyte [2008] to the individual needs of the project [Parycek et al. 2014], as did the ePartizipation-project [Heussler et al. 2018]. Loukis/Xenakis/Charalabidis [2010] draw on existing models and present an excellent example of the scholarly work necessary to research a considerable number of frameworks in order to arrive at a framework best suited to a specific purpose – the LEX-IS-project evaluation. The research identifies many possible criteria for evaluation, and Loukis/Xenakis/Charalabidis [2010] arrive at 48 criteria for analysis divided into three evaluation perspectives: process; system; and outcome. Their criteria are slightly more narrowly defined than the criteria of Scherer/Wimmer [2010]. Producing such a deep analysis of frameworks as showcased in Loukis/Xenakis/Charalabidis [2010] requires well-funded research and collaboration with academic partners capable of providing the required expertise, experience and resources. As such, there might be a need for more practical ways to manage the evaluation of smaller-scale e-participation projects (e.g. on a local level).

2.4.

The problem of ex post evaluation ^

[16]

Kubicek/Aichholzer [2016] describe an «evaluation gap» when they state that research into standard evaluation frameworks will not bring the desired result of general applicability. They claim that different criteria and methods for evaluation must be applied depending on the specific participation initiative and the different groups of actors involved. Despite the increasing literature on evaluation frameworks in the domain of e-participation, there is no widely accepted model. The same e-participation tool can be useful and successful in one context and be the source of failure in another. Thus, Kubicek/Aichholzer [2016] propose involving stakeholder groups before the evaluation rather than ex post and define at least five stakeholder groups: decision makers; organizers; users/participants; target groups/people concerned; and the general public. Kubicek/Aichholzer [2016] call this an actor-related approach set up like a field experiment.

2.5.

Methods for evaluation ^

[17]
The most common methods for the analysis of e-participation projects have been defined in the work of Macintosh/Whyte [2008]: field observation in a real-world setting; feedback from stakeholders (interviews and discussions); online questionnaires; analysis of the produced content and online discussions; analysis of the project documentation; and usage statistics of the electronic tools. The work of Scherer/Wimmer [2010] adds two items to the applicable methods for evaluation: analysis of legal procedures; and expert interviews. Technical security and functionality testing are additional items found in Loukis/Xenakis/Charalabidis [2010], as their approach is driven by technical evaluations of information systems. Security evaluations of e-participation platforms can also draw on methods from e-voting evaluation concepts [Krimmer/Volkamer 2006, Gibson et al. 2016]. This is not an exhaustive list of all methods useful in e-participation evaluation but rather a current overview of the most commonly used ones. Innovative technologies are likely to allow more sophisticated analyses, such as semantic, language or network analysis, to be integrated into participation tools.
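Some of these methods, notably the usage statistics of the electronic tools, lend themselves to partial automation. The following minimal Python sketch illustrates one possible way of doing so, assuming a hypothetical CSV export of platform activity with the columns user_id, stage and timestamp; the file name and column names are assumptions, not features of any specific e-participation platform.

```python
# Minimal sketch: deriving simple usage statistics from a hypothetical
# platform export (columns: user_id, stage, timestamp).
from collections import Counter
import csv

def usage_statistics(path):
    """Count contributions and active users per participation stage."""
    contributions_per_stage = Counter()
    users_per_stage = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            stage = row["stage"]
            contributions_per_stage[stage] += 1
            users_per_stage.setdefault(stage, set()).add(row["user_id"])
    return {stage: {"contributions": contributions_per_stage[stage],
                    "active_users": len(users)}
            for stage, users in users_per_stage.items()}

if __name__ == "__main__":
    # Hypothetical export file produced by the participation platform.
    print(usage_statistics("platform_activity.csv"))
```

Such a script would only cover the quantitative part of an evaluation; the qualitative methods listed above, such as interviews and content analysis, still require manual work.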

2.6.

The evaluators ^

[18]
Forss [2005] describes and contrasts three kinds of actors that can conduct evaluations: internal; independent; and participatory. Each of these actor types has its own advantages and disadvantages for evaluation. Internal evaluation means that those who organise the participation process evaluate the process. They have full access to information and can immediately apply learnings, but they might have limited competence and might avoid difficult issues. Independent evaluation means that external experts are tasked with the evaluation. They have the competence and an outside perspective, but they might have limited access to information and limited impact with their findings. Adapted to the context of e-participation in this paper, participatory evaluation can be described as those who are subjects or participants of the e-participation process taking part in its evaluation. Mutual learning allows straightforward application of lessons learned, but the process might be slow and requires commitment from all participants. Ideally, evaluation consists of a mix of these three kinds of evaluation actors, and their evaluation work must be orchestrated by a managing body.

3.

Methodological approach of this paper ^

[19]
So far, the basis for the evaluation management indicated in the title of this paper has been set. The process model for custom-made evaluation management will be described in the next section and is directed at practitioners who want to execute an e-participation project of any scale but lack the resources to research the evaluation literature in depth. The question to be answered is: How can organisers/initiators of e-participation projects ensure that the evaluation of their project accurately addresses those aspects that are of most relevance to them?
[20]
Firstly, answering this question requires an ex ante approach to evaluation, as outlined in the following description. Secondly, it requires knowledge about e-participation, and this paper has briefly addressed the key aspects of e-participation evaluation; the previous sections can serve as a helpful reference and source of further reading when designing the evaluation instrument. Thirdly, the question requires experience in designing e-participation evaluation. Building on evaluation experience gained from two e-participation projects, the process model described here simplifies the construction of an evaluation instrument and avoids errors inherent in ex post evaluation. The following section describes the process of setting up the evaluation framework and allows the evaluation to become an integral part of project management from the very beginning.

3.1.

Process for creating custom-made evaluation management ^

[21]
Hardly any two e-participation projects are alike; they differ in various ways: the reasons for setting up the participation process; the objectives and targets of the project; the stakeholder groups; the processes and methods; the intensity of the participation; the technologies applied; and time.
[22]
This paper generalises a process for creating a custom-made evaluation instrument as opposed to creating or applying a general evaluation framework. The generic process model allows for adaptation of the evaluation to specific contexts and serves the needs of those that initiate the e-participation process.
[23]
The process is modelled for organisers/initiators of e-participation initiatives such as governing bodies, public authorities and civil organisations, to provide them with the best possible evaluation outcomes that answer their questions and ensure learnings for potential follow-up projects. Consequently, the model allows adaptation to various levels of participation at all levels of governance. The degree of complexity (which impacts the resources needed) can be controlled by the organisers of the e-participation process. Ideally, the process is set up by an external expert in close collaboration with the organisers of the e-participation project to be evaluated.
[24]

The process consists of three basic steps, each divided into sub-items: expectation analysis; creation of the evaluation instrument; and conduct of the evaluation.

  1. Expectation analysis: Structured inquiry into the expectations of the organisers in charge of the participation process, such as public agencies or project leaders. (In research projects, the expectation analysis is often based on the work plan.)
    1. Definition of rationale and objectives: The inquiries of rationale and objectives should be conducted by a domain expert who provides additional expertise and an outside perspective, to determine primary and possibly secondary objectives of the participation process. This can be done in interviews with the project organisers or in a discussion setting with stakeholders.
    2. Definition of expected outcomes and success: Following the definition of the rationale and objectives, the expectations of the organisers/stakeholders should be collaboratively defined. The expert can relate the definition of success to experiences from other e-participation projects and together with the organisers reflect on the truly relevant and realistic outcomes of the project.
  2. Creation of the evaluation instrument: Structuring and defining the evaluation process based on the expectation analysis. This should be done by an expert with support from the organisers or managers of the respective e-participation project (a minimal sketch of such an instrument follows this list).
    1. Definition of the criteria that shall be analysed: What shall be evaluated?
    2. Definition of indicators to analyse the criteria: What shall be measured?
    3. Definition of the methods or tools used to measure the indicators: How shall it be measured?
    4. Definition of success values: When can a measured value be defined as success or failure?
    5. Definition of the measuring time: When is the appropriate time to measure?
  3. Conduct of the evaluation: The execution of the evaluation tasks according to the evaluation instrument.
    1. The actual evaluation takes place simultaneously with project execution and thereafter. Feedback loops for instant improvements of the e-participation process can be defined in the evaluation instrument if adequate measurement takes place during project execution.
    2. After all tasks defined in the evaluation instrument are completed, a final evaluation report can be produced. This might take considerable time after the end of the e-participation process if impact criteria are to be evaluated. In such cases interim evaluation reports are recommended.
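The five definitions of step 2 can be read as the fields of a single evaluation item. The following minimal Python sketch illustrates such a structure; the class name, field names and the example threshold are purely illustrative assumptions and not part of any established framework.

```python
# Minimal sketch of one item of an evaluation instrument, mirroring the five
# definitions of step 2. Names and values are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvaluationItem:
    criterion: str                      # 2.1 What shall be evaluated?
    indicator: str                      # 2.2 What shall be measured?
    method: str                         # 2.3 How shall it be measured?
    success: Callable[[float], bool]    # 2.4 When is a measured value a success?
    measuring_time: str                 # 2.5 When is the appropriate time to measure?

    def assess(self, measured_value: float) -> str:
        return "success" if self.success(measured_value) else "failure"

# Illustrative item, as it might result from the expectation analysis:
active_users = EvaluationItem(
    criterion="Participation rate",
    indicator="Number of active users",
    method="Platform data",
    success=lambda value: value > 50,
    measuring_time="End of each e-participation stage",
)
print(active_users.assess(63))  # -> success
```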
[25]
The process outlined here also treats time as a crucial element of the evaluation, i.e. at what point evaluation data should be collected and analysed. This aspect is not given much attention in e-participation evaluation frameworks, as documentation and platform data are digitally stored anyway. However, the evaluation data gathered from e-participation processes enable organisers to assess the quality and practicability of the e-participation process instantly and allow them to make adjustments to the e-participation project within a short time if necessary. Figure 1 shows the planning of the measuring time as a result of the described process.
[26]
Since the evaluation instrument requires the entire evaluation to be fully understood from the outset, the process outlined here ensures that all evaluation tasks are defined at the very beginning of the e-participation project and presented in a highly structured way.
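The planning of measuring times described above can likewise be expressed as a simple schedule that groups evaluation items by the point in time at which they are to be measured, similar in spirit to the timeline of Figure 1. In the following minimal sketch, the stage names, dates and items are hypothetical.

```python
# Minimal sketch: grouping evaluation items by their planned measuring time.
# Stage names, end dates and items are hypothetical.
from collections import defaultdict
from datetime import date, timedelta

stage_ends = {"stage 1": date(2018, 3, 31),
              "stage 2": date(2018, 5, 31),
              "stage 3": date(2018, 7, 31)}

items = [("Number of active users", "end of each stage"),
         ("Number of registered users", "end of execution"),
         ("Ideas are realisable", "1 month after end")]

schedule = defaultdict(list)
for indicator, timing in items:
    if timing == "end of each stage":
        for stage, end in stage_ends.items():
            schedule[end].append(f"{indicator} ({stage})")
    elif timing == "end of execution":
        schedule[max(stage_ends.values())].append(indicator)
    elif timing == "1 month after end":
        schedule[max(stage_ends.values()) + timedelta(days=30)].append(indicator)
    # Weekly items (e.g. helpdesk requests) would be expanded analogously.

for day in sorted(schedule):
    print(day.isoformat(), "->", ", ".join(schedule[day]))
```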

3.2.

Example ^

[27]

The following example in Table 1 visualises how the evaluation instrument could be structured by referring to basic evaluation criteria in e-participation. For instance, participation rate: in step one of the process (the expectation analysis) the fictional organisers of an e-participation project state that a sufficient number of registered and active users is of great relevance to them, and they consider 50 active users per participation stage and a minimum of 100 registered users overall a success. Thus, the evaluation instrument for this specific item could be set up as in the first two rows of the following table.

| Criteria | Indicators | Method/Tool | Success values | Measuring time |
| --- | --- | --- | --- | --- |
| Participation rate | Number of active users | Platform data | > 50 active users per stage | End of each e-participation stage |
| Participation rate | Number of registered users | Platform data | > 100 registered users | End of e-participation execution |
| Platform design | Helpdesk support requests | Number of support requests submitted to helpdesk | < 2 per 50 users | Each week |
| Quality of the users’ ideas | Ideas are realisable | Assessment from expert | At least 5 positively assessed ideas | 1 month after end of e-participation |
| Implementation of results | Inclusion of aspects of the best users’ ideas | Interview with selected users | Positive assessment by users | 1 year after end of e-participation |

Table 1: Example of an evaluation instrument

[28]
Evaluation of the participation rate is usually part of any classic ex post evaluation, and it is used here as a simple example to depict the process of setting up the evaluation instrument. The key difference between the above evaluation instrument and an ex post evaluation process is that the success values are defined prior to the execution of the e-participation project and placed in context with all other evaluation activities. More complex requirements defined in the expectation analysis would also require a more complex evaluation instrument that might not be realisable in an ex post evaluation.
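To make the pre-defined success values operational, the example rows of Table 1 can be encoded as simple records and checked automatically once the measured values are available. The following self-contained Python sketch does this for three of the rows; the measured values are fictional and the chosen encoding is only one possible representation.

```python
# Minimal sketch: encoding rows of Table 1 as plain records and checking
# fictional measured values against the pre-defined success values.
import operator

OPS = {">": operator.gt, ">=": operator.ge}

instrument = [
    {"criterion": "Participation rate", "indicator": "active users per stage",
     "method": "platform data", "op": ">", "threshold": 50,
     "measuring_time": "end of each stage"},
    {"criterion": "Participation rate", "indicator": "registered users overall",
     "method": "platform data", "op": ">", "threshold": 100,
     "measuring_time": "end of e-participation execution"},
    {"criterion": "Quality of the users' ideas", "indicator": "positively assessed ideas",
     "method": "expert assessment", "op": ">=", "threshold": 5,
     "measuring_time": "1 month after end of e-participation"},
]

# Fictional values measured at the defined measuring times:
measured = {"active users per stage": 64,
            "registered users overall": 91,
            "positively assessed ideas": 7}

for item in instrument:
    value = measured[item["indicator"]]
    verdict = "success" if OPS[item["op"]](value, item["threshold"]) else "failure"
    print(f"{item['criterion']} / {item['indicator']}: {value} -> {verdict}")
```

In this fictional run, the second row would be flagged as a failure (91 registered users against a threshold of more than 100), illustrating how the pre-defined success values make the assessment unambiguous.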
[29]

Looking at the other evaluation criteria in Table 1 (platform design, quality of the users’ ideas, and implementation of the results) shows the practicability of the modelled process for creating a suitable evaluation instrument that can be adapted to the needs of individual projects through the integration of different evaluation methods. Some criteria can be measured in more complex ways using several indicators, and individual indicators can in turn be measured with several methods or tools. The examples in Table 1 are kept simple to avoid the need to explain a specific context. It is crucial to remember that all the information placed in the table of the evaluation instrument originates from a thorough expectation analysis conducted by an expert together with the (here fictional) organisers/initiators of the e-participation project. Figure 1 shows the exemplary e-participation process with three stages. All evaluation elements from the example in Table 1 have been integrated into the timeline below.

Figure 1: Timeline of evaluation measurement

[30]
It must be pointed out that creating an evaluation instrument for an e-participation project takes considerable time and several drafts. One needs to think through the entire project from beginning to end and ideally revise the final draft together with the organisers of the e-participation project. The more thought and preparation are invested in planning the evaluation ex ante, the better the execution of the evaluation and its subsequent results should be.

4.

Conclusion ^

[31]
The literature on the evaluation of e-participation has produced frameworks, success factors, criteria to be evaluated, and methods of evaluation. It also points out that, due to the many varieties of e-participation, evaluation remains a complex issue, which it truly is. This paper seeks to reduce the complexity of integrating thoroughly planned evaluation processes into e-participation project management and execution. The described process for custom-made evaluation management primarily addresses practitioners and e-participation projects where structured and reliable project evaluation is required despite limited resources.

5.

References ^

Aichholzer, Georg/Westholm, Hilmar, Evaluating eParticipation projects: practical examples and outline of an evaluation framework. In: European Journal of ePractice, 7(3), 2009, pp. 1–18.

ECAS (European Citizen Action Service), The European Citizens’ Initiative Registration: Falling at the first hurdle? http://www.ecas.org/wp-content/uploads/2014/12/ECI-report_ECAS-2014_1.pdf (accessed 10 August 2017), 2014.

Edelmann, Noella/Höchtl, Johann/Sachs, Michael, Collaboration for Open Innovation Processes in Public Administrations. In: Charalabidis, Yannis/Koussouris, Sotirios (Eds.), Empowering Open and Collaborative Governance. Springer, Heidelberg, 2012, pp. 21–37, https://doi.org/10.1007/978-3-642-27219-6_2.

Forss, Kim, An Evaluation Framework for Information, Consultation, and Public Participation. In: Evaluating public participation in policy making, OECD Publications, 2005, pp. 41–82, http://dx.doi.org/10.1787/9789264008960-en.

Gibson, J. Paul/Krimmer, Robert/Teague, Vanessa/Pomares, Julia, A review of E-voting: the past, present and future. In: Annals of Telecommunications, 71(7-8), 2016, pp. 279–286, https://doi.org/10.1007/s12243-016-0525-8.

Große, Katharina, E-participation – the Swiss army knife of politics? In: CeDEM13: Conference for E-Democracy and Open Government, MV-Verlag, 2013, pp. 45–59.

Heussler, Vinzenz/Said, Giti/Sachs, Michael/Schossböck, Judith, Multimodale Evaluierung von Beteiligungsplattformen. In: Leitner, Maria (Ed.) Digitale Bürgerbeteiligung, Springer, Forthcoming 2018.

Janssen, Marijn/Helbig, Natalie, Innovating and changing the policy-cycle: Policy-makers be prepared! In: Government Information Quarterly, 2015, https://doi.org/10.1016/j.giq.2015.11.009.

Krenjova, Jelizaveta/Raudla, Ringa, Participatory Budgeting at the Local Level: Challenges and Opportunities for New Democracies. In: Halduskultuur – Administrative Culture 14 (1), 2013, pp. 18–46.

Krimmer, Robert, Internet Voting: Elections in the (European) Cloud. In: CeDEM Asia 2016: Proceedings of the International Conference for E-Democracy and Open Government, Asia 2016, Daegu, Edition Donau-Universität Krems, 2016, pp. 123–125.

Krimmer, Robert, E-Voting.at: Elektronische Demokratie am Beispiel der österreichischen Hochschülerschaftswahlen. Working Papers on Information Processing and Information Management, 05/2002.

Krimmer, Robert/Volkamer, Melanie, Observing Threats to Voter’s Anonymity: Election Observation of Electronic Voting. In: Working Paper Series on Electronic Voting and Participation, 01/2006, pp. 3–13.

Kubicek, Herbert/Aichholzer, Georg, Closing the evaluation gap in e-Participation research and practice. In: Aichholzer, Georg/Kubicek, Herbert/Torres, Lourdes (Eds.), Evaluating e-Participation, Springer International Publishing, 2016, pp. 11–45.

Kubicek, Herbert/Lippa, Barbara/Koop, Alexander, Erfolgreich beteiligt. Nutzen und Erfolgsfaktoren internetgestützter Bürgerbeteiligung – Eine empirische Analyse von zwölf Fallbeispielen. Gütersloh, Bertelsmann Stiftung, 2011.

Loukis, Euripidis/Xenakis, Alexandros/Charalabidis, Yannis, An evaluation framework for e-participation in parliaments. In: International Journal of Electronic Governance, 3(1), 2010, pp. 25–47.

Macintosh, Ann, Characterizing e-participation in policy-making. In: Proceedings of the 37th Annual Hawaii International Conference on System Sciences, IEEE, 2004, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.6150&rep=rep1&type=pdf (accessed 21 October 2017).

Macintosh, Ann/Whyte, Angus, Towards an evaluation framework for eParticipation. In: Transforming government: People, process and policy, 2(1), 2008, pp. 16–30.

Ministers eGovernment, Ministerial Declaration on eGovernment. https://ec.europa.eu/digital-single-market/sites/digital-agenda/files/ministerial-declaration-on-egovernment-malmo.pdf (accessed 30 October 2017), 2009.

Müller, Philipp, Offene Staatskunst. In: Internet und Gesellschaft Co:llaboratory: Offene Staatskunst. Bessere Politik durch Open Government, https://www.open3.at/wp-content/uploads/IGCollaboratoryAbschlussbericht2OffeneStaatskunstOkt2010.pdf (accessed 20 September 2017), 2010, pp. 11–27.

OECD. Evaluating Public Participation in Policy Making. OECD Publications, 2005, http://dx.doi.org/10.1787/9789264008960-en.

Panopoulou, Eleni/Tambouris, Efthimios/Tarabanis, Konstantinos, Success factors in designing eParticipation initiatives. In: Information and Organization, 24(4), 2014, pp. 195–213.

Parycek, Peter et al., Positionspapier zu E-Democracy und E-Participation in Österreich. https://www.ref.gv.at/fileadmin/_migrated/content_uploads/EDEM-1-0-0-20080525.pdf (accessed 29 October 2010) 2008.

Parycek, Peter/Höchtl, Johann/Ginner, Michael, Open government data implementation evaluation. In: Journal of theoretical and applied electronic commerce research, 9(2), 2014, pp. 80–99.

Parycek, Peter/Sachs, Michael/Sedy, Florian/Schossböck, Judith, Evaluation of an e-participation project: Lessons learned and success factors from a cross-cultural perspective. In: International Conference on Electronic Participation. Springer, Berlin, Heidelberg, 2014, pp. 128–140.

Prosser, Alexander, eParticipation – Did We Deliver What We Promised? In: Advancing Democracy, Government and Governance, 2012, pp. 10–18, https://doi.org/10.1007/978-3-642-32701-8_2.

Scherer, Sabrina/Wimmer, Maria A., A regional model for E-Participation in the EU: evaluation and lessons learned from VoicE. In: International Conference on Electronic Participation. Springer, Berlin, Heidelberg, 2010, pp. 162–173. https://doi.org/10.1007/978-3-642-15158-3_14.

Schossböck, Judith/Rinnerbauer, Bettina/Sachs, Michael/Wenda, Gregor/Parycek, Peter, Identification in e-participation: a multi-dimensional model. In: International Journal of Electronic Governance, 8(4), 2016, pp. 335–355, https://doi.org/10.1504/IJEG.2016.082679.

The White House, Memorandum for the Heads of executive Departments and Agencies. Transparency and Open Government. https://obamawhitehouse.archives.gov/the-press-office/transparency-and-open-government (accessed 30 October 2017) 2009.

Toots, Maarja/Kalvet, Tarmo/Krimmer, Robert, Success in eVoting – Success in eDemocracy? The Estonian Paradox. In: International Conference on Electronic Participation. Springer, Berlin, Heidelberg, 2016, pp. 55–66, https://doi.org/10.1007/978-3-319-45074-2_5.

Toots, Maarja/McBride, Keegan/Kalvet, Tarmo/Krimmer, Robert, Open data as enabler of public service co-creation: exploring the drivers and barriers. In: CeDEM 2017 Conference, IEEE, 2017, pp. 102–112, https://doi.org/10.1109/CeDEM.2017.12.

Wimmer, Maria A., Ontology for an e-participation virtual resource centre. In: Proceedings of the 1st international conference on Theory and practice of electronic governance, ACM, 2007, pp. 89–98, https://doi.org/10.1145/1328057.1328079.