Open Government Data: possibilities and problems

View a .PDF version of this report here

“Government 2.0 involves direct citizen engagement in conversations about government services and public policy through open access to public sector information and new Internet based technologies. It also encapsulates a way of working that is underpinned by collaboration, openness and engagement”[1]

Background and context

The Political Issues Analysis System (PIAS) project (view original report .pdf), of which this work is a subset, sought to investigate how citizens in Melbourne, Australia used the Internet to find information about key political issues. It also sought to understand how citizens contacted and interacted with their elected representatives in relation to these issues. Through workshops, case studies, and the development and testing of prototype software, the research uncovered some notable trends in user engagement with important aspects of the formal political process online.

The PIAS project principally focussed upon citizen information use by investigating interaction with party websites and the policy documents they made available. However, the participants in our study largely found 1) the sites difficult to use, 2) the information hard to navigate and to compare with other policies, and 3) the written policies unreliable and unclear. One of our key recommendations from the study was that policies published by political parties should be made available in a ‘machine readable’ form so that they can be automatically aggregated into other systems, enabling citizens to compare the policy positions of the parties. Strict metadata publishing standards and frameworks should also be used so that the aggregated information is of a high standard, allowing it to be re-used effectively.
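To make that recommendation concrete, a minimal sketch follows. It assumes a hypothetical machine-readable policy format; the field names, parties, topics and URLs are illustrative only, not an actual published schema.

from collections import defaultdict

# Hypothetical machine-readable policy records, as parties might publish them
# under a shared metadata standard. All fields and values are illustrative.
policies = [
    {"party": "Party A", "topic": "public-transport",
     "summary": "Extend rail services to outer suburbs.",
     "published": "2013-05-01", "source_url": "http://example.org/a/transport"},
    {"party": "Party B", "topic": "public-transport",
     "summary": "Prioritise road upgrades over new rail.",
     "published": "2013-04-20", "source_url": "http://example.org/b/transport"},
    {"party": "Party A", "topic": "health",
     "summary": "Increase hospital funding by a fixed percentage.",
     "published": "2013-03-11", "source_url": "http://example.org/a/health"},
]

def compare_by_topic(records):
    """Group policy records by topic so party positions can be read side by side."""
    by_topic = defaultdict(list)
    for record in records:
        by_topic[record["topic"]].append(record)
    return by_topic

for topic, entries in compare_by_topic(policies).items():
    print(topic)
    for entry in entries:
        print(f"  {entry['party']}: {entry['summary']} ({entry['source_url']})")

Even this toy aggregation only works because every record shares the same fields; that consistency is what a strict metadata standard would guarantee.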

This work complements the PIAS project by listing some of the key projects and services available that utilise government data. It also explores in more detail the limited availability of what could be termed ‘democratic data’. For the purposes here, “democratic data” covers: 1) Hansard: making the workings of government available in new ways; 2) Transparency: newer forms of transparency through ‘data’; and 3) Policy: enhancing and extending the policy-making process through online open consultation.

Why Open Access to government data?

Much of the impetus behind the drive for Open Access to government data stems from a push for greater transparency in the functions of government. In the case of Victoria, however, much of the data being released under the Gov 2.0 agenda tends to be of an administrative nature and of little democratic potential. Whilst the Parliament of Victoria does make an enormous amount of useful material available to the public through its website, it is not made available in a technically sophisticated, machine-readable way that takes full advantage of the potential of the Internet. Bills are only available in .pdf or Word format, and the most important document about the workings of government, Hansard, is also only available as .pdf (although it is possible to do a full-text search of Hansard from 1991 onwards). If these important documents were available in a machine-readable form, they could be utilised by application developers in innovative ways.
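As a rough illustration of what developers could do with structured (rather than .pdf) Hansard, the sketch below indexes a debate by speaker. The XML structure is purely hypothetical and does not reflect any schema actually published by the Parliament of Victoria.

import xml.etree.ElementTree as ET
from collections import Counter

# A purely hypothetical structured Hansard fragment; element and attribute
# names are illustrative assumptions, not an official format.
HANSARD_XML = """
<debate date="2012-06-14" chamber="Legislative Assembly">
  <speech speaker="Member A" time="10:02">
    I move that the bill be now read a second time.
  </speech>
  <speech speaker="Member B" time="10:05">
    I rise to oppose the bill in its current form.
  </speech>
  <speech speaker="Member A" time="10:20">
    In reply, I thank members for their contributions.
  </speech>
</debate>
"""

root = ET.fromstring(HANSARD_XML)

# With structure like this, an application could index debates by speaker,
# date or topic rather than relying on full-text search over flat PDFs.
contributions = Counter(speech.get("speaker") for speech in root.iter("speech"))
for speaker, count in contributions.items():
    print(f"{speaker}: {count} contribution(s) on {root.get('date')}")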

The Open Access movement is a push to make data both machine-readable and interoperable so that it may be linked together and leveraged for all sorts of purposes, whether new business opportunities, medical research, or new areas of social research. However, doing this is no easy task, as multiple data sources require linking, matching and ‘cleansing’ across diverse and complex systems. The first step in this process is to expose data in a standardised way so that it may be located and machine-read (a minimal sketch of what this might look like follows the list below). The Victorian public sector has a policy framework specifically designed to achieve these tasks, titled the Victorian Public Sector Action Plan. Two key points are:

  1. Participation: Engaging communities and citizens by using Government 2.0 initiatives to put citizens at the centre and provide opportunities for co-design, co-production and co-delivery.
  2. Transparency: Opening up government by making it more open and transparent through the release of public sector data and information.[2]
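The sketch below shows what exposing data in a standardised, machine-readable way might look like in practice: a catalogue-style description of a released dataset. The field names echo common open-data metadata (title, publisher, licence, distributions) but are assumptions for illustration, not an official Victorian Government schema, and the dataset and URLs are invented.

import json

# An illustrative, catalogue-style record describing a released dataset.
# Field names and values are assumptions, not an actual agency schema.
dataset_record = {
    "identifier": "vic-example-dataset-001",
    "title": "Example public transport patronage figures",
    "publisher": "Example Victorian Government agency",
    "issued": "2012-11-01",
    "licence": "CC-BY",
    "keywords": ["transport", "patronage", "open data"],
    "distributions": [
        {"format": "CSV", "url": "http://data.example.gov.au/patronage.csv"},
        {"format": "JSON", "url": "http://data.example.gov.au/patronage.json"},
    ],
}

# Publishing consistent records like this is what allows aggregators to
# locate, link and reuse data across agencies and jurisdictions.
print(json.dumps(dataset_record, indent=2))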

Making data available in this way can only help to “deepen democratic processes” and promote a strong and healthy democracy (although this is often an aspiration rather than an actuality).[3] Accordingly, there is a promising international trend towards promoting a two-way dialogue between political representatives and the public by combining ‘democratic data’ with citizen-produced data on popular social media platforms.[4] Rather than building a completely new platform (as has been the case with a number of somewhat underutilised government initiatives), some projects take advantage of existing and heavily used social network platforms and provide tools and services to augment their capacity (usually to inform and communicate government policy processes). The large EU-funded WeGov project[5] and other projects in the US and Europe are welcome movements in this direction.[6]


Life beyond the timeline: creating and curating a digital legacy

Abstract: The internet has steadily become integrated with our everyday lives, and it is scarcely worth remarking that the quotidian footprint we leave is increasingly digital. This being the case, the question of what will happen to our digital legacy when we die is an increasingly important one. Digital accounts containing emails, photos, videos, music collections, documents of all kinds, social media content, eBooks and the like all trace the life we have led, and if they are to be conserved and bequeathed, if family and friends are to benefit from these often highly emotive and evocative desiderata, and if history is to be recorded, we need to prepare these accounts and assets for the inevitability of death. A difficulty, though, is that the demands of curating such a legacy are formidable, the importance of creating digital archives from personal data contained in online accounts is not well established in the public arena, and the products and services available to facilitate this are largely inadequate. Future generations and future historians are the poorer for this. In this presentation we will point out some of the difficulties involved in curating and bequeathing a digital legacy, and suggest a partial remediation.

Our paper from CIRN Prato, 2013 is now available (Link)

The Digital Humanities: Beyond Computing

I approach this Digital Humanities journal issue with caution. Although admittedly I have only skimmed the articles (and there are some good arguments being made), someone still needs to make good humanities software to help us understand the human condition in new ways (and these ‘hybrid’ scholars are very much in the minority). I’ll go out on a limb here and state that the field within the humanities that has contributed the least to making good software is Cultural Studies (even though it contributes good critical discourse to technical debates). Forgive me if I am wrong, but I cannot name one technical innovation from Cultural Studies; yet there are literally thousands from history and archaeology over many decades (check projects here). ‘Beyond Computing’ indeed!

_________________________________________________________

We are pleased to announce a new issue of the online, open-access journal Culture Machine:

CULTURE MACHINE 12 (2011)
http://www.culturemachine.net/index.php/cm/issue/current

THE DIGITAL HUMANITIES: BEYOND COMPUTING
edited by Federica Frabetti

The field of the digital humanities embraces various scholarly activities in the humanities that involve writing about digital media and technology as well as being engaged in digital media production. Perhaps most notably, in what some are describing as a ‘computational turn’, it has seen techniques and methods drawn from computer science being used to produce new ways of understanding and approaching humanities texts. But just as interesting as what computer science has to offer the humanities is the question of what the humanities have to offer computer science. Do the humanities really need to draw so heavily on computer science to develop their sense of what the digital humanities might be? These are just some of the issues that are explored in this special issue of Culture Machine.

Contents

Federica Frabetti, ‘Rethinking the Digital Humanities in the Context of Originary Technicity’

Jake Buckley, ‘Believing in the Analogico-(Digital)’

Johanna Drucker, ‘Humanities Approaches to Interface Theory’

Davin Heckman, ‘Technics and Violence in Electronic Literature’

Mauro Carassai, ‘E-Lit Works as ‘Forms of Culture’: Envisioning Digital Literary Subjectivity’

Kathleen Fitzpatrick, ‘The Digital Future of Authorship: Rethinking Originality’

Ganaele Langlois, ‘Meaning, Semiotechnologies and Participatory Media’

Scott Dexter, Melissa Dolese, Angelika Seidel, Aaron Kozbelt, ‘On the Embodied Aesthetics of Code’

Benjamin Schultz-Figueroa, ‘Glitch/Glitsh: (More Power) Lucky Break and the Position of Modern Technology’

David M. Berry, ‘The Computational Turn: Thinking About the Digital Humanities’

Gary Hall, ‘The Digital Humanities Beyond Computing: A Postscript’

————————————————————————————————————————————

ABOUT CULTURE MACHINE

Established in 1999, the Culture Machine journal publishes new work from both established figures and up-and-coming writers. It is fully refereed, and has an International Advisory Board which includes Geoffrey Bennington, Robert Bernasconi, Sue Golding, Lawrence Grossberg, Peggy Kamuf, Alphonso Lingis, Meaghan Morris, Paul Patton, Mark Poster, Avital Ronell, Nicholas Royle and Kenneth Surin.

Culture Machine is part of Open Humanities Press:
http://www.openhumanitiespress.org

For more information, visit the Culture Machine site:
http://www.culturemachine.net