With the ongoing government shutdown, federally operated websites and databases are being turned off, such as NASA’s website, or left without current updates, like the National Institutes of Health’s PubMed Central. Information paid for by taxpayer dollars is thus inaccessible to the taxpayers who funded it, but government shutdowns are not the only way that information becomes unavailable. Back in the spring of 2013, NASA’s technical reports server was down for several weeks while NASA assessed its internal security systems, taking thousands of publicly accessible works covering over 50 years of NASA research offline. In January 2012, the federal National Biological Information Infrastructure (NBII) program lost its funding; over ten years of data and research were no longer hosted on government servers, and much of that material never found a home elsewhere.
A recent study funded by the European Commission and undertaken by analysts at Science-Metrix, a Montreal-based company that assesses science and technology organizations, has concluded that half of all published academic papers become freely available in no more than two years.
According to the study, 2011 was a milestone year for open access. By this analysis, 50% of all scientific articles published in 2011 are currently available in some open access form or another, and the trend is toward more and more articles becoming openly accessible.
The Internet can be a powerful tool for finding new scientific articles, theses, and dissertations. Google Scholar, the Networked Digital Library of Theses and Dissertations, and ProQuest Dissertations and Theses are only some of the many web tools that help researchers, scholars, and students gain access to scientific papers. However, most of the available solutions are limited in some way, or mix open access papers with “closed” ones. Happily, a new tool has recently been developed that can be very useful for people looking exclusively for open access resources.
Participate in the Transatlantic Hangout on Friday October 11th at 11 am EST to find out what is happening in the United Kingdom (UK) and Canada to support open data for businesses and innovators.
Join the discussion online with:
Minister Tony Clement, President of the Treasury Board of Canada
Sam Vermette, Co-Founder of the Transit App
Sir Nigel Shadbolt, Chairman and Co-Founder of the Open Data Institute
Paul Maltby, Director of Open Data and Government Innovation in the Cabinet Office of the UK
Raw is an open web tool developed at the DensityDesign Research Lab (Politecnico di Milano) for creating custom vector-based visualizations on top of the amazing d3.js library by Mike Bostock. Primarily conceived as a tool for designers and vis geeks, Raw aims to provide the missing link between spreadsheet applications (e.g. Microsoft Excel, Apple Numbers, Google Docs, OpenRefine) and vector graphics editors (e.g. Adobe Illustrator, Inkscape).
Raw works with delimiter-separated values (i.e. CSV and TSV files) as well as with text copied and pasted from other applications (e.g. Microsoft Excel, TextWrangler, TextEdit). Because its output is in the SVG format, visualizations can easily be edited with vector graphics applications for further refinement, or embedded directly into web pages.
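To make the input format concrete, here is a minimal Python sketch of the kind of delimiter-separated data Raw accepts; the sample rows and column names are invented for illustration, and Raw itself does all of this parsing for you in the browser.

```python
import csv
import io

# Hypothetical tab-separated text of the sort you might paste from a
# spreadsheet into Raw; the columns and values here are made up.
pasted = "country\tpopulation\nItaly\t59\nFrance\t68\n"

# Parse it the same way any delimiter-separated-values reader would:
rows = list(csv.reader(io.StringIO(pasted), delimiter="\t"))
header, data = rows[0], rows[1:]

print(header)  # ['country', 'population']
print(data)    # [['Italy', '59'], ['France', '68']]
```

The same reader handles comma-separated files by changing the `delimiter` argument, which is why tools like Raw can treat CSV, TSV, and pasted spreadsheet text interchangeably.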
The Data J Lab is a virtual lab on data journalism trends and techniques. Created in 2013 as an initiative of the Data Journalism Master’s Programme at Tilburg University, it is based in the Department of Communication and Information Sciences and benefits from a long-standing tradition of interdisciplinary research at the intersection of communication sciences and applied informatics. The Data J Lab team is composed of five faculty members with expertise ranging from computer science to journalism to sociology, and includes a number of students and external experts. The Data J Lab has working partnerships with a variety of international and Dutch media outlets, such as Inter Press Service and De Correspondent.
Making data visualizations – a survival guide from vis4: avoid 3D charts, extend bar charts to zero, and other key tips. Target audience: data designers, metrics and analytics experts, nonprofits, cause organizations, foundations, NGOs, social...
Open data is data that can be freely used, shared and built-on by anyone, anywhere, for any purpose. This is the summary of the full Open Definition which the Open Knowledge Foundation created in 2005 to provide both a succinct explanation and a detailed definition of open data.
As the open data movement grows and ever more governments and organisations sign up to open data, a clear and agreed definition of what “open data” means becomes ever more important if we are to realise the full benefits of openness and avoid the risks of creating incompatibility between projects and splintering the community.
Sunlight Foundation's local policy team is excited to be teaming up today with the National League of Cities and the Harvard Ash Center to host a free webinar examining open data policies in local governments and their impacts.
More cities are sharing their data online to increase municipal transparency, a movement that prompted us to update our Open Data Guidelines to reflect the advances that have been made and to highlight emerging opportunities. This webinar will look at those advances and opportunities, in addition to defining what open data is and how policies can guide its release. We're pleased to announce that Oakland City Councilmember Libby Schaaf will join the discussion to provide insight into the city's experience with releasing more open data and its impacts on the community.
Last week the School of Data team was in Geneva facilitating a data expedition on garment factories during OKCon. Our aim was to teach people how to run their own expedition but we ended up learning a lot ourselves.
Rather than sharing the outcomes, which deserve several blog posts of their own, I will now share some reflections on the process of organising a data expedition.
Following on from the success of the previous OS OpenData masterclass series, individuals, developers, community groups, social entrepreneurs, commercial and government organisations are invited to attend the latest national series of free masterclasses, run by Ordnance Survey and kindly supported by Horizon Digital Economy Research.
Organizations typically keep their data walled off from outsiders, and within companies, business units often operate in silos with little interaction outside their domains. But organizations stand to gain by adopting a strategy of open data and collaboration, both internally and with external customers and stakeholders.
That was the message of a Thursday keynote at Interop New York by Mark Headd, chief data officer for the city of Philadelphia, and Michelle Lee, co-founder and CEO of Textizen, a platform for text-message surveys. An open strategy can foster innovation and generate business value, they said.
On Friday, 11 October at 11am EST (4pm BST), a prominent group of open data supporters from the UK and Canada will participate in a transatlantic Google Hangout. The public is encouraged to join and find out what is happening in the UK and Canada to support open data for businesses and innovators.
Ben Balter argues that open data today is exactly where open source was some two decades ago, and wants to see if it's possible to fast-forward the community a bit. Imagine if every time the government posted a dataset, rather than posting the data as a zip file or to a proprietary data portal, the agency treated the data as open source.
As self-effacement goes, it's hard to beat Isaac Newton: "If I have seen further it is by standing on the shoulders of giants." Yet while modern scientists continue to build on the concepts and ideas of their forerunners, they face a unique problem that Newton or his peers would not have anticipated – the inability to access crucial research data generated by other people's work.
Scientific research today typically yields enormous volumes of information, with individual projects easily able to generate gigabytes or terabytes of data. The problem for the scientific community is that the vast majority of this information never makes it into published research, which tends, by necessity, to be limited to topline conclusions or summaries of the key findings. The raw data, including the data from hundreds of unsuccessful experiments, is left out and lost to the scientific community and to future researchers.
We gather once a month to promote the idea of sharing and opening research resources for both academics and non-academics alike. We are keen to hear, discuss, and learn about successful models as well as try and find solutions for difficulties on the way. Workshops and hackdays are also on the menu for a more active involvement.
The Open Archives Initiative develops and promotes interoperability standards that aim to facilitate the efficient dissemination of content. The Open Archives Initiative has its roots in an effort to enhance access to e-print archives as a means of increasing the availability of scholarly communication. Continued support of this work remains a cornerstone of the Open Archives program. The fundamental technological framework and standards being developed to support this work are, however, independent of both the type of content offered and the economic mechanisms surrounding that content, and promise to have much broader relevance in opening up access to a range of digital materials. As a result, the Open Archives Initiative is currently an organization and an effort explicitly in transition, committed to exploring and enabling this new and broader range of applications. As we gain greater knowledge of the scope of applicability of the underlying technology and standards being developed, and begin to understand the structure and culture of the various adopter communities, we expect to make continued evolutionary changes to both the mission and organization of the Open Archives Initiative.
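The best-known of these interoperability standards is OAI-PMH, the Protocol for Metadata Harvesting, in which a harvester issues simple HTTP requests against a repository's endpoint. As a rough sketch, a harvesting request can be built like this; the base URL below is a placeholder, since each repository exposes its own endpoint:

```python
from urllib.parse import urlencode

# Hypothetical OAI-PMH endpoint; real repositories publish their own.
base_url = "https://example.org/oai"

# Standard OAI-PMH query parameters: the verb names the operation, and
# oai_dc (unqualified Dublin Core) is the metadata format every
# compliant repository must support.
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
request_url = base_url + "?" + urlencode(params)

print(request_url)
# https://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
```

Fetching that URL would return an XML list of metadata records, which is what lets aggregators harvest content descriptions from many archives without caring about each archive's internal systems.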
Eric Busboom may not be the kind of guy who comes to mind when you think of a librarian. But maybe he’s what a librarian should be as more of us rely on smartphones and computers to make sense of the world.
The potential of ICT for modernisation of education and training has become a key priority for the European Union. In its Communication on Opening Up Education (procedure 2013/2182(INI)), the European Commission proposes actions at EU and national levels to “support the development and availability of Open Educational Resources (OER)” in education and skills development.
OERs are “digital learning resources offered online freely and openly to teachers, educators, students, and independent learners in order to be used, shared, combined, adapted, and expanded in teaching, learning and research” (OECD, 2012).
Art Market for Dummies is a data visualisation I made last spring, and one of the winners of this year's Data Journalism Awards.
The visualisation tells the story of a topic that is little known outside expert circles, the international art market, and that I myself would have liked to see as a reader. Everyone knows Picasso or Warhol, but few people know that art represents a bigger market than the cinema and music industries. I am not an expert on this topic myself, so I wanted to create an application that enables users like me to understand the subject quickly and easily, in three minutes. And so I embarked on the creation of this data journalism piece, together with the French publisher Askmedia.
The World Bank has joined forces with the Open Data Institute and the Open Knowledge Foundation in a 3-year project designed to help policy makers and citizens in developing countries understand and exploit the benefits of open data.