In September this year, we published a pilot version of the Open Data Index, a ranking of the 61 countries measured by the Web Index but focusing only on indicators related to open data.
A letter of withdrawal has been sent to the Commissioners involved in Licences for Europe, explaining why these stakeholders can no longer participate in the dialogue and expressing the wish to instigate a broader dialogue around creating the conditions to realise the full potential of text and data mining for innovation in Europe.
Licences for Europe was announced in the Communication on Content in the Digital Single Market (18 December 2012) and is a joint initiative led by Commissioners Michel Barnier (Internal Market and Services), Neelie Kroes (Digital Agenda) and Androulla Vassiliou (Education, Culture, Multilingualism and Youth) to “deliver rapid progress in bringing content online through practical industry-led solutions”.
Today, we are excited to announce that our work with the US Federal Government (data.gov) has gone live at catalog.data.gov! You can also read the announcement from the data.gov blog with their description of the new catalog.
The Open Knowledge Foundation’s Services team, which deploys CKAN, has been working hard on a new unified catalog to replace the numerous previously existing catalogs of data.gov. All geospatial and raw data is federated into a single portal, where data from different portals, sources and catalogs is displayed in a clean, standardized user interface that lets users search, filter and facet through thousands of datasets.
This is a key part of the U.S. meeting its newly announced Open Data Policy and marks data.gov’s first major step into open source. All the code is available on GitHub, and data.gov plans to make its CKAN/Drupal set-up reusable for others as part of OGPL.
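The unified catalog described above is searchable programmatically as well as through the web interface. As a minimal sketch, the standard CKAN Action API exposes dataset search at `/api/3/action/package_search` (the assumption here is that catalog.data.gov serves it at that conventional path; the query term is purely illustrative):

```python
import json
import urllib.parse
import urllib.request

def build_search_url(base_url, query, rows=5):
    """Build a CKAN package_search URL for `query`, limited to `rows` hits."""
    params = urllib.parse.urlencode({"q": query, "rows": rows})
    return f"{base_url}/api/3/action/package_search?{params}"

def search_datasets(base_url, query, rows=5):
    """Search a CKAN portal and return (title, publishing organization) pairs."""
    with urllib.request.urlopen(build_search_url(base_url, query, rows)) as resp:
        result = json.load(resp)["result"]
    # Each hit carries the dataset title plus the organization that published it.
    return [(d["title"], d.get("organization", {}).get("title"))
            for d in result["results"]]

# e.g. search_datasets("https://catalog.data.gov", "weather")
```

The same endpoint backs the faceted search in the portal UI, so filters applied in the browser can be reproduced as query parameters.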
A step-by-step approach. Just the way I like it!
Data is a valuable national resource and a strategic asset to the U.S. Government, its partners, and the public. Managing this data as an asset and making it available, discoverable, and usable – in a word, open – not only strengthens our democracy and promotes efficiency and effectiveness in government, but also has the potential to create economic opportunity and improve citizens’ quality of life.
For example, when the U.S. Government released weather and GPS data to the public, it fueled an industry that today is valued at tens of billions of dollars per year. Now, weather and mapping tools are ubiquitous and help everyday Americans navigate their lives.
The ultimate value of data often cannot be predicted. That’s why the U.S. Government released a policy instructing agencies to manage their data, and information more generally, as an asset from the start and, wherever possible, to release it to the public in a way that makes it open, discoverable, and usable.
The White House developed Project Open Data – this collection of code, tools, and case studies – to help agencies adopt the Open Data Policy and unlock the potential of government data. Project Open Data will evolve over time as a community resource to facilitate broader adoption of open data practices in government. Anyone – government employees, contractors, developers, the general public – can view and contribute. So dive right in and help to build a better world through the power of open data.
Driven by member and author feedback, technical professional association IEEE has announced that all of its peer-reviewed journals — more than 100 — now offer open-access publishing options. The growth of open-access scholarly research publishing enables technologists and the general public to read articles online for free, as opposed to the traditional model of paying for a subscription. Removing access barriers can advance research and scientific applications by exposing new concepts to a broader audience. Studies have shown that this approach may increase article citations. As of June 2012, more than 7600 open-access journals were being published in 117 countries, according to a report from the UK-based Working Group on Expanding Access to Published Research Findings.
Robert Ghrist, a professor of mathematics and electrical and systems engineering at the University of Pennsylvania, knows that wielding vast networks on behalf of nonuniversity benefactors can be tricky business.
Mr. Ghrist specializes in applied topology, an abstract math field. In practice, topological math can help someone harness huge collections of sensory inputs—like those collected by cellphones, for example—to model large environments and solve problems.
The Department of Defense has enlisted Mr. Ghrist to do research along those lines. The Penn professor knows he has little power over how the Pentagon might use his insights. But he says that no longer bothers him.
In recent years, advances in technology, particularly the internet, have precipitated a move towards a model of open access in scholarly publishing, with the aim of removing barriers to access, particularly cost.
Increasing numbers of peer-reviewed journals are being made available online on an open access basis, and there has also been an associated move towards the practice of self-archiving by the academic community, whereby scholars place their research output in a publicly available online archive.
The Institutional Repository has become the established technology deployed at universities and other institutions to enable scholars to self-archive and has the potential to meet a number of institutional needs:
On May 9, 2013, President Barack Obama signed an executive order making “open and machine readable” the default for government data. Stating that open access to government data will “fuel entrepreneurship, innovation, and scientific discovery that improves Americans’ lives and contributes significantly to job creation”, the order mandates that, “wherever possible and legally permissible,” data be released to the public in ways that make it easy to find, accessible, and usable.
Released the same day was the White House’s Open Data Policy, which sets out the principles that data should be easily discoverable, usable, complete, timely, and described.
As David Cameron prepares for the UK’s presidency of the G8, the issue of supply chain transparency is already gaining considerable momentum, driven largely by high profile supply chain disasters and scandals. Even UN secretary general Ban Ki-moon has called for companies to take responsibility for their supply chains in a comment reacting to the recent Bangladeshi garment factory collapse.
It is interesting to note that some businesses claim to have product traceability in their supply chains, but more often than not this tends to mean electronic label tracking rather than an actual understanding of supplier sustainability. More businesses need to take this issue seriously.
However, understanding suppliers – their ability to be consistently responsible, to run their businesses efficiently and to avoid passing unnecessary cost and risk on to customers – can be a difficult and costly process. So what role can open data play in helping businesses get to grips with suppliers and suppliers’ suppliers?
The final draft of the Data Catalogue Vocabulary Application Profile for data portals in Europe (DCAT-AP) is open for public review until 10 June 2013.
DCAT-AP is a specification based on DCAT for describing public sector datasets in Europe. Its basic use case is to enable cross-data portal search for datasets and make public sector data better searchable across borders and sectors. This can be achieved by the exchange of descriptions of datasets among data portals. You can find the draft and leave your comments (register and log in) on the following page: https://joinup.ec.europa.eu/node/66194. All issues will be discussed by the DCAT Application Profile Working Group in a virtual meeting on 12 June 2013.
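The dataset descriptions DCAT is built around are small RDF documents. As an illustrative sketch (the dataset, URIs, and titles below are invented; only the `dcat:`/`dct:` terms are from the vocabulary), the following emits a minimal Turtle description of the kind portals would exchange:

```python
def dcat_dataset_ttl(uri, title, publisher_uri, landing_page):
    """Render a minimal dcat:Dataset description as a Turtle string."""
    return f"""@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .

<{uri}> a dcat:Dataset ;
    dct:title "{title}" ;
    dct:publisher <{publisher_uri}> ;
    dcat:landingPage <{landing_page}> .
"""

# e.g. dcat_dataset_ttl("http://example.org/dataset/1",
#                       "Example dataset",
#                       "http://example.org/agency",
#                       "http://example.org/dataset/1/about")
```

A harvesting portal that understands these few terms can already index title, publisher and landing page across catalogs; DCAT-AP adds the agreed European profile on top of this core.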
Governments are used to being the authoritative source of data, and access to the data is typically (heavily) regulated. When one starts to talk with governments about opening up data, many alarms go off: privacy, security, confidentiality, loss of control, and quality, among others. I find there is a strong sense of ownership and a fear of opening up - probably due to all these concerns.
Furthermore, it is often not within the formal mission of a government ministry, agency or department to provide access to data. That is why open data can often be seen as a distant, uninteresting and not easily understandable problem.
Breaking down such resistance can be done via top-down mandate (as has happened in several cases) but, whilst top-down support is clearly useful, it is not enough. A successful shift in culture needs to be built at all levels of government, and this takes time, sensitivity, respect and, frankly, savoir faire.
Governments must understand that placing information on the Web solely as an informative resource, although important and required by policies in many cases, is not enough anymore. Citizens and civil societies are asking for access to the raw data so that they can use it in new and valuable ways.
An interesting interview with Josema Alonso:
"Governments are used to being the authoritative source of data, and access to the data is typically (heavily) regulated. When one starts to talk with governments about opening up data, many alarms go off: privacy, security, confidentiality, loss of control, and quality, among others. I find there is a strong sense of ownership and a fear of opening up - probably due to all these concerns."
“Inevitably, there will be questions about what we are each prepared to sign up to,” said British Prime Minister David Cameron in January, in his letter to his fellow G8 leaders. Four months later, Russia has made clear what it wasn’t willing to sign onto: the Open Government Partnership (OGP). The most recent update is that the Kremlin will be pursuing “open government” on its own terms: Russia has withdrawn the letter of intent that it submitted in April 2012 in Brazil, at the first annual meeting of the Open Government Partnership.
At Tacoma Community College (TCC) in Washington, faculty and staff noticed a trend: fewer students were purchasing the required textbooks for classes.
Instead, students were checking out related materials from the library and trying to recreate information on their own. The students couldn’t afford the hefty textbook costs, and it was affecting classroom performance.
“It hurts their engagement in the classroom, it hurts their ability to stay in school and it leaves them at a bigger disadvantage than they are at already,” said Quill West, open educational resources (OER) project director at TCC.
The college, with the cooperation of faculty and students, made a move toward using OERs. The two-year project began in April 2012 and is supported by student technology fees. The goal was to embed OERs into the 10 classes with the highest enrollments and to save students $250,000.
A year later, 39 sections of 19 individual classes—from biology to English to computer courses—use digital materials rather than traditional textbooks. Faculty members aren’t required to participate, but the number of teachers using OERs is growing. To date, the college has saved students $266,000.
John Wiley & Sons, Inc., has launched a trial of Altmetric, a service that tracks and measures the impact of scholarly articles and datasets on both traditional and social media. The six-month trial will run on a number of subscription and open access journals published by Wiley, including Advanced Materials, Angewandte Chemie, BJU International, Brain and Behavior, Methods in Ecology and Evolution and EMBO Molecular Medicine.
As part of the trial Altmetric will track social media sites like Twitter, Facebook, Google+, Pinterest, blogs, newspapers, magazines and online reference managers like Mendeley and CiteULike for mentions of scholarly articles published in the journals included in the trial. Altmetric will create and display a score for each article measuring the quality and quantity of attention that the particular article has received. The Altmetric score is based on three main factors: the number of individuals mentioning a paper, where the mentions occurred and how often the author of each mention talks about the article.
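The three factors above can be made concrete with a toy scoring function. This is emphatically not Altmetric's actual formula (which is proprietary); the per-source weights below are invented for illustration. It weights each mention by where it occurred and damps authors who mention the same article repeatedly:

```python
from collections import Counter

# Assumed weights per mention source - illustrative values only,
# not Altmetric's real weighting.
SOURCE_WEIGHTS = {"news": 8.0, "blog": 5.0, "twitter": 1.0, "facebook": 0.25}

def toy_attention_score(mentions):
    """mentions: list of (author, source) pairs for one article.

    Each mention contributes its source weight, divided by how many
    times that author mentioned the article, so one person tweeting
    ten times counts far less than ten different people tweeting once.
    """
    per_author = Counter(author for author, _ in mentions)
    score = 0.0
    for author, source in mentions:
        score += SOURCE_WEIGHTS.get(source, 1.0) / per_author[author]
    return round(score, 2)
```

With this scheme a single news story outweighs a burst of tweets from one account, which mirrors the stated intent of counting who mentions a paper, where, and how often.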
The public relations hook of the administration's open data push ties into its economic potential. Todd Park, the federal chief technology officer and one of the most enthusiastic evangelists for open data, claims that unlocking the global positioning system for commercial use helped generate $100 billion in economic value. At a recent conference, he told a luncheon audience that the government is "sitting on a treasure trove of economic opportunity with the data we hold."
It's been exactly one year since the release of the Digital Government Strategy that included open data provisions, and four years since the launch of the Data.gov portal run by the General Services Administration. Data.gov started with 47 datasets, and now there are about 400,000.
Each day, the Charts of Note series from the Economic Research Service (ERS) delivers an innovative, visual display of research findings. Wouldn’t it be great if these charts could be easily grabbed for use on your own website or blog? Well, now they can.
The new Federal Open Data Policy asks agencies to use machine-readable formats when they build and disseminate information. At ERS, we are already traveling down that track…for Charts of Note and more. Our goal is to improve the reach, accessibility, and utility of important research findings.
There’s an epidemic going on in science: experiments that no one can reproduce, studies that have to be retracted, and the emergence of a lurking data reliability iceberg.
This report tracks the development of 'massive open online courses' (MOOCs) from a small selection of specialist courses to major online platforms, offering hundreds of courses with millions of users. The report explores MOOCs' surge in popularity and discusses whether this signals the beginning of a significant transformation in higher education, similar to those seen in other sectors, such as the newspaper industry. It pulls together the recent trends in online education delivery and looks at how universities can respond to the changing online environment.
The core task for Danny Werfel, the new acting commissioner of the IRS, is to repair the agency’s tarnished reputation and achieve greater efficacy and fairness in IRS investigations. Mr. Werfel can show true leadership by restructuring how the IRS handles its tax-exempt enforcement processes.
One of Mr. Werfel’s first actions on the job should be the immediate implementation of the groundbreaking Presidential Executive Order and Open Data policy, released last week, that requires data captured and generated by the government be made available in open, machine-readable formats. Doing so will make the IRS a beacon to other agencies in how to use open data to screen any wrongdoing and strengthen law enforcement.
Last Friday, Barry Eichengreen, professor of Economics and Political Science at Berkeley, wrote about “Open Access Economics” at the prestigious commentary, analysis and opinion page Project Syndicate, where influential professionals, politicians, economists, business leaders and Nobel laureates share opinions about current economic and political issues.
He reaffirmed that the results of the Reinhart and Rogoff study were indeed used by some politicians to justify austerity measures taken by governments around the world struggling with stifling public debt.
Our next Open Humanities Hangout will take place next Tuesday, 28th May. This is the latest in the series of regular hangouts we've been organizing over the past few months with people interested in tapping in to the growing amount of open cultural data and content.
Their teacher is School of Data “data wrangler” Michael Bauer, whose organization teaches journalists and non-profits basic data skills. At the recent International Journalism Festival, Bauer showed journalists how to analyze Twitter networks using OpenRefine, Gephi, and the Twitter API.
Bauer's route into teaching hacks how to hack data was a circuitous one. He studied medicine and did postdoctoral research on the cardiovascular system, where he discovered his flair for data. Disillusioned with health care, Bauer dropped out to become an activist and hacker and eventually found his way to the School of Data. I asked him about the potential and pitfalls of data analysis for everyone.
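The kind of Twitter network analysis taught in these workshops can be sketched without any tooling: extract @-mention edges from raw tweets and count in-degree to see who the conversation centres on. The tweets and handles below are invented, and a real workflow would pull them from the Twitter API and hand the edge list to Gephi:

```python
import re
from collections import Counter

MENTION = re.compile(r"@(\w+)")

def mention_edges(tweets):
    """tweets: list of (author, text) pairs. Yields (author, mentioned) edges."""
    for author, text in tweets:
        for handle in MENTION.findall(text):
            yield author, handle.lower()

def most_mentioned(tweets, n=3):
    """Rank accounts by how often others mention them (in-degree)."""
    counts = Counter(target for _, target in mention_edges(tweets))
    return counts.most_common(n)

# e.g. most_mentioned([("alice", "great talk @bob!"), ("carol", "@bob @dave")])
```

Exporting `mention_edges` as a CSV of source/target pairs is exactly the shape Gephi expects for drawing and clustering the network.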
At rOpenSci we are creating packages that allow access to data repositories through the R statistical programming environment, which is already a familiar part of the workflow of many scientists. We hope that our tools will not only facilitate drawing data into an environment where it can readily be manipulated, but also one in which those analyses and methods can be easily shared, replicated, and extended by other researchers. While all the pieces for connecting researchers with these data sources exist as disparate entities, our efforts will provide a unified framework that will quickly connect researchers to open data.
Today we are very happy and excited to announce the final release of CKAN 2.0. This is the most significant piece of CKAN news since the project began, and represents months of hectic work by the team and other contributors since before the release of version 1.8 last October, and of the 2.0 beta in February. Thank you to the many CKAN users for your patience – we think you’ll agree it’s been worth the wait.