PRONOM is The National Archives’ technical registry – we plan to release the data it holds in a linked open data format, and make it easier to reuse.
Semantic Web design patterns give you recipes and best practices for how to apply Semantic Web technologies to address typical scenarios that come up when modeling data and building applications. These design patterns provide invaluable guidance in how to make good decisions when developing with Semantic Web technologies.
Following the newly minted “recommendation” status of RDF 1.1, Michael C. Daconta of GCN has asked, “What does this mean for open data and government transparency?” Daconta writes, “First, it is important to highlight the JSON-LD serialization format. JSON is a very simple and popular data format, especially in modern Web applications. Furthermore, JSON is a concise format (much more so than XML) that is well-suited to represent the RDF data model. An example of this is Google adopting JSON-LD for marking up data in Gmail, Search and Google Now. Second, like the rebranding of RDF to ‘linked data’ in order to capitalize on the popularity of social graphs, RDF is adapting its strong semantics to other communities by separating the model from the syntax. In other words, if the mountain won’t come to Muhammad, then Muhammad must go to the mountain.”
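To make the point about JSON-LD's concision concrete, here is a minimal sketch of a JSON-LD document built with nothing but Python's standard `json` module. The vocabulary terms and the person described are invented for illustration; the key idea is that the `@context` maps plain JSON keys onto RDF vocabulary URIs, so the same object is both ordinary JSON and a serialized RDF graph.

```python
import json

# A minimal, hypothetical JSON-LD document. The @context maps the plain
# JSON keys "name" and "homepage" onto schema.org terms, turning this
# ordinary-looking JSON object into RDF triples about one resource.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "@id": "http://example.org/people/alice",
    "name": "Alice",
    "homepage": "http://example.org/alice",
}

serialized = json.dumps(doc, indent=2)
print(serialized)
```

Any consumer that ignores `@context` still gets usable JSON; a Linked Data consumer gets triples, which is exactly the separation of model from syntax Daconta describes.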
Three Recommendations were published today to enhance data interoperability, especially in government data.
In this interview, Jeanne Holm shares her views on the future developments in the realm of open and linked data, describing in particular the need for a cultural shift in governments and companies.
Some 895 experts responded to the invitation of the Pew Research Center’s Internet & American Life Project and Elon University’s Imagining the Internet Center to predict the likely progress toward achieving the goals of the semantic web by the year 2020. Asked to think about the likelihood that Berners-Lee and his allies will realize their vision, often called Web 3.0, these technology experts and stakeholders were divided and often contentious.
Technology experts and stakeholders who participated in a recent survey believe online information will continue to be organized and made accessible in smarter and more useful ways in coming years, but there is stark dispute about whether the improvements will match the visionary ideals of those who are working to build the semantic web.
Logainm.ie, the bilingual database of Irish place names, has recently been enhanced by the Linked Logainm project by incorporating Linked Data. The newly available Linked Logainm dataset provides Irish place name data in structured, computer-readable formats for use by web developers, computer scientists, the heritage community, information professionals and more. The linked dataset is now ready for use by Irish information professionals in the library, museum and archives domains.
The focus on the semantic web was fun, but ultimately missed the big picture, which is that people care not about knowledge graphs but about the people and current events happening in their social graphs.
Earlier this morning Martin Malmsten of the National Library of Sweden asked an interesting question on Twitter: Do you need help hosting your LOD somewhere else? Could be a valuable exercise in LOD stability http://t.
I recently spoke with Shoaib Mufti, YarcData Vice President of R&D, about Big Data and Semantic Web technology. YarcData is a Cray subsidiary, accustomed to crunching lots of data, and recently joined the W3C.
Businesses have been told that data is the ‘next big thing’ for a number of years now. And in recent weeks the world has witnessed the capability of massive systematic analysis of digital data following exposure of the methods used by the US and UK intelligence agencies.
Most organisations, including universities, know that data can help to develop more effective processes or identify new markets and products. Our report looking at efficiency and effectiveness in higher education emphasises this point. For good or bad, what the security services programmes have demonstrated is the step change in the potential of data as digital technologies become ever more integrated into our lives.
The British Library is developing a version of the British National Bibliography which it is making available as Linked Open Data via a TSO platform.
The initial offering includes published books (including monographs published over time) and serial publications, with future releases extending coverage to include multipart works, integrating resources (e.g. loose leaf publications), kits and forthcoming publications.
SPARQL and RDF are very quickly becoming the (open) standard for linking and accessing databases. Readers of my blog will know I have been searching the corners of what can and cannot be achieved with this for some time now. Triggered by some nice visualization work at the BioHackathon on ChEMBL content, I picked up visualization of RDF data again (see this 2010 post where I asked people to visualize data using SPARQL). And since d3.js is cool nowadays (it was processing.js in the past), I had a go at the learning curve. I started with a pie chart and this example code, because I was working on the SPARQL queries for metabolites in WikiPathways (using Andra's important WP-RDF work, doi:10.1038/npre.2011.6300.1).
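A sketch of the kind of processing this workflow involves, under stated assumptions: the query text, the predicate URI, and the response below are invented for illustration (they are not the queries from the post), and in practice the response would arrive over HTTP from a SPARQL endpoint. What is standard is the shape of the data: a SELECT endpoint answers in the SPARQL 1.1 Query Results JSON Format, and tallying its bindings yields exactly the label/value records a d3.js pie layout consumes.

```python
import json
from collections import Counter

# A hypothetical SPARQL SELECT query counting metabolites per pathway.
# The predicate URI is illustrative, not taken from the WP-RDF dataset.
QUERY = """
SELECT ?pathway WHERE {
  ?metabolite <http://vocabularies.wikipathways.org/wp#isPartOf> ?pathway .
}
"""

# A mocked endpoint response in the standard SPARQL 1.1 Query Results
# JSON Format: a "head" listing variables, and per-row "bindings".
response = json.loads("""
{
  "head": {"vars": ["pathway"]},
  "results": {"bindings": [
    {"pathway": {"type": "uri", "value": "http://example.org/wp/WP1"}},
    {"pathway": {"type": "uri", "value": "http://example.org/wp/WP1"}},
    {"pathway": {"type": "uri", "value": "http://example.org/wp/WP2"}}
  ]}
}
""")

# Tally bindings per pathway URI, then reshape into the list of
# {"label": ..., "value": ...} records that d3.js pie/arc layouts expect.
counts = Counter(b["pathway"]["value"] for b in response["results"]["bindings"])
pie_data = [{"label": uri.rsplit("/", 1)[-1], "value": n} for uri, n in counts.items()]
print(pie_data)
```

From here the records would be handed to d3's `pie()` layout in the browser; the Python side is only the query-and-reshape step.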
The Semantic Web Blog reached out to the World Wide Web Consortium’s current and former semantic leads to get their perspective on the roads The Semantic Web has traveled and the value it has so far brought to the Web’s table: Phil Archer, W3C Data Activity Lead coordinating work on the Semantic Web and related technologies; Ivan Herman, who last year transitioned roles at the W3C from Semantic Activity Lead to Digital Publishing Activity Lead; and Eric Miller, co-founder and president of Zepheira and the leader of the Semantic Web Initiative at the W3C until 2007.
Search engines have increasingly been incorporating elements of semantic search to improve some aspect of the search experience — for example, using schema.org markup to create enhanced displays in SERPs (as in Google’s rich snippets).
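As an illustration of the markup mechanism involved (the article, author, and date below are invented), a page can describe itself with schema.org terms embedded as a JSON-LD `<script>` block, which is one of the forms search engines read when building enhanced result displays:

```python
import json

# Hypothetical schema.org description of an article. Search engines can
# read a block like this to build enhanced displays in result pages.
article = {
    "@context": "http://schema.org",
    "@type": "Article",
    "headline": "Semantic Search Comes to the Mainstream",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2014-03-01",
}

# Embed it the way a web page would: a script element whose type is
# application/ld+json, holding the serialized JSON-LD document.
markup = '<script type="application/ld+json">\n%s\n</script>' % json.dumps(article, indent=2)
print(markup)
```

The same information can alternatively be expressed inline with microdata or RDFa attributes; the JSON-LD form keeps the markup in one self-contained block.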
Most significantly, OCLC has now released 194 Million Linked Open Data Bibliographic Work descriptions. According to Wallis, “A Work is a high-level description of a resource, containing information such as author, name, descriptions, subjects etc., common to all editions of the work.” In his post, he uses the example of “Zen and the Art of Motorcycle Maintenance” as a Work.
PUBLINK is the Linked Open Data Consultancy backed by the consortium of the EU-FP7 LOD2 Integrated Project. In order to lower the entrance barrier for potential data publishers and tool providers, the LOD2 consortium offers the free PUBLINK Linked Open Data Consultancy to up to 5 selected organizations, supporting them in publishing Linked Open Data with an overall effort of 10-20 days of support from highly skilled Linked Data professionals.
The data to be used for this of course concerns information about learning opportunities, including for example (from our data catalogue) courses at various universities (e.g., The Open University, The University of Aalto, or degrees in the UK from the Key Information Set), open education resources (e.g., from Organic Edunet, mEducator, the Open Courseware consortium, or specific organisations), and any other resource that might help to learn about a certain topic (e.g., videos/podcasts from The University of Southampton or reading resources from the TERENCE reading comprehension dataset). The challenge here is of course to connect and compare such heterogeneous resources with each other and with the learner’s context.
JITA is a classification schema of Library and Information Science (LIS). It is used by E-LIS, an international open repository for scientific papers in Library and Information Science, for indexing and searching. Currently JITA is available in English and has been translated into 14 languages (tr, el, nl, cs, fr, it, ro, ca, pt, pl, es, ar, sv, ru). JITA is also accessible as Linked Open Data, containing 3500 triples.
The ongoing debate around the question of whether ‘there is money in linked data or not’ has recently been put more pointedly by Prateek Jain (one of the authors of the original article): he asks, ‘why linked open data hasn’t been used that much so far besides for research projects?’
Full disclosure: I’m one of the primary authors and editors of the JSON-LD specification. I am also the chair of the group that created JSON-LD and have been an active participant in a number of Linked Data initiatives: RDFa (chair, author, editor), JSON-LD (chair, co-creator), Microdata (primary opponent), and Microformats (member, haudio and hvideo microformat editor). I’m biased, but also well informed.
Linked data is an amazing yet elusive idea — elusive for at least two reasons. It can be difficult for people who don't live and breathe linked data to grasp. It also remains difficult for institutions to know where to start in providing linked data views of their collections. At DigitalNZ we have been exploring these issues both internally and with some of our content partners. This blog post provides a brief overview of what linked data is and offers some reflections on a recent linked open data summit.
“Where the Web has been, the enterprise is going,” says Dr. David Wood, CTO of 3 Round Stones, in his 3 Minute Executive Briefing video on Linked Data (embedded below). The truth of that statement is underscored by the recent trends of ‘Mobile’ and ‘Social’ for the enterprise. Though the force of Mobile in commerce has been unquestionable for years, many corporations are just now undertaking their first significant projects behind the firewall; making their employee portals device-friendly, for example. The case is similar for Social. Many world leading organizations have only recently deployed a Social platform or are only just now evaluating options. It seems the enterprise is always about 5 years behind the Web. So, what’s cutting edge on the Web today that’s likely to be the big ticket in your enterprise tomorrow?
The PRELIDA project aims at building bridges across the Digital Preservation and Linked Data communities, raising awareness of already existing outcomes of Digital Preservation in the Linked Data communities, while at the same time posing new research questions for the preservation domain.
In January 2013 the European Commission launched the project PRELIDA – Preserving Linked Data, a two year Coordination Action of the VII Framework Programme. The main goal of PRELIDA is to build bridges between the areas of Linked Data and Digital Preservation, with two principal objectives: making the Linked Data community aware of the existing results of the Digital Preservation community, and identifying the issues and problems raised by the need to preserve Linked Data, which pose new research challenges. To achieve these goals, the project will target stakeholders of the Linked Data community (e.g. data providers, service and technology providers, as well as end user communities), who have not been traditionally targeted by the Digital Preservation community.