Semantic Web design patterns give you recipes and best practices for applying Semantic Web technologies to typical scenarios that come up when modeling data and building applications. These design patterns provide invaluable guidance on making good decisions when developing with Semantic Web technologies.
Following the newly minted “recommendation” status of RDF 1.1, Michael C. Daconta of GCN has asked, “What does this mean for open data and government transparency?” Daconta writes, “First, it is important to highlight the JSON-LD serialization format. JSON is a very simple and popular data format, especially in modern Web applications. Furthermore, JSON is a concise format (much more so than XML) that is well-suited to represent the RDF data model. An example of this is Google adopting JSON-LD for marking up data in Gmail, Search and Google Now. Second, like the rebranding of RDF to ‘linked data’ in order to capitalize on the popularity of social graphs, RDF is adapting its strong semantics to other communities by separating the model from the syntax. In other words, if the mountain won’t come to Muhammad, then Muhammad must go to the mountain.”
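To illustrate the conciseness Daconta describes, here is a minimal, hypothetical JSON-LD document (the person and URLs are invented for illustration; the `foaf:` property IRIs come from the widely used FOAF vocabulary):

```json
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": { "@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id" }
  },
  "@id": "http://example.org/people/alice",
  "name": "Alice",
  "homepage": "http://example.org/alice/"
}
```

Each key/value pair corresponds to one RDF triple; the `@context` maps the short keys to full property IRIs, so the document is simultaneously ordinary JSON and an RDF graph.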
Some 895 experts responded to the invitation of the Pew Research Center’s Internet & American Life Project and Elon University’s Imagining the Internet Center to predict the likely progress toward achieving the goals of the semantic web by the year 2020. Asked to think about the likelihood that Berners-Lee and his allies will realize their vision, often called Web 3.0, these technology experts and stakeholders were divided and often contentious.
Irina Radchenko's insight:
Technology experts and stakeholders who participated in a recent survey believe online information will continue to be organized and made accessible in smarter and more useful ways in coming years, but there is stark dispute about whether the improvements will match the visionary ideals of those who are working to build the semantic web.
Logainm.ie, the bilingual database of Irish place names, has recently been enhanced by the Linked Logainm project through the incorporation of Linked Data. The newly available Linked Logainm dataset provides Irish place name data in structured, computer-readable formats for use by web developers, computer scientists, the heritage community, information professionals and more. The linked dataset is now ready for use by Irish information professionals in the library, museum and archives domains.
Earlier this morning Martin Malmsten of the National Library of Sweden asked an interesting question on Twitter: Do you need help hosting your LOD somewhere else? Could be a valuable exercise in LOD stability http://t.
SNPedia is a wiki investigating human genetics. We share information about the effects of variations in DNA, citing peer-reviewed scientific publications. It is used by Promethease to create a personal report linking your DNA variations to the information published about them.
The British Broadcasting Corporation (BBC) has launched a new page detailing its internal data models. The page provides access to the ontologies the BBC is using to support its audience-facing applications such as BBC Sport, BBC Education, BBC Music, News projects and more.
Lemon is an RDF model for representing lexical information relative to ontologies. We assume that you are familiar with RDF and Turtle; if not, consider reading the tutorial here. Note that we will use Turtle for this tutorial; however, Lemon can also be serialized in any RDF format, such as RDF/XML.
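For a taste of what a lemon description looks like, here is a minimal, hypothetical entry in Turtle (the `ex:` namespace and the ontology class `ex:Cat` are invented placeholders):

```turtle
@prefix lemon: <http://lemon-model.net/lemon#> .
@prefix ex:    <http://example.org/lexicon#> .

# A lexical entry for the English word "cat", whose sense points
# at a class in some ontology (ex:Cat is a placeholder).
ex:cat a lemon:LexicalEntry ;
    lemon:canonicalForm [ lemon:writtenRep "cat"@en ] ;
    lemon:sense [ lemon:reference ex:Cat ] .
```

The pattern is the core of the model: a lexical entry carries forms (how the word is written) and senses (what ontology entity it denotes), keeping the lexicon separate from the ontology itself.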
The Semantic Web Blog reached out to the World Wide Web Consortium’s current and former semantic leads to get their perspective on the roads The Semantic Web has traveled and the value it has so far brought to the Web’s table: Phil Archer, W3C Data Activity Lead coordinating work on the Semantic Web and related technologies; Ivan Herman, who last year transitioned roles at the W3C from Semantic Activity Lead to Digital Publishing Activity Lead; and Eric Miller, co-founder and president of Zepheira and the leader of the Semantic Web Initiative at the W3C until 2007.
Search engines have increasingly been incorporating elements of semantic search to improve some aspect of the search experience — for example, using schema.org markup to create enhanced displays in SERPs (as in Google’s rich snippets).
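As a sketch of the kind of markup involved, a page might embed schema.org terms as JSON-LD in a script tag like the one below (the recipe and its rating figures are invented; whether a rich snippet actually appears in results is up to the search engine):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Basic Pancakes",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.5",
    "reviewCount": "120"
  }
}
</script>
```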
Most significantly, OCLC has now released 194 million Linked Open Data bibliographic Work descriptions. According to Wallis, "A Work is a high-level description of a resource, containing information such as author, name, descriptions, subjects, etc., common to all editions of the work." In his post, he uses the example of "Zen and the Art of Motorcycle Maintenance" as a Work.
PUBLINK is the Linked Open Data Consultancy backed by the consortium of the EU-FP7 LOD2 Integrated Project. To lower the entry barrier for potential data publishers and tool providers, the LOD2 consortium offers the free PUBLINK Linked Open Data Consultancy to up to five selected organizations, supporting them in publishing Linked Open Data with an overall effort of 10–20 days of support from highly skilled Linked Data professionals.
The goal of the tutorial is to introduce the audience to the basics of the technologies used for Linked Data, including RDF, RDFS, the main elements of SPARQL, SKOS, and OWL. Some general guidelines on publishing data as Linked Data will also be provided, as well as real-life usage examples of the various technologies.
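For a flavour of the SPARQL element of such a tutorial, here is a small example query over a hypothetical SKOS vocabulary, selecting English preferred labels:

```sparql
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

# Find up to ten concepts and their English preferred labels.
SELECT ?concept ?label
WHERE {
  ?concept a skos:Concept ;
           skos:prefLabel ?label .
  FILTER ( lang(?label) = "en" )
}
LIMIT 10
```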
The data to be used here naturally concerns information about learning opportunities, including for example (from our data catalogue) courses at various universities (e.g., The Open University, The University of Aalto, or degrees in the UK from the Key Information Set), open education resources (e.g., from Organic Edunet, mEducator, the Open Courseware consortium, or specific organisations), or any other resource that might help someone learn about a certain topic (e.g., videos/podcasts from The University of Southampton or reading resources from the TERENCE reading comprehension dataset). The challenge, of course, is to connect and compare such heterogeneous resources with each other and with the learner's context.
JITA is a classification schema of Library and Information Science (LIS). It is used by E-LIS, an international open repository for scientific papers in Library and Information Science, for indexing and searching. Currently JITA is available in English and has been translated into 14 languages (tr, el, nl, cs, fr, it, ro, ca, pt, pl, es, ar, sv, ru). JITA is also accessible as Linked Open Data, containing 3500 triples.
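Classification schemes like JITA are typically published as Linked Open Data using SKOS; a hypothetical concept (the URI, labels, and scheme name below are invented for illustration, not taken from the actual JITA dataset) might look like this in Turtle:

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/jita#> .

# One classification term, with preferred labels in two languages.
ex:someConcept a skos:Concept ;
    skos:prefLabel "Information retrieval"@en ;
    skos:prefLabel "Recuperación de información"@es ;
    skos:inScheme ex:JITA .
```

Modelling each term as a `skos:Concept` with multilingual `skos:prefLabel`s is what makes the translated versions of the scheme queryable as one dataset.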
The ongoing debate around the question of whether "there is money in linked data" has recently been put more pointedly by Prateek Jain (one of the authors of the original article): he asks why linked open data hasn't been used much so far, other than in research projects.