We’re all waiting to see what’s next for the Web of Data – that is, the Semantic Web – after a year that had numerous highlights including: the official publication of HTML5, which boasts the ability to describe the structure of a web document with standard semantics; a flurry of activity for schema.org, including a new actions vocabulary, new types such as roles, and several community- and partner-led improvements in areas from bibliographies to sports to events; and, the continuing progression of Knowledge Graphs from the likes of Google, Facebook, Microsoft, and Yahoo.
SNPedia is a wiki investigating human genetics. We share information about the effects of variations in DNA, citing peer-reviewed scientific publications. It is used by Promethease to create a personal report linking your DNA variations to the information published about them.
The British Broadcasting Corporation (BBC) has launched a new page detailing its internal data models. The page provides access to the ontologies the BBC is using to support its audience-facing applications, such as BBC Sport, BBC Education, BBC Music, News projects and more.
Lemon is an RDF model for representing lexical information relative to ontologies. We assume that you are familiar with RDF and Turtle; if not, consider reading the tutorial here. Note that we will use Turtle for this tutorial; however, Lemon can also be serialized in any RDF format, such as RDF/XML.
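To give a flavor of the model, here is a minimal sketch of a Lemon lexical entry in Turtle; the `ontology:Cat` class and the example prefixes are illustrative placeholders, not part of any real dataset:

```turtle
@prefix lemon: <http://lemon-model.net/lemon#> .
@prefix ontology: <http://example.org/ontology#> .
@prefix : <http://example.org/lexicon#> .

# A lexical entry for the English word "cat", linking its
# canonical written form to a class in a target ontology.
:cat a lemon:LexicalEntry ;
    lemon:canonicalForm [ lemon:writtenRep "cat"@en ] ;
    lemon:sense [ lemon:reference ontology:Cat ] .
```

The key idea is that the word itself (the form) and what it denotes (the ontology reference) are kept as separate resources, so one entry can carry several senses or variant forms.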
The Semantic Web Blog reached out to the World Wide Web Consortium’s current and former semantic leads to get their perspective on the roads The Semantic Web has traveled and the value it has so far brought to the Web’s table: Phil Archer, W3C Data Activity Lead coordinating work on the Semantic Web and related technologies; Ivan Herman, who last year transitioned roles at the W3C from Semantic Activity Lead to Digital Publishing Activity Lead; and Eric Miller, co-founder and president of Zepheira and the leader of the Semantic Web Initiative at the W3C until 2007.
Search engines have increasingly been incorporating elements of semantic search to improve some aspect of the search experience — for example, using schema.org markup to create enhanced displays in SERPs (as in Google’s rich snippets).
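For instance, a page might embed schema.org markup like the following (a hypothetical recipe snippet in JSON-LD; the names and values are invented for illustration) that a search engine can use to build an enhanced result such as a star rating:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Recipe",
  "name": "Simple Pancakes",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.5",
    "reviewCount": "120"
  }
}
</script>
```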
Most significantly, OCLC has now released 194 million Linked Open Data Bibliographic Work descriptions. According to Wallis, “A Work is a high-level description of a resource, containing information such as author, name, descriptions, subjects etc., common to all editions of the work.” In his post, he uses the example of “Zen and the Art of Motorcycle Maintenance” as a Work.
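In Linked Data terms, such a Work-level description might look roughly like the sketch below; the URI and the exact property choices are illustrative, not OCLC's actual data:

```turtle
@prefix schema: <http://schema.org/> .

# Hypothetical Work description: information common to all
# editions hangs off a single Work URI.
<http://example.org/work/12345> a schema:CreativeWork ;
    schema:name "Zen and the Art of Motorcycle Maintenance" ;
    schema:creator [ a schema:Person ; schema:name "Robert M. Pirsig" ] .
```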
PUBLINK is the Linked Open Data Consultancy backed by the consortium of the EU-FP7 LOD2 Integrated Project. To lower the entrance barrier for potential data publishers and tool providers, the LOD2 consortium offers the free PUBLINK Linked Open Data Consultancy to up to five selected organizations, supporting them in publishing Linked Open Data with an overall effort of 10-20 days of support from highly skilled Linked Data professionals.
Semantic Web Company launches groundbreaking functionalities for Knowledge Graph Management and Text Analytics. With the new 5.0 version of PoolParty Semantic Suite, Semantic Web Company introduces innovative methods of taxonomy and ontology management and highly precise entity extraction through the entire lifecycle of semantic knowledge graphs.
In case you missed last Friday’s webinar, Yosemite Project Part 6 “Data-Driven Biomedical Research with Semantic Web Technologies” delivered by Dr. Michel Dumontier, the recording and slides are now available (and posted below). The webinar was co-produced by SemanticWeb.com and DATAVERSITY.net and runs for one hour, including a Q&A session with the audience that attended the live broadcast.
Semantic Web design patterns give you recipes and best practices for how to apply Semantic Web technologies to address typical scenarios that come up when modeling data and building applications. These design patterns provide invaluable guidance in how to make good decisions when developing with Semantic Web technologies.
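One widely used example is the n-ary relation pattern: RDF triples are binary, so a statement involving more than two participants is modeled with an intermediate node. The vocabulary below is illustrative, not from any particular ontology:

```turtle
@prefix ex: <http://example.org/ns#> .

# A diagnosis relating a patient, a disease, and a probability
# cannot be a single triple, so it becomes a resource of its own.
ex:diagnosis1 a ex:Diagnosis ;
    ex:patient ex:john ;
    ex:disease ex:influenza ;
    ex:probability "0.8"^^<http://www.w3.org/2001/XMLSchema#decimal> .
```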
Following the newly minted “recommendation” status of RDF 1.1, Michael C. Daconta of GCN has asked, “What does this mean for open data and government transparency?” Daconta writes, “First, it is important to highlight the JSON-LD serialization format. JSON is a very simple and popular data format, especially in modern Web applications. Furthermore, JSON is a concise format (much more so than XML) that is well-suited to represent the RDF data model. An example of this is Google adopting JSON-LD for marking up data in Gmail, Search and Google Now. Second, like the rebranding of RDF to ‘linked data’ in order to capitalize on the popularity of social graphs, RDF is adapting its strong semantics to other communities by separating the model from the syntax. In other words, if the mountain won’t come to Muhammad, then Muhammad must go to the mountain.”
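Daconta's point about separating the model from the syntax is easy to see by writing the same single triple in two serializations; the book resource here is hypothetical:

```turtle
# One RDF triple in Turtle...
<http://example.org/book> <http://schema.org/name> "RDF 1.1 Primer" .
```

```json
{
  "@id": "http://example.org/book",
  "http://schema.org/name": "RDF 1.1 Primer"
}
```

The JSON-LD form is ordinary JSON that any web application can parse, yet it expresses exactly the same graph as the Turtle.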