Big Data Technology, Semantics and Analytics
Trends, success and applications for big data including the use of semantic technology
Curated by Tony Agresta
Scooped by Tony Agresta!

Semantic Publishing for News & Media: Enhancing Content and Audience Engagement - Ontotext

Borislav Popov, head of Ontotext Media & Publishing, will show you how news & media publishers can use semantic publishing technology to more efficiently generate content while increasing audience engagement through personalisation and recommendations.
Tony Agresta's insight:

This webinar is recommended for anyone interested in applying core semantic technology to structured and unstructured data. In this webinar you will learn:

  • The importance of text analysis, entity extraction and semantic indexing - all directly linked to a graph database
  • The significance of training text mining algorithms to improve the accuracy of extraction and classification
  • The power of semantic recommendations - delivering highly relevant content using a blend of semantic analysis, reader profiles and past browsing history
  • How "Semantic Search" can be applied to isolate the most meaningful content

This webinar will show live demonstrations of semantic technology for news and media. But if you are in government, financial services, healthcare, life sciences or education, I would still recommend it. The concepts are directly applicable and most of the technology can be adapted to meet your needs.
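The semantic recommendation idea above - blending semantic analysis, reader profiles and browsing history - can be sketched as a simple weighted score. This is an illustrative model only; the field names, signals and weights below are invented for the example and are not Ontotext's actual algorithm.

```python
# Hypothetical blend of three recommendation signals into one score.
# Weights and data shapes are assumptions for illustration.

def recommend(articles, profile, history, w_sem=0.5, w_prof=0.3, w_hist=0.2):
    """Rank articles by a weighted blend of semantic overlap,
    profile match and browsing-history affinity."""
    def score(article):
        # Semantic overlap: shared concepts between article and reader interests
        sem = len(article["concepts"] & profile["interests"]) / max(len(article["concepts"]), 1)
        # Profile match: does the article's section match a preferred section?
        prof = 1.0 if article["section"] in profile["sections"] else 0.0
        # History: fraction of recently read articles sharing a concept
        hist = sum(1 for past in history if article["concepts"] & past["concepts"]) / max(len(history), 1)
        return w_sem * sem + w_prof * prof + w_hist * hist
    return sorted(articles, key=score, reverse=True)
```

In practice each signal would come from semantic indexing rather than hand-built sets, but the blending step looks much like this.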

Scooped by Tony Agresta!

Ontotext Announces Strategic Hires for Ontotext USA - Ontotext

Strategic hires for Ontotext USA indicate Ontotext's expansion in the North American marketplace.
Tony Agresta's insight:

Ontotext has long had a presence in North America but recently expanded operations for a number of reasons: to support the growing install base in the region, to expand into key US markets and to build out alliances. Success in EMEA and wide adoption of Ontotext's technology have driven this growth. Recently, Ontotext released version 6.0 of its native RDF triplestore, GraphDB. GraphDB is widely regarded as the most powerful RDF triplestore in the industry, with support for inferencing, optimized data integration through owl:sameAs, an enterprise replication cluster, connectors to Lucene, Solr and Elasticsearch, query optimization, SPARQL 1.1 support, RDF rank to order query results by relevance or other measures, simultaneous high-performance loading, querying and inference, and much more.
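The owl:sameAs integration mentioned above works by collapsing equivalent resource identifiers into one identity. GraphDB does this natively and at scale; as a rough sketch of the idea only, the merging behaves like a union-find over URIs (the URIs below are invented for the example):

```python
# Sketch of owl:sameAs-style identity merging using union-find.
# This only illustrates the concept; it is not GraphDB's implementation.

class SameAsIndex:
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def add_same_as(self, a, b):
        # Assert "a owl:sameAs b": merge their equivalence classes.
        self.parent[self._find(a)] = self._find(b)

    def same(self, a, b):
        # True if a and b resolve to the same identity.
        return self._find(a) == self._find(b)
```

Because sameAs is transitive, chaining two assertions makes all three identifiers interchangeable in queries.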

Free versions of the Lite edition have been available for quite some time, but Ontotext recently also started making the Standard and Enterprise editions available for testing.

Organizations have gravitated toward Ontotext more than other NoSQL vendors and pure triplestore players because of the broad portfolio of semantic technology Ontotext provides beyond GraphDB. This includes natural language processing, semantic enrichment, semantic data integration, and curation and authoring tools. Ontotext's experience working with Linked Open Data sets extends back to the beginning of the LOD movement. When these tools and technologies are blended with GraphDB, they offer a powerful combination of semantic technologies that delivers a single-vendor solution while lowering maintenance costs, shortening time to delivery and providing proven deployment options.

Rescooped by Tony Agresta from Digital Delights!

Learning Analytics - A New Discipline and Bits of Semantics

The talk, motivated by the present state of learning and education, identifies a need for a systematic change of present practice.


Via Ana Cristina Pratas
Tony Agresta's insight:

Learning analytics - especially adaptive learning using semantic analysis - is starting to gain momentum with some of the academic, scientific and educational publishers. Some of the core concepts are covered here.

Scooped by Tony Agresta!

How Big Data Is Changing Medicine

Used to be that medical researchers came up with a theory, recruited subjects, and gathered data, sometimes for years. Now, the answers are already there in data collections on the cloud. All researchers need is the right question.
Tony Agresta's insight:

Through semantic analysis of free-flowing text and the indexing of results, fine-grained details about diseases, treatments, symptoms, clinical trials and current research can be made accessible to medical practitioners in real time. How does this work? It typically involves creating a text mining or natural language processing "pipeline" that analyzes the text, identifies entities (even complex biomedical terms), classifies them, develops relationships between them and then "indexes everything."
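A minimal sketch of such a pipeline, assuming a hand-built gazetteer: extract entities from text, classify them, and build an inverted index so documents can be found by entity. Real biomedical pipelines use trained models over far richer vocabularies; the dictionary and classes below are invented for illustration.

```python
# Toy text-mining pipeline: gazetteer-based entity extraction,
# classification, and indexing. Terms are illustrative assumptions.

GAZETTEER = {
    "aspirin": "Drug",
    "ibuprofen": "Drug",
    "myocardial infarction": "Disease",
    "hypertension": "Disease",
}

def extract_entities(text):
    """Return (entity, class) pairs found in the text."""
    lowered = text.lower()
    return [(term, cls) for term, cls in GAZETTEER.items() if term in lowered]

def index_documents(docs):
    """Build an inverted index: entity -> list of doc ids mentioning it."""
    index = {}
    for doc_id, text in docs.items():
        for term, _cls in extract_entities(text):
            index.setdefault(term, []).append(doc_id)
    return index
```

Once everything is indexed this way, "find all documents mentioning a given disease" becomes a single dictionary lookup.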

The way we have done this successfully is by using proven text mining algorithms and tuning them to highly specific domains like life sciences, healthcare and biotech. We use curation tools and trained curators to read the text, annotate it and gain agreement on the annotations. The results are then used to refine the text mining algorithms, test and validate.

This process may seem cumbersome, but when done by trained professionals it is not. It has the added benefit of being done once and then applied for long periods without interruption. Results are highly accurate.
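"Gaining agreement on the annotations" is usually quantified with an inter-annotator agreement statistic; Cohen's kappa is a standard choice for two annotators. As a plain-Python sketch (the label values below are invented for the example):

```python
# Cohen's kappa for two annotators labeling the same items.
# Corrects raw agreement for agreement expected by chance.

from collections import Counter

def cohens_kappa(ann1, ann2):
    """Kappa for two parallel label sequences of equal length."""
    assert len(ann1) == len(ann2) and len(ann1) > 0
    n = len(ann1)
    # Observed agreement: fraction of items labeled identically
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    # Expected agreement: chance overlap given each annotator's label frequencies
    c1, c2 = Counter(ann1), Counter(ann2)
    expected = sum(c1[label] * c2[label] for label in c1) / (n * n)
    return (observed - expected) / (1 - expected)
```

A kappa near 1.0 means the annotation guidelines are working; low kappa signals the guidelines (or the curators) need another pass before the annotations are used to retrain the algorithms.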

Seeing is believing.  You can try it for yourself here: 

  1. Go to:
  2. Click on "Demo for Free"
  3. Paste text into the box from an article or research paper on healthcare or life sciences - make sure the article is replete with complex biomedical terms that you don't think any automated algorithm can figure out.
  4. Select Bio Medical Tagger (by the way, you can also do this for general news or Tweets)
  5. Click Execute
  6. Analyze the results

Pretty cool.

Organizations that don't semantically enrich their content are operating at a disadvantage.  The benefits are real - saving patients lives, finding new treatment strategies, developing drugs faster and much more.

If you would like to learn more about semantics, we suggest you visit the site, where there's a wealth of information, demos, customer stories and news about this important subject.

Scooped by Tony Agresta!

Semantics: The Next Big Issue in Big Data

State Street's David Saul argues big data is better when it's smart data.
Tony Agresta's insight:

Banking, like many industries, faces challenges in the area of data consolidation.  Addressing this challenge can require the use of semantic technology to accomplish the following:


  • A common taxonomy across banking divisions allowing everyone to speak the same language
  • Applications that integrate structured data with unstructured data and semantic facts about trading instruments, transactions that pose risk, and derivatives
  • Ways to search all of the data instantly and represent results using different types of analysis, data visualization or relevance rankings that highlight risk to the bank
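The "common taxonomy" idea in the list above can be made concrete with a tiny normalizer: each division's local term maps to one preferred label so everyone speaks the same language. The banking terms below are invented for illustration, not an actual bank taxonomy.

```python
# Toy common taxonomy: division-specific synonyms map to one preferred label.
# All terms here are illustrative assumptions.

TAXONOMY = {
    "counterparty": "counterparty",
    "trading partner": "counterparty",   # equities desk usage
    "obligor": "counterparty",           # credit desk usage
    "irs": "interest rate swap",
    "interest rate swap": "interest rate swap",
}

def normalize(term):
    """Map a division-specific term to its preferred taxonomy label;
    unknown terms pass through unchanged."""
    return TAXONOMY.get(term.lower().strip(), term)
```

In a real deployment the mapping would live in a governed taxonomy or ontology rather than a dictionary, but every system that consumes the data benefits from the same canonicalization step.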


"What's needed is a robust data governance structure that puts underlying meaning to the information.  You can have the technology and have the standards, but within your organization, if you don't know who owns the data, who's responsible for the data, then you don't have good control."


Some organizations have built data governance taxonomies to identify the important pieces of data that need to be surfaced in rich semantic applications focused on risk or CRM, for example. Taxonomies and ontologies capture how data is classified and the relationships between types of data. In turn, they can be used to create facts about the data, which can be stored in modern databases (enterprise NoSQL) and used to drive smart applications.
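Those "facts about the data" are typically stored as subject-predicate-object triples. As a rough sketch of the idea, assuming invented fact names (production triplestores like GraphDB add scale, SPARQL and inference on top of this):

```python
# Minimal triple store with wildcard pattern matching (None = match anything).
# Fact names and values are illustrative assumptions.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        """Store one (subject, predicate, object) fact."""
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return all triples matching the pattern; None is a wildcard."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]
```

A risk application can then ask pattern-style questions - "which trades carry counterparty risk?" - without knowing in advance which systems the underlying data came from.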


Lee Fulmer, a London-based managing director of cash management for JPMorgan Chase, says the creation of [data governance] standards is paramount for fueling adoption, because even if global banks can work out internal data issues, they still have differing regulatory regimes across borders that will require the data to be adapted.


"The big paradigm shift that we need, that would allow us to leverage technology to improve how we do our regulatory agenda in our banking system.  If we can come up with a set of standards where we do the same sanction reporting, same format, same data transcription, same data transmission services, to the Canadians, to the Americans, to the British, to the Japanese, it would reduce a huge amount of costs in all of our banks."


Semantic technology is becoming an essential way to govern data, create a common language, build rich applications and, in turn, reduce risk, meet regulatory requirements and reduce costs. 

