Big Data Technology, Semantics and Analytics
Trends, success and applications for big data including the use of semantic technology
Curated by Tony Agresta

Oakland Crimespotting

Tony Agresta's insight:

Here's an interesting data visualization for the city of Oakland, CA that allows users to analyze crime by time of day, day of week, periods of time and type of crime. This has to be useful in detecting patterns and allocating resources effectively.


There are hundreds of other data sets that are publicly available on this site for those interested.  They cover a wide variety of topics.


It's a big collection of sites and services for accessing data.



Ontotext Delivers Semantic Publishing Solutions to the World’s Largest Media & Publishing Companies

Washington DC (PRWEB) August 27, 2014 -- Ontotext Media & Publishing delivers semantic publishing solutions to the world’s largest media and publishing companies including automated content enrichment, data management, content and user analytics and natural language processing. Recently, Ontotext Media and Publishing has been enhanced to include contextually-aware reading recommendations based on content and user behavior, delivering an even more powerful user experience.
Tony Agresta's insight:

Semantic recommendations are all about personalized, contextual recommendations based on a blend of search history, user profiles and, most importantly, semantically enriched content. This refers to content that has been analyzed using natural language processing: entities are extracted from the text, classified and indexed inside a graph database. When a visitor comes to a website or information portal, "Semantic Recommendations" understands more than just past browsing history. It understands which other articles have relevant, contextual information of interest to the reader. This, in turn, creates a fantastic user experience because visitors get much more than they originally thought would be available in search results. This news release talks more about Semantic Recommendations and Ontotext Media and Publishing. By the way, this same technology can be used for any website, any information product, any search and discovery application. The basic premise is that once all of your content has been semantically enriched, search engines deliver highly relevant results.
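
To make the mechanics concrete, here is a minimal sketch in Python using the open-source rdflib library - my own illustration, not Ontotext's implementation. It stores "article mentions entity" triples such as a text-mining pipeline might produce, then uses a SPARQL query to rank other articles by how many entities they share with the one being read. The vocabulary and tiny dataset are invented for the example.

    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/")   # invented vocabulary
    g = Graph()

    # Triples a text-mining pipeline might produce: article -> mentions -> entity.
    g.add((EX.article1, EX.mentions, EX.GraphDatabases))
    g.add((EX.article1, EX.mentions, EX.TextMining))
    g.add((EX.article2, EX.mentions, EX.GraphDatabases))
    g.add((EX.article2, EX.mentions, EX.TextMining))
    g.add((EX.article3, EX.mentions, EX.GraphDatabases))

    # Recommend articles that share the most entities with article1.
    query = """
        PREFIX ex: <http://example.org/>
        SELECT ?other (COUNT(?entity) AS ?shared) WHERE {
            ex:article1 ex:mentions ?entity .
            ?other ex:mentions ?entity .
            FILTER(?other != ex:article1)
        }
        GROUP BY ?other
        ORDER BY DESC(?shared)
    """
    for row in g.query(query):
        print(row.other, row.shared)   # article2 first, then article3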


Thought Leaders in Big Data: Atanas Kiryakov, CEO of Ontotext (Part 1)

This segment is part 1 of 5 in the series Thought Leaders in Big Data: Atanas Kiryakov, CEO of Ontotext.
Tony Agresta's insight:

This interview was with Atanas Kiryakov, founder and CEO of Ontotext. He is an expert in semantic technology and discusses use cases for text mining, graph databases, semantic enrichment and content curation. This is a five-part series, and I would recommend it to anyone interested in taking the next step in big data: semantic analysis of text leading to contextual search and discovery applications.


Text Mining & Graph Databases - Two Technologies that Work Well Together - Ontotext

Graph databases, also known as triplestores, have a very powerful capability – they can store hundreds of billions of semantic facts (triples) on any subject imaginable. The number of free semantic facts available today from sources such as DBpedia, GeoNames and others is high and continues to grow every day. Some estimates put the total between 150 and 200 billion right now. As a result, Linked Open Data can be a good source of information with which to load your graph database.

Linked Open Data is one source of data. When does it become really powerful? When you create your own semantic triples from your own data and use them in conjunction with Linked Open Data to enrich your database. This process, commonly referred to as text mining, extracts the salient facts from free-flowing text and typically stores the results in a database. With this done, you can analyze your enriched data, visualize it, aggregate it and report on it. In a recent project Ontotext undertook on behalf of FIBO (the Financial Industry Business Ontology), we enhanced the FIBO ontologies with Linked Open Data, allowing us to query company names and stock prices at the same time to show the lowest trading prices for all public stocks in North America in the last 50 years. To do this, we needed to combine semantic data sources, something that's easy to do with the Ontotext Semantic Platform.

We have found that the optimal way to apply text mining is in conjunction with a graph database. Many of our customers use our text mining to do just that. Some vendors only sell graph databases and leave it up to you to figure out how to mine the text. Other vendors only sell the text mining part and leave it up to…
Tony Agresta's insight:

Here's a summary of how text mining works with graph databases. It describes the major steps in the text mining process and ends with how entities, articles and relationships are indexed inside the graph database. The blend of these two major classes of technology allows all of your unstructured data to be discoverable. Search results are informed by much more than just the metadata associated with the document or e-mail. They are informed by the meaning inside the document - the text itself, which contains important insights about people, places, organizations, events and their relationships to other things.
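
To make the pipeline tangible, here is a hedged sketch - my own illustration, not the Ontotext stack - that uses the open-source spaCy library for entity extraction and rdflib as a stand-in for the graph database. The vocabulary, model name and sample text are assumptions for the example.

    import spacy
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/")   # invented vocabulary
    nlp = spacy.load("en_core_web_sm")      # small English NER model
    g = Graph()                             # stand-in for a triplestore

    text = "Ontotext, based in Sofia, builds semantic technology."
    doc_uri = EX["doc1"]
    for ent in nlp(text).ents:
        # Store each extracted entity and its type as triples about the document.
        ent_uri = EX[ent.text.replace(" ", "_")]
        g.add((doc_uri, EX.mentions, ent_uri))
        g.add((ent_uri, EX.entityType, Literal(ent.label_)))

    print(g.serialize(format="turtle"))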


September 30th at 11 AM EDT: Not All Graph Databases Are Created Equally - An Interview with Atanas Kiryakov - Ontotext

Graph databases help enterprise organizations transform the management of unstructured data and big data.
Tony Agresta's insight:

Graph databases store semantic facts used to describe entities and their relationships to other entities. This educational webinar, hosted by Ontotext, will be an interview with Atanas Kiryakov, an expert in this field. If you want to learn about use cases for graph databases and how you can extract meaning from free-flowing text and store the results in a graph database, this webinar is a must.


Ontotext Releases Text Mining & Semantic Technology Running in the Cloud – Welcome to "S4"

Sofia, Bulgaria (PRWEB) August 21, 2014 -- The Self Service Semantic Suite (S4) provides a complete set of tools that developers can use to build text mining and semantic applications. Fully hosted, low cost and on demand, S4 includes proven text mining technology, Linked Open Data for enrichment, the world's most powerful RDF triplestore (GraphDB™) and a set of tools for developers to build and deploy cloud-based semantic applications.
Tony Agresta's insight:

Small and mid-size businesses can take advantage of text mining and semantic analysis of documents using S4, the Self Service Semantic Suite. It allows developers to build custom applications that run in the cloud, and you can upload documents and web pages for free to test it out. The very same enterprise technology that powers some of the largest semantic applications in the world is part of S4.
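
The real endpoints and payloads are in the S4 documentation; purely as an illustration of the "upload text, get annotations back" pattern, a call might look something like the sketch below, where the URL, credentials and JSON shape are all placeholders.

    import requests

    # Placeholder URL and credentials -- consult the S4 docs for the real values.
    resp = requests.post(
        "https://s4.example.com/v1/annotate",
        auth=("MY_API_KEY", "MY_API_SECRET"),
        headers={"Accept": "application/json"},
        json={"document": "Ontotext is based in Sofia, Bulgaria.",
              "documentType": "text/plain"},
    )
    resp.raise_for_status()
    print(resp.json())  # expected: entity annotations extracted from the text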


Open Policy: Knowledge Makes Document Searches Smarter - YouTube

The OpenPolicy semantic search tool is a powerful way to instantaneously locate related content across scores of complex documents. See how OpenPolicy unlock...
Tony Agresta's insight:

Here's a great use case on the use of semantic technology to discover documents hosted in a knowledge base. The application was built by LMI and Ontotext and recently won a 2014 Northern Virginia Technology Council award. Search results against massive page counts are returned in seconds. The combined use of ontologies and a scalable triplestore from Ontotext makes this application possible.


Defining Big Data Visualization and Analysis Use Cases -- TDWI -The Data Warehousing Institute

Use these five use cases to spark your thinking about how to combine big data and visualization tools in your enterprise.
Tony Agresta's insight:

One form of data visualization that is underutilized by sales and marketing professionals is the relationship graph, which shows connections between people, places, things, events... any attributes you want to see in the graph. This form of visualization has long been used by the intelligence community to find bad guys and identify fraud networks. But it also has practical applications in sales and marketing.

Let's say you're trying to improve your lead conversion process and accelerate sales cycles. Wouldn't it be important to analyze relationships between campaigns, qualified leads created, the business development people that created the leads and how fast each lead progressed through sales stages?

Imagine a network graph that showed the campaigns, the business development people that worked the lead pool, the qualified leads and the number of opportunities created. Imagine if components of the graph (nodes) were scaled based on the amount of money spent on each campaign, the number of leads each person worked and the value of each opportunity. Your eye would be immediately drawn to a number of insights.

You could quickly see which campaigns provided the most bang for your buck - the ones with relatively low cost and high qualified lead production. You could quickly see which business development reps generated a high volume of qualified leads and how many turned into real opportunities. Now imagine if you could play the creation of the graph over time. You could see when campaigns started to generate qualified leads. How long did it take? How soon could sales expect to get qualified leads? Should your campaign planning cycles change? Are your more expensive campaigns having the impact you expected? Is this all happening fast enough to meet sales targets?

This form of data visualization is easier to apply than you think. There are tools on the market that allow you to connect to CSV files exported from your CRM system and draw the graph in seconds. As data visualization becomes more common in business, sales and marketing professionals will start to use this approach to measure performance of campaigns and employees while better understanding influencing factors in each stage of the sales cycle.
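
If you want to try the idea without buying anything, here is a minimal sketch using the open-source networkx library; the CSV layout and column names are assumptions about a typical CRM export.

    import csv
    import networkx as nx
    import matplotlib.pyplot as plt

    G = nx.Graph()
    with open("crm_export.csv") as f:      # hypothetical CRM export
        for row in csv.DictReader(f):      # assumed columns: campaign, spend, rep, lead_id
            G.add_node(row["campaign"], spend=float(row["spend"]))
            G.add_edge(row["campaign"], row["rep"])
            G.add_edge(row["rep"], row["lead_id"])

    # Scale campaign nodes by spend so expensive campaigns jump out visually.
    sizes = [G.nodes[n].get("spend", 3000.0) / 10.0 for n in G.nodes]
    nx.draw_spring(G, node_size=sizes, with_labels=True)
    plt.show()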


IBM’s new data discovery and visualization cloud offerings with predictive analytics


IBM is introducing new data discovery software that enables business users to visually interact with and apply advanced analytics to their data without any specialized skills to get deeper insights about their business. The new software will help close the analytics skills gap that makes current data discovery tools inaccessible for the everyday business user and make it possible to go from raw information to answers hidden deep within structured and unstructured information in minutes. 


See more at: http://www.vizworld.com/2013/11/ibms-data-discovery-visualization-cloud-offerings-predictive-analytics/


Tony Agresta's insight:

It was bound to happen, and IBM seems headed in the right direction - predictive analytics and data visualization converge in the cloud. In a post-9/11 era, analysts recognized that revealing insights required the human mind to explore data in an unconstrained manner. If they had the chance to interact with disparate data sets, visualize that data in different forms and navigate to connection points directed by their experience, they could quickly pinpoint relationships that matter.

Today, groundbreaking approaches in intelligence analysis have their foundations built on unconstrained data discovery.   Insurance organizations are applying interactive data visualization to uncover patterns that clearly point to fraud and collusion.  eCommerce organizations are using these techniques to examine both employee and vendor behavior as they connect the dots highlighting networks of interest.

This revolution in analysis is only just beginning.   Imagine what can be accomplished when predictive models and rules are applied to big data in real time yielding more focused data sets for discovery.  


Four years ago I spoke with a global top-10 bank that applied predictive models to detect fraudulent transactions. When I asked if they had combined this approach with data visualization to pick up on any error in the models, they responded that their analysts couldn't use those tools because they were too complex. They couldn't identify fraud networks using relationship graphs. After nearly $2 billion in fines, I wonder if they are rethinking this approach. The fact is, they could have detected money laundering among their account holders by joining the results of predictive analysis with human data discovery.

Within 5 years, I would be surprised if every major bank, insurance company, retailer and healthcare organization wasn't following in the footsteps of the intelligence community.   As these analytic methods converge, the criminal's chances of hiding the truth diminish dramatically. 

Henry Pan's curator insight, November 9, 2013 9:50 AM

Is this a better tool set?


The Age of Predictive Analytics: From Patterns to Predictions - April 2012

OIPC of Canada paper on the issues of Big Data & predictive analytics--can be "creepy"
http://t.co/eUBXtu6p0k
Tony Agresta's insight:

This paper includes a great set of definitions around the subject of predictive analytics including a number of important applications - fraud prevention, location tracking, targeted advertising, law enforcement and intelligence.    For anyone interested in predictive analytics, I suggest scanning topic headlines to identify areas of focus.   Many of the fundamentals of predictive analytics are described in this paper. 


Triplestores Rise In Popularity


Triplestores:  An Introduction and Applications

Tony Agresta's insight:

Triplestores are gaining in popularity. This article does a nice job of describing what triplestores are and how they differ from graph databases. But there isn't much in the article on how triplestores are used. So here goes:

Some organizations are finding that when they apply a combination of semantic facts (triples) with other forms of unstructured and structured data, they can build extremely rich content applications. In some cases, content pages are constructed dynamically. These context-based applications deliver targeted, relevant results, creating a unique user experience. A single unified architecture that can store and search semantic facts, documents and values at the same time requires fewer IT and data processing resources, resulting in shorter time to market. Enterprise-grade technology provides security, replication, availability, role-based access and the assurance that no data is lost in the process. Real-time indexing provides instant results.

Other organizations are using triplestores and graph databases to visually show connections useful in uncovering intelligence about their data. These tools connect to triplestores and NoSQL databases easily, allowing users to configure graphs to show how the data is connected. There's wide applicability for this, but common use cases include identifying fraud and money laundering networks, counter-terrorism, social network analysis, sales performance, cyber security and IT asset management. The triples, documents and values provide the fuel for the visualization engine, allowing for comprehensive data discovery and faster business decisions.

Other organizations focus on semantic enrichment and then ingest the resulting semantic facts into triplestores to enhance the applications mentioned above. Semantic enrichment extracts meaning from free-flowing text and identifies triples.

Today, the growth in open data - pre-built collections of triples - is allowing organizations to integrate semantic facts to create richer content applications. There are hundreds of such sources containing tens of billions of triples, all free.

What's most important about these approaches?  Your organization can easily integrate all forms of data in a single unified architecture.  The data is driving smart websites, rich search applications and powerful approaches to data visualization.   This is worth looking at more closely since the end results are more customers, lower costs, greater insights and happier users.


The Next Generation of Databases | SmartData Collective

The past decades organisations have been working with relational databases to store their structured data. In the big data era however, these types of databases are not sufficient anymore.
Tony Agresta's insight:

"The amount of available NoSQL databases is growing rapidly and currently there are, as this website shows, over 150 of them. One of the more well known is MarkLogic and recently they announced MarkLogic 7, an Enterprise NoSQL that shows the vast possibilities of NoSQL databases for organisations."


One of many new features in MarkLogic 7 is Semantics, including support for a triple store, SPARQL to query the triples, a triples index and cache for enhanced performance, and updated APIs for developers.


With this release, Enterprise NoSQL has taken another huge step forward. Why is this so important? For the first time ever, a single unified architecture exists that allows organizations to query and apply documents, other types of unstructured data, structured data (values, for example) and semantic facts at the same time. In other words, composite queries can be built which return any combination of data, allowing developers to create very rich content applications.


From the article:  "Semantic triples enable relationships between pieces of data and are related more closely to the way humans think. If you combine them with Linked Open Data (facts that are freely available and in a form that is easily consumed by machines) or information from DBPedia (Wikipedia but in a structured format understandable by machines), these triples suddenly receive a meaning and give data the context required to be valuable in a semantic environment. As the founder of MarkLogic, Christopher Lindblad, explained during the summit: “Data is not information, what you have to do to get from data to information is add context.”


We have already seen customers developing SPARQL queries that extract semantic facts alongside documents to graph the connections using relationship graphs. "Context" takes on new meaning when analysts can see how people are connected to places, publications, events, organizations and much more.


"Semantic search allows you to perform a combination of queries ranging from text queries, document queries, range queries or pose a query that goes over multiple data sources. It can return results that might not even contain the exact term you used in a query, but which is very closely linked to what you are looking for and therefore still relevant.  It is a new way to find what you are looking for and in combination with Enterprise NoSQL the new way to understand and find corporate data."

 

There are some very good customer applications cited in the article by Mark van Rijmenam. 


If you want to read more about this technology, I suggest you look at the review recently done by Gartner entitled Magic Quadrant for Operational Database Management Systems.


MarkLogic Rolls Out the Red Carpet for Semantic Triples - Datanami

You write a query with great care, and excitedly hit the "enter" button, only to see a bunch of gobbledygook spit out on the screen.
Tony Agresta's insight:

"The real power of this approach becomes evident when one considers the hugely disparate nature of information on the Internet. An RDF powered application can build links between different pieces of data, and effectively 'learn' from the connections created by the semantic triples. This is the big (and as yet unrealized) pipe dream of the semantic Web."

Customers can now use a combination of documents, values and semantic facts in the form of triples to create very rich content applications. The semantics world will appreciate that linked open triples can be imported into MarkLogic's triple store, or that semantic facts and taxonomies that already exist in an organization can be used to populate it. On the other hand, the document world will appreciate that semantic facts already embedded in documents or created from authoring tools can be used to populate the triple store. Those already using semantic enrichment products can create triples from free-flowing text and now apply MarkLogic to ingest, manage, search and create rich applications using those facts. We have already seen some early access users take a simple taxonomy, create triples, run SPARQL queries and then graph the connections between authors and publications using a third-party graphing tool.

The applications of this use case in intelligence, fraud analysis, anti-money laundering, cyber security and other areas are top of mind for most organizations today. The beauty of the MarkLogic approach is that the relationships in the facts can be combined with documents and values. When combined with third-party relationship graphing features, users can reveal hidden insights that can only be discovered using this approach.
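
That author-and-publication exercise is easy to approximate at small scale. Here is a hedged sketch using the open-source rdflib library - the data and vocabulary are invented, and a production deployment would run the SPARQL against the triple store itself.

    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/")
    g = Graph()
    g.add((EX.Alice, EX.authored, EX.Paper1))
    g.add((EX.Bob, EX.authored, EX.Paper1))
    g.add((EX.Bob, EX.authored, EX.Paper2))

    # Pairs of authors connected through a shared publication.
    edges = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?a ?b ?pub WHERE {
            ?a ex:authored ?pub .
            ?b ex:authored ?pub .
            FILTER(STR(?a) < STR(?b))   # drop self-pairs and mirrored duplicates
        }""")
    for a, b, pub in edges:
        print(a, "<->", b, "via", pub)  # feed these edges to any graphing tool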





Is MarkLogic the Adult at the NoSQL Party? - Datanami

If big data is a big party, Hadoop would be at the center and be surrounded by Hive, Giraph, YARN, NoSQL, and the other exotic technologies that generate so much excitement.
Tony Agresta's insight:

MarkLogic continues to enhance its approach to NoSQL, further confirming it's the adult at the party. MarkLogic 7 includes enhancements to enterprise search as well as Tiered Storage and integration with Hadoop.

MarkLogic Semantics, also part of MarkLogic 7, gives organizations the ability to enhance the content experience by including semantic facts, documents and values in the same search experience. By doing this, organizations can surface semantic facts stored in MarkLogic when users are searching for a topic or person of interest. For example, if a user searches all unstructured data on a topic, facts about authors, publication dates, related articles and other facts about the topic would be part of the search results.

This could be applied in many ways.  Intelligence analysts may be interested in facts about people of interest.  Fraud and AML analysts could be interested in facts about customers with unusual transaction behavior.   Life Sciences companies may want to include documents, facts about the drug manufacturing process and values about pharma products as part of the search results.

Today, traditional search applications are being replaced by smarter, content-rich semantic search. This addition to MarkLogic confirms that all of this can be done within a single, unified architecture, saving organizations development time, money and resources while delivering enterprise-grade technology used in the most mission-critical applications today.




Why are graph databases hot? Because they tell a story... - Ontotext

Graph databases, text mining and inference allow you to extract meaning from text, perform semantic analysis and aid in knowledge management and data discovery.
Tony Agresta's insight:

Inference is the ability to infer new facts using existing facts.  For example, if you know that Susan lives in Texas and Texas is in the USA, you can infer that Susan lives in the USA.   Inference can take on much more complex scenarios the results of which can be stored inside a graph database.  As these new facts are "materialized" they can inform websites, search applications and various forms of analysis.  This is where the real power of inference comes into play.


Do this "at scale" requires a high performance graph database that can infer new facts while users are simultaneously querying the database and new facts are being loaded - all within an enterprise resilient environment. This blog post explains more about graph databases, inference and how the semantic integration of data can improve productivity and results.


Linked Data: Connecting and Exploiting Big Data


Via Irina Radchenko
Tony Agresta's insight:

Great ideas on how to collect and exploit linked data.

Fàtima Galan's curator insight, July 2, 12:42 AM

"A functional view on big data"


Ontotext Improves Its RDF Triplestore, GraphDB™ 6.0: Enterprise Resilience, Faster Loading Speeds and Connectors to Full-Text Search Engines Top the List of Enhancements

Sofia, Bulgaria (PRWEB) August 20, 2014 -- Today, Ontotext released GraphDB™ 6.0 including enhancements to the high availability enterprise replication cluster, faster loading speeds, higher update rates and connectors for Lucene, SOLR and Elasticsearch. GraphDB™ 6.0 is the next major release of OWLIM – the triplestore known for its outstanding support for OWL 2 and SPARQL 1.1 that already powers some of the most impressive RDF database showcases.
Tony Agresta's insight:

This press release from PRWeb summarizes the latest enhancements to GraphDB from Ontotext, including improvements in loading speeds, the enterprise high-availability replication cluster and connectors for Lucene, SOLR and Elasticsearch.


The Truth About Triplestores & GraphDB, the Meaningful Database

Tony Agresta's insight:

There are two free white papers that I recommend you download. The Truth About Triplestores discusses the top 8 things you need to consider when evaluating a triplestore. The Meaningful Database is a product watch written by the Bloor Group about the most scalable RDF triplestore.


LMI Named a Winner in Destination Innovation Competition - Semanticweb.com

Tony Agresta's insight:

More news about OpenPolicy was just published on SemanticWeb.com. With Ontotext inside... "LMI has developed a tool—OpenPolicy™—to provide agencies with the ability to capture the knowledge of their experts and use it to intuitively search their massive storehouse of policy at hyper speeds. Traditional search engines produce document-level results. There's no simple way to search document contents and pinpoint appropriate paragraphs. OpenPolicy solves this problem. The search tool, running on a semantic-web database platform, LMI SME-developed ontologies, and web-based computing power, can currently host tens of thousands of pages of electronic documents. Using domain-specific vocabularies (ontologies), the tool also suggests possible search terms and phrases to help users refine their search and obtain better results."


Part 2: Investigating the Investigations - X Marks the Spot

Posted by Douglas Wood, Editor.  Most of the financial crimes investigators I know live in a world where they dream of moving things from their Inbox to their Outbox. Oh, like everyone else, they a...
Tony Agresta's insight:

Here's another good post from Doug Wood of www.fightfinancialcrimes.com.   Advances in technology are revolutionizing how fraud investigations are being done today.

Ellie Kesselman Wells's curator insight, April 28, 11:24 PM

The field is enterprise fraud detection. Investigating is the starting point. Adjudication is the final outcome of fraud detection and analysis.


Data (repositories such as enterprise data warehouses) + technology (secure sharing across jurisdictions, automated link discovery, non-obvious relationship detection and identity resolution) are used to uncover insights which result in adjudication and closure of a complex incident investigation.


Importance of NoSQL to Discovery - A Data Analysis Road Map You Can Apply Today

When you use the analytical process known as discovery, I recommend that you look for tools and environments that allow you to connect to NoSQL platforms
Tony Agresta's insight:

The convergence of data visualization and NoSQL is becoming a hotter topic every day.  We're at the very beginning of this movement  as organizations integrate many forms of data with technology to visualize relationships and detect patterns across and within data sets.  There aren't many vendors that do this well today and demand is growing.  Some organizations are trying to achieve big data visualization through data science as a service.   Some software companies have created connectors to NoSQL (and other) data sources to reach this goal.  As you would expect, deployment options run the gamut. 


Examples of companies that offer data visualization generated from a variety of data sources, including NoSQL, are Centrifuge Systems, which displays results in the form of relationship graphs; Pentaho, which provides a full array of analytics including data visualization and predictive analytics; and Tableau, which supports dozens of data sources along with great charting and other forms of visualization. Regardless of which you choose (and there are others), the process you apply to select and analyze the data will be important.


In the article, John L Myers discusses some of the challenges users face with data discovery technology (DDT).  Since DDT operates from the premise that you don’t know all the answers  in advance, it’s more difficult to pinpoint the sources needed in the analysis.    Analysts discover insights as they navigate through the data visualizations.  This challenge isn’t too distant from what predictive modelers face as they decide what variables they want to feed into models.  They oftentimes don’t know what the strongest predictors will be so they apply their experience to carefully select data.  They sometimes transform specific fields allowing an attribute to exhibit greater explanatory power.   BI experts have long struggled with the same issue as they try and decide what metrics and dashboards will be most useful to the business.  


Here are some guidelines that may help you solve the problem.   They can be used to plan your approach to data analysis.


  • Start by writing down a hypothesis you want to prove before you connect to specific sources.  What do you want to explore?  What do you want to prove?  In some cases, you'll want to prove many things. That's fine.   Write down your top ones.
  • For each hypothesis create a list of specific questions you want to ask the data that could prove or disprove the hypothesis.   You may have 20 or 30 questions for each hypothesis.
  • Find the data sources that have the data you need to answer the questions.  What data will you need to arrive at a conclusion? 
  • Begin to profile each field to see how complete the data is. In other words, take an inventory of the data, checking to see if there are missing values, data quality errors or values that make the specific source a good one; a sketch of this profiling step appears after this list. This may point back to changes in data collection needed in your current systems or processes.
  • Go a layer deeper in your charting and profiling beyond histograms to show relationships between variables you believe will be helpful as you attempt to answer your list of questions and prove or disprove your hypothesis.  Show some relationships between two or more variables using heat maps, cross tabs and drill charts.
  • Reassess your original hypothesis.  Do you have the necessary data?  Or do you need to request additional types of data?
  • Once you are set on the inventory of data and you have the tools to connect to those sources, create a set of visualizations to resolve the answers to each of the questions.  In some cases, it may be 4 or 5 visualizations for each question.  Sometimes, you will be able to answer the question with one visualization.
  • Assemble the results for each question to prove or disprove the hypothesis.    You should arrive at a nice storyboard approach that, when assembled in the right order, allows you to articulate the steps in the analysis and draw conclusions needed to run your business.     
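
Here is the profiling step from the list above as a short sketch using the open-source pandas library; the file and column names are placeholders for whatever your source system exports.

    import pandas as pd

    df = pd.read_csv("source_extract.csv")   # placeholder export from a source system

    # Inventory the data: share of missing values per field, worst offenders first.
    print(df.isna().mean().sort_values(ascending=False))

    # Basic distributions for every column, numeric and categorical alike.
    print(df.describe(include="all"))

    # One layer deeper than histograms: a cross tab relating two candidate variables.
    print(pd.crosstab(df["region"], df["lead_stage"], normalize="index"))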


If you take these steps upfront and work with a tool that allows you to easily connect to a variety of data sources, you can quickly test your theory, profile and adjust the variables used in your analysis and create meaningful results the organization can use. But if you go into the exercise without any data planning, without any goals in mind, you are bound to waste cycle time trying to decide what to include in your analysis and what not to include. Granted, you won't be able to account for every data analysis issue your department or company has. The purpose of this exercise is to frame the questions you want to ask of the data in support of a more directed approach to data visualization.


Intelligence-led decisions should be well received by your cohorts and applied more readily with this type of up-front planning. The steps you take to analyze the data will run more smoothly. You will be able to explain and better defend the data visualization path you've taken to arrive at conclusions. In other words, the story will be clearer when you present it.


Consider the types of visualizations supported by the analytics technology when you do this. Will you need temporal analysis?   Will you require relationship graphs that show connections between people, events, organizations and more?    Do you need geospatial visualizations to prove your hypothesis?  A little bit of planning when using data discovery and NoSQL technology will go a long way in meeting your analytical needs. 



Big Oil Drills Into Big Data - Wall Street Journal (blog)

Big Oil is the latest industry to turn to Big Data software to shave costs.
Tony Agresta's insight:

Now here's a compelling reason to apply big data technology - down for 2 days, down $1 million. Big Oil needs to collect and analyze massive amounts of data generated by equipment to reduce downtime and anticipate outages. It's critical in Big Oil - both offshore and onshore.


A related application that is not referenced in the post is the use of data visualization to identify the locations and connection points for replacement parts.   By analyzing the inventory of parts using network graphs and maps, analysts can identify locations and the shortest paths that need to be taken to optimize delivery times. 


Understanding The Various Sources of Big Data – Infographic

Big data is everywhere and it can help organisations in any industry in many different ways.
Tony Agresta's insight:

If you have not had the chance to review some of the free sources of big data that can enhance your content applications, take a look at the Linked Open Data Graph. It's updated daily, and you can learn more by searching for the CKAN API. This graph represents tens of billions of semantic facts about a diverse set of topics. These facts have been used to enhance many content-driven web sites, allowing users to learn more about music, geography, populations and much more.
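
For example, open-data portals built on CKAN expose a small JSON API for searching their datasets. Here is a hedged sketch; the portal URL is a placeholder, so substitute the open-data site you care about.

    import requests

    BASE = "https://demo.ckan.org/api/3/action"   # placeholder CKAN portal
    resp = requests.get(f"{BASE}/package_search", params={"q": "crime", "rows": 5})
    resp.raise_for_status()
    for dataset in resp.json()["result"]["results"]:
        print(dataset["title"])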

Henry Pan's curator insight, November 9, 2013 8:19 AM

Too many data source......


Magic Quadrant for Operational Database Management Systems

Tony Agresta's insight:

Here are some key findings in this report vis-à-vis MarkLogic:


  • Features — MarkLogic's offering includes replication, rollback, automated failover, point-in-time recovery, backup/restore, backup to Amazon S3, JSON, Hadoop Distributed File System use, parallelized ingest, role-based security, full text search, geospatial, converter for MongoDB, RDF and SPARQL.
  • Solid customer base — We estimate over 235 commercial customers, 5,000 licenses and strong financial backing.
  • Customer satisfaction — Survey ranked MarkLogic high for the experience of doing business with it.


What's interesting is that these findings are all related - you can't achieve a license install base of this magnitude with extraordinary levels of customer satisfaction unless you have enterprise features. With MarkLogic, users don't need to build these features since they already exist. What's the end result? Time savings, security, more information products, higher levels of customer satisfaction and a competitive advantage in the market.



7 Pioneers of Data Visualization | DataRemixed

Tony Agresta's insight:

Here's a post that showcases the founding fathers (and mothers) of data visualization using a timeline analysis.  Click on timeline bars to see the digital representation of their work.


Semantics: The Next Big Issue in Big Data

State Street's David Saul argues big data is better when it's smart data.
Tony Agresta's insight:

Banking, like many industries, faces challenges in the area of data consolidation.  Addressing this challenge can require the use of semantic technology to accomplish the following:

 

  • A common taxonomy across banking divisions allowing everyone to speak the same language
  • Applications that integrate data including structured data with unstructured data and semantic facts about trading instruments, transactions that pose risk and derivatives
  • Ways to search all of the data instantly and represent results using different types of analysis, data visualization or through relevance rankings that highlight risk to the bank.

 

"What's needed is a robust data governance structure that puts underlying meaning to the information.  You can have the technology and have the standards, but within your organization, if you don't know who owns the data, who's responsible for the data, then you don't have good control."

 

Some organizations have built data governance taxonomies to identify the important pieces of data that need to be surfaced in rich semantic applications focused on risk or CRM, for example. Taxonomies and ontologies capture how data is classified and the relationships between types of data. In turn, they can be used to create facts about the data, which can be stored in modern databases (enterprise NoSQL) and used to drive smart applications.

 

Lee Fulmer, a London-based managing director of cash management for JPMorgan Chase, says the creation of [data governance] standards is paramount for fueling adoption, because even if global banks can work out internal data issues, they still have differing regulatory regimes across borders that will require the data to be adapted.

 

"The big paradigm shift that we need, that would allow us to leverage technology to improve how we do our regulatory agenda in our banking system.  If we can come up with a set of standards where we do the same sanction reporting, same format, same data transcription, same data transmission services, to the Canadians, to the Americans, to the British, to the Japanese, it would reduce a huge amount of costs in all of our banks."

 

Semantic technology is becoming an essential way to govern data, create a common language, build rich applications and, in turn, reduce risk, meet regulatory requirements and reduce costs. 

 



