The Rise of the Algorithmic Medium
Algorithms are the new medium between people and people, people and data, data and data. Artificial intelligence, cognitive computing, machine learning, neural networks, internet of things, smart cities, cryptocurrencies...
Curated by Pierre Levy

Selected tag: 'visualization'
Scooped by Pierre Levy

Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data

Over the past few months, I have been collecting AI cheat sheets. From time to time I share them with friends and colleagues and recently I have been getting asked a lot, so I decided to organize and…
Rescooped by Pierre Levy from Big Data and Personalization

Operational Semantics - From Text Mining to Triplestores – The Full Semantic Circle

In the not too distant past, analysts were all searching for a “360 degree view” of their data.  Most of the time this phrase referred to integrated RDBMS data, analytics interfaces and customers. But with the onslaught…

Via Tony Agresta, Edward Chenard
Tony Agresta's curator insight, February 15, 2015 6:22 PM

Semantic pipelines allow for the identification, extraction, classification and storage of semantic knowledge, creating a knowledge base of all your data. Most organizations have struggled to build these pipelines, primarily because the plumbing hasn't existed. But now it does.


This post discusses how free-flowing text streams into graph databases using concept-extraction processes. A well-coordinated feed of data is written to the underlying graph database, while updates are tracked continuously to ensure database integrity.


Other important pipeline plumbing includes tools for disambiguation (used to resolve which real-world entities the text refers to), classification of those entities, structuring the relationships between them, and determining sentiment.
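
A rough Python sketch of how those stages chain together; every function name and rule below is a toy assumption for illustration, not the actual tooling the post describes:

```python
# Illustrative sketch of the pipeline stages named above; the functions and
# their toy rules are hypothetical, not the product's real implementation.

def extract_entities(text):
    # Concept extraction: find candidate mentions (toy rule: capitalised tokens).
    return [tok.strip(".,") for tok in text.split() if tok[0].isupper()]

def disambiguate(mention, kb):
    # Disambiguation: resolve a mention to a canonical URI (fallback: mint one).
    return kb.get(mention, f"http://example.org/entity/{mention}")

def classify(uri):
    # Entity classification: assign a type (stubbed to a single type here).
    return "Organization"

def sentiment(text):
    # Sentiment: a toy keyword score standing in for a real model.
    return "positive" if "great" in text.lower() else "neutral"

kb = {"Ontotext": "http://example.org/entity/Ontotext"}
text = "Ontotext released a great version of GraphDB."
for mention in extract_entities(text):
    uri = disambiguate(mention, kb)
    print(uri, classify(uri), sentiment(text))
```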


Organizations that deploy well-functioning semantic pipelines have an advantage over their competitors: instant access to a complete knowledge base of their data. Research functions spend less time searching and more time analyzing. Alerting notifies critical business functions to take immediate action. Service levels improve through accurate, well-structured responses. Sentiment is detected, allowing more time to react to changing market conditions.



In general, the REST Client API calls out to a GATE-based annotation pipeline and sends back enriched data in RDF form. Organizations typically customize these pipelines, which can consist of any GATE-developed set of text-mining algorithms for scoring, machine learning, disambiguation, or a wide range of other text-mining techniques.

It is important to note that these text-mining pipelines create RDF in a linear fashion and feed GraphDB™. Once the enriched RDF is stored in the database, the annotations can be modified or removed. This is particularly useful when integrating with Linked Open Data (LOD) sources: updates to the database are populated automatically when the source information changes.

For example, let’s say your text-mining pipeline references Freebase as its Linked Open Data source for organization names. If an organization’s name changes or a new subsidiary is announced in Freebase, this information is updated as referenceable metadata in GraphDB™.

In addition, this tightly-coupled integration includes a suite of enterprise-grade APIs, the core of which is the Concept Extraction API. It consists of a Coordinator and an Entity Update Feed (a usage sketch follows the list). Here’s what they do:

  • The Concept Extraction API Coordinator module accepts annotation requests and dispatches them to a group of Concept Extraction Workers. The Coordinator communicates with GraphDB™ to track changes that lead to updates in each worker’s entity extractor. It acts as a traffic cop, allowing approved, unique entities to be inserted into GraphDB™ while preventing duplicates from taking up valuable real estate.
  • The Entity Update Feed (EUF) plugin tracks and reports on every entity (concept) in the database that has been modified in any way (added, removed, or edited). This information is stored in the graph database and is queryable via SPARQL, so reports can notify a user of any and all changes.
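
To make the flow concrete, here is a hedged Python sketch of a client driving such a pipeline and then querying the results. The annotation endpoint, payload fields and repository name are hypothetical placeholders; only the overall pattern (submit text for annotation, then query the triplestore over SPARQL) follows the description above:

```python
# Hypothetical client sketch: POST text to an annotation service, then query
# the triplestore via SPARQL. Endpoint paths and payload fields are assumptions.
import requests

ANNOTATE_URL = "http://localhost:8080/annotate"          # hypothetical Coordinator endpoint
SPARQL_URL = "http://localhost:7200/repositories/news"   # GraphDB repository (name assumed)

# 1. Send raw text through the pipeline; assume it returns enriched RDF.
resp = requests.post(ANNOTATE_URL, json={"text": "Ontotext released GraphDB."})
resp.raise_for_status()

# 2. Ask the graph database which labelled entities it now holds.
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?entity ?label WHERE { ?entity rdfs:label ?label } LIMIT 10
"""
result = requests.get(SPARQL_URL, params={"query": query},
                      headers={"Accept": "application/sparql-results+json"})
for row in result.json()["results"]["bindings"]:
    print(row["entity"]["value"], row["label"]["value"])
```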

 

Other APIs include Document Classification, Disambiguation, Machine Learning, Sentiment Analysis and Relation Extraction. Together, this set of technologies allows for tight integration and accurate processing of text while efficiently storing the resulting RDF statements in GraphDB™.

As mentioned, the value of this tightly-coupled integration is in the rich metadata and relationships that can now be derived from the underlying RDF database. It’s this metadata that powers high-performance search, discovery and website applications – results are complete, accurate and instantaneous.

- See more at: http://www.ontotext.com/text-mining-triplestores-full-semantic-circle/

Rescooped by Pierre Levy from Data and Algorithms. Everyday life and culture

The city as network - Social Physics | #dataviz #UrbanFlows

Traditionally, cities have been viewed as the sum of their locations – the buildings, monuments, squares and parks that spring to mind when we think of ‘New York’, ‘London’ or ‘Paris’. In The New Science of Cities, Michael Batty argues that a more productive approach is to think of cities in terms of …

Via luiy, sandra alvaro
Pierre Levy's insight:

By the way, geographers have known this for a long time.

luiy's curator insight, March 23, 2014 8:16 AM

Cities and network analysis.

 

Viewing cities as networks allows us to use the toolbox of network analysis on them, employing concepts such as ‘cores’ and ‘peripheries’, ‘centrality’, and ‘modules’. Batty says that an understanding of how different types of network intersect will be the key that really unlocks our understanding of cities.
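
As a toy illustration of that toolbox, a short Python sketch using the networkx library; the city graph is invented purely for the example:

```python
# Toy sketch: treating a city as a network (the graph is invented for illustration).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# A densely connected urban core...
G.add_edges_from([
    ("Centre", "North"), ("Centre", "East"), ("Centre", "South"), ("Centre", "West"),
    ("North", "East"), ("East", "South"), ("South", "West"), ("West", "North"),
])
# ...linked to a small suburban cluster through 'West'.
G.add_edges_from([("West", "Suburb A"), ("Suburb A", "Suburb B"), ("Suburb B", "West")])

# 'Centrality': betweenness finds the districts that flows must pass through.
centrality = nx.betweenness_centrality(G)
print(max(centrality, key=centrality.get))  # 'West', the core-suburb bridge

# 'Modules': community detection groups densely connected districts together.
print([sorted(c) for c in greedy_modularity_communities(G)])

# 'Cores' and 'peripheries': the 3-core keeps the dense centre, shedding suburbs.
print(sorted(nx.k_core(G, k=3).nodes))
```

Fittingly, the node with the highest betweenness here is the bridge between core and suburbs, not the geometric centre – locations matter because they sit at the intersections of flows.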

 

Cities, like many other types of network, also seem to be modular, hierarchical, and scale-free – in other words, they show similar patterns at different scales. It’s often said that London is a series of villages with their own centres and peripheries, but the pattern also repeats when you zoom out and look at the relationships between cities. One can see this in the way that London’s influence extends across Europe, and in the way that linked series of cities, or ‘megalopolises’, are growing in places such as the eastern seaboard of the US, Japan’s ‘Taiheiyō Belt’, or the Pearl River Delta in China.

Eli Levine's curator insight, March 23, 2014 12:55 PM

And there you have it.

 

The blueprints for empirically understanding a city, a society, a nation.

Think about it.

sandra alvaro's curator insight, March 24, 2014 8:48 AM

Flows are not just the connectors between these important locations. Rather, the locations become important because – at least in part – they’re at the intersections.

Scooped by Pierre Levy

Sorting Algorithms Are Mesmerising When Visualised


If you’re u...
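
The article's animations aren't reproduced here, but a minimal Python sketch conveys the idea: print the array as bars after every swap of a bubble sort and you can watch the order emerge. This toy text rendering is an illustrative stand-in, not the article's visualiser:

```python
# Minimal text 'visualisation' of bubble sort: print the array as bars of '#'
# after each swap, so the sorting process can be watched step by step.
import random

def bubble_sort_visualised(values):
    values = list(values)
    for i in range(len(values)):
        for j in range(len(values) - 1 - i):
            if values[j] > values[j + 1]:
                values[j], values[j + 1] = values[j + 1], values[j]
                # Render the current state: one bar per value.
                print("\n".join("#" * v for v in values), end="\n\n")
    return values

bubble_sort_visualised(random.sample(range(1, 9), 8))
```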
Rescooped by Pierre Levy from e-Xploration

Visualizing The Explosive Growth Of The #Robotics Industry | #SNA #dataviz

Meet the overlords of your new overlords.

Via luiy
luiy's curator insight, November 3, 2014 8:54 AM

Over the past several years, a new wave of robot technology has offered us a glimpse at our heavily automated future, featuring drones, telepresence bots, therapeutic robots, and surgery bots that assist (or maybe take over for) doctors. The robotics industry has actually been around for a long time, with numerous companies working on warehouse automation, industrial robotics, and the like since the middle of the last century.

 

But as this interactive visualization from the Boston Consulting Group (BCG) and Quid reveals, the industry is expanding at near-warp speed.

 

Scooped by Pierre Levy

Supercomputers Capture Turbulence in the Solar Wind

With help from Berkeley Lab's visualization experts and NERSC supercomputers, astrophysicists can now study turbulence in unprecedented detail, and the results may hold clues about some of the processes that lead to destructive space weather events.