Public Datasets - Open Data -

 Rescooped by luiy from Big Data, IoT and other stuffs onto Public Datasets - Open Data -

# A Programmer's Guide to #DataMining | #OpenBook #DataScience

luiy's insight:

This book’s contents are freely available as PDF files. When you click on a chapter title below, you will be taken to a webpage for that chapter. The page contains links for a PDF of that chapter and for any sample Python code and data that chapter requires. Please let me know if you see an error in the book, if some part of the book is confusing, or if you have some other comment. I will use these to revise the chapters.

Chapter 1: Introduction

Finding out what data mining is and what problems it solves. What will you be able to do when you finish this book?

Chapter 2: Get Started with Recommendation Systems

Introduction to collaborative filtering. Basic distance measures including Manhattan distance, Euclidean distance, and Minkowski distance. The Pearson Correlation Coefficient. Implementing a basic algorithm in Python.
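As a rough illustration of the measures named above (a sketch with invented ratings, not the book's own code):

```python
from math import sqrt

def minkowski(a, b, p):
    """Minkowski distance: p=1 gives Manhattan, p=2 gives Euclidean."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

def pearson(a, b):
    """Pearson correlation coefficient via the computational formula."""
    n = len(a)
    sum_a, sum_b = sum(a), sum(b)
    num = sum(x * y for x, y in zip(a, b)) - sum_a * sum_b / n
    den = sqrt(sum(x * x for x in a) - sum_a ** 2 / n) * \
          sqrt(sum(y * y for y in b) - sum_b ** 2 / n)
    return num / den if den else 0

# Two users' ratings of the same four items (invented numbers)
alice = [4.0, 3.0, 5.0, 4.5]
bob = [4.5, 2.0, 5.0, 4.0]
print(minkowski(alice, bob, 1))  # Manhattan: 2.0
print(minkowski(alice, bob, 2))  # Euclidean: ~1.2247
print(pearson(alice, bob))       # ~0.909
```

A Pearson score near 1 suggests the two users rate in the same direction, which is why it is preferred over raw distances when users rate on different personal scales.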

Chapter 3: Implicit ratings and item-based filtering

A discussion of the types of user ratings we can use. Users can explicitly give ratings (thumbs up, thumbs down, 5 stars, or whatever), or they can rate products implicitly: if they buy an mp3 from Amazon, we can view that purchase as a 'like' rating.

Chapter 4: Classification

In previous chapters we used people's ratings of products to make recommendations. Now we turn to using attributes of the products themselves to make recommendations. This approach is used by Pandora, among others.

Chapter 5: Further Explorations in Classification

A discussion on how to evaluate classifiers including 10-fold cross-validation, leave-one-out, and the Kappa statistic. The k Nearest Neighbor algorithm is also introduced.
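The k Nearest Neighbor idea can be sketched in a few lines (invented 2-D points, not the book's implementation):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote of its k nearest neighbours
    (Euclidean distance) among (features, label) training pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Invented 2-D training points in two classes
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.8), "B")]
print(knn_classify(train, (1.1, 0.9)))  # 'A'
```

Leave-one-out evaluation, mentioned above, would call this classifier once per training point, holding that point out as the query each time.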

Chapter 6: Naïve Bayes

An exploration of Naïve Bayes classification methods. Dealing with numerical data using probability density functions.
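The probability-density-function approach to numeric attributes can be sketched as follows (the heart-rate numbers are invented, and this is not the book's code):

```python
import math

def gaussian_pdf(x, mean, stdev):
    """Probability density of x under a normal distribution; this is how
    naive Bayes typically scores a numeric attribute given a class."""
    exponent = math.exp(-((x - mean) ** 2) / (2 * stdev ** 2))
    return exponent / (math.sqrt(2 * math.pi) * stdev)

# e.g. how likely is a heart rate of 72 for a class whose training
# examples have mean 70 and standard deviation 6?
print(gaussian_pdf(72, 70, 6))
```

The mean and standard deviation come from the training examples of each class, and the resulting densities are multiplied into the usual naive Bayes product.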

Chapter 7: Naïve Bayes and unstructured text

This chapter explores how we can use Naïve Bayes to classify unstructured text. Can we classify twitter posts about a movie as to whether the post was a positive review or a negative one? (new version coming November 2013)
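A toy sketch of the idea (invented mini "reviews" and Laplace smoothing; not the book's implementation):

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (text, label). Returns class counts, per-class word
    counts, and the vocabulary."""
    labels = Counter(lab for _, lab in docs)
    words = {lab: Counter() for lab in labels}
    vocab = set()
    for text, lab in docs:
        toks = text.lower().split()
        words[lab].update(toks)
        vocab.update(toks)
    return labels, words, vocab

def classify_nb(text, labels, words, vocab):
    """Pick the class maximizing log prior + sum of smoothed log likelihoods."""
    total = sum(labels.values())
    best, best_score = None, float("-inf")
    for lab in labels:
        score = math.log(labels[lab] / total)
        denom = sum(words[lab].values()) + len(vocab)
        for tok in text.lower().split():
            score += math.log((words[lab][tok] + 1) / denom)  # Laplace smoothing
        if score > best_score:
            best, best_score = lab, score
    return best

docs = [("loved this movie great acting", "pos"),
        ("great fun loved it", "pos"),
        ("terrible plot awful acting", "neg"),
        ("awful boring terrible", "neg")]
labels, words, vocab = train_nb(docs)
print(classify_nb("loved the great acting", labels, words, vocab))  # 'pos'
```

Working in log space avoids the numeric underflow that multiplying many small word probabilities would cause on real tweet corpora.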

Intriguing Networks' curator insight:

Cheers, thanks for this; handy for all budding DH students.


## The #datascience ecosystem, part 3: Data applications | #openSource #tools

The third part in a series on the data science ecosystem looks at the applications that turn data into insights or models.
luiy's insight:
Open-source tools

Probably because this category has the most ongoing research, there is quite a rich collection of open-source modeling and insights tools. R is an essential tool for most data scientists and works both as a programming language and an interactive environment for exploring data. Octave is a free, open-source alternative to MATLAB that works very well. Julia is becoming increasingly popular for technical computing. Stanford has an NLP library with tools for most standard language-processing tasks. scikit-learn, a machine learning package for Python, is becoming very powerful and has implementations of most standard modeling and machine learning algorithms.

In the end, data application tools are what make data scientists incredibly valuable to any organization. They're the exact thing that allows a data scientist to make powerful suggestions, uncover hidden trends and provide tangible value. But these tools simply don't work unless you have good data and unless you enrich, blend and clean that data.

## Collecting #Twitter Data: Storing Tweets in #MongoDB | #bigdata #NoSQL

luiy's insight:

In the first three sections of the Twitter data collection tutorial, I demonstrated how to collect tweets using both R and Python and how to store these tweets first as JSON files and then parse them into a .csv file with R. The .csv file works well, but tweets don't always make good flat .csv files, since not every tweet contains the same fields or the same structure. Some of the data is deeply nested in the JSON object. It is possible to write a parser that has a field for each possible subfield, but this might take a while to write and will create a rather large .csv file or SQL database.
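One common workaround, sketched here as a hypothetical helper rather than the tutorial's actual parser, is to flatten the nested JSON into dotted column names before writing the .csv:

```python
def flatten(obj, prefix=""):
    """Recursively flatten a nested JSON object into dotted column names,
    so e.g. tweet["user"]["screen_name"] becomes column "user.screen_name"."""
    flat = {}
    for key, value in obj.items():
        name = prefix + key
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat

# A drastically simplified tweet (real tweets have many more fields)
tweet = {"id": 1, "text": "hello",
         "user": {"screen_name": "alice", "location": "NY"}}
row = flatten(tweet)
print(sorted(row))  # ['id', 'text', 'user.location', 'user.screen_name']
```

The flat dictionaries can then be fed to `csv.DictWriter`, with the union of all keys seen across tweets as the header row.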


## @DataFrance: A visualization platform for open data | #opendata #dataviz

Education, the economy, public transport, and more: easily visualize more than 50 datasets, anywhere in France, on our interactive map.

We want to make open data easier to access

Our ambition is to make open data accessible to as many people as possible through a simple browsing interface. DataFrance is part of the open data movement and aims to promote a wider release of data in the interest of the general public.

luiy's insight:

A selection of datasets:

- Vacant housing in Paris

- Access to general practitioners in Gironde

- Car thefts in France

- High schools by baccalauréat pass rate in Marseille

- Earthquake risk in France

- Museums of France in Lyon

 Rescooped by luiy from Ouverture des données

## How to publish museum data as Linked #OpenData | #DH

This article describes the process of publishing the data of the entire collection of the Smithsonian American Art Museum (SAAM), some 41,000 objects and 8,000 artists, to the web of linked open data.

Via François Le Pivain
luiy's insight:

Model, Link, Verify:

Museums around the world have built databases with metadata about millions of objects, their history, the people who created them and the entities they represent, but this data is stored in proprietary databases and is not readily usable. Recently, museums have embraced the semantic web with the intention of making this data accessible.

Notable Linked Open Data projects include the Europeana project, which published data from 1,500 museums, libraries and archives across Europe; the Amsterdam Museum, which published data on 73,000 objects; and the LODAC Museum, which published data from 114 museums in Japan.

But experience has shown that publishing museum data to the web of linked open data is difficult: the databases are large and complex, the information varies widely from one museum to another, and linking it to other datasets proves hard.


## #Commons Transition: Policy Proposals for an #Open Knowledge Society

luiy's insight:

Commons Transition: Policy Proposals for an Open Knowledge Society is our free downloadable e-book. Featuring the three newly updated Commons Transition plans by Michel Bauwens, John Restakis and George Dafermos in an easy to read format, the book is also complemented with special introductory material by Michel Bauwens and John Restakis and an exclusive interview with Commons Transition researcher Janice Figueiredo.

 Rescooped by luiy from Vulgarisation et médiation scientifiques

## The tree of evolution in the digital age | #dataviz #openscience

When Charles Darwin meets Larry Page. The first phylogenetic tree, as it appeared in 1859 in Charles Darwin's On the Origin of Species by Means of Natural Selection...

"Draw me a sheep," the Little Prince asked the aviator. Graphical representation has always been a vehicle for knowledge, but today, in a culture that is gradually trading the written word for the image, it is becoming an essential component of transmitting knowledge. A second challenge arises: how to convey the mass of available information so as to faithfully describe the complexity of the world we live in, and in particular the diversity of the biosphere? Combining these two ambitions gave birth to the OneZoom project (www.onezoom.org), a digital tool for visualizing a digital version of the tree of life. An immersion in biodiversity, and a fine invitation to curiosity. (...)  - by Guillaume Frasca, La science infuse, 16/10/2012

Source: J. Rosindell and L.J. Harmon, OneZoom: A Fractal Explorer for the Tree of Life, PLoS Biology, 16 October 2012.

Via Julien Hering, PhD
luiy's insight:

The authors introduce a new phylogenetic tree viewer that allows interactive display of large trees. The key concept of our solution is that all the data is on one page so that all the user has to do is zoom to reveal it—hence the name OneZoom (http://www.onezoom.org). Our interface is analogous to Google Earth, where one can smoothly zoom into any local landmark from a start page showing the whole globe, recognizing familiar landmarks at different scales along the way (e.g., continents, countries, regions, and towns). Equivalently, OneZoom can zoom smoothly to one tip of the tree of life—say, human beings—passing the familiar clades of animals, vertebrates, mammals, and primates at different scales along the way.


## The First Interactive Network and Graph Data #Repository with Interactive Graph Analytics and Visualization | #opendata #SNA

The First Interactive Network Data Repository with Real-time Interactive Visualization and Analytics
luiy's insight:

Network Data Repository. Exploratory Analysis & Visualization.

A network and graph data repository containing hundreds of real-world networks and benchmark datasets. This large, comprehensive collection of network graph data is useful both for research and as benchmark datasets for machine learning and network science. All data sets can be downloaded in a standard, consistent format. We have also built a multi-level interactive graph analytics engine that allows for visualizing the structure of networks as well as many global graph statistics and local node-level properties.
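Once downloaded as a plain edge list, such datasets are easy to analyze locally. A minimal sketch (invented toy graph, not the repository's own engine) of computing a local property and some global statistics:

```python
from collections import defaultdict

def degree_stats(edges):
    """From a plain edge list, compute per-node degree (a local property)
    and a few global statistics of the graph."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n, m = len(deg), len(edges)
    return dict(deg), {"nodes": n, "edges": m, "avg_degree": 2 * m / n}

# A tiny made-up triangle graph
deg, stats = degree_stats([("a", "b"), ("b", "c"), ("a", "c")])
print(deg, stats)  # every node has degree 2; avg_degree 2.0
```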


## All the Open #Datasets from New York City Visualized in a Single View | #opendata #dataviz

luiy's insight:

"Visualizing NYC's Open Data" [chriswhong.com] by self-proclaimed urbanist, map maker and data junkie Chris Wong provides a single view of the more than 1,100 open datasets made available by New York City.

The visualization of the "dataset of datasets" consists of a force-directed graph whose nodes are colored according to whether the corresponding dataset is a table, chart, map, file or user-created view (colored blue).

The graph acts as an alternative portal to explore the available data, while demonstrating its scale and diversity.


## #DataMining: Practical Machine Learning #Tools and Techniques | #Weka #datascience #openaccess

luiy's insight:

Teaching material

Slides for Chapters 1-5 of the 3rd edition can be found here.

Slides for Chapters 6-8 of the 3rd edition can be found here.

These archives contain .pdf files as well as .odp files in Open Document Format that were generated using OpenOffice 2.0. Note that there are several free office programs now that can read .odp files. There is also a plug-in for Word made by Sun for reading this format. Corresponding information is on this Wikipedia page.

 Rescooped by luiy from Politique des algorithmes

## Google has #open sourced a #tool for inferring cause from correlations | #algorithms #datascience

Google open sourced a new package for the R statistical computing software that’s designed to help users infer whether a particular action really did cause subsequent activity. Google has been using the tool, called CausalImpact, to measure AdWords campaigns but it has broader appeal.

Via Dominique Cardon
luiy's insight:

Google announced on Tuesday a new open source tool that can help data analysts decide if changes to products or policies resulted in measurable change, or if the change would have happened anyway. The tool, called CausalImpact, is a package for the R statistical computing software, and Google details it in a blog post.

According to blog post author Kay H. Brodersen, Google uses the tool — created it, in fact — primarily for quantifying the effectiveness of AdWords campaigns. However, he noted, the same method could be used to gauge everything from whether adding a new feature caused an increase in app downloads to questions involving events in medical, social or political science.


## Seven Ways to Create a Storymap | #opendata #maps #ddj

Evidence is Power
luiy's insight:

The above examples describe a wide range of geographical and geotemporal storytelling models, often based around quite simple data files containing information about individual events. Many of the tools make strong use of image files as part of the display. It may be interesting to complete a more detailed review that describes the exact data models used by each of the techniques, with a view to identifying a generic data model that could be used by each of the different models, or transformed into the distinct data representations supported by each of the separate tools.


## The #OpenData Research network | #opengov

luiy's insight:

The Open Data Research network.

Governments, civil society organisations and companies across the world are actively engaging with open data: publishing and using datasets to promote innovation, development and democratic change.

The Open Data Research network has been established to connect researchers from across the world working to explore the implementation and impact of open data initiatives. It is a joint project of IDRC and the Web Foundation, and is seeking to develop wider partnerships over the coming year.

The network currently hosts the 'Exploring the Emerging Impacts of Open Data in Developing Countries (ODDC)' programme.


## Digital Ecologies Research Partnership | #openResearch #opendata

The Digital Ecologies Research Partnership (DERP) is a joint initiative to promote academic inquiry into social dynamics of the web.
luiy's insight:

Launched in 2014, the Digital Ecologies Research Partnership (DERP) is a joint initiative by an alliance of community websites to promote open, publicly accessible, and ethical academic inquiry into the vibrant social dynamics of the web.

DERP seeks to solve two problems in the academic research space:

First, it is difficult for academic researchers to easily obtain data for their work beyond the confines of the largest social media platforms. DERP is a single point of contact for researchers to get in touch with relevant team members across a range of different community sites. We envision that this will lower the friction of investigating these sites in more depth, and broaden the scope of research happening within the academic community.

Second, it remains difficult to conduct good cross-platform analyses in academic research. By bringing a number of community sites together under a single cooperative effort, we intend to lower the friction of doing so, as well as better enable the sites themselves to coordinate with one another on supporting researchers.

DERP focuses on providing public data to academic researchers while facilitating an active online research community of Fellows. DERP will only support research that respects user privacy, responsibly uses data, and meets IRB approval. All research supported by DERP will be released openly and made publicly available. Partner platforms may also have additional guidelines and privacy commitments that apply to the research they support.


## Twitter Data Mining Round Up | #python #ddj #openaccess

luiy's insight:

Since the release of Mining the Social Web, 2E in late October of last year, I have mostly focused on creating supplemental content centered on Twitter data. This seemed like a natural starting point given that the first chapter of the book is a gentle introduction to data mining with Twitter's API, coupled with the inherent openness of accessing and analyzing Twitter data (in comparison to other data sources that are a little more restrictive). Twitter's IPO late last year also focused the spotlight a bit on Twitter, which provided some good opportunities to opine on Twitter's underlying data model, which can be interpreted as an interest graph.


## The First #Ecological Land Units #Map of the World | #opendata #dataviz

luiy's insight:

This map as well as the data layers used to create it can be explored in a new story map that introduces ecological land units. The data is available in the form of services and can enrich any GIS effort.

The collaborative partnership between Esri and USGS resulted in a dynamic online map representing the world's ecological diversity at unprecedented detail and authority. This work leveraged quantitative methods, geographic science, and big data produced by government agencies and the scientific community. To create this map, the data were processed in Esri's ArcGIS cloud computing environment. This map provides new knowledge and understanding of geographic patterns and relationships by distinguishing the geography of the planet's ecosystems.


## Is Cape Town's new #opendata portal any good? | #dataviz

Code4SA's Adi Eyal takes a look at Africa's first open data city.
luiy's insight:

... the City chose to build their portal in-house rather than using one of the existing platforms. The UK open data portal, for instance, uses CKAN, an open source platform. A commercial offering called Socrata is also available if you feel you need the security of a closed source solution. The benefit of using an existing platform is that you benefit from the wisdom of those who came before you. It is also often much cheaper than doing it on your own. Our own modest data portal uses Socrata. As a developer, the killer feature for me is that every dataset comes packaged with a pre-built API and documentation. It also allows non-technical users to create graphs and maps without having to download the data and open up the spreadsheets. The City's website, in contrast, is not much more than a collection of links.


## War crimes and data decryption: new Snowden revelations | #NSA #opendata

New revelations from Edward Snowden shed light on the actions of NATO countries in Afghanistan and on data decryption by the American NSA.
luiy's insight:

(From Hamburg) It was a much-anticipated moment of 31c3, the 31st Chaos Communication Congress currently under way in Hamburg: on Sunday evening, in front of 3,500 people, journalist Laura Poitras and hacker Jacob Appelbaum revealed Edward Snowden documents previously unknown to the public.

At the same time, several articles containing several dozen of these documents were published (in English and in German) on the website of the German weekly Der Spiegel.

The revelations touch on a wide range of areas, from the war in Afghanistan to the NSA's ability to decrypt data circulating on the web.

http://www.spiegel.de/media/media-35508.pdf


## TubeKit: A Youtube #Crawling Toolkit | #datascience #tools #bigdata


luiy's insight:

TubeKit is a toolkit for creating YouTube crawlers. It allows one to build one's own crawler that can crawl YouTube based on a set of seed queries and collect up to 16 different attributes.

TubeKit assists in all the phases of this process, from database creation through to giving access to the collected data with browsing and searching interfaces. In addition to creating crawlers, TubeKit also provides several tools to collect a variety of data from YouTube, including video details and user profiles.


## Project #BigData. Expanding on Project C to look at a different use case | #datascience #opendata

luiy's insight:

Project Big Data is an interactive tool which enables you to visualize and explore the funding patterns of over 600 companies in the Big Data ecosystem! It is based on the work I did for Project C (which you can see and read about here). The list of companies and their classification into categories is based on a dozen published sources and rough text analytics of the Crunchbase database. Crunchbase is a curated, crowd-sourced database of over 285k companies.

As for the data, there are 645 public & private companies in the data set, from Teradata and IBM to Actuate & Zoomdata. I began by harvesting data from Crunchbase using their free API with Python. As of September, Crunchbase had 1250 funding events for 410 of the companies on my list. I've grouped these companies into 18 categories, allowing you to compare peers as well as trends across categories. Some of the categories are broken down further. For example, the tool allows you to differentiate between cloud-based and on-premise solutions or SQL vs. NoSQL databases. I gathered additional data from a variety of sources. For example, LinkedIn was used to find the number of employees.

OPENACCESS Workbook: Project Big Data v1.0

 Rescooped by luiy from Data is big

## Mining of Massive Datasets | #datascience #freebook

Via ukituki
luiy's insight:

Preface and Table of Content

Chapter 1. Data Mining

Chapter 2. Map-Reduce and the New Software Stack

Chapter 3. Finding Similar Items

Chapter 4. Mining Data Streams

Chapter 6. Frequent Itemsets

Chapter 7. Clustering

Chapter 8. Advertising on the Web

Chapter 9. Recommendation Systems

Chapter 10. Mining Social-Network Graphs

Chapter 11. Dimensionality Reduction

Chapter 12. Large-Scale Machine Learning

http://infolab.stanford.edu/~ullman/mmds/book.pdf

ukituki's curator insight:

The book is based on Stanford Computer Science course CS246: Mining Massive Datasets (and CS345A: Data Mining).

 Rescooped by luiy from Geo-visualization

## Visualizing Publicly Available US Government Data Online | #dataviz #opengov

luiy's insight:

Brightpoint Consulting recently released a small collection of interactive visualizations based on open, publicly available data from the US government. Characterized by a rather organic graphic design style and color palette, each visualization makes a socially and politically relevant dataset easily accessible.


## Europeana Labs: 30 million metadata records linking to millions of openly licensed media objects | #opendata #datasets

luiy's insight:

Data

Our database contains over 30 million metadata records linking to millions of openly licensed media objects - books, photos, art, artefacts, audio clips and more. We'll be featuring some of our very best content here.

Europeana Labs combines rights-cleared images, videos, audio and text files with technical expertise, tools, services and business knowledge.


## Data Repositories - Mother's Milk for Data Scientists | #datasets #opendata

Mothers are life givers, giving the milk of life. Few analogies are so apropos: data is often considered the mother's milk of corporate valuation. So, as a data scientist, we sh...
luiy's insight:

Here are a few repositories from KDnuggets that are worth taking a look at:


## #OpenData Barometer Data | #opengov #dataviz

luiy's insight:

The Open Data Barometer takes a multidimensional look at the current adoption level of open data policy and practice around the world. Three main categories are considered as part of the barometer:

- Readiness - identifies how far a country has put in place the political, social and economic foundations for realising the potential benefits of open data. The Barometer covers the readiness of government, entrepreneurs and business, and citizens and civil society.

- Implementation - identifies the extent to which government has published a range of key datasets to support innovation, accountability and improved social policy. The Barometer covers 14 datasets split across three clusters to capture datasets commonly used for: securing government accountability; improving social policy; and enabling innovation and economic activity.

- Emerging impacts - identifies the extent to which open data has been seen to lead to positive political, social and environment, and economic change. The Barometer looks for political impacts – including transparency & accountability, and improved government efficiency and effectiveness; economic impacts – through supporting start-up entrepreneurs and existing businesses; and social impacts – including environmental impacts, and contributing to greater inclusion for marginalised groups in society.

These factors have been combined into a radar chart representing each country's barometer.


## Quantifying Memory: Mapping the #GDELT data in #R (and some Russian protests, too) | #opendata #datascience

luiy's insight:

The Guardian recently published an article linking to a database of 250 million events. Sounds too good to be true, but as I'm writing a PhD on recent Russian memory events, I was excited to try it out. I downloaded the data, generously made available by Kalev Leetaru of the University of Illinois, and got going. It's a large 650 MB zip file (4.6 GB uncompressed!), and this is apparently the abbreviated version. Consequently this early stage of the analysis was dominated by eager anticipation, as the Cambridge University internet did its thing.

Meanwhile I headed over to David Masad's writeup on getting started with GDELT in Python.
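As a minimal sketch of that first step, the tab-separated GDELT rows can be streamed and tallied without loading the whole 4.6 GB into memory; the columns below are an invented simplification of the real 57-plus-column format:

```python
import csv
import io

# Invented mini-sample: (event id, date, country code, event type)
sample = ("1\t20120101\tRUS\tPROTEST\n"
          "2\t20120102\tRUS\tPROTEST\n"
          "3\t20120103\tUSA\tELECTION\n")

def count_by_country(lines):
    """Stream tab-separated rows and tally events per country code."""
    counts = {}
    for row in csv.reader(lines, delimiter="\t"):
        counts[row[2]] = counts.get(row[2], 0) + 1
    return counts

# In practice you would pass an open file object over the unzipped dump
# instead of this in-memory sample.
print(count_by_country(io.StringIO(sample)))  # {'RUS': 2, 'USA': 1}
```

Streaming row by row keeps memory flat no matter how large the dump is, which matters at this scale.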

No comment yet.