Public Datasets - Open Data
Scooped by luiy

The University of Liège's Open Access policy is having babies - MyScienceNews

On 22 April, the Universities of Luxembourg and Liège announced the birth of ORBilu, the new archiving server for the publications of the University of Luxembourg.
Scooped by luiy

'#Open' Season in High Tech

Proof of cloud computing’s impact on the tech industry: Some big incumbent firms, known for expensive in-house products and zealous guarding of their intellectual property, are claiming the open source religion.
luiy's insight:

Things must be getting really competitive in high technology. So many big competitive companies are striving to show how good they are at sharing.

In particular, they share the word “open.” In the world of cloud computing, we now have the OpenFlow switching specification; the OpenStack cloud platform; Open Compute, dedicated to efficient computing infrastructure; the Open Virtualization Alliance, to create software that makes one computer server do the work of many; and OpenDaylight, to generate similar magic on computer networks.

 

For good measure, there is also the Open Grid Forum, which works on large-scale corporate and scientific computing, and the Open Cloud Consortium, which backs cloud computing for scientific and medical research.

 

That pattern you may have noticed in the names is no accident. By calling your group “open” in tech, you intimate that you believe in the free sharing of information to create superior products. It evokes the open source spirit that made the Linux operating system, as well as less-known Internet standards like the Apache Web server, so successful.

Rescooped by luiy from Didactics and Technology in Education

Online Venn Diagram maker


Online Venn diagram maker. Add circles and label them, then create items to place in and around them. Saves and downloads as a PDF file.


Via gideonwilliams, Rui Guimarães Lima
Robert Sims's curator insight, May 1, 2013 9:24 PM

Visual aid to help learning!

galit ben ishay's curator insight, May 14, 2013 2:40 AM

Additional tools

http://www.scoop.it/t/learning-2-0-tools

Rescooped by luiy from Politique des algorithmes

Open data and personal data: myths and shifting reality - Lagazette.fr

Open data has recently been accused of posing a danger to personal data. In practice, however, the legal framework is well established. A misguided trial that obscures real ...

Via Dominique Cardon
luiy's insight:

In January 2013, Senator Gaëtan Gorce (PS), a member of the CNIL, drew attention by calling for open data to be halted, raising the alarm about the “considerable threats it already poses to privacy,” even “the prospect of generalized profiling for private ends,” through “cross-referencing of raw data [...] or even with the data (Big Data) that companies already hold.”

In a written question published in the Journal Officiel, he went further, calling for “the establishment of rules protecting individuals.” In the wake of this, the CNIL announced a consultation on the subject.

A “non-issue” - This spectacular outburst could well sow doubt among local authorities looking to get started, all the more so since open data will become an obligation for municipalities of more than 3,500 inhabitants, as provided for in the third bill of the decentralization reform. Yet the rules the senator is demanding already exist. “For us, it's a non-issue,” says Tangui Morlier of Regards citoyens. “To our knowledge, there is no precedent of profiling in the history of open data,” adds Benjamin Gans of Data Publica, a company that builds datasets for its clients.

Open data reconciles access to public information, governed by the law of 17 July 1978, which created a right to reuse public-sector information, reaffirmed by the 2003 PSI Directive, transposed into French law in 2005 and whose revision was approved by the Council of the EU on 10 April 2013. The Loi Informatique et Libertés of 6 January 1978 ensures, for its part, that privacy is properly protected. De jure, personal data are excluded from open data.

The CNIL points out that “information contained in documents produced or received by public bodies may be used by any person who so wishes for purposes other than the public-service mission for which the documents were produced or received,” unless their “communication would infringe the protection of privacy, medical confidentiality, or commercial and industrial secrecy.”

Reuse is possible in three cases:
- the person concerned has consented to it,
- the data have been anonymized,
- a legislative or regulatory provision permits it.
Rescooped by luiy from Social Foraging

Bioengineers Build Open Source Language for Programming Cells


Drew Endy wants to build a programming language for the body.

 

Endy is the co-director of the International Open Facility Advancing Biotechnology — BIOFAB, for short — where he’s part of a team that’s developing a language that will use genetic data to actually program biological cells. That may seem like the stuff of science fiction, but the project is already underway, and the team intends to open source the language, so that other scientists can use it and modify it and perfect it.

 

The effort is part of a sweeping movement to grab hold of our genetic data and directly improve the way our bodies behave — a process known as bioengineering. With the Supreme Court exploring whether genes can be patented, the bioengineering world is at a crossroads, but scientists like Endy continue to push this technology forward.

 

Genes contain information that defines the way our cells function, and some parts of the genome express themselves in much the same way across different types of cells and organisms. This would allow Endy and his team to build a language scientists could use to carefully engineer gene expression – what they call “the layer between the genome and all the dynamic processes of life.”


Via Ashish Umre
luiy's insight:

Nonetheless, this is what Endy is shooting for — right down to Sun’s embrace of open source software. The BIOFAB language will be freely available to anyone, and it will be a collaborative project.

 

Progress is slow — but things are picking up. At this point, the team can get cells to express up to ten genes at a time with “very high reliability.” A year ago, it took them more than 700 attempts to coax the cells to make just one. With the right programming language, he says, this should expand to about a hundred or more by the end of the decade. The goal is to make that language insensitive to the output genes so that cells will express whatever genes a user wants, much like the print function in a program works regardless of what set of characters you feed it.

What does he say to those who fear the creation of Frankencells — biological nightmares that will wreak havoc on our world? “It could go wrong. It could hurt people. It could be done irresponsibly. Assholes could misuse it. Any number of things are possible. But note that we’re not operating in a vacuum,” he says. “There’s history of good applications being developed and regulations being practical and being updated as the technology advances. We need to be vigilant as things continue to change. It’s the boring reality of progress.”

 

He believes this work is not only essential, but closer to reality than the world realizes. “Our entire civilization depends on biology. We need to figure out how to partner better with nature to make the things we need without destroying the environment,” Endy says. “It’s a little bit of a surprise to me that folks haven’t come off the sidelines from other communities and helped more directly and started building out this common language for programming life. It kind of matters.”

Harshal Hayatnagarkar's curator insight, April 22, 2013 3:43 AM

Ok that's how artificial selection will be driven. Cool !

Paulo Fazendeiro's curator insight, April 22, 2013 4:53 AM

Could this be the inception of a new CI paradigm?

Rescooped by luiy from Open Government Daily

Philippine Agriculture Department boosts transparency with Open Data portal


Via Ivan Begtin
luiy's insight:

In support of President Aquino's call for transparency, the Department of Agriculture launched its open data portal, DAAN (Department of Agriculture Accountability Network), which aims to promote public awareness of its community-focused projects and activities nationwide.


The portal provides a library of the agency’s on-going and completed projects nationwide. It also details their fund allocations and cumulative disbursements, completion period, percentage of accomplishment and other relevant data, including regularly updated photos, which were provided by the attached agencies, corporations and regional field units of the Department of Agriculture.

Scooped by luiy

Facebook heads down efficiency gauntlet with real-time data and open-source ... - GigaOM

Scooped by luiy

Data For All! How New Tools Democratize Visualization

The real value of new data viz tools is that they help even non-data scientists gain comfort with and find insight in big data.
luiy's insight:

The first value proposition is the obvious one, which is enabling users to create better visuals that bring their data and their analysis to life. This is the value proposition that most people focus upon and that gets the most attention. It is also the primary reason organizations invest in visualization tools.

 

The second value proposition, which is often either overlooked or vastly under-credited, is that visualization tools democratize big data by giving users wide flexibility to analyze data within a self-service business intelligence environment. Visualization tools allow users to explore, summarize, and visualize data in the way they see fit as opposed to the way someone else saw fit to allow them. By having the flexibility to join different data sources as desired, view patterns on the fly, and iterate, users can discover important insights and trends more easily and more rapidly.
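As a minimal sketch of what that self-service flexibility can look like in practice (the file names and column names below are hypothetical, and this is not tied to any particular visualization product), an analyst might join two sources and chart the result like this:

```python
# Minimal self-service analysis sketch: join two data sources and chart the result.
# Assumes hypothetical files sales.csv (region_id, amount) and regions.csv (region_id, name).
import pandas as pd
import matplotlib.pyplot as plt

sales = pd.read_csv("sales.csv")      # transactional data
regions = pd.read_csv("regions.csv")  # reference data from another source

# Join the sources as the analyst sees fit, rather than along a predefined path.
merged = sales.merge(regions, on="region_id", how="left")

# Summarize and visualize on the fly.
totals = merged.groupby("name")["amount"].sum().sort_values(ascending=False)
totals.plot(kind="bar", title="Sales by region")
plt.tight_layout()
plt.show()
```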

 

Users may be able to access massive data sources in traditional environments, but they can only do so via predefined paths. On the other hand, common desktop tools such as PowerPoint or Excel that enable charting and graphing either require data extracts, which must be small, or more complex configurations than many are comfortable with. They are too complex and the visuals they generate aren't very robust or interactive.

Scooped by luiy

Big Data: Tools and Access

This is a group post from a session held at the Big Data: Rewards and Risks for the Social Sciences conference in March (http://www.oii.ox.ac.uk/events/?id=557). Participants in the group were Chri...
luiy's insight:

Open access and open source are two separate principles applicable to many aspects of social science big data. The open source and open access needs of social scientific big data research can be usefully framed within a data lifecycle model, in this case Borgman (1996). Ideally, all stages of this research, ranging from initial data collection, to final data access and distribution, and associated tools and services, should take place in open access environments and configurations (however defined). This involves at least two distinct approaches:

 

- data sets and analytical tools available to the researchers for carrying out big data research
- tools/platforms for distributing big data results and data sets.

At the moment, any open access/source environment for big data research exists only in partial form.

Rescooped by luiy from Ma santé et le digital francophone

“The cancer data portal is fully in line with open data”

Op-ed by Dr Philippe-Jean Bousquet, Head of the Observation, Monitoring and Documentation Department of the French National Cancer Institute (INCa)

Via Emmanuel Capitaine
Rescooped by luiy from Big Data Analysis in the Clouds

European governments agree to open up public data

Good news for startups hoping to draw on public road traffic and weather data, among other types: changes agreed on Wednesday should allow the use of such data for free or at very low cost.

Via Pierre Levy
luiy's insight:

Member states of the European Union have endorsed new rules for opening up publicly-funded data to developers, businesses and citizens.

The 27 countries agreed on the rule change on Wednesday, according to the European Commission, which is behind the proposed revision of a 2003 directive on public sector information. If the European Parliament adds its stamp of approval, national governments will then transpose the changes into their laws sometime in the next 18 months or so.

 
Rescooped by luiy from Données Ouvertes

A Short Open Data Glossary | Data Publica

Data Publica, data for your business

Via StéphanieLucien-Brun
luiy's insight:

To help you see more clearly through the terminology around the open data movement, Data Publica has prepared this short glossary. For any suggestion, comment, or request for an addition, feel free to write to us at contact@data-publica.com

StéphanieLucien-Brun's curator insight, April 10, 2013 10:00 AM

To help you see more clearly through the terminology around the open data movement, Data Publica has prepared this short glossary

Hugo Vanmalle's curator insight, April 11, 2013 4:41 AM

It can always come in handy :)

Scooped by luiy

Wikileaks US Embassy Cables on Datavisualization.ch

Wikileaks began on Sunday November 28th publishing 251,287 leaked United States embassy cables, the largest set of confidential documents ever to be released into the public domain.
luiy's insight:
Wikileaks began on Sunday November 28th publishing 251,287 leaked United States embassy cables, the largest set of confidential documents ever to be released into the public domain. Here’s how media outlets strive to make the data more accessible than its original form.

 

While the data will be released in stages over the next few months to the general public, five publications around the world have had prior access to the material. New York Times, The Guardian, Le Monde, El País and Der Spiegel were given access on condition that they observed common deadlines over the timings of release.

 

Wikileaks have created a set of interactive visualizations to give an overview of the amount, origin, subject, categorization, program, topic and classification of the leaked documents. The visualizations are created using Tableau Public, which seems to have gained good adoption in the online journalism space lately.

Rescooped by luiy from datavisualization

The Open Data Movement

The evolution and role of API in building influential and essential tools and applications for web users, plus the demographics of public data usage a...

Via audrey L
Rescooped by luiy from Global Brain

New algorithm helps evaluate, rank scientific literature | KurzweilAI

CTD text mining technical overview (credit: Allan Peter Davis et al./PLoS ONE). Keeping up with current scientific literature is a daunting task, considering ...

Via Spaceweaver
luiy's insight:

Keeping up with current scientific literature is a daunting task, considering that hundreds to thousands of papers are published each day. Now researchers from North Carolina State University have developed a computer program to help them evaluate and rank scientific articles in their field.

 

The researchers use a text-mining algorithm to prioritize research papers to read and include in their Comparative Toxicogenomics Database (CTD), a public database that manually curates and codes data from the scientific literature describing how environmental chemicals interact with genes to affect human health.

 

“Over 33,000 scientific papers have been published on heavy metal toxicity alone, going as far back as 1926,” explains Dr. Allan Peter Davis, a biocuration project manager for CTD at NC State who worked on the project and co-lead author of an article on the work. “We simply can’t read and code them all. And, with the help of this new algorithm, we don’t have to.”

 

To help select the most relevant papers for inclusion in the CTD, Thomas Wiegers, a research bioinformatician at NC State and the other co-lead author of the report, developed a sophisticated algorithm as part of a text-mining process. The application evaluates the text from thousands of papers and assigns a relevancy score to each document. “The score ranks the set of articles to help separate the wheat from the chaff, so to speak,” Wiegers says.
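The article does not describe the algorithm's internals. Purely as a rough sketch of the general idea (the keyword weights and paper snippets below are invented, and this is not the CTD method), a simple relevancy scorer that ranks documents might look like this:

```python
# Illustrative relevancy scoring: rank documents by weighted keyword hits.
# NOT the CTD algorithm, just a toy sketch of assigning and sorting by a score.
import re

KEYWORD_WEIGHTS = {"cadmium": 3.0, "toxicity": 2.0, "gene expression": 2.5, "exposure": 1.0}

def relevancy_score(text: str) -> float:
    text = text.lower()
    score = 0.0
    for term, weight in KEYWORD_WEIGHTS.items():
        score += weight * len(re.findall(re.escape(term), text))
    return score

def rank_documents(docs: dict[str, str]) -> list[tuple[str, float]]:
    """Return (doc_id, score) pairs, highest score first."""
    scored = [(doc_id, relevancy_score(text)) for doc_id, text in docs.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

papers = {
    "PMID:1": "Cadmium exposure alters gene expression in liver cells ...",
    "PMID:2": "A survey of museum archive practices ...",
}
print(rank_documents(papers))  # PMID:1 outranks PMID:2
```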

 

But how good is the algorithm at determining the best papers? To test that, the researchers text-mined 15,000 articles and sent a representative sample to their team of biocurators to manually read and evaluate on their own, blind to the computer’s score. “The results were impressive,” Davis says. The biocurators concurred with the algorithm 85 percent of the time with respect to the highest-scored papers.

Using the algorithm to rank papers allowed biocurators to focus on the most relevant papers, increasing productivity by 27 percent and novel data content by 100 percent. “It’s a tremendous time-saving step,” Davis explains. “With this we can allocate our resources much more effectively by having the team focus on the most informative papers.”

Rescooped by luiy from Big Data Technology, Semantics and Analytics

Visa Says Big Data Identifies Billions of Dollars in Fraud

Visa’s chief enterprise risk officer, Ellen Richey, says “you see the criminal capability evolving on the technology side.” She gives CIO Journal an inside look at how the company has used Big Data to make its network more secure...

Via Tony Agresta
luiy's insight:

“From the strategic point of view, we are achieving an amazing improvement, year over year, in our ability to detect fraud,” says Richey. “It’s not just our ability to analyze our transactions, but our ability to add new kinds of data, such as geo-location, to that analysis. With every new type of data, we increase the accuracy of our models. And from a strategic point of view we can think about taking an additional step change of fraud out of our system.”

In the future, Big Data will play a bigger role in authenticating users, reducing the need for the system to ask users for multiple proofs of their identity, according to Richey, and 90% or more of transactions will be processed without asking customers those extra questions, because algorithms that analyze their behavior and the context of the transaction will dispel doubts. “Data and authentication will come together,” Richey said.

The data-driven improvement in security accomplishes two strategic goals at once, according to Richey. It improves security itself, and it increases trust in the brand, which is critical for the growth and well-being of the business, because consumers won’t put up with a lot of credit-card fraud. “To my mind, that is the importance of the security improvements we are seeing,” she said. “Our investments in data and analysis are baseline to our ability to thrive and grow as a company.”

Tony Agresta's curator insight, April 25, 2013 2:21 PM



The approach Visa takes in identifying fraud is grounded in 16 different predictive models and allows new independent variables to be added to the models.  This improves accuracy while allowing the models to be kept up to date.  Here's an excerpt from the WSJ article:

 

"The new analytic engine can study as many as 500 aspects of a transaction at once. That’s a sharp improvement from 2005, when the company’s previous analytic engine could study only 40 aspects at once. And instead of using just one analytic model, as it did in 2005, Visa now operates 16 models, covering different segments of its market, such as geographic regions."

 

The article also states that the analytics engine has the card number but not the personal information about the transaction, which is likely stored in a different system. I wonder whether Visa, at some point in the process, also takes the fraud transactions and analyzes them visually to identify connections and linkages based on address, other geographic identifiers, third-party data, employer data and more. Are two or more of the fraud cases in some way connected? Do they represent a ring of activity presenting higher risk to merchants, customers and Visa?

 

The tools on the market to do this work are expanding. The data used to analyze this activity (including unstructured data) is being stored in databases that allow for the visual analysis of big data. Graph databases, replete with underlying intelligence extracted from text that identifies people, places and events, can be used to extend the type of analysis Visa is doing and to prioritize investigations. Through more efficient allocation of investigation resources, fraud prevention can jump to a higher level.
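As a small, hypothetical sketch of that link-analysis idea (the cases and identifiers below are invented, and this is not tied to any particular graph database product), one could connect cases that share identifiers and look for connected clusters:

```python
# Hypothetical link-analysis sketch: connect fraud cases that share identifiers
# (address, phone) and treat multi-case connected components as candidate "rings".
import networkx as nx

cases = {
    "case-1": {"address": "12 Elm St", "phone": "555-0100"},
    "case-2": {"address": "12 Elm St", "phone": "555-0199"},
    "case-3": {"address": "98 Oak Ave", "phone": "555-0100"},
    "case-4": {"address": "77 Pine Rd", "phone": "555-0444"},
}

g = nx.Graph()
g.add_nodes_from(cases)
ids = list(cases)
for i, a in enumerate(ids):
    for b in ids[i + 1:]:
        shared = [k for k in cases[a] if cases[a][k] == cases[b][k]]
        if shared:
            g.add_edge(a, b, shared=shared)

# Each connected component with more than one case is worth an investigator's look.
for ring in nx.connected_components(g):
    if len(ring) > 1:
        print(sorted(ring))  # ['case-1', 'case-2', 'case-3'] form one cluster
```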


Scooped by luiy

Open datasets in visualizing.org

Download data sets provided by leading NGOs and government agencies.
luiy's insight:

Open datasets in visualizing.org

Rescooped by luiy from The World of Open

Bioengineers Build Open Source Language for Programming Cells

Endy is the co-director of the International Open Facility Advancing Biotechnology — BIOFAB, for short — where he’s part of a team that’s developing a language that will use genetic data to actually program biological cells.

Via cafonso
luiy's insight:

The BIOFAB project is still in the early stages. Endy and the team are creating the most basic of building blocks — the “grammar” for the language. Their latest achievement, recently reported in the journal Science, has been to create a way of controlling and amplifying the signals sent from the genome to the cell. Endy compares this process to an old fashioned telegraph.

 

“If you want to send a telegraph from San Francisco to Los Angeles, the signals would get degraded along the wire,” he says. “At some point, you have to have a relay system that would detect the signals before they completely went to noise and then amplify them back up to keep sending them along their way.”

 

And, yes, the idea is to build a system that works across different types of cells. In the 90s, the computing world sought to create a common programming platform for building applications across disparate systems — a platform called the Java virtual machine. Endy hopes to duplicate the Java VM in the biological world.

 

“Java software can run on many different hardware operating system platforms. The portability comes from the Java virtual machine, which creates a common operating environment across a diversity of platforms such that the Java code is running in a consistent local environment,” he says.

 

“In synthetic biology, the equivalent of a Java virtual machine might be that you could create your own compartment in any type of cell, [so] your engineered DNA wouldn’t run willy-nilly. It would run in a compartment that provided a common sandbox for operating your DNA code.”

Rescooped by luiy from Embodied Zeitgeist

Your data is your interface


We all view the world differently and on our own terms. Each of us uses different words to describe the same book, movie, favorite food, person, work of art, or news article. We express our uniqueness by reviewing, tagging, commenting, liking, and rating things online. Taken together, all of this data can be viewed as a reflection of ourselves.

But on Amazon, Facebook, YouTube, IMDb and Yelp, our unique interpretations and descriptions of the world are trapped inside separate boxes. The things I love on one service don’t apply to the next app that I download. By isolating my unique contributions, these services make my personal data “small” instead of “big.”

Less data leads to lower quality user experience. There’s no consistency or continuity between different apps and environments. Every time I create a new profile or download a new app I feel like I’m starting all over again. At first I’m reduced to a stereotype who needs to sign in to see irrelevant content or meaningless ads. Fragmented data and inconsistent algorithms provide noise instead of signal.

My interfaces to information are not optimized for me.


Via Wildcat2030, Xaos
luiy's insight:

What will be the next big thing, a great paradigm shift that follows “Portal,” “Search” and “Social Feed”? Will it be a service or a device that safely unites my scattered personal data and provides a better, more accurate interface to the world around me...


— by understanding who I really am?

PlasmaBorneElectric's comment, April 18, 2013 9:58 AM
great article
Dr.VR MANOJ's curator insight, April 18, 2013 11:53 AM

This is a slightly weird article but offers a lot of postmodern insight!

Rescooped by luiy from Open Knowledge

PythonBooks - Learn Python the easy way! The best free Python resources

PythonBooks showcases the best free ebooks about the Python programming language. The easiest way to learn Python for free!


Via Irina Radchenko
ebookzdownload's curator insight, May 29, 2013 7:01 AM

http://www.ebookzdownload.com/ provides free educational ebooks and tutorials in PDF format, ready to download at high speed

Rescooped by luiy from The Rise of the Algorithmic Medium

Data Center Storage Algorithm from MIT, Bell Labs, and Alcatel-Lucent Promises to Cut Electricity Use in Data Centers by 35 Percent for Streaming Services

Storing video and other files more intelligently reduces the demand on servers in a data center.

Via Pierre Levy
Pierre Levy's curator insight, April 17, 2013 11:20 AM

The new technology, called network coding, cuts way back on the redundancy without sacrificing the smooth experience. Algorithms transform the data that makes up a video into a series of mathematical functions that can, if needed, be solved not just for that piece of the video, but also for different parts. This provides a form of backup that doesn’t rely on keeping complete copies of the data. Software at the data center could simply encode the data as it is stored and decode it as consumers request it.
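As a toy illustration of that idea (a drastic simplification, not the MIT/Bell Labs algorithm), the sketch below codes two blocks with XOR so that a lost block can be "solved for" from the coded combination plus the surviving block, instead of being restored from a second full copy; real network coding uses random linear combinations over larger finite fields:

```python
# Minimal sketch of the coding idea, simplified to XOR over two equal-size blocks.
# Store one coded combination instead of a second full replica; re-solve on demand.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

block_a = b"first half of a video chunk "
block_b = b"second half of a video chunk"   # same length as block_a
assert len(block_a) == len(block_b)

coded = xor_blocks(block_a, block_b)        # stored instead of a full extra copy

# If block_a is lost, solve for it from the coded block and block_b.
recovered_a = xor_blocks(coded, block_b)
assert recovered_a == block_a
```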

Rescooped by luiy from Open Knowledge

Open data highlights from European Data Forum 2013 in Dublin | Open Knowledge Foundation Blog


Over 500 data professionals gathered last week at the European Data Forum conference in Dublin. This is the annual meeting place for industry, research, policy makers, and community initiatives to discuss the challenges and opportunities of Big Data in Europe. One of the main sentiments throughout the event was a profound interest in openly licensed data and developments in the field of linked data.

The Open Knowledge Foundation was represented by Sander van der Waal and myself, and we took part with reference to the LOD2 project (an EU-funded project on Linked Open Data) and the Apps for Europe project (supporting apps competitions around Europe) – as well as to stimulate open data discussions in general. That seemed to find increasingly fertile ground, as one of the main sentiments throughout the conference was a profound general interest not only in linking data, but also in making them legally and technically open.


Via Irina Radchenko
Juan Luis Jimeno's curator insight, April 17, 2013 12:06 PM

Post from the OKF with the main headlines fresh from the European Data Forum 2013 in Dublin, which brought together nearly 500 Open Data professionals.

Rescooped by luiy from The Long Poiesis

#cliodynamics: Mathematicians Predict the Future With Data From the Past

In Isaac Asimov's classic science fiction saga Foundation, mathematics professor Hari Seldon predicts the future using what he calls psychohistory.

Via Xaos
luiy's insight:

Turchin — a professor at the University of Connecticut — is the driving force behind a field called “cliodynamics,” where scientists and mathematicians analyze history in the hopes of finding patterns they can then use to predict the future. It’s named after Clio, the Greek muse of history.

 

These academics have the same goals as other historians — “We start with questions that historians have asked for all of history,” Turchin says. “For example: Why do civilizations collapse?” — but they seek to answer these questions quite differently. They use math rather than mere language, and according to Turchin, the prognosis isn’t that far removed from the empire-crushing predictions laid down by Hari Seldon in the Foundation saga. Unless something changes, he says, we’re due for a wave of widespread violence in about 2020, including riots and terrorism.
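The article gives no model details. Purely as an illustration of the "find a cycle in historical data, then extrapolate it" idea, and emphatically not Turchin's actual method, a toy sketch on synthetic data might look like this:

```python
# Toy illustration of the cliodynamics idea (not Turchin's model): detect a
# dominant cycle in a synthetic "unrest" series and project the next peak.
import numpy as np

years = np.arange(1763, 2013)            # 250 years of synthetic history
rng = np.random.default_rng(0)
# Stand-in unrest index: a 50-year cycle plus noise (invented data).
unrest = np.sin(2 * np.pi * (years - 1770) / 50) + 0.3 * rng.standard_normal(years.size)

# Find the dominant period with a discrete Fourier transform.
spectrum = np.abs(np.fft.rfft(unrest - unrest.mean()))
freqs = np.fft.rfftfreq(years.size, d=1.0)        # cycles per year
dominant = freqs[1:][np.argmax(spectrum[1:])]     # skip the zero-frequency bin
period = 1.0 / dominant
print(f"dominant period ~ {period:.0f} years")    # 50 on this synthetic series

# Project the next peak one period after the latest peak in the series.
window = int(round(period))
recent_start = years.size - window
peak_idx = recent_start + int(np.argmax(unrest[recent_start:]))
print("next projected peak ~", int(years[peak_idx]) + window)
```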

Scooped by luiy

#democraData: Data For All! How New Tools Democratize Visualization

The real value of new data viz tools is that they help even non-data scientists gain comfort with and find insight in big data.
luiy's insight:

Users may be able to access massive data sources in traditional environments, but they can only do so via predefined paths. On the other hand, common desktop tools such as PowerPoint or Excel that enable charting and graphing either require data extracts, which must be small, or more complex configurations than many are comfortable with. They are too complex and the visuals they generate aren't very robust or interactive.

While many users of the new visualization tools spend most of their time generating basic output, they get really excited about their new-found freedom to navigate the data and view it from any angle desired. While the graphics generated may be simple, users are much more confident that they contain the right content.

 

The implication is that many organizations may not be getting the full benefit of their big data and visualization investments. But it'd be a mistake to make those tools available only to users with advanced data skills. Using the tools should help even non-numerate users gain greater comfort with the data (one hopes), and with that comes a growing ability to draw increasingly sophisticated insights. And that's when the big data investments really start to pay off.

Rescooped by luiy from Big Data Analysis in the Clouds

We need a data democracy, not a data dictatorship

A data democracy built to last needs tools that empower everyone to work with data rather than relying on apps and data scientists. Tableau helped ignite the data revolution, and its IPO could help it keep going.

Via Pierre Levy
luiy's insight:
The democratic revolution is underway

The good news is that there’s a whole new breed of startups trying to empower the data citizenry, whatever their role. Companies such as 0xdata, Precog and BigML are trying to make data science more accessible to everyday business users. There are next-generation business intelligence startups such as SiSense, Platfora and ClearStory rethinking how business analytics are done in an era of HTML5 and big data. And then there are companies such as Statwing, Infogram and Datahero (which will be in beta mode soon, by the way) trying to bring data analysis to the unwashed non-data-savvy masses.

 

Combined with a growing number of publicly available data sets and data marketplaces, and more ways of collecting every possible kind of data —  personal fitness, web analytics, energy consumption, you name it — these self-service tools can provide an invaluable service. In January, I highlighted how a number of them can work by using my own dietary and activity data, as well as publicly available gun-ownership data and even web-page text. But as I explained then, they’re still not always easy for laypeople to use, much less perfect.

Pierre Levy's curator insight, April 8, 2013 12:34 PM

The democratization of data is a real phenomenon, but building a sustainable data democracy means truly giving power to the people. The alternative is just a shift of power from traditional data analysts within IT departments to a new generation of data scientists and app developers. And this seems a lot more like a dictatorship than a democracy — a benevolent dictatorship, but a dictatorship nonetheless