e-Xploration
26.1K views | +7 today
antropologiaNet, dataviz, collective intelligence, algorithms, social learning, social change, digital humanities
Curated by luiy
Scooped by luiy

ANAMIA: How to visualize a corpus of online questionnaires | #dataviz #ethnography

One of the most innovative tools designed by Fragmented.fr and the ANAMIA project team. This dynamic visualization lets you visualize, explore, and analyze pers...
luiy's insight:

To know more about this tool: http://www.anamia.fr/

--

Anamia Corpus Processing Dataviz for Anamia ANR was developed by Quentin Bréant (http://www.fragmented.fr/) and is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License (http://creativecommons.org/licenses/b...).

Rescooped by luiy from Humanities and their Algorithmic Revolution

Young Researchers in Digital Humanities: A Manifesto | #DH


Via Pierre Levy
luiy's insight:

The Humanities and Social Sciences are a vital component of human culture and offer an essential insight into the world in which we live. The Digital Humanities reflect the transition of the Humanities to the digital age. However, they do not only bring with them new technical means, but also new forms of knowledge creation and dissemination within, across and outside academic disciplines.

 

In the field of Digital Humanities, experimental practices, reflexivity and the collaborative elaboration of standards are deeply interconnected. They are, therefore, an occasion to rethink and extend the Humanities through new materials, methods and hermeneutics. Furthermore, they represent an opportunity to redefine our relationship to society through open access to cultural heritage and the development of collaborative projects which also engage non-academic audiences. Thus, we see them as pivotal in the future of the Humanities.

Intriguing Networks's curator insight, July 4, 2013 5:57 PM

Pass this on if you know others who might benefit from it. Thanks!

Klaus Meschede's curator insight, July 5, 2013 3:12 AM

Very progressive and important!

Scooped by luiy

Internet surveillance advances with the complicity of governments | #privacidad #surveillance


The amount of personal data and information that citizens upload to the Internet makes government surveillance of communications a serious concern. Programs like Prism, recently uncovered in the United States, raise questions about what states can do, and especially about what they are already doing unnoticed.

A United Nations Assembly report (PDF), dated April 17, 2013, highlights how technologically easy it has become to monitor communications. Tools exist to monitor web traffic as well as calls and text messages. In the same way, the networks of specific individuals can be tapped, giving access to their private data and connections. The report also notes that intercepting this kind of communication has long been an ambition of states.

National security and the fight against crime are the most common arguments used to justify legislative changes that permit surveillance. The UN document acknowledges how valuable these communications can be for those purposes, but the laws regulating state intervention in controlling the Internet tend to be inadequate or nonexistent. "Inadequate national legal frameworks create a fertile ground for arbitrary and unlawful violations of the right to privacy in communications and, consequently, also threaten the protection of the right to freedom of opinion and expression," the report notes.

The Prism case is the latest to come to light and one of the most controversial, because of the scale of the espionage and its reach into the lives of citizens outside the United States. It is especially serious because of the involvement of technology companies as popular as Google, Facebook, and Apple. At first they denied collaborating, but little by little they have revealed that they did comply with requests for information. The United States is not an exception, however. Its espionage network may have more resources and experience than most countries', but surveillance of the Net takes place in many parts of the world.

 

Europe: more than just data retention

In 2006 the European Union approved the Data Retention Directive, which requires telecommunications providers to keep records of their customers' activity for between six months and two years. The phone numbers involved in calls, IP addresses, location data, email recipients, and other details identifying users' communications, as well as their duration, can be made available to the authorities if needed.

Many countries complemented the directive with national legislation, and even some outside the European Union, such as Serbia and Iceland, have adopted laws based on this model. In Spain, the text from Brussels was implemented as Law 25/2007 on the retention of data relating to electronic communications and public communications networks. It requires operators to keep a detailed set of data aimed at identifying who makes a connection, how long it lasts, when it takes place, and from where.

In Germany, Romania, the Czech Republic, Cyprus, and Bulgaria, courts have declared the national laws enacted on the basis of the European directive unconstitutional. Jurists in the European Parliament have argued that the rules promote the surveillance of society and damage human rights. One of the states that most strongly supported the European text at the time was the United Kingdom, with the backing of the United States.

The United Kingdom is now in the process of passing a bill that would be one of the most intrusive pieces of telecommunications legislation to date. Joe McNamee, executive director of Edri.org, which defends digital rights in the EU, says the Communications Data Bill is the most alarming measure a democratic country has ever proposed. "It basically allows the state to access multiple private-company databases to generate profiles of people that are even more detailed than those the companies themselves could produce. For example, the state could obtain Facebook profile data, location data from the mobile operator, and details of email contacts from an email provider," he says.

In Spain, a proposed reform of the Criminal Procedure Code is also pending; it would allow spyware to be installed on personal computers, although a judge would first have to authorize it.

The national security excuse

One of the arguments made for the European directive was safeguarding national security and fighting organized crime. The debate and subsequent approval came after the terrorist attacks of March 11, 2004, in Madrid and July 7, 2005, in London. In the United States, the Patriot Act, which gives the country's agencies broader surveillance powers to combat terrorism, was passed barely a month and a half after 9/11.

Other countries have used the same argument to toughen their surveillance legislation. In India, the Information Technology Act was amended one month after the 2008 Mumbai attacks, with no debate in Parliament. The law was reformed again in 2011, and the authorities gained the power to listen to phone calls and to monitor text messages and web traffic. A system to centralize all this surveillance has recently been launched, and institutions such as the tax authorities and the intelligence services have access to the information. According to Reporters Without Borders, India has also long been pressuring companies such as BlackBerry, Google, and Skype to give it access to their users' communications.

Another country that invokes national security to control networks is Russia. The federal service for the supervision of communications and information technology (Roskomnadzor) is legally empowered to carry out large-scale monitoring. The authority permits the installation of online software to identify content deemed "extremist," and its policy crosses borders, influencing Kazakhstan and Belarus.

The institutionalization of surveillance

The country with the most organized network surveillance is China. Local companies, in which the state holds large stakes, are obliged to police their own networks. The aim is to stop messages critical of the regime from spreading, and to that end censorship is permanent. These actions are perhaps most visible on Weibo, the Chinese Twitter, where comments are deleted almost in real time with the help of more than 4,000 censors. In addition, since March 2012, microblogging users have had to register under their real names and provide their phone numbers.

This obligation to certify one's real identity has also been imposed on users of WeChat (the Chinese WhatsApp), who must provide their ID number and mobile number and send a photocopy of their ID card, according to the Enemies of the Internet 2013 report by Reporters Without Borders. Citizen Lab has detected PacketShaper servers, which identify and control web traffic, built by Blue Coat, a company specializing in network surveillance.

The situation is not as serious in Iran, but the control model is similar. The law allows email, VoIP conversations, and chats to be monitored. Websites require a license from the Telecommunication Company of Iran (the country's main Internet provider, partially controlled by the state), and blogs need another from the Ministry of Culture and Islamic Guidance. In January 2013 the authorities announced that they were building technology to better monitor social networks, whose usefulness for organizing political protests has become clear in recent years. The Chinese companies Huawei and ZTE provide DPI (deep packet inspection) services to Iranian providers so that they can intercept users' communications.

The other consequences of the Arab Spring

The openness brought by the Arab Spring has also pushed the region's governments to tighten their control over the population. Since the Internet was one of the most prominent channels for spreading the protests, network surveillance has increased considerably. In Bahrain, the royal family's regime filters online content, blocking access not only to topics such as pornography but also to political or religious opinions that do not align with the government's.

The intelligence services monitor opposition members and dissidents through social networks, according to Reporters Without Borders. The regime uses the services of three of the companies best known for their network surveillance products: Blue Coat, Gamma, and Trovicor. The second is said to be in talks with Egypt to sell it its FinFisher spyware suite, although according to the company no agreement has yet been signed.

The civil war in Syria has allowed Bashar al-Assad's government to act with impunity when it comes to controlling the Internet. Content filtering and the monitoring of communications are the order of the day. Blue Coat servers that use DPI techniques to analyze the activity of Syrian users have been discovered. The regime controls the Syrian Telecommunications Establishment (STE) and the Syrian Computer Society (SCS), the providers of fixed Internet connections and of the 3G network, respectively.

Political espionage outside the law

Cases have come to light in other countries whose main objective appears to be spying directly on the political opposition, using tactics typical of cybercrime, such as planting trojans. Hauke Gierow, head of Internet freedom at Reporters Without Borders Germany, points to the involvement of the German company Gamma International. "Cases have been reported of journalists and activists being spied on in Ethiopia, allegedly using intrusive German software," he explains.

Citizen Lab has detected advanced malware that uses images of members of an opposition group (Ginbot 7) as bait; the product identified was FinSpy, from Gamma. Spyware has also been discovered on the computer of an Angolan dissident. It was identified while he was attending an annual human rights conference in Oslo, during a training workshop teaching participants how to protect themselves from government surveillance.

Legal changes in Latin America

In Latin America, some countries are changing their laws to give the authorities ever more power. In Mexico, for over a year now, the police have been able to access users' location data in real time without a court order. The region's economic giant, Brazil, has also approved a rule that lets police and prosecutors demand user registration information from Internet providers. This can happen through "a simple request, without a court order, in criminal money-laundering investigations," says Katitza Rodríguez, an activist at the Electronic Frontier Foundation (EFF), who notes that a bill has been proposed to extend the measure to all criminal cases.

The situation in Colombia also deserves attention, after the approval, without public debate, of legislation reminiscent of the United States' Prism program. "On August 15, 2012, Colombia's Ministry of Justice and Technology issued Decree 1704, obliging telecommunications providers, including Internet service providers, to create backdoors that would make it easier for the police to spy on Colombians," Rodríguez says.

The legal framework as a weapon

International law sets the limits within which a state may legally, and only exceptionally, restrict its citizens' right to privacy. Surveillance of communications must be provided for by law, and the law must be clear and precise enough that people know about the restriction in advance and can foresee how it will be applied. Such control must be strictly necessary to achieve a legitimate aim, and it must not be used if less invasive techniques are available or if other means of obtaining the information have not been exhausted.

"The problem we see is that states are adopting rules that do not comply with these principles. Rather, several rules allow mass surveillance of the communications of everyone in a country; that is, mass surveillance of every ordinary citizen, and not just targeted surveillance based on cause and on the alleged responsibility of those under investigation," argues Katitza Rodríguez. In its April 2013 report on digital rights, the United Nations points out that legal frameworks are out of step with new technologies.

States are relying on old laws and legal frameworks that do not take into account what new technologies make possible; the reach of those technologies goes far beyond anything earlier legislation envisaged. The lack of judicial oversight and the exceptions granted on national security grounds are prominent concerns cited in the UN report, as is the obligation to identify the people behind user accounts. "In many states, laws require identification to be provided in cybercafés. In developing countries many people use cybercafés regularly, because they have no computer at home," Rodríguez notes.

The report also points to how permissive laws are toward extraterritorial surveillance. The United States has long sanctioned spying on citizens beyond its borders, but other countries are now starting to legislate along the same lines. In December 2012, Pakistan's National Assembly passed the "Ley de Garantías Judiciales" (judicial guarantees act) to be able to act abroad. "There is an alarming trend towards extending surveillance powers beyond territorial borders, increasing the risk of cooperation agreements between state police and security agencies that allow national legal restrictions to be evaded," Rodríguez explains.

"This raises serious concerns about the extraterritorial commission of human rights violations and about people's inability to know that they might be subject to foreign surveillance," she adds, stressing how diluted the right to a defense becomes when the spying is carried out from outside the country.

Scooped by luiy

The End of the Web, Search, and Computer as We Know It | #lifestream

It all began with the “lifestream,” a phenomenon that I predicted in the 1990s and shared in the pages of Wired almost exactly 16 years ago.
luiy's insight:

The space-based web we currently have will gradually be replaced by a time-based worldstream. It’s already happening, and it all began with the lifestream, a phenomenon that I (with Eric Freeman) predicted in the 1990s and shared in the pages of Wired almost exactly 16 years ago.

 

This lifestream — a heterogeneous, content-searchable, real-time messaging stream — arrived in the form of blog posts and RSS feeds, Twitter and other chatstreams, and Facebook walls and timelines. Its structure represented a shift beyond the “flatland known as the desktop” (where our interfaces ignored the temporal dimension) towards streams, which flow and can therefore serve as a concrete representation of time.

It’s a bit like moving from a desktop to a magic diary: Picture a diary whose pages turn automatically, tracking your life moment to moment … Until you touch it, and then, the page-turning stops. The diary becomes a sort of reference book: a complete and searchable guide to your life. Put it down, and the pages start turning again.

 
Rescooped by luiy from Linked Data and Semantic Web

Visualizing SPARQL end point results with d3.js | #dataviz




Via Irina Radchenko
luiy's insight:

SPARQL and RDF are very quickly becoming the (Open) standard for linking and accessing database works. Readers of my blog know I have been searching the corners of what can and cannot be achieved with this for some time now.

Triggered by some nice visualization work at the BioHackathon on ChEMBL content, I picked up visualization of RDF data (see this 2010 post where I asked people to visualize data using SPARQL). And since d3.js is cool nowadays (it was processing.js in the past), I had a go at the learning curve.

I started with a pie chart and this example code, because I was working on the SPARQL queries for metabolites in WikiPathways (using Andra's important WP-RDF work, doi:10.1038/npre.2011.6300.1).
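As a rough illustration of the pipeline described here (not the post's own code), the sketch below pulls aggregate counts from a public SPARQL endpoint into the label/count pairs a d3.js pie chart would consume. The DBpedia endpoint and the query are assumptions for demonstration; it requires the SPARQLWrapper package.

```python
# A minimal sketch (not the post's code): fetch SPARQL results as JSON,
# the same shape of data a d3.js pie chart would be fed.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")  # any public endpoint
endpoint.setQuery("""
    SELECT ?type (COUNT(?s) AS ?n)
    WHERE { ?s a ?type }
    GROUP BY ?type
    ORDER BY DESC(?n)
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()

# Reshape into {label: count} pairs, ready to serialize for d3.js.
counts = {
    row["type"]["value"]: int(row["n"]["value"])
    for row in results["results"]["bindings"]
}
for label, n in counts.items():
    print(label, n)
```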

Rescooped by luiy from Web 2.0 for juandoming

Beautiful web-based timeline software | #dataviz

Tiki-Toki is web-based software for creating beautiful interactive timelines that you can share on the internet.

Via Ana Rodera, juandoming
Alejandro Tortolini's curator insight, July 3, 2013 11:59 AM

Tiki-Toki lets you create beautiful interactive timelines.

Jean Claude Le Tellier's curator insight, July 4, 2013 6:36 AM

Really easy to use and share

Rescooped by luiy from Knowledge Broker

The Strength of Weak Ties

Relationships between individuals linked by weak ties generate more innovation than those between individuals with more constant, closely related ties.

Via Kenneth Mikkelsen
Kenneth Mikkelsen's comment, July 2, 2013 2:40 PM
Read about Granovetter's work in his original research paper here: http://sociology.stanford.edu/people/mgranovetter/documents/granstrengthweakties.pdf
Rescooped by luiy from IT Books Free Share

Book: Graph Theory with #Algorithms and its Applications

eBook Free Download: Graph Theory with Algorithms and its Applications | PDF, EPUB | ISBN: 8132207491 | 2012-11-02 | English | PutLocker

Via Fox eBook
luiy's insight:

LINK : http://uploaded.net/file/0ago8mwc

Fox eBook's curator insight, June 24, 2013 9:21 PM

Graph Theory with Algorithms and its Applications: In Applied Science and Technology
The book has many important features which make it suitable for both undergraduate and postgraduate students in various branches of engineering and general and applied sciences. The important topics interrelating Mathematics & Computer Science are also covered briefly. The book is useful to readers with a wide range of backgrounds including Mathematics, Computer Science/Computer Applications and Operational Research. While dealing with theorems and algorithms, emphasis is laid on constructions which consist of formal proofs, examples with applications. Uptill, there is scarcity of books in the open literature which cover all the things including most importantly various algorithms and applications with examples.

Scooped by luiy

Algorithms Every Data Scientist Should Know: Reservoir Sampling | #datascience #algorithms


Say you have a stream of items of large and unknown length that we can only iterate over once. Create an algorithm that randomly chooses an item from this stream such that each item is equally likely to be selected.

 

luiy's insight:

Data scientists, that peculiar mix of software engineer and statistician, are notoriously difficult to interview. One approach that I’ve used over the years is to pose a problem that requires some mixture of algorithm design and probability theory in order to come up with an answer. Here’s an example of this type of question that has been popular in Silicon Valley for a number of years: 

 

Say you have a stream of items of large and unknown length that we can only iterate over once. Create an algorithm that randomly chooses an item from this stream such that each item is equally likely to be selected.


The first thing to do when you find yourself confronted with such a question is to stay calm. The data scientist who is interviewing you isn’t trying to trick you by asking you to do something that is impossible. In fact, this data scientist is desperate to hire you. She is buried under a pile of analysis requests, her ETL pipeline is broken, and her machine learning model is failing to converge. Her only hope is to hire smart people such as yourself to come in and help. She wants you to succeed.
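The textbook answer is reservoir sampling (Algorithm R), which the article goes on to develop. A minimal sketch of the single-item case, ours rather than the article's code:

```python
import random
from collections import Counter

def reservoir_sample(stream):
    """Pick one item uniformly at random from a stream of unknown
    length, in a single pass and O(1) memory (Algorithm R, k = 1)."""
    chosen = None
    for i, item in enumerate(stream, start=1):
        # Replace the current choice with probability 1/i. By induction,
        # after n items every item has survival probability exactly 1/n.
        if random.randrange(i) == 0:
            chosen = item
    return chosen

# Quick sanity check: each of the 5 items should appear ~20% of the time.
print(Counter(reservoir_sample(iter("abcde")) for _ in range(10_000)))
```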

Scooped by luiy

The Spy Files #wikileaks | The Internet's spy map by OWNI | #surveillance

The Spy Files: WikiLeaks' Internet spy maps, visualized by OWNI.
Scooped by luiy

All about "Data mining tools" | Data Mining and Knowledge Discovery | #datamining #dataviz #datascience

All about "Data mining tools" | Data Mining and Knowledge Discovery | #datamining #dataviz #datascience | e-Xploration | Scoop.it
luiy's insight:

The development and application of data mining algorithms requires the use of powerful software tools. As the number of available tools continues to grow, the choice of the most suitable tool becomes increasingly difficult. This paper attempts to support the decision-making process by discussing the historical development and presenting a range of existing state-of-the-art data mining and related tools. Furthermore, we propose criteria for the tool categorization based on different user groups, data structures, data mining tasks and methods, visualization and interaction styles, import and export options for data and models, platforms, and license policies. These criteria are then used to classify data mining tools into nine different types. The typical characteristics of these types are explained and a selection of the most important tools is categorized.


This paper is organized as follows: the first section, Historical Development and State-of-the-Art, traces the historical development of data mining software up to the present; the criteria used to compare data mining software are explained in the second section, Criteria for Comparing Data Mining Software. The last section, Categorization of Data Mining Software into Different Types, proposes a categorization of data mining software and introduces typical software tools for each type.

Fàtima Galan's curator insight, July 1, 2013 9:30 AM

A very interesting article, because it also lists commercial and open-source data mining tools and distinguishes among data mining suites, business intelligence packages, mathematical packages, and more.

Scooped by luiy

Building data science teams - O'Reilly Radar | #datascience #dataviz #bigdata

A data science team needs people with the right skills and perspectives, and it also requires strong tools, processes, and interaction between the team and the rest of the company.
luiy's insight:

Starting in 2008, Jeff Hammerbacher (@hackingdata) and I sat down to share our experiences building the data and analytics groups at Facebook and LinkedIn. In many ways, that meeting was the start of data science as a distinct professional specialization (see the "What makes a data scientist" section of this report for the story of how we came up with the title "Data Scientist"). Since then, data science has taken on a life of its own. The hugely positive response to "What Is Data Science?," a great introduction to the meaning of data science in today's world, showed that we were at the start of a movement. There are now regular meetups, well-established startups, and even college curricula focusing on data science. As McKinsey's big data research report and LinkedIn's data indicate, data science talent is in high demand.

Scooped by luiy

#Cyberscience 2.0 : Research in the Age of Digital Social Networks

At the start of the twenty-first century, the Internet was already perceived to have fundamentally changed the landscape for research.
luiy's insight:

At the start of the twenty-first century, the Internet was already perceived to have fundamentally changed the landscape for research. With its opportunities for digital networking, novel publication schemes, and new communication formats, the web was a game-changer for how research was done as well as what came after—the dissemination and discussion of results. Addressing the seismic shifts of the past ten years, Cyberscience 2.0 examines the consequences of the arrival of social media and the increasing dominance of big Internet players, such as Google, for science and research, particularly in the realms of organization and communication.

Rescooped by luiy from Interface Usability and Interaction

Introducing Primal Assistants: A framework for software agents | #MAS


Via Anne-Marie Armstrong
luiy's insight:

Primal does a lot of heavy lifting in knowledge representation and content filtering. If you ask it to grab you some relevant content around your interests, it will do precisely that.

 

But what if you don’t want to have to ask? Search engines are fantastic, but they still require that you go to them and then try to figure out how to formulate your query in a way that gets you decent results.

 

Primal already has the ability to understand what you want, and we’re now working on some technology that will let Primal deliver you the content that you truly care about before you know you want it.

 

Read on to learn more about Primal’s new software agent and content streaming framework.

 

See more at: http://www.diigo.com/annotated/880d62a796dcb2bb49bb2075ac024f0e#sthash.dJikOkFR.dpuf

Anne-Marie Armstrong's curator insight, July 5, 2013 9:10 AM

Big data continues to be on the minds of developers and users. Software agents might be part of bringing better, smarter searches to individuals.

Rescooped by luiy from LeadershipABC

Don Tapscott: We Need Fundamental #Change In All Our Institutions

Don Tapscott was one of the earliest of the world’s thought leaders to grasp the structural change in the way organizations will be run in the 21st Century.

Via Kenneth Mikkelsen
luiy's insight:

The fundamental problem facing all our institutions today, including government, is not related to conjunctural economic changes. It’s not a business cycle that we are going through. It’s not a cyclical change. It’s a secular change. We are at a punctuation point in human history where the industrial age and institutions have finally come to their logical conclusion. They have essentially run out of gas.

Scooped by luiy

What Do Ants Know That We Don't? | Wired | #algorithms

Ever notice how successfully ant colonies find food at 4th of July picnics? It's all done without any central control.
luiy's insight:
What Ant Colony Networks Can Tell Us About What’s Next for Human-Engineered Ones

 

During the 130 million years or so that ants have been around, evolution has tuned ant colony algorithms to deal with the variability and constraints set by specific environments.

 

Ant colonies use dynamic networks of brief interactions to adjust to changing conditions. No individual ant knows what’s going on. Each ant just keeps track of its recent experience meeting other ants, either in one-on-one encounters when ants touch antennae, or when an ant encounters a chemical deposited by another.

 

Such networks have made possible the phenomenal diversity and abundance of more than 11,000 ant species in every conceivable habitat on Earth. So Anternet, and other ant networks, have a lot to teach us. Ant protocols may suggest ways to build our own information networks…
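To make that mechanism concrete, here is a toy interaction-rate model in Python; it is our illustration of the idea, not a published ant-colony algorithm. Idle foragers leave the nest mainly when they have recently met enough returning, food-laden foragers, so colony-level activity tracks food availability with no central controller:

```python
import random

def forage(food_richness, steps=300, colony=100, threshold=3, window=10):
    """Toy interaction-rate model (our illustration, not a published one):
    an idle forager leaves the nest when it has recently met enough
    returning foragers that found food."""
    recent = []        # times of recent successful returns ("meetings")
    outside = 0        # foragers currently out of the nest
    departures = 0
    for t in range(steps):
        # Each forager outside returns this step with probability 0.2,
        # carrying food with probability food_richness.
        returned = sum(random.random() < 0.2 for _ in range(outside))
        outside -= returned
        recent += [t] * sum(random.random() < food_richness
                            for _ in range(returned))
        recent = [s for s in recent if t - s < window]  # forget old meetings
        if outside < colony:
            if random.random() < 0.05:        # a few scouts leave regardless
                outside += 1; departures += 1
            elif len(recent) >= threshold:    # interaction-triggered exit
                outside += 1; departures += 1
    return departures

# A richer environment yields more departures, with no ant seeing the whole.
print(forage(0.9), forage(0.1))
```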

Rescooped by luiy from Anthropology, communication & technology

Scaling-Laws of Human Broadcast Communication Enable Distinction between Human, Corporate and Robot Twitter Users | #dataviz #bigdata


Human behaviour is highly individual by nature, yet statistical structures are emerging which seem to govern the actions of human beings collectively. Here we search for universal statistical laws dictating the timing of human actions in communication decisions. We focus on the distribution of the time interval between messages in human broadcast communication, as documented in Twitter, and study a collection of over 160,000 tweets for three user categories: personal (controlled by one person), managed (typically PR agency controlled) and bot-controlled (automated system). To test our hypothesis, we investigate whether it is possible to differentiate between user types based on tweet timing behaviour, independently of the content in messages. For this purpose, we developed a system to process a large amount of tweets for reality mining and implemented two simple probabilistic inference algorithms: 1. a naive Bayes classifier, which distinguishes between two and three account categories with classification performance of 84.6% and 75.8%, respectively, and 2. a prediction algorithm to estimate the time of a user's next tweet. Our results show that we can reliably distinguish between the three user categories as well as predict the distribution of a user's inter-message time with reasonable accuracy. More importantly, we identify a characteristic power-law decrease in the tail of the inter-message time distribution of human users which is different from that obtained for managed and automated accounts. This result is evidence of a universal law that permeates the timing of human decisions in broadcast communication and extends the findings of several previous studies of peer-to-peer communication.


Via Andrea Naranjo
luiy's insight:

We are investigating here to what extent these computational neuroscience approaches can be applied to analyse human communication decisions on the online social network Twitter, specifically to understand the timing of tweeting. We follow a very simple, easily interpretable approach using non-parametric Bayesian statistics to analyse and then predict the nature of the tweeter, i.e., is the tweeter a genuine individual or somebody or something else.

Scooped by luiy

MyWorld 2015 - Priority 17 | Phrase Stripes visualization | #dataviz



luiy's insight:

This Phrase Stripes visualization shows the most frequent word phrases within the freetext form of the MyWorld2015 survey. The number of terms within a phrase increases for every column. The ranking within each column depicts the ranking (count) of each phrase. E.g. "Aids" is the single word which occurs most in the data, the phrase "honest and responsive government" is the most frequent four-word phrase.

This is preliminary work by Hendrik Strobelt from NYU Poly in collaboration with UN Global Pulse; contact: hendrik(at)strobelt.com
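The counting behind such a view is plain n-gram ranking. A small sketch under the assumption that responses are free-text English strings (the survey answers below are mocked up, not MyWorld data):

```python
from collections import Counter
from itertools import islice

# Rank the most frequent n-word phrases (n-grams) in free-text answers,
# one ranked column per phrase length, as in a Phrase Stripes view.
answers = [
    "an honest and responsive government",
    "honest and responsive government matters most to me",
    "protection against aids and other diseases",
]

def ngrams(tokens, n):
    """Yield every run of n consecutive tokens."""
    return zip(*(islice(tokens, i, None) for i in range(n)))

columns = {}
for n in range(1, 5):                      # 1-word through 4-word phrases
    counts = Counter()
    for text in answers:
        counts.update(" ".join(g) for g in ngrams(text.lower().split(), n))
    columns[n] = counts.most_common(3)     # top phrases = top of each stripe

for n, top in columns.items():
    print(n, top)
```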

Scooped by luiy

Using SEO Tools to extract Twitter JSON data into an Excel file | #dataviz

This is the second of a set of 2 videos - explaining how to get Twitter information dynamically into an Excel spreadsheet. This example shows the use of the ...
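The video does this inside Excel with the SEO Tools plug-in; as a rough Python stand-in, the sketch below flattens a saved Twitter JSON response into rows and writes them to a spreadsheet with pandas. The file name and field names are assumptions about what the API returned, not a fixed schema.

```python
import json
import pandas as pd

# Flatten a saved Twitter JSON response into rows and export to Excel.
# `tweets.json` and the field names below are illustrative assumptions.
with open("tweets.json", encoding="utf-8") as f:
    tweets = json.load(f)          # assumed: a list of tweet objects

rows = [
    {
        "created_at": t.get("created_at"),
        "user": t.get("user", {}).get("screen_name"),
        "text": t.get("text"),
        "retweets": t.get("retweet_count"),
    }
    for t in tweets
]
pd.DataFrame(rows).to_excel("tweets.xlsx", index=False)  # needs openpyxl
```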
Scooped by luiy

How #Algorithms Change The World As We Know It [Infographic]

The algorithms around us every day change the world constantly. This infographic goes into detail about some of the most profound algorithms in our lives.
Rescooped by luiy from Big Data Analysis in the Clouds

What happens when the world turns into one giant brain | #algorithms #bigdata

Currently much of the big data being churned out is merely exhaust. But imagine the possibilities once we figure out how to produce and process better data on the fly on a global scale. Call it Big Inference.

Via Pierre Levy
luiy's insight:
Key problems in the way

To get there though we’ll have to confront a number of hurdles:

We need to gather the data. Emerging, massively distributed and networked sensors will be the equivalent of human sensory transducers like rods and cones. The rise of the Internet of Things also means that every device will be able to contribute its own data stream to a collective understanding of the current state of the world.

 

Much of the content of big data these days is exhaust: data originally collected for transactional or other purposes, for which mining and analysis are afterthoughts, and whose characteristics are often ill-suited to further analysis. This will certainly change, as data collection matures into a process explicitly designed to improve our perceptual and decision-making capabilities.

 

We need the processing power to interpret the data. While it has become fashionable to note how cheap compute cycles have become, it's certainly not the case that we can process billions or trillions of input streams in real time, especially when we need to find patterns that are distributed across many noisy and possibly contradictory sensor inputs (i.e., we can't just process each stream in isolation). We may need to develop new processor technologies to handle these kinds of astronomically parallel and heterogeneous inputs.

 

We need the algorithms. To actually make sense of the data and decide what actions and responses to take, we have to figure out how to extract high-level patterns and concepts from the raw inputs. There is an ongoing debate over the right approach: Most researchers will say that we need something more “brain-like” than current systems, but there are many different (and opposing) theories about which aspects of our brain’s computational architecture are actually important. My own bet is on probabilistic programming methods, which are closely aligned with an emerging body of theory that views the brain as a Bayesian inference and decision engine.
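For a toy taste of that Bayesian-engine view (our illustration, not the author's), the sketch below fuses many noisy readings of one unknown quantity into a posterior belief by grid-based Bayes updating, the simplest form of the inference that probabilistic programming systems automate:

```python
import numpy as np

# Fuse many noisy, possibly contradictory sensor readings of one unknown
# quantity into a posterior belief via grid-based Bayesian updating.
truth = 21.7                                   # e.g. a temperature
grid = np.linspace(0, 50, 1001)                # hypotheses about it
posterior = np.ones_like(grid) / grid.size     # flat prior

rng = np.random.default_rng(1)
for _ in range(200):                           # 200 noisy sensor readings
    reading = truth + rng.normal(0, 4.0)       # high sensor noise
    likelihood = np.exp(-0.5 * ((reading - grid) / 4.0) ** 2)
    posterior *= likelihood                    # Bayes' rule, unnormalized
    posterior /= posterior.sum()               # renormalize

print(f"posterior mode ~= {grid[posterior.argmax()]:.2f} (truth {truth})")
```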

Scooped by luiy

NSA slides explain the #PRISM data-collection program - The Washington Post | #privacy #surveillance

Through a Top-Secret program authorized by federal judges working under the Foreign Intelligence Surveillance Act (FISA), the U.S. intelligence community can gain access to the servers of nine internet companies for a wide range of digital data.
luiy's insight:

The top-secret PRISM program allows the U.S. intelligence community to gain access from nine Internet companies to a wide range of digital information, including e-mails and stored data, on foreign targets operating outside the United States. The program is court-approved but does not require individual warrants. Instead, it operates under a broader authorization from federal judges who oversee the use of the Foreign Intelligence Surveillance Act (FISA). Some documents describing the program were first released by The Washington Post on June 6. The newly released documents below give additional details about how the program operates, including the levels of review and supervisory control at the NSA and FBI. The documents also show how the program interacts with the Internet companies. These slides, annotated by The Post, represent a selection from the overall document, and certain portions are redacted.

Scooped by luiy

This Is What It Feels Like to Pass Through A Singularity | #privacy #surveillance

The government has an automated system to track your movements and monitor who your friends are. Our news comes from remote-controlled "drone reporters." There's a device in your pocket that can produce a sex partner for you at the touch of a button.
luiy's insight:

The government has an automated system to track your movements and monitor who your friends are. Our news comes from remote-controlled "drone reporters." There's a device in your pocket that can produce a sex partner for you at the touch of a button. Maybe the singularity just happened, and we didn't notice.

 

Perhaps the most shocking aspect of whistleblower Ed Snowden's recent revelations about the NSA's surveillance of Americans is how little they shocked most people. A common response was that we already knew the government was spying on us, or that only a fool would think their emails and phone calls were private. Snowden's story was just confirmation of something many of us already took for granted. And yet it blew up into the story of the year because it was also a genuine revelation. Our vague, occasionally paranoid, suspicions that we live in a landscape alive with surveillance devices turned out to be true.

 

What is that feeling, the uncanny realization that you are actually living in your own fantasies? In the 1970s, Alvin and Heidi Toffler called it "future shock." Today, we might call it passing through the singularity. Either way, we've gone from dreaming about a world that might be real, to accepting that our dreams are hard facts.

Scooped by luiy

Twitter visualizes billions of tweets in artful, interactive 3D maps | #dataviz

(The Verge) Today, the social network is getting artsy once again, using the same dataset, which it calls Billion Strokes, to produce interactive elevation maps that render...
luiy's insight:

On June 1st, Twitter created beautiful maps visualizing billions of geotagged tweets. Today, the social network is getting artsy once again, using the same dataset, which it calls Billion Strokes, to produce interactive elevation maps that render geotagged tweets in 3D. This time around, Twitter visualized geotagged tweets from San Francisco, New York, and Istanbul in maps that viewers can manipulate.

 

For each city map, Twitter gives users the option of adding eight different layers over the topography. Users can also change the scale of the elevation differences mapped out, to get a better idea of where most tweets are sent from. The maps can be seen from either an overhead view or on a horizontal plane. The resulting maps look like harsh mountain ranges, but the peaks and valleys aren't representative of the land; rather, a peak indicates a high volume of tweets sent from that location, while a trough marks an area where fewer tweets are sent. The whole thing was put together by Nicolas Belmonte, Twitter's in-house data visualization scientist. You can check out the interactive maps on Twitter's GitHub page.
