e-Xploration
antropologo.net, dataviz, collective intelligence, algorithms, social learning, social change, digital humanities
Curated by luiy
Rescooped by luiy from Cyborg Lives

Bluescape, the Touchscreen That Covers a Wall


Haworth and Obscura Digital's digital whiteboard can hold 160 acres of virtual space.

 

Apple (AAPL) has rolled out smaller models of its iPad. Jeff Reuschel is thinking bigger. The global design director for office-furniture maker Haworth, in partnership with interactive display company Obscura Digital, has created a touchscreen that covers a conference-room wall. Like a supersize version of CNN’s (TWX) Magic Wall, Bluescape displays a unified image across 15 linked 55-inch flat-screen monitors, each equipped with 32 specialized sensors to read users’ hand movements. Unlike whiteboards or flip charts, it won’t require much erasing or page turning: When zoomed out as far as possible, the digital board’s virtual space totals 160 acres. Using Bluescape, corporate and university clients can store often scattershot brainstorming sessions in perpetuity. Co-workers or classmates can add digital sticky notes, either with a digital pen on the wall itself or by uploading documents from other devices, from which they can also browse the virtual space. “There are fewer and fewer people working in cubicles,” says Reuschel. “The old-fashioned vertical surfaces are going away.”


Via Wildcat2030
Rescooped by luiy from Systems Theory

Big Data Needs a Big Theory to Go with It: Scientific American

Just as the industrial age produced the laws of thermodynamics, we need universal laws of complexity to solve our seemingly intractable problems. (RT @laurienti: @StarlingInsight is striving to help organizations deal with this type of complexity.)

Via Ben van Lier
luiy's insight:

What makes a “complex system” so vexing is that its collective characteristics cannot easily be predicted from underlying components: the whole is greater than, and often significantly different from, the sum of its parts. A city is much more than its buildings and people. Our bodies are more than the totality of our cells. This quality, called emergent behavior, is characteristic of economies, financial markets, urban communities, companies, organisms, the Internet, galaxies and the health care system.

 

The digital revolution is driving much of the increasing complexity and pace of life we are now seeing, but this technology also presents an opportunity. The ubiquity of cell phones and electronic transactions, the increasing use of personal medical probes, and the concept of the electronically wired “smart city” are already providing us with enormous amounts of data. With new computational tools and techniques to digest vast, interrelated databases, researchers and practitioners in science, technology, business and government have begun to bring large-scale simulations and models to bear on questions formerly out of reach of quantitative analysis, such as how cooperation emerges in society, what conditions promote innovation, and how conflicts spread and grow.

The trouble is, we don't have a unified, conceptual framework for addressing questions of complexity. We don't know what kind of data we need, nor how much, or what critical questions we should be asking. “Big data” without a “big theory” to go with it loses much of its potency and usefulness, potentially generating new unintended consequences.

Scooped by luiy

2012 - #BigdataHistory - CRITICAL QUESTIONS FOR BIG DATA #dataawareness #privacy

boyd, d., & Crawford, K. (2012). Critical Questions for Big Data. Information, Communication & Society, Vol. 15 (A decade in Internet time: the dynamics of the Internet and society), pp. 662-679.
luiy's insight:

The era of Big Data has begun. Computer scientists, physicists, economists, mathematicians, political scientists, bio-informaticists, sociologists, and other scholars are clamoring for access to the massive quantities of information produced by and about people, things, and their interactions. Diverse groups argue about the potential benefits and costs of analyzing genetic sequences, social media interactions, health records, phone logs, government records, and other digital traces left by people. Significant questions emerge. Will large-scale search data help us create better tools, services, and public goods? Or will it usher in a new wave of privacy incursions and invasive marketing? Will data analytics help us understand online communities and political movements? Or will it be used to track protesters and suppress speech? Will it transform how we study human communication and culture, or narrow the palette of research options and alter what ‘research’ means? Given the rise of Big Data as a socio-technical phenomenon, we argue that it is necessary to critically interrogate its assumptions and biases. In this article, we offer six provocations to spark conversations about the issues of Big Data: a cultural, technological, and scholarly phenomenon that rests on the interplay of technology, analysis, and mythology that provokes extensive utopian and dystopian rhetoric.

Scooped by luiy

2010 - #BigdataHistory. "Big Data": Dynamic Factor Models for Macroeconomic Measurement and Forecasting #privacy

Scooped by luiy

2011 - Big data history: The next frontier for innovation, competition, and productivity | #dataawareness #privacy

Big data will become a key basis of competition, underpinning new waves of productivity growth, innovation, and consumer surplus—as long as the right policies and enablers are in place. A McKinsey Global Institute article.
luiy's insight:

MGI studied big data in five domains—healthcare in the United States, the public sector in Europe, retail in the United States, and manufacturing and personal-location data globally. Big data can generate value in each. For example, a retailer using big data to the full could increase its operating margin by more than 60 percent. Harnessing big data in the public sector has enormous potential, too. If US healthcare were to use big data creatively and effectively to drive efficiency and quality, the sector could create more than $300 billion in value every year. Two-thirds of that would be in the form of reducing US healthcare expenditure by about 8 percent. In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data, not including using big data to reduce fraud and errors and boost the collection of tax revenues. And users of services enabled by personal-location data could capture $600 billion in consumer surplus. The research offers seven key insights.
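A back-of-envelope check of the figures quoted above (illustrative arithmetic, not MGI's own calculation): two-thirds of the $300 billion in annual value coming from an ~8 percent cut in expenditure implies total US healthcare spending of roughly $2.5 trillion per year.

```python
# Back-of-envelope check of the MGI figures quoted above (illustrative arithmetic only).
annual_value_billion = 300          # estimated annual value from big data in US healthcare
share_from_cost_reduction = 2 / 3   # "two-thirds of that" comes from reduced expenditure
reduction_fraction = 0.08           # "about 8 percent" of total spending

savings_billion = annual_value_billion * share_from_cost_reduction         # ~$200B
implied_total_spend_billion = savings_billion / reduction_fraction         # ~$2,500B
print(f"Savings ~${savings_billion:.0f}B; implied US healthcare spend ~${implied_total_spend_billion:,.0f}B")
```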

Scooped by luiy

#privacy : FTC Warns 10 Data Brokers on Privacy Gaffes #dataawareness

Posing as potential customers, agency staffers turn up improprieties in selling insurance, credit, and employment information.
luiy's insight:

The Federal Trade Commission today reported that it had sent letters to 10 companies it identified as data brokers, warning that their practices might constitute consumer privacy violations under the Fair Credit Reporting Act. Such letters are not official notices of complaints being filed, but instead urge recipients to review whether their practices are in compliance with the law.

Questionable practices surfaced at the 10 companies during a test shopping review of 45 businesses by FTC staffers, who posed as individuals or business representatives seeking information about consumers related to creditworthiness, eligibility for insurance, or suitability for employment.

ConsumerBase and an unnamed company were warned for appearing to offer lists of consumers “pre-screened” for credit approvals. Brokers Data and US Data Corporation were called out for seeming to promise information useful for making insurance decisions about individuals. Crimcheck.com, 4Nannies, U.S. Information Search, People Search Now, Case Breakers, and USA People Search were warned for appearing to offer consumer information for employment decisions.

The FTC issued the letters this week as part of its involvement in an international privacy sweep conducted by the Global Privacy Enforcement Network, which encourages cross-border enforcement of privacy laws by connecting privacy enforcement authorities.

In recent years the FTC has sued Equifax, Experian, and TransUnion—the three leading credit reporting agencies—and obtained nearly $3 million in civil penalties. The FTC also won a $15 million judgment against ChoicePoint for not screening prospective subscribers before selling them sensitive consumer information.

Rescooped by luiy from Collective Intelligence & Distance Learning

Desire2Learn’s New Learning Suite Aims To Predict Success, Change How Students Navigate Their Academic Career


To do this, Desire2Learn wants to bring predictive analytics into play in education. But why? Well, first and foremost because, today, if students want to figure out whether a course is right for them — or how well they might perform in that course — they’re hard pressed to find a good answer. They can ask fellow students, check websites that rank faculty based on nebulous criteria or try to find surveys, but none of these options are ideal.

 

With its new analytics engine, Desire2Learn aims to change that by giving students the ability to predict their success in a particular course based on what they’ve studied in the past and how they performed in those classes. The new, so-called “Student Success System” was built (in part) from the technology it acquired from Degree Compass; however, while Degree Compass used predictive analytics to help students optimize their course selection, the new product aims to help both sides of the learning equation: students and teachers.
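As a rough illustration of the prediction described above (a minimal sketch, not Desire2Learn's actual Student Success System; the course names, grades, and the use of scikit-learn's logistic regression are all assumptions), one could estimate a student's chance of passing a course from grades in previously taken courses:

```python
# Minimal sketch: predict success in a course from grades in prior courses.
# Data, course names, and model choice are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

# Each row: grades (0-4 scale) in [Algebra, Statistics, Programming];
# label: 1 if the student later passed "Data Analysis", 0 otherwise.
past_grades = [
    [3.7, 3.3, 4.0],
    [2.0, 1.7, 2.3],
    [3.0, 3.7, 2.7],
    [1.3, 2.0, 1.0],
    [4.0, 3.3, 3.7],
    [2.3, 1.0, 2.0],
]
passed = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(past_grades, passed)

# Predicted probability of success for a prospective student
new_student = [[3.3, 2.7, 3.0]]
print("Estimated chance of passing:", round(model.predict_proba(new_student)[0][1], 2))
```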

 

On the teacher side, Desire2Learn’s new analytics engine allows them to view predictive data visualizations that compare student performance against their peers so that they can identify at-risk students, for example, and monitor a student’s progress over time.

 

The idea is to give teachers access to important insight into things like class dynamics and learning trends, which they can then combine with assessment data to improve their instruction or adapt to the way individual students learn. In theory, this leads not only to higher engagement but also to better outcomes.


Via Huey O'Brien
luiy's insight:

Essentially, the tool allows students to move their academic resume to the cloud so they can take it with them after they graduate, which the company is incentivizing by offering 2GB of free storage.

 

Basically, what we’ve come to realize, the Desire2Learn CEO tells me, is that the company’s initial approach to business (or academic) intelligence was off track. “Students and teachers don’t necessarily want more data, they want more insight and they want that data broken out in a way that they can understand and helps them more quickly visualize the learning map,” he says.

 

When I asked if building and adding more and more tools and features would dilute the experience and result in feature overload, Baker said that the company doesn’t want to build a million different tools. Instead, it wants to become a platform that supports a million tools and allows third-parties that specialize in particular areas of education to help develop better products.

 

Through open-sourcing its APIs, Desire2Learn, along with Edmodo and an increasing number of education startups, is beginning to tap into the potential inherent in the creation of a real ecosystem. Adding predictive analytics tools gives Desire2Learn another carrot with which it hopes to draw teachers, students, and development partners into its ecosystem.

Rescooped by luiy from Mindful Decision Making

Illusory Correlations: When The Mind Makes Connections That Don’t Exist

Why do CEOs who excel at golf get paid more, despite poorer stock market performance?

Via Philippe Vallat
luiy's insight:

To see how easily the mind jumps to the wrong conclusions, try virtually taking part in a little experiment...

 

...imagine that you are presented with information about two groups of people about which you know nothing. Let's call them the Azaleans and the Begonians.

 

For each group you are given a list of positive and negative behaviours. A good one might be: an Azalean was seen helping an old lady across the road. A bad one might be: a Begonian urinated in the street.

So, you read this list of good and bad behaviours about the Azaleans and Begonians and afterwards you make some judgements about them. How often do they perform good and bad behaviours and what are they?

What you notice is that it's the Begonians that seem dodgy. They are the ones more often to be found shoving burgers into mailboxes and ringing doorbells and running away. The Azaleans, in contrast, are a sounder bunch; certainly not blameless, but overall better people.

 

While you're happy with the judgement, you're in for a shock. What's revealed to you afterwards is that actually the ratio of good to bad behaviours listed for both the Azaleans and Begonians was exactly the same. For the Azaleans 18 positive behaviours were listed along with 8 negative. For the Begonians it was 9 positive and 4 negative.

In reality you just had less information about the Begonians. What happened was that you built up an illusory connection between the Begonians and more frequent bad behaviours; the bad behaviours weren't actually more frequent, they just seemed that way.

When the experiment is over you find out that most other people had done exactly the same thing, concluding that the Begonians were worse people than the Azaleans.
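A quick check of the numbers quoted above confirms that the ratio of positive to negative behaviours is identical for the two groups, so the perceived difference is entirely illusory:

```python
# The positive/negative ratios are identical (roughly 69% positive for both groups).
azalean_pos, azalean_neg = 18, 8
begonian_pos, begonian_neg = 9, 4

print("Azalean positive share: ", azalean_pos / (azalean_pos + azalean_neg))     # ~0.69
print("Begonian positive share:", begonian_pos / (begonian_pos + begonian_neg))  # ~0.69
```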

Scooped by luiy

Modularity and community structure in networks

luiy's insight:

Many networks of interest in the sciences, including a variety of social and biological networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure has attracted considerable recent attention. One of the most sensitive detection methods is optimization of the quality function known as “modularity” over the possible divisions of a network, but direct application of this method using, for instance, simulated annealing is computationally costly. Here we show that the modularity can be reformulated in terms of the eigenvectors of a new characteristic matrix for the network, which we call the modularity matrix, and that this reformulation leads to a spectral algorithm for community detection that returns results of better quality than competing methods in noticeably shorter running times. We demonstrate the algorithm with applications to several network data sets.
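A minimal sketch of the leading-eigenvector idea described in the abstract, assuming the standard modularity matrix B = A - k k^T / (2m) and a two-way split by the signs of its leading eigenvector; the Zachary karate club graph is used as a stand-in dataset, not one of the networks from the paper's Table I:

```python
# Spectral bisection via the leading eigenvector of the modularity matrix (sketch).
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)                   # adjacency matrix
k = A.sum(axis=1)                          # degree vector
m = A.sum() / 2.0                          # number of edges
B = A - np.outer(k, k) / (2.0 * m)         # modularity matrix

eigvals, eigvecs = np.linalg.eigh(B)       # B is symmetric
leading = eigvecs[:, np.argmax(eigvals)]   # eigenvector of the largest eigenvalue
s = np.where(leading >= 0, 1, -1)          # community labels from the signs

Q = s @ B @ s / (4.0 * m)                  # modularity of the two-way split
print("Modularity of the spectral bisection:", round(Q, 3))
```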

 

 

Example applications


In practice, the algorithm developed here gives excellent results. For a quantitative comparison between our algorithm and others we follow Duch and Arenas [19] and compare values of the modularity for a variety of networks drawn from the literature. Results are shown in Table I for six different networks—the exact same six as used by Duch and Arenas. We compare modularity figures against three previously published algorithms: the betweenness-based algorithm of Girvan and Newman [10], which is widely used and has been incorporated into some of the more popular network analysis programs (denoted GN in the table); the fast algorithm of Clauset et al. [26] (CNM), which optimizes modularity using a greedy algorithm; and the extremal optimization algorithm of Duch and Arenas [19] (DA), which is arguably the best previously existing method, by standard measures, if one discounts methods impractical for large networks, such as exhaustive enumeration of all partitions or simulated annealing.

Rescooped by luiy from Papers

Network modularity promotes cooperation

Cooperation in animals and humans is widely observed even if evolutionary biology theories predict the evolution of selfish individuals. Previous game theory models have shown that cooperation can evolve when the game takes place in a structured population such as a social network because it limits interactions between individuals. Modularity, the natural division of a network into groups, is a key characteristic of all social networks but the influence of this crucial social feature on the evolution of cooperation has never been investigated. Here, we provide novel pieces of evidence that network modularity promotes the evolution of cooperation in 2-person prisoner's dilemma games. By simulating games on social networks of different structures, we show that modularity shapes interactions between individuals favouring the evolution of cooperation. Modularity provides a simple mechanism for the evolution of cooperation without having to invoke complicated mechanisms such as reputation or punishment, or requiring genetic similarity among individuals. Thus, cooperation can evolve over wider social contexts than previously reported.
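For illustration only, here is a toy 2-person prisoner's dilemma played on a modular "connected caveman" network with an imitate-the-best-neighbour update; the payoff values, the update rule, and the network generator are assumptions, and this is a sketch of the kind of simulation the abstract describes, not the authors' model:

```python
# Toy prisoner's dilemma on a modular network (illustrative sketch only).
import random
import networkx as nx

random.seed(1)
R, S, T, P = 3, 0, 5, 1                    # reward, sucker, temptation, punishment

def payoff(a, b):
    """Payoff to a player with strategy a (True = cooperate) against strategy b."""
    if a and b:
        return R
    if a and not b:
        return S
    if not a and b:
        return T
    return P

# Modular network: tight cliques joined into a ring
G = nx.connected_caveman_graph(5, 8)
strategy = {n: random.random() < 0.5 for n in G}   # True = cooperate

for generation in range(50):
    score = {n: sum(payoff(strategy[n], strategy[v]) for v in G[n]) for n in G}
    # Each node copies the strategy of its best-scoring neighbour (or keeps its own)
    strategy = {n: strategy[max(list(G[n]) + [n], key=score.get)] for n in G}

print("Fraction of cooperators after 50 generations:",
      sum(strategy.values()) / G.number_of_nodes())
```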

 

Network modularity promotes cooperation
Marianne Marcoux, David Lusseau

Journal of Theoretical Biology
Volume 324, 7 May 2013, Pages 103–108

http://dx.doi.org/10.1016/j.jtbi.2012.12.012


Via Complexity Digest
Complexity Digest's curator insight, May 2, 2013 6:26 PM

Modularity is prevalent in natural and artificial systems. A modular structure reduces the probability that "damage" or "perturbations" will spread through a network. More at:

Modular Random Boolean Networks
Rodrigo Poblanno-Balp and Carlos Gershenson
Artificial Life 2011 17:4, 331-351

http://dx.doi.org/10.1162/artl_a_00042

Jose Santos's curator insight, May 26, 2013 11:46 AM

why modularity is everywhere in our lives...

 

Scooped by luiy

Catalanes, inmigrantes y charnegos: "raza", "cultura" y "mezcla" en el discurso nacionalista catalán

This paper focuses on the manner in which 'race' and 'culture' are used in nationalist rhetorics, paying special attention to the presence or absence of ideas of biological or cultural 'mixture' employed in order to define socio-political identity...
luiy's insight:

This article reflects on the use of “race” and “culture” in nationalist discourses, paying attention to the presence or absence of the idea of biological or cultural “mixture” as a basis for defining the socio-political identity of the descendants of parents of different nationalities. This idea is analysed in a specific ethnographic context, Catalan nationalism, a type of nationalism labelled “civic” because it defines Catalan identity primarily on the basis of cultural criteria. The article contrasts this nationalist rhetoric with a form of xenophobic ideology that developed briefly in Catalonia during the 1960s and 1970s against the descendants of mixed marriages between Catalans and immigrants from other regions of Spain. Through this case, the aim is to show the constructed and shifting character of discourses of social classification depending on context, discourses that may oscillate between and/or combine biological and cultural principles. It is suggested that this would be important to bear in mind in the current context of economic crisis, especially in relation to the non-European immigrants and their descendants living in Catalonia.

Rescooped by luiy from DigitAG& journal

Smarter Objects, Fluid Interfaces Group, MIT Media Lab

http://fluid.media.mit.edu/projects/smarter-objects The Smarter Objects system explores a new method for interaction with everyday objects. The system associ...

Via João Greno Brogueira, Andrea Graziano
luiy's insight:

The Smarter Objects system explores a new method for interaction with everyday objects. The system associates a virtual object with every physical object to support an easy means of modifying the interface and the behavior of that physical object as well as its interactions with other "smarter objects". As a user points a smart phone or tablet at a physical object, an augmented reality (AR) application recognizes the object and offers an intuitive graphical interface to program the object's behavior and interactions with other objects. Once reprogrammed, the Smarter Object can then be operated with a simple tangible interface (such as knobs, buttons, etc). As such Smarter Objects combine the adaptability of digital objects with the simple tangible interface of a physical object. We have implemented several Smarter Objects and usage scenarios demonstrating the potential of this approach.


Authors: Valentin Heun, Shunichi Kasahara, Pattie Maes

Maquete Eletrônica's curator insight, May 13, 2013 9:01 AM

"O sistema de objetos Smarter explora um novo método de interação com objetos do cotidiano. O sistema associa um objeto virtual com cada objeto físico para suportar um meio fácil de modificar a interface eo comportamento do objeto físico, bem como suas interações com outros "objetos inteligentes". Como um usuário aponta um smartphone ou tablet em um objeto físico, uma realidade aumentada (AR) aplicativo reconhece o objeto e oferece uma interface gráfica intuitiva para o programa de comportamento e interações com outros objetos do objeto. Uma vez reprogramada, o Smarter objeto pode ser operado com uma interface tangível simples (como maçanetas, botões, etc.) Como tal, os mais inteligentes objetos combinar a capacidade de adaptação dos objetos digitais com interface tangível simples de um objeto físico. Nós implementamos vários objetos inteligentes e cenários de uso, demonstrando o potencial desta abordagem."

Scooped by luiy

2011 - Big-Data Computing: Creating revolutionary breakthroughs in commerce, science, and society #bigdataHistory #privacy

luiy's insight:

Advances in digital sensors, communications, computation, and storage have created huge collections of data, capturing information of value to business, science, government, and society. For example, search engine companies such as Google, Yahoo!, and Microsoft have created an entirely new business by capturing the information freely available on the World Wide Web and providing it to people in useful ways. These companies collect trillions of bytes of data every day and continually add new services such as satellite images, driving directions, and image retrieval. The societal benefits of these services are immeasurable, having transformed how people find and make use of information on a daily basis.

 

Just as search engines have transformed how we access information, other forms of big-data computing can and will transform the activities of companies, scientific researchers, medical practitioners, and our nation's defense and intelligence operations. Some examples include:

Scooped by luiy

#bigdatahistory : A Very Short History Of Big Data 1944 - 2012 #dataawareness #privacy

The story of how data became big starts many years before the current buzz around big data.
luiy's insight:

1944 Fremont Rider, Wesleyan University Librarian, publishes The Scholar and the Future of the Research Library. He estimates that American university libraries were doubling in size every sixteen years. Given this growth rate, Rider speculates that the Yale Library in 2040 will have “approximately 200,000,000 volumes, which will occupy over 6,000 miles of shelves… [requiring] a cataloging staff of over six thousand persons.”

 

1961 Derek Price publishes Science Since Babylon, in which he charts the growth of scientific knowledge by looking at the growth in the number of scientific journals and papers. He concludes that the number of new journals has grown exponentially rather than linearly, doubling every fifteen years and increasing by a factor of ten during every half-century. Price calls this the “law of exponential increase,” explaining that “each [scientific] advance generates a new series of advances at a reasonably constant birth rate, so that the number of births is strictly proportional to the size of the population of discoveries at any given time.”
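Price's two figures are mutually consistent: doubling every fifteen years does give roughly a tenfold increase per half-century, as a one-line check shows.

```python
# Consistency check of Price's "law of exponential increase" as quoted above.
doubling_period_years = 15
growth_over_half_century = 2 ** (50 / doubling_period_years)
print(f"Growth factor over 50 years: {growth_over_half_century:.1f}x")   # ~10.1x
```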

 

.......

 

April 2012 The International Journal of Communications publishes a Special Section titled “Info Capacity” on the methodologies and findings of various studies measuring the volume of information. In “Tracking the flow of information into the home (PDF),” Neuman, Park, and Panek (following the methodology used by Japan’s MPT and Pool above) estimate that the total media supply to U.S. homes has risen from around 50,000 minutes per day in 1960 to close to 900,000 in 2005. And looking at the ratio of supply to demand in 2005, they estimate that people in the U.S. are “approaching a thousand minutes of mediated content available for every minute available for consumption.”  In “International Production and Dissemination of Information (PDF),” Bounie and Gille (following Lyman and Varian above) estimate that the world produced 14.7 exabytes of new information in 2008, nearly triple the volume of information in 2003.

 

May 2012 danah boyd and Kate Crawford publish “Critical Questions for Big Data” in Information, Communication & Society. They define big data as “a cultural, technological, and scholarly phenomenon that rests on the interplay of: (1) Technology: maximizing computation power and algorithmic accuracy to gather, analyze, link, and compare large data sets. (2) Analysis: drawing on large data sets to identify patterns in order to make economic, social, technical, and legal claims. (3) Mythology: the widespread belief that large data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy.”

Rescooped by luiy from Psychology and Social Networking

Conscious computing: how to take control of your life online

Twitter, Facebook, Google… we know the internet is driving us to distraction. But could sitting at your computer actually calm you down? Oliver Burkeman investigates the slow web movement

Via Aaron Balick
luiy's insight:

In March, I spent a week trying to live as faithfully as possible in accordance with the philosophy of calming (or conscious or contemplative) computing. At home, I stopped using my Nexus smartphone as a timepiece – I wore a watch instead – to prevent the otherwise inevitable slide from checking the time, or silencing the alarm, into checking my email, my Twitter feed or Wikipedia's List Of Unusual Deaths. After a couple of days, I disabled the Gmail and Twitter apps completely, and stored my phone in my bag while I worked, frequently forgetting it for hours at a time. At work, I shut off the internet in 90-minute slabs using Mac Freedom, the "internet blocking productivity software" championed by such writerly big shots as Zadie Smith and the late Nora Ephron. ("Freedom enforces freedom," its website explains chillingly.) Most mornings, I also managed 10 minutes with ReWire, a concentration-enhancing meditation app for the iPad that plays songs from your music library in short bursts, interrupted by silence; your job is to press a button as fast as you can each time you notice the music has stopped. I also tried to check my email no more than three times a day, and at fixed points: 9.30am, 1.30pm and 5pm.

 

Disconcerting things began to happen. I'm embarrassed to report that I found myself doing what's referred to, in Pang's book, as "paper-tweeting": scribbling supposedly witty wisecracks in a notebook as a substitute for the urge to share them online. (At least I'd never had a problem with "sleep texting", which, at least according to a few dubious media reports, is now a thing among serious smartphone addicts.) I had a few minor attacks of phantom mobile phone vibrations, aka "ringxiety", which research suggests afflicts at least 70% of us. By far the biggest obstacle to my experiment was the fact that the web and email are simultaneously sources of distraction and a vital tool: it's no use blocking the internet to work when you need the internet for work. Still, the overall result was more calmness and a clear sense that I'd gained purchase on my own mind: I was using it more than it was using me. I could jump online to look something up and then – this is the crucial bit – jump off again. After a few 90-minute stretches of weblessness, for example, I found myself not itching to get back online, but bored by the prospect. I started engaging in highly atypical behaviours, such as going for a walk, instead.

Scooped by luiy

Google's Timelapse project shows how the Earth has changed over a quarter of a century

Google has expanded its mapping platform to launch a new project called Timelapse, taking you back through time to see how our planet has changed over the last 25 years. To create its new...
Rescooped by luiy from Big Data Analysis in the Clouds

Stephen Wolfram Adds Analytics to the Quantified-Self Movement | MIT Technology Review

The creator of the Wolfram Alpha search engine explains why he thinks your life should be measured, analyzed, and improved.

Via Pierre Levy
luiy's insight:

What do you see as the big applications in personal analytics?


Augmented memory is going to be very important. I’ve been spoiled because for years I’ve had the ability to search my e-mail and all my other records. I’ve been the CEO of the same company for 25 years, and so I never changed jobs and lost my data. That’s something that I think people will just come to expect. Pure memory augmentation is probably the first step.

 

The next is preëmptive information delivery. That means knowing enough about people’s history to know what they’re going to care about. Imagine someone is reading a newspaper article, and we know there is a person mentioned in it that they went to high school with, and so we can flag it. I think that’s the sort of thing it’s possible to dramatically automate and make more efficient.

 

Then there will be a certain segment of the population that will be into the self-improvement side of things, using analytics to learn about ourselves. Because we may have a vague sense about something, but when the pattern is explicit, we can decide, “Do we like that behavior, do we not?” Very early on, back in the 1990s, when I first analyzed my e-mail archive, I learned that a lot of e-mail threads at my company would, by a certain time of day, just resolve themselves. That was a useful thing to know, because if I jumped in too early I was just wasting my time.
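A hedged sketch of the kind of personal e-mail analysis described here (the input format and field names are assumptions): for each thread, take the hour of its last message to see by what time of day threads tend to resolve themselves.

```python
# Sketch: count, per hour of day, how many e-mail threads received their last message.
from collections import defaultdict
from datetime import datetime

# (thread_id, ISO timestamp) pairs, e.g. exported from a mail archive (assumed format)
messages = [
    ("budget", "2013-05-06T09:12:00"), ("budget", "2013-05-06T15:47:00"),
    ("launch", "2013-05-06T10:03:00"), ("launch", "2013-05-06T11:30:00"),
]

last_message = defaultdict(lambda: datetime.min)
for thread, ts in messages:
    t = datetime.fromisoformat(ts)
    if t > last_message[thread]:
        last_message[thread] = t

resolved_by_hour = defaultdict(int)
for t in last_message.values():
    resolved_by_hour[t.hour] += 1

for hour in sorted(resolved_by_hour):
    print(f"{hour:02d}:00  threads resolved: {resolved_by_hour[hour]}")
```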

Christophe CESETTI's curator insight, May 10, 2013 7:01 PM

Here is some more information about:

• Measuring Employee Happiness and efficiency http://pear.ly/b7_yL

• from a french article "Le recrutement et la productivité à l’heure des Big Data" http://ow.ly/kSILt

• Pearltree http://pear.ly/b7_lf

Scooped by luiy

IBM What is big data? - Bringing big data to the enterprise

Every day, we create 2.5 quintillion bytes of data – so much that 90% of the data in the world today has been created in the last two years alone.
luiy's insight:
What is big data?

Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals to name a few. This data is big data.
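For a rough sense of scale (an illustrative unit conversion, not a figure quoted by IBM), 2.5 quintillion bytes per day sustained over two years comes to roughly 1.8 zettabytes:

```python
# Rough unit conversion of the quoted rate (illustrative only).
bytes_per_day = 2.5e18                 # 2.5 quintillion bytes = 2.5 exabytes per day
days = 2 * 365                         # two years
total_bytes = bytes_per_day * days
print(f"~{total_bytes / 1e21:.1f} zettabytes over two years")   # ~1.8 ZB
```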

Jeff Hester's curator insight, May 10, 2013 2:39 PM

How big does the enterprise need to be for big data to be of value?

Rescooped by luiy from Papers

Exploring Default Mode and Information Flow on the Web

Social networking services (e.g., Twitter, Facebook) are now major sources of World Wide Web (called “Web”) dynamics, together with Web search services (e.g., Google). These two types of Web services mutually influence each other but generate different dynamics. In this paper, we distinguish two modes of Web dynamics: the reactive mode and the default mode. It is assumed that Twitter messages (called “tweets”) and Google search queries react to significant social movements and events, but they also demonstrate signs of becoming self-activated, thereby forming a baseline Web activity. We define the former as the reactive mode and the latter as the default mode of the Web. In this paper, we investigate these reactive and default modes of the Web's dynamics using transfer entropy (TE). The amount of information transferred between a time series of 1,000 frequent keywords in Twitter and the same keywords in Google queries is investigated across an 11-month time period.(...)

 

Oka M, Ikegami T (2013) Exploring Default Mode and Information Flow on the Web. PLoS ONE 8(4): e60398. http://dx.doi.org/10.1371/journal.pone.0060398


Via Complexity Digest
luiy's insight:
Discussion

This paper explored how to define the Web's reactive and default modes in terms of information transfer, computing TE to characterize the inherent structure of Web dynamics. First, we defined whether a keyword is in default or reactive mode in terms of how burst events are caused internally or externally. There are reports on YouTube page views and Twitter hashtags in which internally and externally caused bursts are distinguished by certain criteria [17], [19]. Our analysis of the number of bursts in relation to keyword frequency revealed that low-frequency keywords tend to burst more and are more influenced by real-world events than high-frequency keywords.

From this observation, we defined that high-frequency keywords form the Web's default mode network and low-frequency keywords constitute the Web's reactive mode. When analyzing the information transfer between Google and Twitter, we found that information is mostly transferred from Twitter to Google and that this tendency is more apparent for high-frequency keywords than for low-frequency keywords. We also studied the information flow network formed among Twitter keywords by taking the keywords as nodes and flow direction as the edges of a network. We found that high-frequency keywords tend to become information sources and low-frequency keywords tend to become information sinks. These findings suggest that we can use high-frequency keywords (or default mode of the Web) to reduce uncertainty with the externally driven low-frequency keywords (or reactive mode of the Web). However, it is fair to assume that frequently searched keywords in Google are different from the frequent keywords found on Twitter. Thus, if we investigated the high-frequency keywords found in Google queries, the results may be different.
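For readers unfamiliar with transfer entropy, here is a minimal sketch of TE(X → Y) with one-step histories on binary series; this is a simplification of the paper's setup (which uses 1,000 keyword time series from Twitter and Google), and the toy example, in which y is a one-step-delayed copy of x, shows the expected asymmetry in the information flow:

```python
# Minimal transfer entropy TE(X -> Y) with one-step histories, in bits (sketch).
from collections import Counter
import math
import random

def transfer_entropy(x, y):
    """TE from series x to series y, assuming history length 1."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))            # (y_next, y_past, x_past)
    n = len(triples)
    p_xyz = Counter(triples)
    p_yz = Counter((yn, yp) for yn, yp, _ in triples)      # (y_next, y_past)
    p_zx = Counter((yp, xp) for _, yp, xp in triples)      # (y_past, x_past)
    p_z = Counter(yp for _, yp, _ in triples)              # (y_past,)
    te = 0.0
    for (yn, yp, xp), c in p_xyz.items():
        p_joint = c / n
        p_cond_full = c / p_zx[(yp, xp)]                   # p(y_next | y_past, x_past)
        p_cond_self = p_yz[(yn, yp)] / p_z[yp]             # p(y_next | y_past)
        te += p_joint * math.log2(p_cond_full / p_cond_self)
    return te

# Toy example: y copies x with a one-step delay, so information flows from x to y.
random.seed(0)
x = [random.randint(0, 1) for _ in range(2000)]
y = [0] + x[:-1]
print("TE(x -> y):", round(transfer_entropy(x, y), 3))     # close to 1 bit
print("TE(y -> x):", round(transfer_entropy(y, x), 3))     # close to 0
```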

The concept of reactive and default modes originates from brain science [20]–[22]. A brain region responsible for a given task is identified by measuring the neural activity that is observably higher compared to the baseline activity. Raichle et al. [23] examined the baseline activity by analyzing the regions that become less active when a specific task is given. This successful approach uncovered some remarkable perspectives and characteristics of the default mode; based on Buckner's [22] and Raichle's [24] reviews, these are: i) the area associated with the default mode is found as the integration of various subsystems in the brain - the medial prefrontal cortex and posterior cingulate cortex subsystems seem to play central roles. ii) The neural activity of the aforementioned subsystems were observed as noisy fMRI signals at a low frequency of about 0.1 Hz or less, showing global synchronization. iii) The default mode is to do with spontaneous cognition e.g., day dreaming and internal thoughts such as future planning. iv) The activity of the default mode is anti-correlated with the other brain regions that are responsible for focusing attention on the external world; and v) the brain region associated with the default mode overlaps with those involved in the construction of episodic memory.

This notion of the default mode can be generalized for any living systems with or without brain systems. In the case of the Web system, it can be said that 1) frequent keywords constitute the default mode (mostly everyday keywords), 2) these frequent keywords display less frequent bursting behaviors and are an information source for other keywords, 3) the default mode may help reduce uncertainty in the entire Web system, and 4) the default mode comprises quasi-periodic time series. From this comparison with the default mode network in brain systems, and in particular with the possibility that high-frequency keywords may help to predict essentially unpredictable events, it becomes apparent the Web's default mode may have the same property as the default modes in the brain. Differentiating between these two modes, the reactive and the default, provides a useful perspective for understanding Web dynamics and predicting the future of bursting behavior in the time series of keyword frequencies in tweets in Twitter, as well as in the time series of search queries in Google. With respect to the examples of complex networks in general, we believe that the default mode is key for understanding autonomy in complex systems in general. Any autonomous system (e.g., robots) possesses primitive forms of the default mode with different time scales [25].

Scooped by luiy

"bots" y política en México

"bots" y política en México | e-Xploration | Scoop.it
REPORTE BERÚMEN/SINEMBARGO. Analysts: Luis Parra Meixueiro and Edmundo Berumen Osuna. Data: Berumen y Asociados. Twitter: @mundoBo and @luisparramei. Ciu
Scooped by luiy

#dataawareness : FTC warns data brokers on privacy rules

Federal officials have warned data brokerage firms they may be violating privacy rules in selling personal data.
luiy's insight:

The FTC last year urged Congress to pass a law forcing the industry to disclose its practices, and in December FTC officials ordered nine data brokers to explain how they assemble and sell profiles of individual consumers. Sen. John D. Rockefeller IV (D-W.Va.) has also pushed for more information about the industry.

 

The warning letters issued over the past week started with a broader inquiry by FTC officials into 45 data brokers that appeared to market information whose use is restricted by the Fair Credit Reporting Act.

Agency staffers contacted the companies by phone and e-mail, posing as interested potential customers, to see if the employees of the firms were following federal rules. The warning letters noted possible violations in this initial screening and left open the possibility of future regulatory action should the companies fail to make changes.

 

“This should help raise awareness,” said Laura Berger, an attorney with the Bureau of Consumer Protection. She declined to comment on whether these or other companies are facing full investigations by the FTC.

Two of the companies that received warning letters appeared to offer “pre-screened” lists of potential customers for credit offers, FTC officials said, and two others appeared to offer information that could affect insurance eligibility. Six marketed information that could be used to make employment decisions.

 

The companies that received warning letters were 4Nannies, Brokers Data, Case Breakers, ConsumerBase, Crimcheck.com, People Search Now, U.S. Information Search, US Data Corporation, and USA People Search. A 10th company was not named because receipt of the FTC letter had not been confirmed.
