e-Xploration
antropologiaNet, dataviz, collective intelligence, algorithms, social learning, social change, digital humanities
Curated by luiy
Scooped by luiy
#privacy : FTC Warns 10 Data Brokers on Privacy Gaffes #dataawareness

Posing as potential customers, agency staffers turn up improprieties in selling insurance, credit, and employment information.
luiy's insight:

The Federal Trade Commission today reported that it had sent letters to 10 companies it identified as data brokers, warning that their practices might constitute consumer privacy violations under the Fair Credit Reporting Act. Such letters are not official notices of complaints being filed, but instead urge recipients to review whether their practices are in compliance with the law.

Questionable practices surfaced at the 10 companies during a test shopping review of 45 businesses by FTC staffers, who posed as individuals or business representatives seeking information about consumers related to creditworthiness, eligibility for insurance, or suitability for employment.

ConsumerBase and an unnamed company were warned for appearing to offer lists of consumers “pre-screened” for credit approvals. Brokers Data and US Data Corporation were called out for seeming to promise information useful for making insurance decisions about individuals. Crimcheck.com, 4Nannies, U.S. Information Search, People Search Now, Case Breakers, and USA People Search were warned for appearing to offer consumer information for employment decisions.

The FTC issued the letters this week as part of its involvement in an international privacy sweep conducted by the Global Privacy Enforcement Network, which encourages cross-border enforcement of privacy laws by connecting privacy enforcement authorities.

In recent years the FTC has sued Equifax, Experian, and TransUnion—the three leading credit reporting agencies—and obtained nearly $3 million in civil penalties. The FTC also won a $15 million judgment against ChoicePoint for not screening prospective subscribers before selling them sensitive consumer information.

Rescooped by luiy from Collective Intelligence & Distance Learning

Desire2Learn’s New Learning Suite Aims To Predict Success, Change How Students Navigate Their Academic Career


To do this, Desire2Learn wants to bring predictive analytics into play in education. But why? Well, first and foremost because, today, if students want to figure out whether a course is right for them — or how well they might perform in that course — they’re hard pressed to find a good answer. They can ask fellow students, check websites that rank faculty based on nebulous criteria or try to find surveys, but none of these options are ideal.

 

With its new analytics engine, Desire2Learn aims to change that by giving students the ability to predict their success in a particular course based on what they’ve studied in the past and how they performed in those classes. The new, so-called “Student Success System” was built (in part) from the technology it acquired from Degree Compass; however, while Degree Compass used predictive analytics to help students optimize their course selection, the new product aims to help both sides of the learning equation: students and teachers.

 

On the teacher side, Desire2Learn’s new analytics engine allows them to view predictive data visualizations that compare student performance against their peers so that they can identify at-risk students, for example, and monitor a student’s progress over time.

 

The idea is to give teachers access to important insight on things like class dynamics and learning trends, which they can then combine with assessment data to improve their instruction or adapt to the way individual students learn. In theory, this leads not only to higher engagement but also to better outcomes.
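To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of prediction involved: a logistic regression that estimates a pass probability from a few prior-performance features and flags low-probability students as at risk. The features, numbers, and the 0.5 threshold are assumptions for illustration, not Desire2Learn's actual model.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [prior GPA, average weekly logins, assignments submitted (%)]
# and whether the student ultimately passed the course. Purely illustrative.
X = np.array([[3.6, 5, 95], [2.1, 1, 40], [3.0, 4, 80], [1.8, 0, 30],
              [2.8, 3, 70], [3.9, 6, 98], [2.4, 2, 55], [1.5, 1, 20]])
y = np.array([1, 0, 1, 0, 1, 1, 0, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Flag currently enrolled students whose predicted pass probability is low.
current = np.array([[2.2, 1, 45], [3.4, 5, 90]])
prob_pass = model.predict_proba(current)[:, 1]
for features, p in zip(current, prob_pass):
    status = "at risk" if p < 0.5 else "on track"
    print(features, "-> pass probability %.2f (%s)" % (p, status))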


Via Huey O'Brien
luiy's insight:

Essentially, the tool allows students to move their academic resume to the cloud so they can take it with them after they graduate, which the company is incentivizing by offering 2GB of free storage.

 

Basically, what we’ve come to realize, the Desire2Learn CEO tells me, is that the company’s initial approach to business (or academic) intelligence was off track. “Students and teachers don’t necessarily want more data, they want more insight and they want that data broken out in a way that they can understand and helps them more quickly visualize the learning map,” he says.

 

When I asked if building and adding more and more tools and features would dilute the experience and result in feature overload, Baker said that the company doesn’t want to build a million different tools. Instead, it wants to become a platform that supports a million tools and allows third-parties that specialize in particular areas of education to help develop better products.

 

Through open-sourcing its APIs, Desire2Learn, along with Edmodo and an increasing number of education startups, is beginning to tap into the potential inherent in the creation of a real ecosystem. Adding predictive analytics tools gives Desire2Learn another carrot with which it hopes to draw teachers, students, and development partners into its ecosystem.

Rescooped by luiy from Mindful Decision Making

Illusory Correlations: When The Mind Makes Connections That Don’t Exist

Why do CEOs who excel at golf get paid more, despite poorer stock market performance?

Via Philippe Vallat
luiy's insight:

To see how easily the mind jumps to the wrong conclusions, try virtually taking part in a little experiment...

 

...imagine that you are presented with information about two groups of people about which you know nothing. Let's call them the Azaleans and the Begonians.

 

For each group you are given a list of positive and negative behaviours. A good one might be: an Azalean was seen helping an old lady across the road. A bad one might be: a Begonian urinated in the street.

So, you read this list of good and bad behaviours about the Azaleans and Begonians and afterwards you make some judgements about them. How often do they perform good and bad behaviours and what are they?

What you notice is that it's the Begonians that seem dodgy. They are the ones more often to be found shoving burgers into mailboxes and ringing doorbells and running away. The Azaleans, in contrast, are a sounder bunch; certainly not blameless, but overall better people.

 

While you're happy with the judgement, you're in for a shock. What's revealed to you afterwards is that actually the ratio of good to bad behaviours listed for both the Azaleans and Begonians was exactly the same. For the Azaleans 18 positive behaviours were listed along with 8 negative. For the Begonians it was 9 positive and 4 negative: the same 9-to-4 proportion, just half as much information.

In reality you just had less information about the Begonians. What happened was that you built up an illusory connection between more frequent bad behaviours and the Begonians; they weren't more frequent, however, they just seemed that way.

When the experiment is over you find out that most other people had done exactly the same thing, concluding that the Begonians were worse people than the Azaleans.

Scooped by luiy

Modularity and community structure in networks

luiy's insight:

Many networks of interest in the sciences, including a variety of social and biological networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure has attracted considerable recent attention. One of the most sensitive detection methods is optimization of the quality function known as “modularity” over the possible divisions of a network, but direct application of this method using, for instance, simulated annealing is computationally costly. Here we show that the modularity can be reformulated in terms of the eigenvectors of a new characteristic matrix for the network, which we call the modularity matrix, and that this reformulation leads to a spectral algorithm for community detection that returns results of better quality than competing methods in noticeably shorter running times. We demonstrate the algorithm with applications to several network data sets.
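As a rough illustration of the leading-eigenvector idea described in the abstract, the sketch below builds the modularity matrix B = A - k k^T / (2m) for a small example graph and splits the nodes by the sign of B's leading eigenvector. The karate-club graph is only a stand-in dataset, not one of the paper's networks.

import numpy as np
import networkx as nx

# Build the modularity matrix B = A - k k^T / (2m) for an example graph.
G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
k = A.sum(axis=1)                 # degree vector
m = k.sum() / 2.0                 # number of edges
B = A - np.outer(k, k) / (2.0 * m)

# Leading eigenvector of B: split nodes by the sign of its entries.
eigvals, eigvecs = np.linalg.eigh(B)
leading = eigvecs[:, np.argmax(eigvals)]
s = np.where(leading >= 0, 1, -1)

# Modularity Q = (1 / 4m) * s^T B s for this two-way split.
Q = float(s @ B @ s) / (4.0 * m)
print("community sizes:", int((s == 1).sum()), int((s == -1).sum()))
print("modularity of the split: %.3f" % Q)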

 

 

Example applications


In practice, the algorithm developed here gives excellent results. For a quantitative comparison between our algorithm and others we follow Duch and Arenas [19] and compare values of the modularity for a variety of networks drawn from the literature. Results are shown in Table I for six different networks—the exact same six as used by Duch and Arenas. We compare modularity figures against three previously published algorithms: the betweenness-based algorithm of Girvan and Newman [10], which is widely used and has been incorporated into some of the more popular network analysis programs (denoted GN in the table); the fast algorithm of Clauset et al. [26] (CNM), which optimizes modularity using a greedy algorithm; and the extremal optimization algorithm of Duch and Arenas [19] (DA), which is arguably the best previously existing method, by standard measures, if one discounts methods impractical for large networks, such as exhaustive enumeration of all partitions or simulated annealing.

Rescooped by luiy from Papers

Network modularity promotes cooperation

Cooperation in animals and humans is widely observed even if evolutionary biology theories predict the evolution of selfish individuals. Previous game theory models have shown that cooperation can evolve when the game takes place in a structured population such as a social network because it limits interactions between individuals. Modularity, the natural division of a network into groups, is a key characteristic of all social networks but the influence of this crucial social feature on the evolution of cooperation has never been investigated. Here, we provide novel pieces of evidence that network modularity promotes the evolution of cooperation in 2-person prisoner's dilemma games. By simulating games on social networks of different structures, we show that modularity shapes interactions between individuals favouring the evolution of cooperation. Modularity provides a simple mechanism for the evolution of cooperation without having to invoke complicated mechanisms such as reputation or punishment, or requiring genetic similarity among individuals. Thus, cooperation can evolve over wider social contexts than previously reported.
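A minimal sketch of the kind of simulation the abstract describes: agents on a strongly modular graph and on an otherwise similar random graph repeatedly play a donation-game prisoner's dilemma with their neighbours and then imitate their best-scoring neighbour. All parameter values (graph sizes, benefit and cost, number of generations) are illustrative assumptions, not the paper's.

import random
import networkx as nx

def play_generation(G, strategy, b=1.5, c=1.0):
    """One generation of a donation-game PD: cooperators pay c to give b to
    each neighbour; afterwards every node copies its highest-scoring neighbour."""
    payoff = {n: 0.0 for n in G}
    for u, v in G.edges():
        if strategy[u]:
            payoff[u] -= c
            payoff[v] += b
        if strategy[v]:
            payoff[v] -= c
            payoff[u] += b
    new_strategy = {}
    for n in G:
        best = max(list(G[n]) + [n], key=lambda node: payoff[node])
        new_strategy[n] = strategy[best]
    return new_strategy

def cooperation_level(G, generations=50):
    strategy = {n: random.random() < 0.5 for n in G}   # True = cooperate
    for _ in range(generations):
        strategy = play_generation(G, strategy)
    return sum(strategy.values()) / len(strategy)

random.seed(1)
modular = nx.planted_partition_graph(4, 25, 0.4, 0.02, seed=1)  # strong modules
mixed   = nx.gnp_random_graph(100, 0.11, seed=1)                # similar density, no modules
print("cooperation on modular graph:", cooperation_level(modular))
print("cooperation on random graph: ", cooperation_level(mixed))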

 

Network modularity promotes cooperation
Marianne Marcoux, David Lusseau

Journal of Theoretical Biology
Volume 324, 7 May 2013, Pages 103–108

http://dx.doi.org/10.1016/j.jtbi.2012.12.012


Via Complexity Digest
Complexity Digest's curator insight, May 2, 2013 6:26 PM

Modularity is prevalent in natural and artificial systems. A modular structure reduces the probability that "damage" or "perturbations" will spread through a network. More at:

Modular Random Boolean Networks
Rodrigo Poblanno-Balp and Carlos Gershenson
Artificial Life 2011 17:4, 331-351

http://dx.doi.org/10.1162/artl_a_00042

Jose Santos's curator insight, May 26, 2013 11:46 AM

why modularity is everywhere in our lives...

 

Scooped by luiy

Catalanes, inmigrantes y charnegos: "raza", "cultura" y "mezcla" en el discurso nacionalista catalán

This paper focuses on the manner in which 'race' and 'culture' are used in nationalist rhetorics, paying special attention to the presence or absence of ideas of biological or cultural 'mixture' employed in order to define socio-political identity...
luiy's insight:

This article reflects on the use of "race" and "culture" in nationalist discourses, paying attention to the presence or absence of the idea of biological or cultural "mixture" as a basis for defining the socio-political identity of the descendants of parents of different nationalities. This idea is analysed in a specific ethnographic context, Catalan nationalism, a type of nationalism described as "civic" because it defines Catalan identity primarily on cultural criteria. The article contrasts this nationalist rhetoric with a form of xenophobic ideology that developed briefly in Catalonia against the descendants of mixed marriages between Catalans and immigrants from other regions of Spain during the 1960s and 1970s. Through this case it seeks to show the constructed and shifting character of discourses of social classification depending on context, which can oscillate between and/or combine biological and cultural principles. It is suggested that this would be important to bear in mind in the current context of economic crisis, especially in relation to the non-European immigrants and their descendants now living in Catalonia.

Scooped by luiy

Evidence for a Collective Intelligence Factor in the Performance of Human Groups

Scooped by luiy

Social learning dilemma

Last week, my father sent me a link to the 100 top-ranked specialties in the sciences and social sciences. The Web of Knowledge report considered 10 broad areas[1] of natural and social science, an...
luiy's insight:

Although we can trace the study of evolution of cooperation to Peter Kropotkin, the modern treatment — especially via agent-based modeling — was driven by the innovative thoughts of Robert Axelrod. Axelrod & Hamilton (1981) ran a computer tournament where other researchers submitted strategies for playing the iterated prisoners’ dilemma. The clarity of their presentation, and the surprising effectiveness of an extremely simple tit-for-tat strategy motivated much of the current work on cooperation. True to their subject matter, Rendell et al. (2010) imitated Axelrod and ran their own computer tournament of social learning strategies, offering 10,000 euros for the best submission. By cosmic coincidence, the prize went to students of cooperation: Daniel Cownden and Tim Lillicrap, two graduate students at Queen’s University, the former a student of mathematician and notable inclusive-fitness theorist Peter Taylor.
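For readers unfamiliar with the tournament set-up, here is a minimal sketch of the iterated prisoner's dilemma with tit-for-tat: cooperate on the first round, then copy the opponent's previous move. The payoff values (5/3/1/0) are the conventional tournament values, assumed here rather than taken from this post.

# Payoffs (row player, column player) for cooperate (C) / defect (D).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history_self, history_other):
    return "C" if not history_other else history_other[-1]

def always_defect(history_self, history_other):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(ha, hb), strategy_b(hb, ha)
        pa, pb = PAYOFF[(a, b)]
        ha.append(a)
        hb.append(b)
        score_a += pa
        score_b += pb
    return score_a, score_b

print("TFT vs TFT:     ", play(tit_for_tat, tit_for_tat))
print("TFT vs defector:", play(tit_for_tat, always_defect))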

 

A restless multi-armed bandit served as the learning environment. The agent could select which of 100 arms to pull in order to receive a payoff drawn independently (for each arm) from an exponential distribution. It was made “restless” by changing the payoff after each pull with probability . A dynamic environment was chosen because copying outdated information is believed to be a central weakness of social learning, and because Papadimitriou & Tsitsiklis (1999) showed that solving this bandit (finding an optimal policy) is PSPACE-complete[3], or in layman's terms: very intractable.
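The environment can be sketched roughly as follows: 100 arms, payoffs drawn independently from an exponential distribution, and "restlessness" modelled as each arm's payoff being redrawn after every pull with some probability. The post omits the change probability, so the 0.05 below is only a placeholder, and the simple explore-then-exploit learner is just a stand-in for an asocial-learning baseline, not any tournament entry.

import random

class RestlessBandit:
    """100-armed bandit with exponential payoffs; after every pull each arm's
    payoff may be redrawn (the 'restless' part). The change probability below
    is a placeholder -- the post does not give the actual value."""
    def __init__(self, n_arms=100, p_change=0.05, scale=1.0, seed=None):
        self.rng = random.Random(seed)
        self.p_change = p_change
        self.scale = scale
        self.payoffs = [self.rng.expovariate(1.0 / scale) for _ in range(n_arms)]

    def pull(self, arm):
        reward = self.payoffs[arm]
        # Environment drifts: each arm independently redraws its payoff.
        for i in range(len(self.payoffs)):
            if self.rng.random() < self.p_change:
                self.payoffs[i] = self.rng.expovariate(1.0 / self.scale)
        return reward

# Crude asocial learner: occasionally sample a random arm, otherwise exploit the
# best payoff seen so far -- which can become stale as the environment drifts.
bandit = RestlessBandit(seed=42)
best_arm, best_reward, total = 0, 0.0, 0.0
for t in range(500):
    arm = bandit.rng.randrange(100) if t % 10 == 0 else best_arm
    r = bandit.pull(arm)
    total += r
    if r > best_reward:
        best_arm, best_reward = arm, r
print("average payoff per round: %.2f" % (total / 500))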

Scooped by luiy

Stephen Wolfram Adds Analytics to the Quantified-Self Movement | MIT Technology Review

The creator of the Wolfram Alpha search engine explains why he thinks your life should be measured, analyzed, and improved.
luiy's insight:

What do you see as the big applications in personal analytics?

Augmented memory is going to be very important. I’ve been spoiled because for years I’ve had the ability to search my e-mail and all my other records. I’ve been the CEO of the same company for 25 years, and so I never changed jobs and lost my data. That’s something that I think people will just come to expect. Pure memory augmentation is probably the first step.

 

The next is preëmptive information delivery. That means knowing enough about people’s history to know what they’re going to care about. Imagine someone is reading a newspaper article, and we know there is a person mentioned in it that they went to high school with, and so we can flag it. I think that’s the sort of thing it’s possible to dramatically automate and make more efficient.

 

Then there will be a certain segment of the population that will be into the self-improvement side of things, using analytics to learn about ourselves. Because we may have a vague sense about something, but when the pattern is explicit, we can decide, “Do we like that behavior, do we not?” Very early on, back in the 1990s, when I first analyzed my e-mail archive, I learned that a lot of e-mail threads at my company would, by a certain time of day, just resolve themselves. That was a useful thing to know, because if I jumped in too early I was just wasting my time.
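Wolfram's thread-resolution observation is a good example of how simple personal analytics can be: group messages by thread, take each thread's last timestamp, and histogram the hour of day at which threads go quiet. The sketch below assumes a generic (thread_id, timestamp) mailbox export; the data and field names are made up for illustration and have nothing to do with Wolfram's own tooling.

from collections import Counter, defaultdict
from datetime import datetime

# Assume a mailbox export as (thread_id, ISO timestamp) pairs -- purely illustrative.
messages = [
    ("budget-2013", "2013-05-02T09:14:00"), ("budget-2013", "2013-05-02T16:40:00"),
    ("offsite",     "2013-05-03T08:05:00"), ("offsite",     "2013-05-03T17:55:00"),
    ("hiring",      "2013-05-03T11:20:00"), ("hiring",      "2013-05-03T15:02:00"),
]

last_message = defaultdict(lambda: datetime.min)
for thread, stamp in messages:
    t = datetime.fromisoformat(stamp)
    if t > last_message[thread]:
        last_message[thread] = t

# Hour of day at which each thread went quiet.
by_hour = Counter(t.hour for t in last_message.values())
for hour in sorted(by_hour):
    print("%02d:00  %s" % (hour, "#" * by_hour[hour]))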

Scooped by luiy

2012 Social Network Analysis Report – Demographic – Geographic and Search Data Revealed | Ignite Social Media

luiy's insight:

Reporting is the same as last year - most sites' search stats were pulled by querying just their name, for example "Twitter" instead of "Twitter.com". However, with that said, certain networks such as Tribe.net still needed to use the name.com variation, since people looking for tribe could be looking for a myriad of things, thus corrupting the data set.

All data continues to come from Google because they have one of the largest data sets on the web. We continued to use their Google Ad Planner and Google Insights for Search products to pull demographic and geographic data.

 

The Top Cities and Top Region reports show proportionate interest levels to the area based on the given search query.

 

The Demographic and Geographic reports have Y-axis values expressed as fractions of 1; if the score is .52, that corresponds to 52% of the population.

 

The Search Traffic reports are based on proportionate search traffic for the given query. It is on a scale of 100. Therefore if a given month shows the chart near 100, then that is the busiest month for query searches ever reported in Google during that given time frame.
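The search-traffic scale described above is just a max-normalisation: every monthly value is expressed relative to the busiest month, which is set to 100. A tiny sketch with made-up counts (the report's real data comes from Google's tools and is not reproduced here):

# Made-up monthly query counts for one network, purely for illustration.
monthly_counts = {"2012-01": 820, "2012-02": 910, "2012-03": 1340,
                  "2012-04": 1100, "2012-05": 990}

peak = max(monthly_counts.values())
index = {month: round(100 * count / peak) for month, count in monthly_counts.items()}
print(index)   # the busiest month (2012-03) scores 100, the rest proportionally less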

Scooped by luiy

The Overview Project. Visualization to connect The dots.

Integrated search. This is high on the list of user requests. Overview will shortly have a full-text search field: type in your search word or phrase, and the system will highlight all the documents in the tree that contain that term, and load those documents into the document list. If you find something interesting, you can turn your search results into a tag with one click.

New document list. It hasn't always been obvious that the lower left panel of the main screen is a list of documents, showing all the documents in the currently selected folder or tag. Nor has it been clear how to go through the documents in this list quickly (at the moment, the fastest way is to use the j and k keys). We're going to have a much more descriptive list of documents, including title, text snippet, and suggested tags (see below) for each document in the list.

Suggested tags. Currently, each folder and document is described by a list of characteristic words, the words that make that folder or document different from all the others. We're expanding this concept into a list of suggested tags, with a super fast way to apply them to a document, or all documents in the list.
Rescooped by luiy from Collaboration

We need to learn how to connect


But if you want to prepare people not just for the next job, but for the one after that, you need to help them think through the relationships they have and what they learn from the people around them. Understanding people isn’t just an HR skill for managers.


Via Kenneth Mikkelsen
David Hain's curator insight, March 3, 2013 7:38 AM

The usual challenging but profound perspective from Harold Jarche. Recommended!

Rescooped by luiy from Psychology and Social Networking

Conscious computing: how to take control of your life online

Twitter, Facebook, Google… we know the internet is driving us to distraction. But could sitting at your computer actually calm you down? Oliver Burkeman investigates the slow web movement

Via Aaron Balick
luiy's insight:

In March, I spent a week trying to live as faithfully as possible in accordance with the philosophy of calming (or conscious or contemplative) computing. At home, I stopped using my Nexus smartphone as a timepiece – I wore a watch instead – to prevent the otherwise inevitable slide from checking the time, or silencing the alarm, into checking my email, my Twitter feed or Wikipedia's List Of Unusual Deaths. After a couple of days, I disabled the Gmail and Twitter apps completely, and stored my phone in my bag while I worked, frequently forgetting it for hours at a time. At work, I shut off the internet in 90-minute slabs using Mac Freedom, the "internet blocking productivity software" championed by such writerly big shots as Zadie Smith and the late Nora Ephron. ("Freedom enforces freedom," its website explains chillingly.) Most mornings, I also managed 10 minutes with ReWire, a concentration-enhancing meditation app for the iPad that plays songs from your music library in short bursts, interrupted by silence; your job is to press a button as fast as you can each time you notice the music has stopped. I also tried to check my email no more than three times a day, and at fixed points: 9.30am, 1.30pm and 5pm.

 

Disconcerting things began to happen. I'm embarrassed to report that I found myself doing what's referred to, in Pang's book, as "paper-tweeting": scribbling supposedly witty wisecracks in a notebook as a substitute for the urge to share them online. (At least I'd never had a problem with "sleep texting", which, at least according to a few dubious media reports, is now a thing among serious smartphone addicts.) I had a few minor attacks of phantom mobile phone vibrations, aka "ringxiety", which research suggests afflicts at least 70% of us. By far the biggest obstacle to my experiment was the fact that the web and email are simultaneously sources of distraction and a vital tool: it's no use blocking the internet to work when you need the internet for work. Still, the overall result was more calmness and a clear sense that I'd gained purchase on my own mind: I was using it more than it was using me. I could jump online to look something up and then – this is the crucial bit – jump off again. After a few 90-minute stretches of weblessness, for example, I found myself not itching to get back online, but bored by the prospect. I started engaging in highly atypical behaviours, such as going for a walk, instead.

Scooped by luiy

Google's Timelapse project shows how the Earth has changed over a quarter of a century

Google has expanded its mapping platform to launch a new project called Timelapse, taking you back through time to see how our planet has changed over the last 25 years. To create its new...
Rescooped by luiy from Big Data Analysis in the Clouds

Stephen Wolfram Adds Analytics to the Quantified-Self Movement | MIT Technology Review

The creator of the Wolfram Alpha search engine explains why he thinks your life should be measured, analyzed, and improved.

Via Pierre Levy
Christophe CESETTI's curator insight, May 10, 2013 7:01 PM

here some more information about

• Measuring Employee Happiness and efficiency http://pear.ly/b7_yL

• from a French article, "Le recrutement et la productivité à l’heure des Big Data" (recruitment and productivity in the age of Big Data) http://ow.ly/kSILt

• Pearltree http://pear.ly/b7_lf

Scooped by luiy

IBM What is big data? - Bringing big data to the enterprise

Every day, we create 2.5 quintillion bytes of data - so much that 90% of the data in the world today has been created in the last two years alone.
luiy's insight:
What is big data?

Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals to name a few. This data is big data.

Jeff Hester's curator insight, May 10, 2013 2:39 PM

How big does the enterprise need to be for big data to be of value?

Rescooped by luiy from Papers

Exploring Default Mode and Information Flow on the Web

Social networking services (e.g., Twitter, Facebook) are now major sources of World Wide Web (called “Web”) dynamics, together with Web search services (e.g., Google). These two types of Web services mutually influence each other but generate different dynamics. In this paper, we distinguish two modes of Web dynamics: the reactive mode and the default mode. It is assumed that Twitter messages (called “tweets”) and Google search queries react to significant social movements and events, but they also demonstrate signs of becoming self-activated, thereby forming a baseline Web activity. We define the former as the reactive mode and the latter as the default mode of the Web. In this paper, we investigate these reactive and default modes of the Web's dynamics using transfer entropy (TE). The amount of information transferred between a time series of 1,000 frequent keywords in Twitter and the same keywords in Google queries is investigated across an 11-month time period.(...)

 

Oka M, Ikegami T (2013) Exploring Default Mode and Information Flow on the Web. PLoS ONE 8(4): e60398. http://dx.doi.org/10.1371/journal.pone.0060398


Via Complexity Digest
luiy's insight:
Discussion

This paper explored how to define the Web's reactive and default modes in terms of information transfer, computing TE to characterize the inherent structure of Web dynamics. First, we defined whether a keyword is in default or reactive mode in terms of how burst events are caused internally or externally. There are reports on YouTube page views and Twitter hashtags, whereby internally and externally caused bursts are distinguished by certain criteria [17], [19]. Our analysis of the number of bursts in relation to keyword frequency revealed that low-frequency keywords tend to burst more and are more influenced by real-world events than high-frequency keywords.

From this observation, we defined that high-frequency keywords form the Web's default mode network and low-frequency keywords constitute the Web's reactive mode. When analyzing the information transfer between Google and Twitter, we found that information is mostly transferred from Twitter to Google and that this tendency is more apparent for high-frequency keywords than for low-frequency keywords. We also studied the information flow network formed among Twitter keywords by taking the keywords as nodes and flow direction as the edges of a network. We found that high-frequency keywords tend to become information sources and low-frequency keywords tend to become information sinks. These findings suggest that we can use high-frequency keywords (or default mode of the Web) to reduce uncertainty with the externally driven low-frequency keywords (or reactive mode of the Web). However, it is fair to assume that frequently searched keywords in Google are different from the frequent keywords found on Twitter. Thus, if we investigated the high-frequency keywords found in Google queries, the results may be different.
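Transfer entropy from a source series X to a target series Y measures how much X's past reduces uncertainty about Y's next value beyond what Y's own past already tells us: TE(X -> Y) = sum over (y_next, y, x) of p(y_next, y, x) * log[ p(y_next | y, x) / p(y_next | y) ]. The sketch below is a naive plug-in estimator for binarised series with lag 1; the binning, lag, and toy data are simplifying assumptions and do not reproduce the paper's estimator.

from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """Naive plug-in estimate of TE(source -> target) for discrete series, lag 1."""
    triples = list(zip(target[1:], target[:-1], source[:-1]))  # (y_next, y_prev, x_prev)
    n = len(triples)
    c_full = Counter(triples)
    c_yy = Counter((yn, yp) for yn, yp, _ in triples)          # (y_next, y_prev)
    c_y = Counter(yp for _, yp, _ in triples)                  # y_prev
    c_yx = Counter((yp, xp) for _, yp, xp in triples)          # (y_prev, x_prev)
    te = 0.0
    for (yn, yp, xp), count in c_full.items():
        p_cond_full = count / c_yx[(yp, xp)]        # p(y_next | y_prev, x_prev)
        p_cond_self = c_yy[(yn, yp)] / c_y[yp]      # p(y_next | y_prev)
        te += (count / n) * log2(p_cond_full / p_cond_self)
    return te

# Toy series: 'google' roughly follows 'twitter' with a one-step delay.
twitter = [1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0]
google = [0] + twitter[:-1]
print("TE(twitter -> google): %.3f bits" % transfer_entropy(twitter, google))
print("TE(google -> twitter): %.3f bits" % transfer_entropy(google, twitter))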

The concept of reactive and default modes originates from brain science [20]–[22]. A brain region responsible for a given task is identified by measuring the neural activity that is observably higher compared to the baseline activity. Raichle et al. [23] examined the baseline activity by analyzing the regions that become less active when a specific task is given. This successful approach uncovered some remarkable perspectives and characteristics of the default mode; based on Buckner's [22] and Raichle's [24] reviews, these are: i) the area associated with the default mode is found as the integration of various subsystems in the brain - the medial prefrontal cortex and posterior cingulate cortex subsystems seem to play central roles. ii) The neural activity of the aforementioned subsystems were observed as noisy fMRI signals at a low frequency of about 0.1 Hz or less, showing global synchronization. iii) The default mode is to do with spontaneous cognition e.g., day dreaming and internal thoughts such as future planning. iv) The activity of the default mode is anti-correlated with the other brain regions that are responsible for focusing attention on the external world; and v) the brain region associated with the default mode overlaps with those involved in the construction of episodic memory.

This notion of the default mode can be generalized for any living systems with or without brain systems. In the case of the Web system, it can be said that 1) frequent keywords constitute the default mode (mostly everyday keywords), 2) these frequent keywords display less frequent bursting behaviors and are an information source for other keywords, 3) the default mode may help reduce uncertainty in the entire Web system, and 4) the default mode comprises quasi-periodic time series. From this comparison with the default mode network in brain systems, and in particular with the possibility that high-frequency keywords may help to predict essentially unpredictable events, it becomes apparent the Web's default mode may have the same property as the default modes in the brain. Differentiating between these two modes, the reactive and the default, provides a useful perspective for understanding Web dynamics and predicting the future of bursting behavior in the time series of keyword frequencies in tweets in Twitter, as well as in the time series of search queries in Google. With respect to the examples of complex networks in general, we believe that the default mode is key for understanding autonomy in complex systems in general. Any autonomous system (e.g., robots) possesses primitive forms of the default mode with different time scales [25].

Scooped by luiy

"bots" y política en México

"bots" y política en México | e-Xploration | Scoop.it
BERUMEN/SINEMBARGO REPORT. Analysts: Luis Parra Meixueiro and Edmundo Berumen Osuna. Data: Berumen y Asociados. Twitter: @mundoBo and @luisparramei Ciu
Scooped by luiy

#dataawareness : FTC warns data brokers on privacy rules

Federal officials have warned data brokerage firms they may be violating privacy rules in selling personal data.
luiy's insight:

The FTC last year urged Congress to pass a law forcing the industry to disclose its practices, and in December FTC officials ordered nine data brokers to explain how they assemble and sell profiles of individual consumers. Sen. John D. Rockefeller IV (D-W.Va.) also has pushed for more information about the industry.

 

The warning letters issued over the past week started with a broader inquiry by FTC officials into 45 data brokers that appeared to market information whose use is restricted by the Fair Credit Reporting Act.

Agency staffers contacted the companies by phone and e-mail, posing as interested potential customers, to see if the employees of the firms were following federal rules. The warning letters noted possible violations in this initial screening and left open the possibility of future regulatory action should the companies fail to make changes.

 

“This should help raise awareness,” said Laura Berger, an attorney with the Bureau of Consumer Protection. She declined to comment on whether these or other companies are facing full investigations by the FTC.

Two of the companies that received warning letters appeared to offer “pre-screened” lists of potential customers for credit offers, FTC officials said, and two others appeared to offer information that could affect insurance eligibility. Six marketed information that could be used to make employment decisions.

 

The companies that received warning letters were 4Nannies, Brokers Data, Case Breakers, ConsumerBase, Crimcheck.com, People Search Now, U.S. Information Search, US Data Corporation and USA People Search. A 10th company was not named because receipt of the FTC letter had not been confirmed.

Scooped by luiy

Robert Axelrod's Home Page

luiy's insight:

Complexity

Demonstration of the Game of Life
Center for the Study of Complex Systems at the University of Michigan
Cohen, Axelrod, and Riolo (CAR) project on agent-based models
Santa Fe Institute

Modeling in the Social Sciences

Guide for Newcomers to Agent-Based Modeling in the Social Sciences

 

Computational Economics (including all aspects of agent-based modeling in the social sciences)
The Modeling Program at the University of Michigan's Department of Political Science
Complexity of Cooperation (including Chapter 4 on Landscape theory and WW II data)

Cooperation Theory

Source code for the rules of the second round of the Computer Tournament for the Prisoner's Dilemma
Annotated Bibliography on the Evolution of Cooperation (through 1994)
"Twenty Years on: The Evolution of Cooperation Revisited" by Robert Hoffman, 2000

 

Methodology

Best of the Web - Methodology

 

Public Policy

The Gerald R. Ford School of Public Policy at the University of Michigan
Scooped by luiy

‘Going Dark’ Versus a ‘Golden Age for Surveillance’ | Center for Democracy & Technology

luiy's insight:

Choosing between “going dark” and “a golden age for surveillance”

This post argues that the big picture for agency access to data is mostly “golden.” The loss of agency access to information, due to encryption, is more than offset by surveillance gains from computing and communications technology. In addition, government encryption regulation harms cybersecurity. These conclusions will not be easily accepted by investigatory agencies, however, so it is important to work through the analysis in more detail.

 

Communications that were previously subject to wiretap may now be shrouded in encryption. In place of the old monopoly telephone network, agencies have to contend with a confusing variety of communications providers, some of which have little experience in complying with legal process. It is no wonder agency officials strenuously object to the use of new technology that hinders their ability to employ traditional surveillance methods.

 

Implementing wiretaps and reading the plaintext of communications are not the only goals, however. The computing and communications infrastructure are vital to economic growth, individual creativity, government operations, and numerous other goals. If there is a modest harm to investigatory agencies and an enormous gain, societies should choose the enormous gain. In 1999 the U.S. government concluded that strong encryption was precisely that sort of valuable technology – it was worth going at least slightly “dark” in order to reap the many benefits of effective encryption. Not even the attacks of September 11, 2001 changed that judgment.

The evidence suggests, furthermore, that the degradation of wiretap capability has been modest at most, and—at least statistically—wiretaps have become more useful over time. The number of wiretap orders implemented in the U.S. has grown steadily over the last two decades. According to publicly available statistics, court-approved wiretaps are now at a record high: 3,194 wiretap court orders were issued for the interception of electronic, wire, or oral communications in 2010, a 34% increase from the 2,376 issued in 2009. In the six instances where encryption was encountered, it did not prevent law enforcement from retrieving the plaintext forms of communication.

Scooped by luiy

Interview With Barry Wellman

© Barry Wellman and Figure/Ground Communication. Dr. Wellman was interviewed by Andrew Iliadis on October 30th, 2012. Professor Barry Wellman is based at the University of Toronto, where he directs NetLab,...
luiy's insight:

How do you see the individual actor in contemporary society? Your theory of “networked individualism” implies something different from actors working within groups. Is it the information in the network that comprises the individual? Can you say something briefly about this concept?

Well we’re playing off against, mostly against groups. Groups are really tightly bounded and densely knit networks where everybody knows each other; the village or a work team would be the best examples of it. A lot of things that a lot of evidence points out are that people live in multiple communities. I remember when I did my first study in East York, which is in Toronto by the way, in 1968, we were surprised to find out how few people lived in the same neighborhood.  We always thought of communities as neighborhoods. Now we’re studying work groups and we’re finding the same thing. Scholars especially move around from team to team. So, yes, that’s an issue, and networked individualism says, look, there are individuals, they are centers of their own personal networks, and then they move around between team and team. And we can’t analyze them as motley type super-individuals because people are constrained, they’re connected. We couldn’t solve the New York City Hurricane Sandy flood situation by giving everybody a little shovel. We have to have something that is comprised of small little building block groups.

 

Another concept – “Glocalization” – is becoming widely used and this is in part due to your own research on the subject. What is glocalization, and how is it different from the older model of globalization?

Glocalization is a multiply invented term. Keith Hampton, who was once my student and is now a faculty member at Rutgers, and I jointly invented the term. But we found that four or five other people did too. It’s a neologism certainly in which we put together global and local, and what we kept finding is people use social media such as the internet to be wildly connected, but at the same time the local situations turn out to be very important, both online and of course in real life because as computer scientists keep forgetting, they have bodies, so glocalization in our sense means interaction that is both global and local and of course everything in-between happening more or less simultaneously. But for many people the local is more important because almost always the people they speak to on the internet, or whatever form of it, are the same people they see in their physical interaction as well. There is no separation between the two.

Scooped by luiy

Data visualization meets cyber crime - Risk Management

Big data can be a bad thing - if you can't see the knowledge locked inside. That is particularly important when you are trying to see possible relationships and linkages.
luiy's insight:

According to the Herald, Army General Keith Alexander, head of the US military’s Cyber Command, predicts “the intensity and number of attacks will grow significantly throughout the year.” He said that cyber-attacks – particularly on the US banking sector – are getting worse. In fact, several major banks were recently targeted with coordinated Denial of Service (DoS) attacks.

Because of the increase in the number and severity of these attacks, many organizations have begun using advanced analytics and data visualization technologies to find cyber-crime activity and predict future attacks. These technologies help employees who aren’t data scientists or analysts to ask questions of the data – based on their own business expertise – to quickly and easily find patterns, spot inconsistencies, even get answers to questions they haven’t yet thought to ask.

The Navy Cyber Defense Operations Command is using data visualization to help defend the safety and security of the US Navy’s computer networks. These same capabilities are available for banks. Enormous amounts of network traffic data can be aggregated, manipulated, fused, visualized, processed and analyzed in a drag-and-drop interface. Sophisticated analyses can be performed quickly – even immediately – by people across all levels of your organization.

Using multiple types of analysis, alerts and other valuable intelligence can be created for anomaly detection and predictive analytics, and to investigate slow and low network intrusion. And the analytic models get smarter over time with learning and improvement cycles.
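As a trivial illustration of the anomaly-detection side mentioned above, the sketch below flags a host whose latest traffic count sits far outside its own baseline using a z-score. The data, field names, and the threshold of 3 standard deviations are illustrative assumptions, not any bank's or the Navy's actual analytics.

import statistics

# Hourly outbound connection counts per host -- made-up numbers for illustration.
traffic = {
    "10.0.0.11": [52, 48, 61, 55, 50, 47, 58, 49, 53, 460],   # sudden spike
    "10.0.0.12": [30, 34, 29, 31, 35, 33, 28, 30, 32, 31],
}

for host, counts in traffic.items():
    baseline = statistics.mean(counts[:-1])
    spread = statistics.stdev(counts[:-1])
    z = (counts[-1] - baseline) / spread if spread else 0.0
    if z > 3:   # threshold is an arbitrary choice for the sketch
        print("%s: latest count %d is %.1f standard deviations above baseline -> alert"
              % (host, counts[-1], z))
    else:
        print("%s: within normal range" % host)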

Analytics and data visualization give a more complete picture of a bank’s systems and networks so that you can take a strategic approach to prioritizing resources and efforts instead of just plugging holes and fighting fires. These technological capabilities go well beyond business intelligence to provide data-driven information and analysis that is future-directed, so decision makers can be proactive. To make the data even more accessible, results can be delivered through multiple channels, including smartphones and iPads.

Scooped by luiy

Data Visualization: 25 free Tools

Samples of data visualization tools include maps, graphs, pictures, and many more. Twenty-five free visualization tools are featured in this list.
luiy's insight:

Data visualization can be a pain if one is not aware of the many tools available for the task. Data visualization is a process wherein data is shown in a simpler way in order for people to understand it easily. Samples of data visualization tools include maps, graphs, pictures, and many more. There are many tools like this on the Internet, but none can be as effective as the 25 free visualization tools featured in this list. Some of these free visualization tools have been used for years now, while others are still new in the business. Some of these apps also try to broaden our approach to this quite tiring yet rewarding part of information technology. A few are orthodox and mathematical in nature, while some apps will try to visualize important human elements such as love, joy, hope, and faith. The 25 best free visualization tools are:

malek's curator insight, May 9, 2013 8:36 AM

A few are orthodox and mathematical in nature, while some apps will try to visualize important human elements such as love, joy, hope, and faith.

Lindsay Wilson's curator insight, May 9, 2013 10:48 AM

Great to learn about new technology for charts and visualizations