e-Xploration
Find tag "algorithms"
24.1K views | +5 today
antropologo.net, dataviz, collective intelligence, algorithms, social learning, social change, digital humanities
Curated by luiy
Scooped by luiy

Travail et travailleurs de la donnée - #Algopol | #datascience #methods

The scientific inquiry driving the ALGOPOL project seeks to understand the structure of the social ties within ego-centered networks, based on the content of exchanges and the links shared on Facebook. Do interactions on this platform unfold differently, with a different mode of expression and around different shared content, depending on which segments of the social network are mobilized? Do we have different conversations across "strong" ties and "weak" ties? Are the informational objects people share the same regardless of the form and structure of individuals' digital sociability? Answering these questions requires fine-grained, precise data that traditional survey methods have great difficulty providing [11].
Rescooped by luiy from Data is big

Machine Learning #WorkFlow | #datascience #bigdata

So far, I am planning to write a series of posts explaining a basic machine learning workflow (mostly supervised). In this post, my target is to propose the bird's-eye view; I'll dwell on the details in later posts explaining each of the components. I decided to write this series for two reasons: the first is self-education, to get all my bits and pieces together after a period of theoretical research and industrial practice; the second is to present a naive guide for beginners.

Via ukituki
luiy's insight:

Each box has a color tone from YELLOW to RED. The yellower the box, the more the component relies on the statistics knowledge base. As the box turns red (gets darker), the component depends more heavily on the machine learning knowledge base. By saying this, I also imply that without a good statistical understanding we cannot construct a sound machine learning pipeline. As a footnote, this schema is being reshaped by the post-modernism of representation learning algorithms, which I'll touch on in later posts.
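The post's diagram is not reproduced here. As a rough sketch of the supervised workflow it describes (split the data, apply statistics-driven preprocessing, fit a model, evaluate), here is a minimal pure-Python example; the nearest-centroid model and the toy data in the usage note are my own illustration, not the post's.

```python
import random

def nearest_centroid_workflow(X, y, test_frac=0.3, seed=0):
    """Minimal supervised ML workflow: split -> standardize -> fit -> evaluate."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    rng.shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train, test = idx[:cut], idx[cut:]

    # Statistics step: standardize features using training statistics only.
    dims = range(len(X[0]))
    mean = [sum(X[i][d] for i in train) / len(train) for d in dims]
    std = [max(1e-9, (sum((X[i][d] - mean[d]) ** 2 for i in train) / len(train)) ** 0.5)
           for d in dims]
    Z = [[(x[d] - mean[d]) / std[d] for d in dims] for x in X]

    # Learning step: fit one centroid per class on the training split.
    centroids = {}
    for c in set(y[i] for i in train):
        members = [Z[i] for i in train if y[i] == c]
        centroids[c] = [sum(m[d] for m in members) / len(members) for d in dims]

    # Evaluation step: accuracy on the held-out split.
    def predict(z):
        return min(centroids, key=lambda c: sum((z[d] - centroids[c][d]) ** 2 for d in dims))
    return sum(predict(Z[i]) == y[i] for i in test) / len(test)
```

On two well-separated point clouds this workflow reaches perfect held-out accuracy; the point is the shape of the pipeline, not the model.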

Scooped by luiy

Michael Hansmeyer: Building unimaginable shapes | #algorithms #design

luiy's insight:

Inspired by cell division, Michael Hansmeyer writes algorithms that design outrageously fascinating shapes and forms with millions of facets. No person could draft them by hand, but they're buildable -- and they could revolutionize the way we think of architectural form.
Michael Hansmeyer is an architect and programmer who explores the use of algorithms and computation to generate architectural form. Full bio: http://www.ted.com/speakers/michael_h...

 

Why you should listen

Classical architecture is defined by "orders" -- ways to connect a column to a building, to articulate the joining of materials and structural forces. Colloquially, these orders are based on elemental forms: the tree trunk, the plank, the scroll, the leaf. Michael Hansmeyer is adding a new elemental form: the subdivision algorithm. He turns his math and programming skills to making ornate, organic, hyperdetailed columns generated from lines of code and then built up in cross-sections of cardboard, almost as if they're being 3D printed.
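Hansmeyer's own subdivision schemes operate on 3D meshes and are not published as code; for flavor, here is the simplest member of the subdivision family, Chaikin's corner-cutting scheme on a 2D polygon. This is my illustration of the general technique, not his algorithm.

```python
def chaikin(points, iterations=3, closed=True):
    """Chaikin corner cutting: replace each edge with two points at
    1/4 and 3/4 along it, smoothing the polygon on every pass."""
    for _ in range(iterations):
        new = []
        n = len(points)
        edges = n if closed else n - 1
        for i in range(edges):
            (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
            new.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            new.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = new
    return points
```

Each pass doubles the point count of a closed polygon, which is how a handful of control points blossoms into the millions of facets mentioned above.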

Scooped by luiy

#Bigdata, language and the death of the theorist | #DH #algorithms

Plenty of people have foreseen the death of the scientific theory at the hands of big data analysis, but when computers become good enough to understand literature, art and human history, will it spell the end for the humanities academic?
luiy's insight:

A lot has been written about the ways that big data has changed scientific enquiry, but as supercomputers increase in power and the tools to use them become less obtuse, whole new academic disciplines are beginning to feel the benefits of crunching data.

 

Believe it or not, some people even think we can forecast the future with big data. Predicting world-changing events is a possibility, some claim, if you treat society and history like a big data problem. It's how big data analyst Kalev Leetaru found where Osama bin Laden had been hiding, in a way.

Rescooped by luiy from data aesthetic

#Algorithmic culture. “Culture now has two audiences: people and machines" | #cyberculture


“ A conversation with Ted Striphas”


Via Jessica Parland, nicolasthely
luiy's insight:

How would you define the "culture of algorithms"?


My preferred phrase is "algorithmic culture," which I use in the first instance to refer to the ways in which computers, running complex mathematical formulae, engage in what's often considered to be the traditional work of culture: the sorting, classifying, and hierarchizing of people, places, objects, and ideas. The Google example from above illustrates the point, although it's also the case elsewhere on the internet. Facebook engages in much the same work in determining which of your friends, and which of their posts, will appear prominently in your news feed. The same goes for shopping sites and video or music streaming services, when they offer you products based on the ones you (or someone purportedly like you) have already consumed.

 

What’s important to note, though, is the way in which algorithmic culture then feeds back to produce new habits of thought, conduct, and expression that likely wouldn’t exist in its absence—a culture of algorithms, as it were. The worry here, pointed out by Eli Pariser and others, is that this culture tends to reinforce more than it challenges one’s existing preferences or ways of doing things. This is what is often called “personalization,” though Pariser calls it a “you loop” instead. By the same token, it is possible for algorithmic systems to introduce you to cultural goods that you might not have encountered otherwise. Today, culture may only be as good as its algorithms.

Rescooped by luiy from Global Brain

The Next Big Thing You Missed: The Quest to Give Computers the Power of #Imagination | WIRED | #AI #Vicarious

Vicarious wants to make artificial intelligence infinitely more intelligent. Here's why tech giants like Mark Zuckerberg and Jeff Bezos have given the startup millions of dollars to do it.

Via Spaceweaver
luiy's insight:

Imagine you want to teach a child the meaning of the word “table.” You’d probably point to a few examples in your home–a round wooden kitchen table, a square plastic kiddie table, a massive rectangular dining room table. “This is a table,” you’d say, three or four times, and after a while, the child would start identifying other tables, regardless of their shape, size, or color.

 

If you want to teach a modern computer the meaning of the word table, the process isn’t much different–except that you can’t stop at three or four examples. When dealing with a machine, you have to show it millions of tables before it can accurately identify a table on its own. This typically is called artificial intelligence, but D. Scott Phoenix doesn’t see it that way. “It’s actually pretty dumb,” says Phoenix, the founder of the three-year-old Silicon Valley startup Vicarious. Intelligence, he explains, is “being able to deduce something from very few examples.”

Scooped by luiy

The future of knowledge navigation | #HCI #Wolfram #interoperability

luiy's insight:

Stephen Wolfram's recent announcement may change all that. The Wolfram Alpha natural language he has announced seems to be a solution to many complex human/computer interface problems. According to Wolfram, symbolic programming is the future of systems design. He says:

"There are plenty of existing general-purpose computer languages. But their vision is very different—and in a sense much more modest—than the Wolfram Language. They concentrate on managing the structure of programs, keeping the language itself small in scope, and relying on a web of external libraries for additional functionality. In the Wolfram Language my concept from the very beginning has been to create a single tightly integrated system in which as much as possible is included right in the language itself."

Wolfram also talks about the fluidity of the new language, suggesting that coding and data can become interchangeable:

"In most languages there’s a sharp distinction between programs, and data, and the output of programs. Not so in the Wolfram Language. It’s all completely fluid. Data becomes algorithmic. Algorithms become data. There’s no distinction needed between code and data. And everything becomes both intrinsically scriptable, and intrinsically interactive. And there’s both a new level of interoperability, and a new level of modularity."

Rescooped by luiy from digitalNow

How Google Works | #algorithms

Have you ever wondered how Google works? To help you get a better understanding of Google’s algorithm as well as to show you how some of Google’s

Via Don Dea
Don Dea's curator insight, February 25, 2014 12:46 AM

It's economically feasible too. The average access speed in the U.S. is now under 10 megabits per second and costs around $40-$60. Verizon FiOS charges $300 a month for 500 megabit service. Yet Google and others charge just $70 a month for a full gigabit connection, download and upload. VTel in Springfield, Vt., charges $35. Gigabit in Hong Kong was $26 way back in 2011.

Scooped by luiy

Constraints on the Universe as a Numerical #Simulation | #research #algorithms

luiy's insight:

Observable consequences of the hypothesis that the observed universe is a numerical simulation performed on a cubic space-time lattice or grid are explored. The simulation scenario is first motivated by extrapolating current trends in computational resource requirements for lattice QCD into the future. Using the historical development of lattice gauge theory technology as a guide, we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization and investigate potentially-observable consequences. Among the observables that are considered are the muon g-2 and the current differences between determinations of alpha, but the most stringent bound on the inverse lattice spacing of the universe, b^(-1) >~ 10^(11) GeV, is derived from the high-energy cut off of the cosmic ray spectrum. The numerical simulation scenario could reveal itself in the distributions of the highest energy cosmic rays exhibiting a degree of rotational symmetry breaking that reflects the structure of the underlying lattice.

Scooped by luiy

Social network e bigdata, “ #sentiment analysis” | #datascience #algorithms

A book illustrates the basics of this analysis of opinions, which is beginning to take hold in our country too. In politics (and not only there).
luiy's insight:

ALGORITHMS AND BOTS - Doing sentiment analysis, however, also means understanding whether the opinion expressed is positive or negative; that is, establishing users' degree of happiness about a given topic. To take this further step, the three researchers developed a complex algorithm that is independent of the language used by the user. But manual work must always be added to the mechanical work. "If we want to understand how happy or unhappy users are about the election of Pope Francis, we cannot rely on computers alone," summarizes Iacus. A machine is not able to understand certain expressions, nor can it, for example, associate particular nicknames (think of Berlusconi) with a political figure, or interpret ironic expressions like "che bella fregatura" ("what a nice rip-off"). That is why human work is necessary.

Scooped by luiy

Google's Grand Plan to Make Your Brain Irrelevant | #algorithms

Google is on a shopping spree. But instead of a shopping cart filled with gadgets or groceries, its aggressive buy-up of robotics, smart device, and artificial intelligence startups is Google's way of assembling the pieces and people it needs to...
luiy's insight:

Basically, the idea is to mimic the biological structure of the human brain with software so that it can build machines that learn “organically” — that is, without human involvement.

 

Google is already working to apply these insights to its familiar consumer products and services. Deep learning can help recognize what’s in your photos without asking you to tag them yourself, and it can help understand human speech, a key tool for its smartphone apps and Google Glass computerized eyewear. But Google also sees the new AI as a better way to target ads — the core of its business.

 

The DeepMind acquisition is one more step down this road. And though the company has not said as much, you can bet that this new form of AI will also play into things like Nest smart thermostats, the Google self-driving cars, and its big push into robotics.

Scooped by luiy

Google acquiert DeepMind, start-up en intelligence artificielle | #algorithms #NSA

Google continues to ride the wave of robotics and artificial intelligence. By acquiring the British startup DeepMind, which specializes in AI, the web giant reveals a little more of its intentions.
luiy's insight:

Google has just acquired DeepMind Technologies for roughly 400 million dollars. DeepMind describes itself as a startup combining the best methods from machine learning and systems neuroscience to build algorithms usable in everyday life. DeepMind markets its applications for e-commerce simulations and video games.

 

-------------------------------------------------------

 

No doubt remains: Google is centering its strategy on artificial intelligence, which is indispensable notably for its translation applications and its voice-recognition solutions. After recruiting Ray Kurzweil, the father of the theories of the technological singularity, and after creating last May a laboratory named the Quantum Artificial Intelligence Lab in collaboration with NASA, Google is giving itself still more means to develop algorithms capable of making more precise predictions.

Scooped by luiy

#Complexity in Social Networks | #algorithms #SNA

How network structure impacts consumer experience.
luiy's insight:

In the same way software has “eaten” many industries and continues to devour more, the structure of complex systems is relevant in an increasing number of subjects, from neurobiology to industrial engineering. In the consumer internet, many of the most interesting technology platforms are, at their core, networks. As with most complex systems, small changes can have large consequences, and the structure of a network can materially impact consumer experience, many times changing the core way that people interact with the service.

 

One way to think about these technology platforms is to think of any complex network as having four fundamental components:

- Nodes (the objects in the graph, e.g., people, things)
- Data/content (the thing being shared between the nodes, e.g., tweets)
- Edges with rules (e.g., a bidirectional "friend" edge, a single-directional "follow")
- Jumping functions: ways to transmit the data/content from one subgroup of people to another on the same platform, usually based on rules surrounding how the edges are structured (e.g., retweeting, liking, favoriting)
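As a toy sketch of these four components working together (the class name and API here are invented for illustration, not taken from any platform):

```python
from collections import defaultdict

class SocialGraph:
    """Toy model of the four components: nodes, content, directed edges
    with rules, and a 'jumping function' (reshare) that carries content
    into another subgroup of the graph."""
    def __init__(self):
        self.followers = defaultdict(set)   # edges: author -> set of followers
        self.feeds = defaultdict(list)      # content delivered to each node

    def follow(self, follower, followee):
        # Single-directional edge rule, like Twitter's "follow".
        self.followers[followee].add(follower)

    def post(self, author, content):
        # Content flows along the author's outgoing edges.
        for f in self.followers[author]:
            self.feeds[f].append((author, content))

    def reshare(self, node, content):
        # Jumping function: re-emitting content reaches the node's own
        # followers, a subgroup the original author may not touch.
        self.post(node, content)
```

With alice -> bob -> carol follow chains, alice's post reaches only bob until bob reshares it, at which point it "jumps" to carol's feed.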

Scooped by luiy

#BIGDATA SOCIETY: Age of Reputation or Age of Discrimination? | #controverses #privacy

luiy's insight:

Like every technology, Big Data has some side effects. Even if you are not concerned about losing your privacy, you should be worried about one thing: discrimination. A typical application of Big Data is to distinguish different kinds of people: terrorists from normal people, good from bad insurance risks, honest tax payers from those who don't declare all their income. You may ask, isn't that a good thing? Maybe on average it is, but what if you are wrongly classified? Have you checked the information the Internet has collected about your name, or gone through the list of pictures Google stores about you? Even more scary than how much is known about you is how much of the information in between does not fit. So, what if you are stopped by border control just because you have a similar name to a criminal suspect? If so, you might be traumatized for quite some time.

Scooped by luiy

Where was Ferguson in my Facebook feed? | #algorithms #filtering

There were big differences in the content related to Ferguson on Twitter and Facebook. Was the reason what users wanted from each, or the sites' algorithms?
luiy's insight:

A number of journalists and commentators observed a jarring disconnect between the mostly uncontroversial posts on Facebook (like chatter about celebrities taking the Ice Bucket Challenge to raise funds for the fight against Lou Gehrig’s disease), and the stream of visceral reportage from the tense scene in Ferguson, where citizens had gathered to protest the August 9th police killing of an unarmed black teen, Michael Brown.

Scooped by luiy

Net Neutrality, #Algorithmic Filtering and #Ferguson | #censorship


Net Neutrality, Algorithmic Filtering and Ferguson

luiy's insight:

Twitter was also affected by algorithmic filtering. "Ferguson" did not trend nationally in the US on Twitter, but it did trend locally. [I've since learned from @gilgul that it *briefly* trended nationally but mostly trended in localities.] So there were fewer chances for people not already following the news to see it on their "trending" bar. Why? Almost certainly because there had already been days of simmering national discussion, and Twitter's trending algorithm (said to be based on a method called "term frequency inverse document frequency") rewards spikes. So, as people in localities who had not been talking much about Ferguson started to mention it, it trended there, while the national build-up over the previous five days penalized Ferguson.
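Twitter has not published its trending algorithm; taking the "term frequency inverse document frequency" claim above at face value, a minimal sketch shows the mechanism. Treat each day's (or each locality's) chatter as a document: a term that appears in every document gets an idf of zero no matter how often it is used, while a fresh local spike scores high.

```python
import math

def tf_idf(term, doc, docs):
    """doc is one document as a list of words; docs is the whole
    collection. The term must appear in at least one document."""
    tf = doc.count(term) / len(doc)          # term frequency in this document
    df = sum(term in d for d in docs)        # number of documents containing it
    return tf * math.log(len(docs) / df)     # idf = log(N / df)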


Algorithms have consequences.


Mass media typically does not do very well covering the chronic problems of unprivileged populations; poor urban blacks bear the brunt of this, but they are not alone. Rural, mostly white America, too, is almost always ignored except for the occasional "meth labs everywhere" story. But yesterday, many outlets were trying, except the police didn't let them. Chris Hayes says that police ordered satellite trucks out of the area so that they could not go live. The Washington Post was only one outlet whose journalists were arrested; citizen journalists were targeted as well.


Rescooped by luiy from Communication in the digital era

How #algorithms decide the #news you see

Homepage traffic for news sites continues to decrease. This trend is the result of an “if the news is important, it will find me” mentality that developed with the rise of social media, when people began to read links that their friends and others in their networks recommended. Thus, readers are increasingly discovering news through social media, email, and reading…

Via Andrea Naranjo
luiy's insight:

Beyond the filter bubble, algorithmic bias extends to search engine manipulation, which refers to the process undertaken by many companies, celebrities, and public figures to ensure that favorable content rises to the top of search engine results in particular regions. Though not intuitive to the average Web user, it's actually a form of soft censorship, explains Wenke Lee, Director of the Georgia Tech Information Security Center.

 

After reading Pariser's book, Lee and his research team set out to test the effect of personalized search results on Google and built a tool called Bobble, a browser plug-in that runs simultaneous Google searches from different locations around the globe so users can see the difference between Google search returns for different people. They found that results differ based on several factors: Web content at any given time, the region from which a search is performed, recent search history, and how much search engine manipulation has occurred to favor a given result. Though Bobble has largely been confined to research purposes, it has been downloaded close to 10,000 times and has tremendous potential as a news-literacy teaching tool.

Scooped by luiy

#Stigmergic dimensions of Online Creative Interaction | #algorithms #memes

This paper examines the stigmergic dimensions of online interactive creativity through the lens of Picbreeder. Picbreeder is a web-based system for collaborative interactive evolution of images. Th...
luiy's insight:

Creativity as stigmergy

 

If stigmergy happens when an agent's effect on the environment "stimulates and guides" the work of others, then certainly creative communities must be subject to some kind of stigmergy. No creative endeavor exists in a vacuum, and being inspired and stimulated by the work of another is so fundamental to creative communities of artists, academics, engineers, etc., that it is difficult to imagine these communities functioning any other way.

 

Closely related to the concept of stigmergy is the concept of self-organization. The reason it is remarkable that one user's work stimulates another's is the emergence of patterns that appear as if they could be centrally controlled. Often, a mix of direct communication and control, together with emergent properties of the social structure, gives rise to collaborative creative activities. Fig. 4 suggests an informal ordering of the amount of direct communication and coordination involved in several different types of creative processes, with emergent creative processes on the left end and highly coordinated processes on the right.

Scooped by luiy

Data Mining #Algorithms In R/Clustering/K-Cores | #datascience #SNA

luiy's insight:
Cores

The notion of core is presented in Butts (2010) as following:

 

Let G = (V, E) be a graph, and let f (v, S, G) for v ∈ V, S ⊆ V be a real-valued vertex property function (in the language of Batagelj and Zaversnik). Then some set H ⊆ V is a generalized k-core for f if H is a maximal set such that f (v, H, G) ≥ k for all v ∈ H. Typically, f is chosen to be a degree measure with respect to S (e.g., the number of ties to vertices in S). In this case, the resulting k-cores have the intuitive property of being maximal sets such that every set member is tied (in the appropriate manner) to at least k others within the set.

 

Degree-based k-cores are a simple tool for identifying well-connected structures within large graphs. Let the core number of vertex v be the value of the highest-value core containing v. Then, intuitively, vertices with high core numbers belong to relatively well-connected sets (in the sense of sets with high minimum internal degree). It is important to note that, while a given k-core need not be connected, it is composed of subsets which are themselves well-connected; thus, the k-cores can be thought of as unions of relatively cohesive subgroups.

 

As k-cores are nested, it is also natural to think of each k-core as representing a “slice” through a hypothetical “cohesion surface” on G. (Indeed, k-cores are often visualized in exactly this manner.)

The kcores function produces degree-based k-cores for various degree measures (with or without edge values). The return value is the vector of core numbers for V, based on the selected degree measure. Missing (i.e., NA) edges are removed for purposes of the degree calculation.
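The excerpt describes R's kcores function (from the sna package); as an illustration of the underlying idea in Python, here is a minimal degree-based core-number computation using the standard min-degree peeling on an undirected graph given as an adjacency dict. This is my sketch of the technique, not the sna implementation.

```python
def core_numbers(adj):
    """adj: dict vertex -> iterable of neighbors (undirected).
    Returns each vertex's core number: repeatedly delete a
    minimum-degree vertex; its core number is the largest degree
    observed at deletion time so far."""
    nbrs = {v: set(ns) for v, ns in adj.items()}
    deg = {v: len(ns) for v, ns in nbrs.items()}
    core, k = {}, 0
    while deg:
        v = min(deg, key=deg.get)       # peel a minimum-degree vertex
        k = max(k, deg[v])
        core[v] = k
        del deg[v]
        for u in nbrs.pop(v):
            if u in deg:
                nbrs[u].discard(v)
                deg[u] -= 1
    return core
```

On a triangle with a pendant vertex, the triangle's members get core number 2 and the pendant gets 1, matching the "every member tied to at least k others within the set" intuition above.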

 
Scooped by luiy

Cyber-democracy: my global political program! by @plevy | @eDemocracy

Monica S Mcfeeters's curator insight, March 21, 2014 9:26 AM

It is good to see that more and more articles are calling attention to all these concerns.

Scooped by luiy

The Five Graphs of Love | #Neo4j #SNA #algorithms

The iDating industry cares about interactions and connections. Those two concepts are closely linked. If someone has a connection to another person, through a shared…
luiy's insight:

Dating sites and apps worldwide have begun to use graph databases to achieve competitive gain. Neo4j provides thousand-fold performance improvements and massive agility benefits over relational databases, enabling new levels of performance and insight. Join us for a webinar, presented by Amanda Laucher, that discusses the five graphs of love, and how companies like eHarmony, Hinge and AreYouInterested.com, are now using graph algorithms to create more interactions and connections.

 

Rescooped by luiy from The New Global Open Public Sphere

White Paper on Research #Challenges in Social #CollectiveIntelligence


Via Pierre Levy
luiy's insight:

This report first situates and outlines the potential of social computation to provide the basis for Social Collective Intelligence (SCI) in future systems. This involves the close interaction of social groups and machines together with systems of incentives and social structures to perform tasks that would otherwise be difficult to achieve either using entirely human or entirely machine solutions. The deliverable considers the challenges both from a technical and from a social science standpoint, identifying the potential for aligning them in order to provide an interdisciplinary perspective on the development of SCI systems. The paper then describes some of the challenges in developing an engineering approach to the development of such systems. Finally the paper outlines some of the “big questions” that arise from the framework for SCI research developed in the white paper.

Rescooped by luiy from Robótica Educativa!

Exclusive: How Google's #Algorithm Rules the Web

Want to know how Google is about to change your life? Stop by the Ouagadougou conference room on a Thursday morning. It is here, at the Mountain View, Cali

Via Pierre Levy, juandoming
luiy's insight:

Google is famously creative at encouraging these breakthroughs; every year, it holds an internal demo fair called CSI — Crazy Search Ideas — in an attempt to spark offbeat but productive approaches. But for the most part, the improvement process is a relentless slog, grinding through bad results to determine what isn’t working. One unsuccessful search became a legend: Sometime in 2001, Singhal learned of poor results when people typed the name “audrey fino” into the search box. Google kept returning Italian sites praising Audrey Hepburn. (Fino means fine in Italian.) “We realized that this is actually a person’s name,” Singhal says. “But we didn’t have the smarts in the system.”

 

The Audrey Fino failure led Singhal on a multiyear quest to improve the way the system deals with names — which account for 8 percent of all searches. To crack it, he had to master the black art of “bi-gram breakage” — that is, separating multiple words into discrete units. For instance, “new york” represents two words that go together (a bi-gram). But so would the three words in “new york times,” which clearly indicate a different kind of search. And everything changes when the query is “new york times square.” Humans can make these distinctions instantly, but Google does not have a Brazil-like back room with hundreds of thousands of cubicle jockeys. It relies on algorithms.
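Google's actual name-handling system is not public; as a sketch of the segmentation problem Singhal describes, here is a tiny dynamic program that picks the highest-scoring split of a query into known phrases (the phrase scores are invented for illustration). A greedy longest-match would grab "new york times" out of "new york times square"; scoring the whole segmentation fixes that.

```python
def segment(words, score, default=0.1):
    """Split a word list into phrases maximizing the summed score.
    score maps a phrase (tuple of words) to a goodness value; unknown
    single words fall back to `default`, unknown multi-word spans are
    disallowed."""
    n = len(words)
    dp = [(0.0, [])] + [None] * n       # dp[i]: best (score, split) of words[:i]
    for i in range(1, n + 1):
        for j in range(i):
            piece = tuple(words[j:i])
            s = score.get(piece, default if len(piece) == 1 else None)
            if s is None or dp[j] is None:
                continue
            cand = (dp[j][0] + s, dp[j][1] + [" ".join(piece)])
            if dp[i] is None or cand[0] > dp[i][0]:
                dp[i] = cand
    return dp[n][1]
```

With made-up scores for "new york", "new york times", and "times square", the query "new york times" keeps the tri-gram intact, while "new york times square" splits at the right boundary, mirroring the distinctions humans make instantly.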

Catherine Pascal's curator insight, January 30, 2014 6:33 AM

Clear!!

Mlik Sahib's curator insight, January 31, 2014 12:08 AM

"The comparison demonstrates the power, even intelligence, of Google’s algorithm, honed over countless iterations. It possesses the seemingly magical ability to interpret searchers’ requests — no matter how awkward or misspelled. Google refers to that ability as search quality, and for years the company has closely guarded the process by which it delivers such accurate results. But now I am sitting with Singhal in the search giant’s Building 43, where the core search team works, because Google has offered to give me an unprecedented look at just how it attains search quality. The subtext is clear: You may think the algorithm is little more than an engine, but wait until you get under the hood and see what this baby can really do."

Rescooped by luiy from The New Global Open Public Sphere

The Age of 'Infopolitics' | #privacy #dataAwareness #infopersons

As digital persons, we’re vulnerable to new, digital injustices.

Via Pierre Levy
luiy's insight:

We need a concept of infopolitics precisely because we have become infopersons. What should we do about our Internet and phone patterns’ being fastidiously harvested and stored away in remote databanks where they await inspection by future algorithms developed at the National Security Agency, Facebook, credit reporting firms like Experian and other new institutions of information and control that will come into existence in future decades? What bits of the informational you will fall under scrutiny? The political you? The sexual you? What next-generation McCarthyisms await your informational self? And will those excesses of oversight be found in some Senate subcommittee against which we democratic citizens might hope to rise up in revolt — or will they lurk among algorithmic automatons that silently seal our fates in digital filing systems?


----------------------------


Infopolitics, infopersons, algorithms, NSA, informational self, data awareness.

 

Mlik Sahib's curator insight, January 31, 2014 12:31 AM

"As soon as we learn to see ourselves and our politics as informational, we can begin to see the importance of surveillance reforms of the sort proposed by Senator Ron Wyden, Democrat  of Oregon, as well as the wisdom implicit in the transgressions of “hacktivists” whose ethics call for anonymity and untraceability. Despite their decidedly different political sensibilities, what links together the likes of Senator Wyden and the international hacker network known as Anonymous is that they respect the severity of what is at stake in our information. They understand that information is a site for the call of justice today, alongside more quintessential battlefields like liberty of thought and equality of opportunity. Willingness to see ourselves as informational persons subject to informational powers could help us bring into view what will be required to protect the many individual rights and social ties now inhering in all those bits and bytes."