cognition
Rescooped by FastTFriend from Collective intelligence onto cognition

The nature of collective intelligence


Presentation by Pierre Levy


Via Viktor Markowski
Viktor Markowski's curator insight, March 2, 2013 11:57 AM

A 45-minute video presentation, supported by slides, on the nature of collective intelligence and the philosophical and technical constructs behind the next level of the internet as a global mind.

Luciano Lampi's curator insight, March 22, 2013 2:15 PM

Pierre Levy is always very interesting!

Bernard Ryefield's curator insight, June 18, 2013 2:32 PM

Pierre Lévy invented IEML; think semantic web

cognition
How it evolved, what we do with it, futures; and otherwise interesting stuff
Curated by FastTFriend
Rescooped by FastTFriend from Knowmads, Infocology of the future

Can Another Body Be Seen as an Extension of Your Own?

Among dance forms, tango holds a unique and potent allure. It showcases two individuals—each with a separate mind, body, and bundle of goals and intentions, moving at times in close embrace, at times stepping away from each other, improvising moves and flourishes while responding to the imaginative overtures of the other—who somehow manage to give the impression of two bodies answering to a single mind. For performers and viewers alike, much of tango’s appeal comes from this apparent psychic fusion into a super-individual unit. Michael Kimmel, a social and cultural anthropologist who has researched the interpersonal dynamics of tango, writes that dancers “speak in awe of the way that individuality dissolves into a meditative unity for the three minutes that the dance lasts. Time and space give way to a unique moment of presence, of flow within and between partners.”

Tango offers more than aesthetic bliss; like all artistic practices that demand great skill, it also presents a seductive scientific puzzle, highlighting the mind’s potential to learn and re-shape itself in dramatic ways. But it’s only very recently that scientists have started building a systematic framework to explain how a person might achieve the sort of fusion that is needed for activities like social dancing, and what the impact of such an interpersonal entanglement might be.

At the heart of the puzzle is the notion of a body schema—a mental representation of the physical self that allows us to navigate through space without smashing into things, to scratch our nose without inadvertently smacking it, and to know how far and how quickly to reach for a cup of coffee without knocking it over. We can do all these things because our brains have learned to identify the edges of our bodies using information from multiple senses and devote exquisite attention to stimuli near our bodily boundaries.

Via Wildcat2030
Rescooped by FastTFriend from Knowmads, Infocology of the future

Genetic ‘intelligence networks’ discovered in the brain | KurzweilAI

Scientists from Imperial College London have identified two clusters (“gene networks”) of genes that are linked to human intelligence. Called M1 and M3, these gene networks appear to influence cognitive function, which includes memory, attention, processing speed and reasoning.

Importantly, the scientists have discovered that these two networks are likely to be under the control of master regulator switches. The researchers want to identify those switches and see if they can manipulate them, and ultimately find out whether this knowledge of gene networks could allow cognitive function to be boosted.

“We know that genetics plays a major role in intelligence but until now, haven’t known which genes are relevant,” said Michael Johnson, lead author of the study from the Imperial College London Department of Medicine. Johnson says the genes they have found so far are likely to share a common regulation, which means it may be possible to manipulate a whole set of genes linked to human intelligence.

Combining data from brain samples, genomic information, and IQ tests

In the study, published in the journal Nature Neuroscience, the international team of researchers looked at samples of human brain from patients who had undergone neurosurgery for epilepsy. The investigators analyzed thousands of genes expressed in the human brain, and then combined these results with genetic information from healthy people who had undergone IQ tests and from people with neurological disorders such as autism spectrum disorder and intellectual disability.

Then they conducted various computational analyses and comparisons to identify the gene networks influencing healthy human cognitive abilities. Remarkably, they found that some of the same genes that influence human intelligence in healthy people cause impaired cognitive ability and epilepsy when mutated. Genes involved in making new memories, or in making sensible decisions when faced with lots of complex information, also overlap with those that cause severe childhood-onset epilepsy or intellectual disability.

Via Wildcat2030
Farid Mheir's curator insight, January 15, 5:57 PM

Brain research and work to understand the architecture of our brain and reverse engineer it into machines continues to bring wonderful insights into who we are and what will soon be possible with digital machines.

Rescooped by FastTFriend from Daily Magazine

Quantum Computers Explained – Limits of Human ...


Where are the limits of human technology? And can we somehow avoid them? This is where quantum computers become very interesting. 


Via THE *OFFICIAL ANDREASCY*
Christian Verstraete's curator insight, January 4, 3:05 AM

Interesting and frightening at the same time.

CapConsult's curator insight, January 5, 3:37 AM

Quantum computers explained with a bit of science; recommended video.

Scooped by FastTFriend

Edge.org

FastTFriend's insight:

History records huge changes in our species over the last 5000 years or so—and presumably pre-history would fill in the picture. But scholars have generally held the view that the fundamental nature of our species—the human genome, so to speak—has remained largely the same for at least 10,000 years and possibly much longer. As Marshall McLuhan argued, technology extends our senses—it does not fundamentally change them. Once one begins to alter human DNA—for example, through CRISPR—or the human nervous system—by inserting mechanical or digital devices—then we are challenging the very definition of what it means to be human. And once one cedes high level decisions to digital creations, or these artificially intelligent entities cease to follow the instructions that have been programmed into them and rewrite their own processes, our species will no longer be dominant on this planet.

Scooped by FastTFriend

Debunking the biggest myths about artificial intelligence

From killer robots, to runaway sentience, there's a lot of FUD that needs clearing up.
FastTFriend's insight:

"The myriad dangers of artificial intelligences acting independently from humans are easy to imagine in the case of a rogue robot warrior, or a self-driving car that doesn’t correctly identify a life-threatening situation. The dangers are less obvious in the case of a smart search engine that has been quietly biased to give answers that, in the humble opinion of the megacorp that owns the search engine, aren’t in your best interest."

Scooped by FastTFriend

Alzheimer's from the inside


There's an excellent short film featuring journalist Greg O'Brien, who describes the experience of Alzheimer's disease as it affects him.

Rescooped by FastTFriend from Backstage Rituals

Casting Herself in the Lead Role, Photographer Recreates Famous Artworks by Degas, Picasso, Rembrandt and More - Feature Shoot


Via Mohir
Scooped by FastTFriend

Why do we forget names?

A reader, Dan, asks: “Why do we forget people’s names when we first meet them? I can remember all kinds of other details about a person but completely forget their name, even after a lengthy, in-depth conversation.”
Rescooped by FastTFriend from Knowmads, Infocology of the future

What Makes Our Brains Special?

The human brain is unique: Our remarkable cognitive capacity has allowed us to invent the wheel, build the pyramids and land on the moon. In fact, scientists sometimes refer to the human brain as the “crowning achievement of evolution.”

But what, exactly, makes our brains so special? Some leading arguments have been that our brains have more neurons and expend more energy than would be expected for our size, and that our cerebral cortex, which is responsible for higher cognition, is disproportionately large—accounting for over 80 percent of our total brain mass.

Suzana Herculano-Houzel, a neuroscientist at the Institute of Biomedical Science in Rio de Janeiro, debunked these well-established beliefs in recent years when she discovered a novel way of counting neurons—dissolving brains into a homogenous mixture, or “brain soup.” Using this technique she found the number of neurons relative to brain size to be consistent with other primates, and that the cerebral cortex, the region responsible for higher cognition, only holds around 20 percent of all our brain’s neurons, a similar proportion found in other mammals. In light of these findings, she argues that the human brain is actually just a linearly scaled-up primate brain that grew in size as we started to consume more calories, thanks to the advent of cooked food.

Via Wildcat2030
Scooped by FastTFriend

Why story is used to explain symphonies and sport matches alike by Philip Ball — Aeon

We use neat stories to explain everything from sports matches to symphonies. Is it time to leave the nursery of the mind?
FastTFriend's insight:

"And that’s the real point. We need narrative not because it is a valid epistemological description of the world but because of its cognitive role."

Scooped by FastTFriend

How we experience the meaning we create - Aeon Video

Because our senses create meaning in the world, augmented reality could allow us to change it for the better through new modes of perception
Rescooped by FastTFriend from The Long Poiesis

Mad Men of the future will be brain scientists and dream experts

It's been 60 years since the first TV ad aired in the UK. What could the next 60 bring?

Via Xaos
Scooped by FastTFriend

How David Hume Helped Me Solve My Midlife Crisis - The Atlantic

David Hume, the Buddha, and a search for the Eastern roots of the Western Enlightenment
FastTFriend's insight:

"This story may help explain Hume’s ideas. It unquestionably exemplifies them. All of the characters started out with clear, and clashing, identities—the passionate Italian missionary and the urbane French priest, the Tibetan king and lamas, the Siamese king and monks, the skeptical young Scot.

But I learned that they were all much more complicated, unpredictable, and fluid than they appeared at first, even to themselves. Both Hume and the Buddha would have nodded sagely at that thought. "

Rescooped by FastTFriend from Philosophy everywhere everywhen

Horizontal History - Wait But Why

Most of us have a pretty terrible understanding of history. Our knowledge is spotty, with large gaps all over the place, and the parts of history we do end up knowing a lot about usually depend on the particular teachers, parents, books, articles, and movies we happen to come across in our lives. Without a foundational, tree-trunk understanding of all parts of history, we often forget the things we do learn, leaving even our favorite parts of history a bit hazy in our heads. Raise your hand if you’d like to go on stage and debate a history buff on the nuances of a historical time period of your choosing. That’s what I thought.

The reason history is so hard is that it’s so soft. To truly, fully understand a time period, an event, a movement, or an important historical figure, you’d have to be there, and many times over. You’d have to be in the homes of the public living at the time to hear what they’re saying; you’d have to be a fly on the wall in dozens of secret, closed-door meetings and conversations; you’d need to be inside the minds of the key players to know their innermost thoughts and motivations. Even then, you’d be lacking context. To really have the complete truth, you’d need background—the cultural nuances and national psyches of the time, the way each of the key players was raised during childhood and the subtle social dynamics between those players, the impact of what was going on in other parts of the world, and an equally-thorough understanding of the many past centuries that all of these things grew out of.

That’s why not only can’t even the most perfect history buff fully understand history, but the key people involved at the time can’t ever know the full story. History is a giant collective tangle of thousands of interwoven stories involving millions of characters, countless chapters, and many, many narrators.

Via Wildcat2030
Rescooped by FastTFriend from Embodied Zeitgeist

WHAT DO YOU CONSIDER THE MOST INTERESTING RECENT [SCIENTIFIC] NEWS? WHAT MAKES IT IMPORTANT? | Edge.org


Via Xaos
Scooped by FastTFriend

Ello | ellowrites | At last. We have figured

At last. We have figured it out. Thank you, The New Yorker.
FastTFriend's insight:

:)

Scooped by FastTFriend

The Lobster review: 'like nothing you've seen before'

Colin Farrell and Rachel Weisz excel in this hilarious and bracingly weird story about love, loneliness and animals
FastTFriend's insight:

If The Lobster makes one thing clear (and I’m not sure it does), it’s the blunt ludicrousness of expecting human beings to slot neatly into any kind of system designed to iron out society’s kinks. 

Scooped by FastTFriend

The Problem With Technological Ignorance

Around the time that the Apple Watch was released, the Wall Street Journal ran an article about the potential effect on mechanical watch sales. Within the…
FastTFriend's insight:

All of this is just another way of getting at one of the most fundamental ideas in computer science, and, really, in all of engineering: abstraction. Abstraction involves the hiding of unnecessary details, so that the important work can be done at the proper level. Abstraction means that programmers don’t have to contend with zeroes and ones every time they want to write an iPhone app, or that a scientist doesn’t need to write a new statistical package every time she wants to analyze some data; those details have been abstracted away. It’s essentially the only way to build sophisticated technologies. But one of its side effects is that at every stage of increased complexity, we have shielded this complexity from users, removing the ability for tinkering and grappling with the craftsmanship—which now often takes the form of mass production, rather than a single craftsman at work—of these technologies.

Understanding an if-then statement or how to build a webpage is less important for understanding our world than the fundamental idea of abstraction, that we can bootstrap one system on top of another, while hiding the details of the ones beneath.
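The bootstrapping the insight describes can be sketched in a few lines. Below is a minimal, hypothetical Python illustration (the function names are invented for this example, not taken from the article): each layer uses the one beneath it without exposing its details, which is abstraction in miniature.

```python
# Layer 1: a "hardware-level" detail -- addition done with bitwise operations,
# the zeroes-and-ones level a higher layer never needs to see.
def add_bits(a, b):
    """Add two non-negative integers using only bitwise operations."""
    while b:
        carry = a & b   # bit positions that overflow
        a = a ^ b       # sum ignoring the carry
        b = carry << 1  # shift the carry into place
    return a

# Layer 2: ordinary-looking arithmetic built on layer 1; the bit fiddling
# is hidden behind a clean interface.
def total(prices):
    result = 0
    for p in prices:
        result = add_bits(result, p)
    return result

# Layer 3: the "application" -- no bits in sight.
print(total([3, 4, 5]))  # 12
```

The user of `total` can neither see nor tinker with the bitwise machinery below it, which is exactly the trade-off the excerpt describes.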

Rescooped by FastTFriend from Knowmads, Infocology of the future

Chomsky was right, researchers find: We do have a 'grammar' in our head

A team of neuroscientists has found new support for MIT linguist Noam Chomsky's decades-old theory that we possess an "internal grammar" that allows us to comprehend even nonsensical phrases.

"One of the foundational elements of Chomsky's work is that we have a grammar in our head, which underlies our processing of language," explains David Poeppel, the study's senior researcher and a professor in New York University's Department of Psychology. "Our neurophysiological findings support this theory: we make sense of strings of words because our brains combine words into constituents in a hierarchical manner—a process that reflects an 'internal grammar' mechanism."

The research, which appears in the latest issue of the journal Nature Neuroscience, builds on Chomsky's 1957 work, Syntactic Structures. It posited that we can recognize a phrase such as "Colorless green ideas sleep furiously" as both nonsensical and grammatically correct because we have an abstract knowledge base that allows us to make such distinctions even though the statistical relations between the words are non-existent.

Neuroscientists and psychologists predominantly reject this viewpoint, contending that our comprehension does not result from an internal grammar; rather, it is based on both statistical calculations between words and sound cues to structure. That is, we know from experience how sentences should be properly constructed—a reservoir of information we employ upon hearing words and phrases. Many linguists, in contrast, argue that hierarchical structure building is a central feature of language processing.

In an effort to illuminate this debate, the researchers explored whether and how linguistic units are represented in the brain during speech comprehension.
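The hierarchical, constituent-by-constituent combination described above can be illustrated with a toy grammar. This is only a sketch under simplifying assumptions (a three-rule grammar and a five-word lexicon invented for this example, not the materials used in the study):

```python
# Toy grammar: S -> NP VP; NP -> Adj NP | N; VP -> V Adv.
lexicon = {
    "colorless": "Adj", "green": "Adj",
    "ideas": "N", "sleep": "V", "furiously": "Adv",
}

def parse(tokens):
    """Return a nested-tuple parse tree, combining words into
    constituents hierarchically rather than by word-to-word statistics."""
    pos = [lexicon[t] for t in tokens]

    def parse_np(i):
        # NP -> Adj NP (recursive case) | N (base case)
        if pos[i] == "Adj":
            sub, j = parse_np(i + 1)
            return ("NP", tokens[i], sub), j
        return ("NP", tokens[i]), i + 1

    np, i = parse_np(0)
    vp = ("VP", tokens[i], tokens[i + 1])  # VP -> V Adv
    return ("S", np, vp)

tree = parse("colorless green ideas sleep furiously".split())
# The sentence parses cleanly into nested constituents even though its
# word-to-word statistics are nonsensical -- the point of Chomsky's example.
```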

Via Wildcat2030
Rescooped by FastTFriend from Knowmads, Infocology of the future

Identity Is an Inside Joke - Issue 30: Identity - Nautilus

“The judgment of whether something is funny or not is spontaneous, automatic, almost a reflex,” writes Dutch sociologist Giselinde Kuipers. “Sense of humor thus lies very close to self-image.” Humor also takes on the shape of the teller’s surroundings: age, gender, class, and clan. Shared humor implies shared identity, shared ways of confronting reality. When we don’t get a joke, we feel left out; when we get a joke, or better yet, tell the joke that everyone roars at, we belong. “If you’re both laughing, it means you see the world in the same way,” says Peter McGraw, psychologist and co-author of The Humor Code: A Global Search for What Makes Things Funny.

Via Wildcat2030
Wildcat2030's curator insight, November 26, 2015 10:25 PM

a very worthwhile read

Rescooped by FastTFriend from Philosophy everywhere everywhen

Why it matters that you realize you’re in a computer simulation

What if our universe is something like a computer simulation, or a virtual reality, or a video game? The proposition that the universe is actually a computer simulation was furthered in a big way during the 1970s, when John Conway famously showed that if you take a binary system and subject it to only a few rules (in the case of Conway’s experiment, four), then that system creates something rather peculiar.

What Conway’s rules produced were emergent complexities so sophisticated that they seemed to resemble the behaviors of life itself. He named his demonstration The Game of Life, and it helped lay the foundation for the Simulation Argument, its counterpart the Simulation Hypothesis, and Digital Mechanics. These fields have gone on to create a massive multi-decade long discourse in science, philosophy, and popular culture around the idea that it actually makes logical, mathematical sense that our universe is indeed a computer simulation. To crib a summary from Morpheus, “The Matrix is everywhere”. But amongst the murmurs on various forums and reddit threads pertaining to the subject, it isn’t uncommon to find a word or two devoted to caution: We, the complex intelligent lifeforms who are supposedly “inside” this simulated universe, would do well to play dumb that we are at all conscious of our circumstance.
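Conway's rules are compact enough to state in a few lines of code. The following is a minimal Python sketch of one generation (the set-of-coordinates representation is a common convenience, not anything from the article): a dead cell with exactly three live neighbours is born, a live cell with two or three live neighbours survives, and every other cell dies.

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life by one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```

From four such rules come the emergent complexities the article mentions; that gap between the simplicity of the program and the richness of its behaviour is what feeds the simulation argument.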

The colloquial warning says we must not betray the knowledge that we have become aware of being mere bits in the bit kingdom. To have a tipping point population of players who realize that they are actually in something like a video game would have dire and catastrophic results. Deletion, reformatting, or some kind of biblical flushing of our entire universe (or maybe just our species), would unfold. Leave the Matrix alone! In fact, please pretend it isn’t even there.

Via Wildcat2030
Scooped by FastTFriend

Web of illusion: how the internet affects our confidence in what we know

The internet can give us the illusion of knowledge, making us think we are smarter than we really are. Fortunately, there may be a cure for our arrogance, writes psychologist Tom Stafford.
The internet has a reputation for harbouring know-it-alls.
FastTFriend's insight:

"In a new paper, Matt Fisher of Yale University considers a particular type of thinking known as transactive memory, which is the idea that we rely on other people and other parts of the world – books, objects – to remember things for us. If you’ve ever left something you needed for work by the door the night before, then you’ve been using transactive memory."

Scooped by FastTFriend

Choosing Empathy | Edge.org

FastTFriend's insight:

What is empathy, and where does it come from in our intellectual landscape? The term, empathy, [was] generated by German aesthetic philosophers. The term in German is Einfühlung, which is when you "feel yourself into" an art object. The term originates from Theodor Lipps, following Robert Vischer, another aesthetic philosopher. They both believed that the way we make contact with art is not by assessing its qualities in an objective sense, but by feeling into it, by projecting ourselves emotionally into a work. That was translated into English 106 years ago by Titchener, into the word empathy.
