cognition
 
Scooped by FastTFriend
onto cognition

Critical Thinking Is Best Taught Outside the Classroom: Scientific American

Critical thinking is a teachable skill best taught outside the K–12 classroom
FastTFriend's insight:

Museums and other institutions of informal learning may be better suited to teach this skill than elementary and secondary schools. At the Exploratorium in San Francisco, we recently studied how learning to ask good questions can affect the quality of people's scientific inquiry. We found that when we taught participants to ask “What if?” and “How can?” questions that nobody present would know the answer to and that would spark exploration, they engaged in better inquiry at the next exhibit—asking more questions, performing more experiments and making better interpretations of their results. Specifically, their questions became more comprehensive at the new exhibit. Rather than merely asking about something they wanted to try (“What happens when you block out a magnet?”), they tended to include both cause and effect in their question (“What if we pull this one magnet out and see if the other ones move by the same amount?”). Asking juicy questions appears to be a transferable skill for deepening collaborative inquiry into the science content found in exhibits.

cognition
How it evolved, what we do with it, futures; and otherwise interesting stuff
Curated by FastTFriend
Scooped by FastTFriend

The problem with facts

Tim Harford on how today’s politicians deal with inconvenient truths
FastTFriend's insight:
Facts rarely stand up for themselves — they need someone to make us care about them, to make us curious. That’s what Rosling did. And faced with the apocalyptic possibility of a world where the facts don’t matter, that is the example we must follow.
Rescooped by FastTFriend from Cerveau et neurosciences

The Purpose of Sleep is to Forget

A pair of papers published on Thursday in the journal Science offer evidence for another notion: We sleep to forget some of the things we learn each day.

Via Dr. Stefan Gruenwald, CORTEX MAG
Rescooped by FastTFriend from The Long Poiesis

On the dark history of intelligence as domination – Stephen Cave | Aeon Essays

Intelligence has always been used as a fig-leaf to justify domination and destruction. No wonder we fear super-smart robots

Via Xaos
Rescooped by FastTFriend from Knowmads, Infocology of the future

Why upgrading your brain could make you less human – Michael Bess | Aeon Ideas

Within the lifetimes of most children today, bioenhancement is likely to become a basic feature of human society. Personalised pharmaceuticals will enable us to modify our bodies and minds in powerful and precise ways, with far fewer side-effects than today’s drugs. New brain-machine interfaces will improve our memory and cognition, extend our senses, and confer direct control over an array of semi-intelligent gadgets. Genetic and epigenetic modification will allow us to change our physical appearance and capabilities, as well as to tweak some of the more intangible aspects of our being such as emotion, creativity or sociability.

Do you find these ideas disquieting? One of the more insidious effects of such self-editing is that it will blur the boundary between persons and things. The reason is simple: bioenhancements are products. They require machines, chemicals, tools and techniques that develop over time. They become obsolete after a number of years. They are likely to be available for purchase on the open market. Some will be better than others, and more expensive than others. Some – like cars or jewellery or your house – will confer a greater or lesser degree of prestige.
But if we’re not careful, we ignore the fact that these ‘products’ are altering key aspects of a human being’s selfhood. Without realising it, we drift into an instrumental mode of thought, which would reduce a person to the sum total of her modified or unmodified traits. We could lose sight of the individual’s intrinsic value and dignity, and start comparing people as if they were used vehicles in a car lot.

Via Wildcat2030
Scooped by FastTFriend

The Grasshopper - Third Edition - Broadview Press

“Philosophers are not generally known for fine writing, but once in a generation or two a book appears out of nowhere, unclassifiable, inspired, amazing, mesmerizing, wonderful, classic … ” — Philosophy and Literature

Scooped by FastTFriend

echo chambers: old psych, new tech

If you were surprised by the result of the Brexit vote in the UK or by the Trump victory in the US, you might live in an echo chamber – a self-reinforcing world of people who share the same opinions as you. Echo chambers are a problem, and not just because it means some people…
Rescooped by FastTFriend from The future of medicine and health

Psilocybin: A Journey beyond the Fear of Death?

In one of the largest and most rigorous clinical investigations of psychedelic drugs to date, researchers at Johns Hopkins University and New York University have found that a single dose of psilocybin—the psychoactive compound in “magic” mushrooms—substantially diminished depression and anxiety in patients with advanced cancer.

Psychedelics were the subject of a flurry of serious medical research in the 1960s, when many scientists believed some of the mind-bending compounds held tremendous therapeutic promise for treating a number of conditions including severe mental health problems and alcohol addiction. But flamboyant Harvard psychology professor Timothy Leary—one of the top scientists involved—started aggressively promoting LSD as a consciousness expansion tool for the masses, and the youth counterculture movement answered the call in a big way. Leary lost his job and eventually became an international fugitive. Virtually all legal research on psychedelics shuddered to a halt when federal drug policies hardened in the 1970s.

The decades-long research blackout ended in 1999 when Roland Griffiths of Johns Hopkins was among the first to initiate a new series of studies on psilocybin. Griffiths has been called the grandfather of the current psychedelics research renaissance, and a 21st-century pioneer in the field—but the soft-spoken investigator is no activist or shaman/showman in the mold of Leary. He’s a scientifically cautious clinical pharmacologist and author of more than 300 studies on mood-altering substances from coffee to ketamine.

Much of Griffiths’ fascination with psychedelics stems from his own mindfulness meditation practice, which he says sparked his interest in altered states of consciousness. When he started administering psilocybin to volunteers for his research, he was stunned that more than two-thirds of the participants rated their psychedelic journey one of the most important experiences of their lives.


Via Wildcat2030
Rescooped by FastTFriend from Philosophy everywhere everywhen

Would you ditch your therapist for a “philosophical counselor”?

Instead of going to traditional psychotherapists for advice and support, growing numbers of people are turning to philosophical counselors for particularly wise guidance. These counselors work much like traditional psychotherapists. But instead of offering solutions based solely on their understanding of mental health or psychology, philosophical counselors offer solutions and guidance drawn from the writings of great thinkers.

Millennia of philosophical studies can provide practical advice for those experiencing practical difficulties: There’s an entire field of philosophy that explores moral issues; stoic philosophers show us how to weather hardship; the existentialists advise on anxiety; and Aristotle was one of the first thinkers to question what makes a “good life.” All these topics make up a good chunk of any therapy session, philosophical or otherwise.

Philosophical counseling has been available since the early 1990s, when Elliot Cohen came up with the idea and founded the National Philosophical Counseling Association (NPCA) with around 20 counselors. The NPCA’s website suggests writer’s block, job loss, procrastination, and rejection are all appropriate subjects for philosophical guidance. (However, counselors will refer clients to a psychiatrist if they think they’re suffering from a serious mental health issue.) Clients pay about $100 a session for philosophically guided advice, and each session lasts roughly an hour.

“I saw so many people who had all these problems of living that seemed to be amenable to the thinking that students do in Philosophy 101 and Introduction to Logic,” Cohen says. He often draws on French existentialist Jean-Paul Sartre, who believed that you are nothing more than your own actions. “If you don’t act, you don’t define yourself and you don’t become anything but a disappointed dream or expectation,” he adds.

Via Wildcat2030
Rescooped by FastTFriend from Knowmads, Infocology of the future

How trees communicate via a Wood Wide Web

A new book, The Hidden Life of Trees, claims that trees talk to one another. But is this really the case? The simple answer is that plants certainly exchange information with one another and other organisms such as insects. Think of the scents of newly mowed grass or crushed sage. Some of the chemicals that make up these aromas will tell other plants to prepare for an attack or summon predatory insects to defend them. These evocative smells could be seen as cries of warning or screams for help.

When plants are damaged by infection or by being eaten, they release a range of volatile molecules into the air around them. After exposure to some of these chemicals, nearby plants of the same species and even other species become less vulnerable to attack, for example by producing toxins or substances that make themselves harder to digest. These changes don’t usually happen straight away, but the genes needed turn on much more quickly when they are required.

There is also evidence that the chemicals released by plants in a particular location are subtly different from those released elsewhere by the same species. Consequently, it seems that if plants talk, they even have languages or at least regional accents.

Via Wildcat2030
Esprit Solutions Pvt. Ltd.'s comment, October 3, 2016 6:57 AM
amazing
Abby E Lewis's comment, October 4, 2016 5:41 AM
So interesting, thanks!
Scooped by FastTFriend

Do dialect speakers get the same benefits as bilinguals? – Michael Erard | Aeon Essays

People who can switch between street dialects and standard language might have the same cognitive advantage as bilinguals
FastTFriend's insight:
Navigating among differences and exercising restraint may hold the key to the advantages displayed by speakers of more than one language or dialect.
Rescooped by FastTFriend from Philosophy everywhere everywhen

Can religion be based on ritual practice without belief? – Christopher Kavanagh | Aeon Essays

Since the dawn of anthropology, sociology and psychology, religion has been an object of fascination. Founding figures such as Sigmund Freud, Émile Durkheim and Max Weber all attempted to dissect it, taxonomise it, and explore its psychological and social functions. And long before the advent of the modern social sciences, philosophers such as Xenophanes, Lucretius, David Hume and Ludwig Feuerbach had pondered the origins of religion.

In the century since the founding of the social sciences, interest in religion has not waned – but confidence in grand theorising about it has. Few would now endorse Freud’s insistence that the origins of religion are entwined with Oedipal sexual desires towards mothers. Weber’s linkage of a Protestant work ethic and the origins of capitalism might remain influential, but his broader comparisons between the religion and culture of the occidental and oriental worlds are now rightly regarded as historically inaccurate and deeply Euro-centric.

Today, such sweeping claims about religion are looked upon skeptically, and a circumscribed relativism has instead become the norm. However, a new empirical approach to examining religion – dubbed the cognitive science of religion (CSR) – has recently perturbed the ghosts of theoretical grandeur by offering explanations for religious beliefs and practices that are informed by theories of evolution and therefore involve cognitive processes thought to be prevalent, if not universal, among human beings.

This approach, like its Victorian predecessors, offers the possibility of discovering universal commonalities among the many idiosyncrasies in religious concepts, beliefs and practices found across history and culture. But unlike previous efforts, modern researchers largely eschew any attempt to provide a single monocausal explanation for religion, arguing that to do so is as meaningless as searching for a single explanation for art or science. These categories are just too broad for such an analysis. Instead, as the cognitive anthropologist Harvey Whitehouse at the University of Oxford puts it, a scientific study of religion must begin by ‘fractionating’ the concept of religion, breaking down the category into specific features that can be individually explored and explained, such as the belief in moralistic High Gods or participation in collective rituals.

For critics of the cognitive science of religion, this approach repeats the mistakes of the old grand theorists, just dressed up in trendy theoretical garb. The charge is that researchers are guilty of reifying the concept of religion as a universal, an ethnocentric approach that fails to appreciate the cultural diversity of the real world. Perhaps ironically, it is scholars in the Study of Religions discipline that now express the most skepticism about the usefulness of the term ‘religion’. They argue that it is inextricably Western and therefore loaded with assumptions related to the Abrahamic religious institutions that dominate in the West. For instance, the religious studies scholar Russell McCutcheon at the University of Alabama argues in Manufacturing Religion (1997) that scholars treating religion as a natural category have produced analyses that are ‘ahistorical, apolitical [and] fetishised’.

Via Wildcat2030
Rescooped by FastTFriend from Daily Magazine

In Hospital ICUs, AI Could Predict Which Patients Are Likely to Die

Hospitals have an understandable goal for their intensive care units: to reduce “dead in bed” events.  

Via THE *OFFICIAL ANDREASCY*
FastTFriend's insight:
How will it alter staff's decision making?
Scooped by FastTFriend

Will Democracy Survive Big Data and Artificial Intelligence?

We are in the middle of a technological upheaval that will transform the way society is organized. We must make the right decisions now
Scooped by FastTFriend

The Function of Reason | Edge.org

FastTFriend's insight:
Contrary to the standard view of reason as a capacity that enhances the individual in his or her cognitive capacities—the standard image is of Rodin’s "Thinker," thinking on his own and discovering new ideas—what we say now is that the basic functions of reason are social.
Scooped by FastTFriend

On shared false memories: what lies behind the Mandela effect – Caitlin Aamodt | Aeon Ideas

Would you trust a memory that felt as real as all your other memories, and if other people confirmed that they remembered it too? What if the memory turned out to be false? This scenario was named the ‘Mandela effect’ by the self-described…
Rescooped by FastTFriend from Knowmads, Infocology of the future

Move over Asimov: 23 principles to make AI safe and ethical

Poised to seriously disrupt the world, will the impacts of artificial intelligence be for the good of humanity, or destroy it? The question sounds like the basis of a sci-fi flick, but with the speed that AI is advancing, hundreds of AI and robotics researchers have converged to compile the Asilomar AI Principles, a list of 23 principles, priorities and precautions that should guide the development of artificial intelligence to ensure it's safe, ethical and beneficial.

The list is the brainchild of the Future of Life Institute, an organization that aims to help humanity steer a safe course through the risks that might arise from new technology. Prominent members include the likes of Stephen Hawking and Elon Musk, and the group focuses on the potential threats to our species posed by technologies and issues like artificial intelligence, biotechnology, nuclear weapons and climate change.

At the Beneficial Artificial Intelligence (BAI) 2017 conference in January, the group gathered AI researchers from universities and companies to discuss the future of artificial intelligence and how it should be regulated. Before the meeting, the institute quizzed attendees on how they thought AI development needed to be prioritized and managed in the coming years, and used those responses to create a list of potential points. The revised version was studied at the conference, and only when 90 percent of the scientists agreed on a point would it be included in the final list.

The full list of the Asilomar AI Principles reads like an extended version of Isaac Asimov's famous Three Laws of Robotics. The 23 points are grouped into three areas: Research Issues, Ethics and Values, and Longer-Term Issues.

Via Wildcat2030
prgnewshawaii's curator insight, February 4, 9:55 PM

This article reminds me of Isaac Asimov's "Three Laws of Robotics." A great idea in theory, but when AI falls into the hands of hackers, criminal syndicates, and state-supported agents, morality and decency go out the window. I hope AI turns out to be "safe, ethical, and beneficial", but, given human nature, that may not be preordained. Hope for the best, but prepare for the worst.

Russell Roberts

Hawaii Intelligence Digest

https://hawaiiintelligencedigest.com.

https://paper.li/f-1482109921

 

Rescooped by FastTFriend from Philosophy everywhere everywhen

Why Tolkien's fantastic imaginary languages have had more impact than Esperanto

JRR Tolkien began writing The Fall of Gondolin while on medical leave from the first world war, 100 years ago this month. It is the first story in what would become his legendarium – the mythology that underpins The Lord of the Rings. But behind the fiction was his interest in another epic act of creation: the construction of imaginary languages.

That same year, on the other side of Europe, Ludwik Zamenhof died in his native Poland. Zamenhof had also been obsessed with language invention, and in 1887 brought out a book introducing his own creation. He published this under the pseudonym Doktoro Esperanto, which in time became the name of the language itself.

The construction of imaginary languages, or conlangs, has a long history, dating back to the 12th century. And Tolkien and Zamenhof are two of its most successful proponents. Yet their aims were very different, and in fact point to opposing views of what language itself actually is.

Zamenhof, a Polish Jew growing up in a country where cultural and ethnic animosity was rife, believed that the existence of a universal language was the key to peaceful co-existence. Although language is the “prime motor of civilisation” he wrote, “difference of speech is a cause of antipathy, nay even of hatred, between people”. His plan was to devise something which was simple to learn, not tied to any one nation or culture, and could thus help unite rather than divide humanity.

As “international auxiliary languages” go, Esperanto has been very successful. At its peak, its speakers numbered in the millions, and although exact estimates are very difficult to make, even today up to a million people still use it. It has an expansive body of native literature, there’s a museum in China dedicated exclusively to it, while in Japan Zamenhof himself is even honoured as a god by one particular Shinto sect who use the language. Yet it never really came close to achieving his dreams of world harmony. And at his death, with World War I tearing Europe apart, the optimism he’d had for it had turned mostly to disillusion.

Via Wildcat2030
Rescooped by FastTFriend from Philosophy everywhere everywhen

This Simple Philosophical Puzzle Shows How Difficult It Is to Know Something - Facts So Romantic - Nautilus

In the 1960s, the American philosopher Edmund Gettier devised a thought experiment that has become known as a “Gettier case.” It shows that something’s “off” about the way we understand knowledge. This ordeal is called the “Gettier problem,” and 50 years later, philosophers are still arguing about it. Jennifer Nagel, a philosopher of mind at the University of Toronto, sums up its appeal. “The resilience of the Gettier problem,” she says, “suggests that it is difficult (if not impossible) to develop any explicit reductive theory of knowledge.”

What is knowledge? Well, thinkers for thousands of years had more or less taken one definition for granted: Knowledge is “justified true belief.” The reasoning seemed solid: Just believing something that happens to be true doesn’t necessarily make it knowledge. If your friend says to you that she knows what you ate last night (say it’s veggie pizza), and happens to be right after guessing, that doesn’t mean she knew. That was just a lucky guess—a mere true belief. Your friend would know, though, if she said veggie pizza because she saw you eat it—that’s the “justification” part. Your friend, in that case, would have good reason to believe you ate it.
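As a compact way to see the structure this paragraph describes, the traditional analysis can be written as a schematic formula. The notation below (K, B and J for knowledge, belief and justification, for a subject S and proposition p) is my own shorthand for illustration and is not taken from the article.

% The “justified true belief” (JTB) analysis of knowledge, in schematic form.
% K_S(p): S knows that p;  B_S(p): S believes that p;  J_S(p): S is justified in believing that p.
K_S(p) \iff p \,\wedge\, B_S(p) \,\wedge\, J_S(p)
% A Gettier case is a situation in which the three conditions on the right all hold,
% yet intuitively K_S(p) does not; the vignettes thus show that justified true belief
% is not sufficient for knowledge.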

The reason the Gettier problem is renowned is because Gettier showed, using little short stories, that this intuitive definition of knowledge was flawed. His 1963 paper, titled “Is Justified True Belief Knowledge?” resembles an undergraduate assignment. It’s just three pages long. But that’s all Gettier needed to revolutionize his field, epistemology, the study of the theory of knowledge.

The “problem” in a Gettier problem emerges in little, unassuming vignettes. Gettier had his, and philosophers have since come up with variations of their own. Try this version, from the University of Birmingham philosopher Scott Sturgeon:

Suppose I burgle your house, find two bottles of Newcastle Brown in the kitchen, drink and replace them. You remember purchasing the ale and come to believe there will be two bottles waiting for you at home. Your belief is justified and true, but you do not know what’s going on.

Via Wildcat2030
Rescooped by FastTFriend from Philosophy everywhere everywhen

The post-truth era of Trump is just what Nietzsche predicted

The morning of the US presidential election, I was leading a graduate seminar on Friedrich Nietzsche’s critique of truth. It turned out to be all too apt.

Nietzsche, the German counter-Enlightenment thinker of the late 19th century, seemed to suggest that objective truth – the concept of truth that most philosophers relied on at the time – doesn’t really exist. That idea, he wrote, is a relic of an age when God was the guarantor of what counted as the objective view of the world, but God is dead, meaning that objective, absolute truth is an impossibility. God’s point of view is no longer available to determine what is true.

Nietzsche fancied himself a prophet of things to come – and not long after Donald Trump won the presidency, the Oxford Dictionaries declared the international word of the year 2016 to be “post-truth”.

Indeed, one of the characteristics of Trump’s campaign was its scorn for facts and the truth. Trump himself unabashedly made any claim that seemed fit for his purpose of being elected: that crime levels are sky-high, that climate change is a Chinese hoax, that he’d never called it a Chinese hoax, and so on. But the exposure of his constant contradictions and untruths didn’t stop him. He won.

Nietzsche offers us a way of understanding how this happened. As he saw it, once we realise that the idea of an absolute, objective truth is a philosophical hoax, the only alternative is a position called “perspectivism” – the idea there is no one objective way the world is, only perspectives on what the world is like.

This might seem outlandish. After all, surely we all agree certain things are objectively true: Trump’s predecessor as president is Barack Obama, the capital of France is Paris, and so on. But according to perspectivism, we agree on those things not because these propositions are “objectively true”, but by virtue of sharing the same perspective.

When it comes to basic matters, sharing a perspective on the truth is easy – but when it comes to issues such as morality, religion and politics, agreement is much harder to achieve. People occupy different perspectives, seeing the world and themselves in radically different ways. These perspectives are each shaped by the biases, the desires and the interests of those who hold them; they can vary wildly, and therefore so can the way people see the world.
Your truth, my truth

A core tenet of Enlightenment thought was that our shared humanity, or a shared faculty called reason, could serve as an antidote to differences of opinion: a common ground that can function as the arbiter of different perspectives. Of course people disagree, but, the idea goes, through reason and argument they can come to see the truth. Nietzsche’s philosophy, however, claims such ideals are philosophical illusions, wishful thinking, or at worst a covert way of imposing one’s own view on everyone else under the pretence of rationality and truth.

Via Wildcat2030
Scooped by FastTFriend

Beyond humans, what other kinds of minds might be out there? – Murray Shanahan | Aeon Essays

From algorithms to aliens, could humans ever understand minds that are radically unlike our own?
FastTFriend's insight:
"But even if none of these science-fiction scenarios comes about, to situate human consciousness within a larger space of possibilities strikes me as one of the most profound philosophical projects we can undertake. It is also a neglected one. With no giants upon whose shoulders to stand, the best we can do is cast a few flares into the darkness."
Scooped by FastTFriend

Chimps May Be Capable of Comprehending the Minds of Others

A gorilla-suit experiment reveals our closest animal relatives may possess “theory of mind”
Rescooped by FastTFriend from Bounded Rationality and Beyond

Reasoning the Fast and Frugal Way: Models of Bounded Rationality

Gerd Gigerenzer and Daniel G. Goldstein, Max Planck Institute for Psychological Research and University of Chicago.

Humans and animals make inferences about the world under limited time and knowledge. In contrast, many models of rational inference treat the mind as a Laplacean Demon, equipped with unlimited time, knowledge, and computational might. Following H. Simon's notion of satisficing, the authors have proposed a family of algorithms based on a simple psychological mechanism: one-reason decision making. These fast and frugal algorithms violate fundamental tenets of classical rationality: they neither look up nor integrate all information. By computer simulation, the authors held a competition between the satisficing "Take The Best" algorithm and various "rational" inference procedures (e.g., multiple regression). The Take The Best algorithm matched or outperformed all competitors in inferential speed and accuracy. This result is an existence proof that cognitive mechanisms capable of successful performance in the real world do not need to satisfy the classical norms of rational inference.
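To make the one-reason decision making described in the abstract concrete, here is a minimal Python sketch of the Take The Best search-stop-decide loop: cues are examined in order of validity, and the first cue that discriminates between the two objects settles the inference. The cue names, the example values, the handling of unknown values and the omission of the paper's initial recognition step are my own simplifications for illustration, not the authors' reference implementation.

import random

def take_the_best(obj_a, obj_b, cues_by_validity):
    # Try cues one at a time, most valid first; the first cue that discriminates
    # decides the inference ("one-reason decision making"). Cue values: 1 = positive,
    # 0 = negative, None = unknown. The recognition step is omitted for brevity.
    for cue in cues_by_validity:
        a, b = obj_a.get(cue), obj_b.get(cue)
        if a == 1 and b in (0, None):
            return "a"   # stop search: object a wins on this cue alone
        if b == 1 and a in (0, None):
            return "b"   # stop search: object b wins on this cue alone
    return random.choice(["a", "b"])  # no cue discriminates: guess

# Hypothetical cues and values, loosely modelled on the paper's
# "which German city has the larger population?" task (illustrative only).
cues = ["national_capital", "exposition_site", "soccer_team", "intercity_train"]
city_a = {"national_capital": 0, "exposition_site": 1, "soccer_team": 1, "intercity_train": 1}
city_b = {"national_capital": 0, "exposition_site": 0, "soccer_team": 1, "intercity_train": None}
print(take_the_best(city_a, city_b, cues))  # -> "a", decided by the exposition_site cue

The simulation competition mentioned in the abstract makes exactly this point: a stopping rule that uses a single discriminating cue, and therefore ignores most of the available information, can match or beat integration strategies such as multiple regression on both speed and accuracy.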

Via Alessandro Cerboni
Scooped by FastTFriend

Why panpsychism fails to solve the mystery of consciousness – Keith Frankish | Aeon Ideas

Is consciousness everywhere? Is it a basic feature of the Universe, at the very heart of the tiniest subatomic particles? Such an idea – panpsychism as it is known – might sound like New Age mysticism, but some hard-nosed analytic philosophers…