Philosophy everywhere everywhen
First Law of Philosophy: For every philosopher, there is an equal and opposite philosopher. Second Law of Philosophy: They're both wrong.
Curated by Wildcat2030
Scooped by Wildcat2030!

This Simple Philosophical Puzzle Shows How Difficult It Is to Know Something - Facts So Romantic - Nautilus

In the 1960s, the American philosopher Edmund Gettier devised a thought experiment that has become known as a “Gettier case.” It shows that something’s “off” about the way we understand knowledge. This puzzle is called the “Gettier problem,” and 50 years later, philosophers are still arguing about it. Jennifer Nagel, a philosopher of mind at the University of Toronto, sums up its appeal. “The resilience of the Gettier problem,” she says, “suggests that it is difficult (if not impossible) to develop any explicit reductive theory of knowledge.”

What is knowledge? Well, thinkers for thousands of years had more or less taken one definition for granted: Knowledge is “justified true belief.” The reasoning seemed solid: Just believing something that happens to be true doesn’t necessarily make it knowledge. If your friend says to you that she knows what you ate last night (say it’s veggie pizza), and happens to be right after guessing, that doesn’t mean she knew. That was just a lucky guess—a mere true belief. Your friend would know, though, if she said veggie pizza because she saw you eat it—that’s the “justification” part. Your friend, in that case, would have good reason to believe you ate it.

The Gettier problem is renowned because Gettier showed, using little short stories, that this intuitive definition of knowledge was flawed. His 1963 paper, titled “Is Justified True Belief Knowledge?” resembles an undergraduate assignment. It’s just three pages long. But that’s all Gettier needed to revolutionize his field, epistemology, the study of the theory of knowledge.

The “problem” in a Gettier problem emerges in little, unassuming vignettes. Gettier had his, and philosophers have since come up with variations of their own. Try this version, from the University of Birmingham philosopher Scott Sturgeon:

Suppose I burgle your house, find two bottles of Newcastle Brown in the kitchen, drink and replace them. You remember purchasing the ale and come to believe there will be two bottles waiting for you at home. Your belief is justified and true, but you do not know what’s going on.

The post-truth era of Trump is just what Nietzsche predicted

The morning of the US presidential election, I was leading a graduate seminar on Friedrich Nietzsche’s critique of truth. It turned out to be all too apt.

Nietzsche, the German counter-Enlightenment thinker of the late 19th century, seemed to suggest that objective truth – the concept of truth that most philosophers relied on at the time – doesn’t really exist. That idea, he wrote, is a relic of an age when God was the guarantor of what counted as the objective view of the world, but God is dead, meaning that objective, absolute truth is an impossibility. God’s point of view is no longer available to determine what is true.

Nietzsche fancied himself a prophet of things to come – and not long after Donald Trump won the presidency, the Oxford Dictionaries declared the international word of the year 2016 to be “post-truth”.

Indeed, one of the characteristics of Trump’s campaign was its scorn for facts and the truth. Trump himself unabashedly made any claim that seemed fit for his purpose of being elected: that crime levels are sky-high, that climate change is a Chinese hoax, that he’d never called it a Chinese hoax, and so on. But the exposure of his constant contradictions and untruths didn’t stop him. He won.

Nietzsche offers us a way of understanding how this happened. As he saw it, once we realise that the idea of an absolute, objective truth is a philosophical hoax, the only alternative is a position called “perspectivism” – the idea there is no one objective way the world is, only perspectives on what the world is like.

This might seem outlandish. After all, surely we all agree certain things are objectively true: Trump’s predecessor as president is Barack Obama, the capital of France is Paris, and so on. But according to perspectivism, we agree on those things not because these propositions are “objectively true”, but by virtue of sharing the same perspective.

When it comes to basic matters, sharing a perspective on the truth is easy – but when it comes to issues such as morality, religion and politics, agreement is much harder to achieve. People occupy different perspectives, seeing the world and themselves in radically different ways. These perspectives are each shaped by the biases, the desires and the interests of those who hold them; they can vary wildly, and therefore so can the way people see the world.
Your truth, my truth

A core tenet of Enlightenment thought was that our shared humanity, or a shared faculty called reason, could serve as an antidote to differences of opinion – a common ground that can function as the arbiter of different perspectives. Of course people disagree, but, the idea goes, through reason and argument they can come to see the truth. Nietzsche’s philosophy, however, claims such ideals are philosophical illusions, wishful thinking, or at worst a covert way of imposing one’s own view on everyone else under the pretence of rationality and truth.

Authenticity in the Age of the Fake - Issue 42: Fakes - Nautilus

The announcement of synthetic diamonds in 1955 was met with the same kind of alarm and skepticism that greeted claims to have made alchemical gold in the Middle Ages. Could these “fake” gems, created by a team at the General Electric research laboratories in Schenectady, New York, really match the genuine article? One anonymous critic from California captured a widespread suspicion in blunt terms when he wrote to GE, saying:

You can’t make real diamonds for they are nature grown. You can’t make gold; no one can. They dig gold out of the ground and also diamonds. But no one can make them with a machine. That is just a lot of bull.
Yet what if it were true that diamonds really can be manufactured? When GE revealed the discovery, the stock of the De Beers diamond cartel in South Africa, which dominated the global market, plummeted. It seemed like a rare and precious commodity was about to be supplanted by an artificial form that could be fabricated by the ton, mirroring a millennia-old concern about the devastating power of fakes. Concerns over the devaluation of gold currency led the Roman emperor Diocletian to ban alchemy in the third century, and worries about counterfeiting and debased coinage also lay behind the condemnations of the art by Pope John XXII in 1317 and of King Henry IV of England in 1403. This, though, was no alchemy: The GE diamonds were perfect chemical replicas of the real thing. Was it the end of a billion-dollar market?

The answer was no. “Fake” diamonds are cheaper, and for industrial uses they have utterly eclipsed their natural counterparts. But at the luxury end of the market—gemstones for jewelry—artificial diamonds account for only 2 percent of global sales. How come?

When it comes to luxury and exotic materials, the competition between fake and real is partly a technical, chemical affair: how to create a good imitation, and how to spot it. But, as artificial gold and diamonds show, there is a deeper level to it, which is about something very human and socially constructed: the concept and value of authenticity.

Dalai Lama: Behind Our Anxiety, the Fear of Being Unneeded

In many ways, there has never been a better time to be alive. Violence plagues some corners of the world, and too many still live under the grip of tyrannical regimes. And although all the world’s major faiths teach love, compassion and tolerance, unthinkable violence is being perpetrated in the name of religion.

And yet, fewer among us are poor, fewer are hungry, fewer children are dying, and more men and women can read than ever before. In many countries, recognition of women’s and minority rights is now the norm. There is still much work to do, of course, but there is hope and there is progress.

How strange, then, to see such anger and great discontent in some of the world’s richest nations. In the United States, Britain and across the European Continent, people are convulsed with political frustration and anxiety about the future. Refugees and migrants clamor for the chance to live in these safe, prosperous countries, but those who already live in those promised lands report great uneasiness about their own futures that seems to border on hopelessness.


A small hint comes from interesting research about how people thrive. In one shocking experiment, researchers found that senior citizens who didn’t feel useful to others were nearly three times as likely to die prematurely as those who did feel useful. This speaks to a broader human truth: We all need to be needed.

Being “needed” does not entail selfish pride or unhealthy attachment to the worldly esteem of others. Rather, it consists of a natural human hunger to serve our fellow men and women. As the 13th-century Buddhist sages taught, “If one lights a fire for others, it will also brighten one’s own way.”


Consciousness could be a side effect of 'entropy', say researchers

It's impressive enough that our human brains are made up of the same 'star stuff' that forms the Universe, but new research suggests that this might not be the only thing the two have in common.

Just like the Universe, our brains might be programmed to maximise disorder - similar to the principle of entropy - and our consciousness could simply be a side effect.

The quest to understand human consciousness - our ability to be aware of ourselves and our surroundings - has been going on for centuries. Although consciousness is a crucial part of being human, researchers still don't truly understand where it comes from, and why we have it.

But a new study, led by researchers from France and Canada, puts forward a new possibility: what if consciousness arises naturally as a result of our brains maximising their information content? In other words, what if consciousness is a side effect of our brain moving towards a state of entropy?

Entropy is basically the term used to describe the progression of a system from order to disorder. Picture an egg: when it's all perfectly separated into yolk and white, it has low entropy, but when you scramble it, it has high entropy - it's the most disordered it can be.

This is what many physicists believe is happening to our Universe. Since the Big Bang, the Universe has gradually been moving from a state of low entropy to high entropy, and because the second law of thermodynamics states that entropy can only increase in an isolated system, it could explain why the arrow of time only ever moves forwards.

So researchers decided to apply the same thinking to the connections in our brains, and investigate whether they show any patterns in the way they choose to order themselves while we're conscious.
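The idea of the brain “maximising its information content” can be made concrete with a toy calculation (a hedged sketch of the general reasoning, not the study’s actual analysis): if a network has N possible pairwise connections, there are C(N, k) ways for exactly k of them to be active, and treating the entropy of that state as the log of that count shows it peaks when roughly half the connections are active – the most “disordered” arrangement, like the scrambled egg.

```python
from math import comb, log

def configuration_entropy(num_pairs: int, num_connected: int) -> float:
    """Entropy (in nats) of a network state with num_connected active
    links out of num_pairs possible pairwise connections, measured as
    the log of the number of ways that state can be realised."""
    return log(comb(num_pairs, num_connected))

# With 10 possible connections, entropy is lowest at the fully ordered
# extremes (0 or 10 active links) and highest when about half are active.
entropies = [configuration_entropy(10, k) for k in range(11)]
```

Here the function names and the choice of 10 connections are purely illustrative; the point is only that the count of possible configurations, and hence the entropy, is maximised midway between full order and full connectivity.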

Bob Dylan's Nobel prize – and what really defines literature

Bob Dylan’s Nobel prize win and the ensuing debate as to whether a musician should have been considered is a striking comment on the seemingly glib question of what literature actually is. And with the Man Booker prize also just around the corner, how and why literature matters are topics currently animating plenty of cultural debate.

Assessing the literary merit of Dylan’s work is nothing new. Christopher Ricks, a former Professor of Poetry at the University of Oxford, published a book on Dylan back in 2003, and a Cambridge Companion to Bob Dylan was released a few years later. But others have argued that his Nobel award snubs those who write “literature” — as in, in books.

The Nobel Prize in Literature is awarded to a writer who has produced for the field of literature, in Alfred Nobel’s words, “the most outstanding work in an ideal direction”. Dylan won the prize for having “created new poetic expressions”. The UK’s former Poet Laureate Andrew Motion commented that Dylan’s songs are “often the best words in the best order”. And Professor Sara Danius, Permanent Secretary of the Swedish Academy, spoke of Dylan’s “pictorial thinking”. The week before Dylan’s win, David Szalay’s All That Man Is, shortlisted for the Man Booker, won the Gordon Burn prize. The judges said the novel “subtly changes the way you look at the contemporary world”.

But what is an “ideal direction” for literature? And how exactly does literature change our relationship with “the contemporary world”?

Is humanity losing faith in reason?

The rise in popularity of Donald Trump and the emotionally-charged Brexit referendum have led many observers to proclaim that we are in an era of post-truth politics. Leave campaigner and Conservative MP Michael Gove spoke to this new reality when he declared that people “have had enough of experts”.

Truth doesn’t seem matter so much as truthiness – the quality of something seeming to be true even if it’s false.

Have we lost faith in reason? Philosopher Julian Baggini asks this question in a well-timed and cogently argued new book, The Edge of Reason: A Rational Skeptic in an Irrational World. Baggini looks at why rationality gets a bad rap these days. In many fields, the experts have let us down. Science has arguably over-reached. And religion – something billions of people continue to hold dear – is frequently portrayed in secular society as incompatible with intellectual coherence if not sanity.

A related phenomenon is the modern penchant for reducing reason to just logic or scientific reasoning. This would seem to rule out the possibility of there being moral truths, something humanity should be slow to surrender.

As Baggini puts it, “reason is not only disinterested reason, acting independently of anything other than value-free facts and logic”. Providing today’s “Unthinkable” idea, he adds: “We cannot look to disinterested reason to provide the basis of morality.”

You say reason is under fire: in what way?

Julian Baggini: “Let me count the ways! First and most recently, the widely documented loss of faith in experts and elites assumes that having greater knowledge and experience in thinking about issues counts for nothing and could even get in the way of a superior common sense. The brain is seen as having failed us and so the gut is trusted instead.

“Second, reason is associated with a dry, scientific world view that has no place for emotion, intuition or faith. Logic is for robots, we are not robots, therefore logic is not for us, which is ironically an attempt at arguing logically.

“Third, reason is routinely dismissed as merely a means of rationalising our prejudices and instinctive beliefs. ‘We all know’ that psychology has shown that the rational mind is not in charge and that the unconscious, driven by emotion and automatic processing, rules.
Rescooped by Wildcat2030 from Wisdom 1.0!

The 50 Most Influential Living Philosophers | The Best Schools

Here are the 50 most influential living philosophers, actively changing our understanding of ourselves and our world. Philosophy is far from dead!

Via Xaos

Not all things wise and good are philosophy – Nicholas Tampio | Aeon Ideas

I have published widely on Islamic political thought, including an encyclopedia entry on the topic. Reading the Quran, Islamic jurisprudence (fiqh), philosophy (falsafa) and Ibn Khaldun’s history of the premodern world, the Muqaddimah (1377), has enriched my life and thought. Yet I disagree with the call, made by Jay L Garfield and Bryan W Van Norden in The New York Times, for philosophy departments to diversify and immediately incorporate courses in African, Indian, Islamic, Jewish, Latin American and Native American ‘philosophy’ into their curriculums. It might seem broadminded to call for philosophy professors to teach ancient Asian scholars such as Confucius and Candrakīrti in addition to dead white men such as David Hume and Immanuel Kant. However, this approach undermines what is distinct about philosophy as an intellectual tradition, and pays other traditions the dubious compliment of saying that they are just like ours. Furthermore, this demand fuels the political campaign to defund academic philosophy departments.

Philosophy originates in Plato’s Republic. It is a restless pursuit for truth through contentious dialogue. It takes place among ordinary human beings in cities, not sages and disciples on mountaintops, and it requires the fearless use of reason even in the face of established traditions or religious commitments. Plato’s book is the first text of philosophy and a reference point for texts as diverse as Aristotle’s Politics, Augustine’s City of God, al-Fārābī’s The Political Regime, and the French philosopher Alain Badiou’s book Plato’s Republic (2013). The British philosopher Alfred North Whitehead once said that the history of philosophy is a series of footnotes to Plato. Even philosophers who do not mention Plato directly still use his words – including ‘ideas’ – and his general orientation that prioritises truth over piety. Philosophy is the love of wisdom rather than the love of blood or country. It is in principle open to everybody, and people all around the world heed Plato’s call to live an examined life.

I am wary of the argument, however, that all serious reflection upon fundamental questions ought to be called philosophy. Philosophy is one among many ways to think about questions such as the origin of the Universe, the nature of justice, or the limits of knowledge. Philosophy, at its best, aims to be a dialogue between people of different viewpoints, but, again, it is a love of wisdom, rather than the possession of wisdom. This restless character has often made it the enemy of religion and tradition.


Can religion be based on ritual practice without belief? – Christopher Kavanagh | Aeon Essays

Since the dawn of anthropology, sociology and psychology, religion has been an object of fascination. Founding figures such as Sigmund Freud, Émile Durkheim and Max Weber all attempted to dissect it, taxonomise it, and explore its psychological and social functions. And long before the advent of the modern social sciences, philosophers such as Xenophanes, Lucretius, David Hume and Ludwig Feuerbach pondered the origins of religion.

In the century since the founding of the social sciences, interest in religion has not waned – but confidence in grand theorising about it has. Few would now endorse Freud’s insistence that the origins of religion are entwined with Oedipal sexual desires towards mothers. Weber’s linkage of a Protestant work ethic and the origins of capitalism might remain influential, but his broader comparisons between the religion and culture of the occidental and oriental worlds are now rightly regarded as historically inaccurate and deeply Euro-centric.

Today, such sweeping claims about religion are looked upon skeptically, and a circumscribed relativism has instead become the norm. However, a new empirical approach to examining religion – dubbed the cognitive science of religion (CSR) – has recently perturbed the ghosts of theoretical grandeur by offering explanations for religious beliefs and practices that are informed by theories of evolution and therefore involve cognitive processes thought to be prevalent, if not universal, among human beings.

This approach, like its Victorian predecessors, offers the possibility of discovering universal commonalities among the many idiosyncrasies in religious concepts, beliefs and practices found across history and culture. But unlike previous efforts, modern researchers largely eschew any attempt to provide a single monocausal explanation for religion, arguing that to do so is as meaningless as searching for a single explanation for art or science. These categories are just too broad for such an analysis. Instead, as the cognitive anthropologist Harvey Whitehouse at the University of Oxford puts it, a scientific study of religion must begin by ‘fractionating’ the concept of religion, breaking down the category into specific features that can be individually explored and explained, such as the belief in moralistic High Gods or participation in collective rituals.

For critics of the cognitive science of religion, this approach repeats the mistakes of the old grand theorists, just dressed up in trendy theoretical garb. The charge is that researchers are guilty of reifying the concept of religion as a universal, an ethnocentric approach that fails to appreciate the cultural diversity of the real world. Perhaps ironically, it is scholars in the Study of Religions discipline that now express the most skepticism about the usefulness of the term ‘religion’. They argue that it is inextricably Western and therefore loaded with assumptions related to the Abrahamic religious institutions that dominate in the West. For instance, the religious studies scholar Russell McCutcheon at the University of Alabama argues in Manufacturing Religion (1997) that scholars treating religion as a natural category have produced analyses that are ‘ahistorical, apolitical [and] fetishised’.

Why keeping a pet is fundamentally unethical – Gary L Francione & Anna E Charlton | Aeon Essays

We live with six rescued dogs. With the exception of one, who was born in a rescue for pregnant dogs, they all came from very sad situations, including circumstances of severe abuse. These dogs are non-human refugees with whom we share our home. Although we love them very much, we strongly believe that they should not have existed in the first place.

We oppose domestication and pet ownership because these violate the fundamental rights of animals.

The term ‘animal rights’ has become largely meaningless. Anyone who thinks that we should give battery hens a small increase in cage space, or that veal calves should be housed in social units rather than in isolation before they are dragged off and slaughtered, is articulating what is generally regarded as an ‘animal rights’ position. This is attributable in large part to Peter Singer, author of Animal Liberation (1975), who is widely considered the ‘father of the animal rights movement’.

The problem with this attribution of paternity is that Singer is a utilitarian who rejects moral rights altogether, and supports any measure that he thinks will reduce suffering. In other words, the ‘father of the animal rights movement’ rejects animal rights altogether and has given his blessing to cage-free eggs, crate-free pork, and just about every ‘happy exploitation’ measure promoted by almost every large animal welfare charity. Singer does not promote animal rights; he promotes animal welfare. He does not reject the use of animals by humans per se. He focuses only on their suffering. In an interview with The Vegan magazine in 2006, he said, for example, that he could ‘imagine a world in which people mostly eat plant foods, but occasionally treat themselves to the luxury of free-range eggs, or possibly even meat from animals who live good lives under conditions natural for their species, and are then humanely killed on the farm’.

We use the term ‘animal rights’ in a different way, similar to the way that ‘human rights’ is used when the fundamental interests of our own species are concerned. For example, if we say that a human has a right to her life, we mean that her fundamental interest in continuing to live will be protected even if using her as a non-consenting organ donor would result in saving the lives of 10 other humans. A right is a way of protecting an interest; it protects interests irrespective of consequences. The protection is not absolute; it may be forfeited under certain circumstances. But the protection cannot be abrogated for consequential reasons alone.

How to Think Like Shakespeare

You’ve been cheated of your birthright: a complete education. In the words of Martin Luther King Jr. (at your age of 18), a "complete education" gives "not only power of concentration, but worthy objectives upon which to concentrate."

But now your education is in your own hands. And my advice is: Don’t let yourself be cheated anymore, and do not cheat yourself. Take advantage of the autonomy and opportunities that college permits by approaching it in the spirit of the 16th century. You’ll become capable of a level of precision, inventiveness, and empathy worthy to be called Shakespearean.

Building a bridge to the 16th century must seem like a perverse prescription for today’s ills. I’m the first to admit that English Renaissance pedagogy was rigid and rightly mocked for its domineering pedants. Few of you would be eager to wake up before 6 a.m. to say mandatory prayers, or to be lashed for tardiness, much less translate Latin for hours on end every day of the week. Could there be a system more antithetical to our own contemporary ideals of student-centered, present-focused, and career-oriented education?

Talkative Orangutan Shows Scientists How Language Evolved - D-brief

An orangutan named Rocky is using “wookies” to reveal new insights into the origins of language.

In experiments conducted by a researcher at the University of Amsterdam, Rocky learned and recited a basic vocabulary of sounds, producing vocalizations no orangutan is known to make. By learning to mimic his human instructor, this talkative primate is lending support to one of the leading theories of language evolution.
Repeat After Me

Adriano Lameira, now a professor in the department of anthropology at Durham University, used food rewards to train Rocky to mimic the sounds a human was making. The sounds, called “wookies”, differ from vocalizations naturally produced by orangutans, termed “grumphs.”

Over time, Rocky got better at producing the wookies, learning to modulate his vocal folds — thin curtains of tissue that vibrate when air is passed over them — and other components of sound production to match the human enunciations. Rocky’s abilities prove that primates can manipulate their vocal folds at a fine scale to create distinct sounds, a key component for building up and using a complex vocabulary.
Language Evolved Gradually

Theories about how proto-languages first came to be are widespread, and cover a pretty broad spectrum. Some say that language emerged from instinctive vocalizations that our ancestors uttered when experiencing strong emotions. Others hold that language emerged from the rhythmic “songs” and vocalizations of early hominins. Another theory holds that language is simply a natural progression from gesture-based communication, which is limited by sight lines and darkness.

The findings lend credence to the idea that language developed slowly, growing more complex over time. The findings were published Wednesday in Scientific Reports.

Wherever language came from, it has two essential components: physical and cognitive capabilities. We need to have both the mental faculties to form and communicate ideas and the bodily structures necessary to produce gestures or sounds.

Signing gorillas can communicate via gestures, proving that they have the mental abilities to do so. Now, Rocky has shown that primates can learn to produce new sounds as well, illustrating that the physical underpinnings of language go back millions of years.

Extraterrestrials May Be Robots Without Consciousness - Cosmos on Nautilus

Humans are probably not the greatest intelligences in the universe. Earth is a relatively young planet and the oldest civilizations could be billions of years older than us. But even on Earth, Homo sapiens may not be the most intelligent species for that much longer.

The world Go, chess, and Jeopardy champions are now all AIs. AI is projected to outmode many human professions within the next few decades. And given the rapid pace of its development, AI may soon advance to artificial general intelligence—intelligence that, like human intelligence, can combine insights from different topic areas and display flexibility and common sense. From there it is a short leap to superintelligent AI, which is smarter than humans in every respect, even those that now seem firmly in the human domain, such as scientific reasoning and social skills. Each of us alive today may be one of the last rungs on the evolutionary ladder that leads from the first living cell to synthetic intelligence.

What we are only beginning to realize is that these two forms of superhuman intelligence—alien and artificial—may not be so distinct. The technological developments we are witnessing today may have all happened before, elsewhere in the universe. The transition from biological to synthetic intelligence may be a general pattern, instantiated over and over, throughout the cosmos. The universe’s greatest intelligences may be postbiological, having grown out of civilizations that were once biological. (This is a view I share with Paul Davies, Steven Dick, Martin Rees, and Seth Shostak, among others.) To judge from the human experience—the only example we have—the transition from biological to postbiological may take only a few hundred years.

How would the Stoics cope today? | Ryan Holiday

Some of us are stressed. Others are overworked, struggling with the new responsibilities of parenthood, or moving from one flawed relationship to another. Whatever it is, whatever you are going through, there is wisdom from the Stoics that can help.

Followers of this ancient and inscrutable philosophy have found themselves at the centre of some of history’s most trying ordeals, from the French Revolution to the American Civil War to the prison camps of Vietnam. Bill Clinton reportedly reads the Roman emperor and Stoic Marcus Aurelius’s Meditations once a year, and one can imagine him handing a copy to Hillary after her heart-wrenching loss in the US presidential election.

Stoicism is a school of philosophy which was founded in Athens in the early 3rd century BC and then progressed to Rome, where it became a pragmatic way of addressing life’s problems. The central message is: we don’t control what happens to us; we control how we respond.

The Stoics were really writing and thinking about one thing: how to live. The questions they asked were not arcane or academic but practical and real. “What do I do about my anger?” “What do I do if someone insults me?” “I’m afraid to die; why is that?” “How can I deal with the difficult situations I face?” “How can I deal with the success or power I hold?”

There also happens to be a decent amount of advice on how to live under the looming threat of a tyrant (“I may wish to be free from torture, but if the time comes for me to endure it, I’ll wish to bear it courageously with bravery and honour,” wrote the Roman philosopher Seneca). All of which makes Stoic philosophy particularly well-suited to the world we live in.

While it would be hard to find a word dealt a greater injustice at the hands of the English language than “stoicism”— with its mistaken connotations of austerity and lack of emotion — in fact, nothing could be more necessary for our times than a good dose of Stoic philosophy.

We Were Wrong About Consciousness Disappearing in Dreamless Sleep, Say Scientists

When it comes to dreamlessness, conventional wisdom states that consciousness disappears when we fall into a deep, dreamless sleep.

But researchers have come up with a new way to define the different ways that we experience dreamlessness, and say there’s no evidence to suggest that our consciousness 'switches off' when we stop dreaming. In fact, they say the state of dreamlessness is way more complicated than we’d even imagined.

"[T]he idea that dreamless sleep is an unconscious state is not well-supported by the evidence," one of the researchers, Evan Thompson from the University of British Columbia in Canada, told Live Science.

Instead, he says the evidence points to the possibility of people having conscious experiences during all states of sleep - including deep sleep - and that could have implications for those accused of committing a crime while sleepwalking.

But first off, what exactly is dreamlessness?

Traditionally, dreamlessness is defined as that part of sleep that occurs between bouts of dreams - a time of deep sleep when your conscious experience is temporarily switched off. This is different from those times when you simply cannot remember your dreams once you've woken up.

As dream researchers from the University of California, Santa Cruz explain, most people over the age of 10 dream at least four to six times per night during a stage of sleep called REM, or Rapid Eye Movement. (Studies suggest that children under age 10 only dream during roughly 20 percent of their REM periods.)

Considering REM periods can vary in length from 5 to 10 minutes for the first REM period of the night to as long as 30-34 minutes later in the night, researchers have suggested that each dream probably lasts no longer than 34 minutes.

Stephen Hawking: AI will be 'either best or worst thing' for humanity

Professor Stephen Hawking has warned that the creation of powerful artificial intelligence will be “either the best, or the worst thing, ever to happen to humanity”, and praised the creation of an academic institute dedicated to researching the future of intelligence as “crucial to the future of our civilisation and our species”.

Hawking was speaking at the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, a multi-disciplinary institute that will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research.

“We spend a great deal of time studying history,” Hawking said, “which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”

Would you ditch your therapist for a “philosophical counselor”?

Instead of going to traditional psychotherapists for advice and support, growing numbers of people are turning to philosophical counselors for particularly wise guidance. These counselors work much like traditional psychotherapists. But instead of offering solutions based solely on their understanding of mental health or psychology, philosophical counselors offer solutions and guidance drawn from the writings of great thinkers.

Millennia of philosophical study can provide practical advice for those facing real difficulties: There’s an entire field of philosophy that explores moral issues; Stoic philosophers show us how to weather hardship; the existentialists advise on anxiety; and Aristotle was one of the first thinkers to question what makes a “good life.” All these topics make up a good chunk of any therapy session, philosophical or otherwise.

Philosophical counseling has been available since the early 1990s, when Elliot Cohen came up with the idea and founded the National Philosophical Counseling Association (NPCA) with around 20 counselors. The NPCA’s website suggests writer’s block, job loss, procrastination, and rejection are all appropriate subjects for philosophical guidance. (However, counselors will refer clients to a psychiatrist if they think they’re suffering from a serious mental health issue.) Clients pay about $100 a session for philosophically guided advice, and each session lasts roughly an hour.

“I saw so many people who had all these problems of living that seemed to be amenable to the thinking that students do in Philosophy 101 and Introduction to Logic,” Cohen says. He often draws on French existentialist Jean-Paul Sartre, who believed that you are nothing more than your own actions. “If you don’t act, you don’t define yourself and you don’t become anything but a disappointed dream or expectation,” he adds.

Can Transcendence Be Taught?

For us, two professors, the opening words of Goethe’s Faust have always been slightly disturbing, but only recently, as we’ve grown older, have they come to haunt us.

Faust sits in his dusty library, surrounded by tomes, and laments the utter inadequacy of human knowledge. He was no average scholar but a true savant — a master in the liberal arts of philosophy and theology and the practical arts of jurisprudence and medicine. In the medieval university, those subjects were the culminating moments of a lifetime of study in rhetoric, logic, grammar, arithmetic, geometry, music, and astronomy.

In other words, Faust knows everything worth knowing. And still, after all his careful bookwork, he arrives at the unsettling realization that none of it has really mattered. His scholarship has done pitifully little to unlock the mystery of human life.

Are we and our students in that same situation? Are we teaching them everything without teaching them anything regarding the big questions that matter most? Is there a curriculum that addresses why we are here? And why we live only to suffer and die?

Those questions are at the root of every great myth and wisdom tradition: the Katha Upanishad, the opening lines of the Bhagavad Gita, Sophocles’ Ajax, and the Book of Job among them. Job cries to the heavens, entreating God to clarify the tortuous perplexity of being human. But God does not oblige, and Job is left in a whirlwind, in the dark, just like Faust at the beginning of Goethe’s modern remake of the ancient biblical story.

Humans may speak a 'universal' language

Even if you’re not fluent in other languages, you may still be able to recognise some of their words. The German for water is ‘Wasser’, the Dutch is ‘water’, and the Serbian is ‘voda’. Similar sounds and letters are used to form the word across languages.

Looking at this phenomenon, researchers at Cornell’s Cognitive Neuroscience Lab in the US have found we use similar sounds for the words of common objects and ideas, suggesting that humans may speak the same language.

By analysing 40-100 basic vocabulary words in around 3,700 languages, approximately 62 per cent of the world’s current languages, the researchers came to the conclusion that for basic concepts such as body parts or aspects of the natural world, there are common sounds. The research was published in the journal Proceedings of the National Academy of Sciences.

Body parts in particular stood out. The word ‘nose’ was likely to include the sounds ‘neh’ or the ‘oo’ sound, as in ‘ooze’. The words ‘knee’ ‘bone’ and 'breasts’ were also similar across the language spectrum. The word for tongue is likely to have an ‘l’, as in 'langue' in French.

The words 'red' and 'round' were more likely to include the ‘r’ sound. 'Leaf' was found to include the sounds ‘l’, ‘b’ or ‘p’. The words 'bite', 'dog', 'star' and 'water' also stood out as words with strong similar sounds.

Certain words were also found to avoid specific sounds. Words for ‘I’ were found to be unlikely to include sounds involving ‘b’, ‘l’, ‘p’, ‘r’, ‘s’, ‘t’ or ‘u’.

99.9999999% of your body is empty space

Some days, you might feel like a pretty substantial person. Maybe you have a lot of friends, or an important job, or a really big car.

But it might humble you to know that all of those things – your friends, your office, your really big car, you yourself, and even everything in this incredible, vast Universe – are almost entirely, 99.9999999 percent empty space.

Here’s the deal. As I previously wrote in a story for the particle physics publication Symmetry, the size of an atom is governed by the average location of its electrons: how much space there is between the nucleus and the atom’s amorphous outer shell.

Nuclei are around 100,000 times smaller than the atoms they’re housed in.

If the nucleus were the size of a peanut, the atom would be about the size of a baseball stadium. If we lost all the dead space inside our atoms, we would each be able to fit into a particle of dust, and the entire human species would fit into the volume of a sugar cube.
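The scaling above can be checked with a quick back-of-envelope calculation. Here is a minimal sketch, assuming (as the text states) a nuclear radius roughly 100,000 times smaller than the atomic radius, and that volume scales with the cube of radius:

```python
# Back-of-envelope: how much of an atom's volume does its nucleus occupy?
# Assumption: nuclear radius is ~100,000 times smaller than the atomic radius.

radius_ratio = 1.0 / 100_000           # nucleus radius / atom radius
occupied_fraction = radius_ratio ** 3  # volume scales as the cube of radius
empty_fraction = 1.0 - occupied_fraction

print(f"occupied fraction of atomic volume: {occupied_fraction:.0e}")  # 1e-15
print(f"empty fraction: {empty_fraction}")
```

On this rough estimate the nucleus fills only about one part in 10^15 of the atom's volume, comfortably consistent with the article's "99.9999999 percent empty" claim.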

So then where does all our mass come from?

Energy! At a pretty basic level, we’re all made of atoms, which are made of electrons, protons, and neutrons.

And at an even more basic, or perhaps the most basic level, those protons and neutrons, which hold the bulk of our mass, are made of a trio of fundamental particles called quarks.

But, as I explained in Symmetry, the mass of these quarks accounts for just a tiny fraction of the mass of the protons and neutrons. And gluons, which hold these quarks together, are completely massless.

A lot of scientists think that almost all the mass of our bodies comes from the kinetic energy of the quarks and the binding energy of the gluons.

So if all of the atoms in the Universe are almost entirely empty space, why does anything feel solid?

The idea of empty atoms huddling together, composing our bodies and buildings and trees might be a little confusing.

If our atoms are mostly space, why can’t we pass through things like weird ghost people in a weird ghost world? Why don’t our cars fall through the road, through the centre of the Earth, and out the other side of the planet? Why don’t our hands glide through other hands when we give out high fives?

It’s time to re-examine what we mean by empty space. Because as it turns out, space is never truly empty. It’s actually full of a whole fistful of good stuff, including wave functions and invisible quantum fields.

How Morality Changes in a Foreign Language

What defines who we are? Our habits? Our aesthetic tastes? Our memories? If pressed, I would answer that if there is any part of me that sits at my core, that is an essential part of who I am, then surely it must be my moral center, my deep-seated sense of right and wrong.

And yet, like many other people who speak more than one language, I often have the sense that I’m a slightly different person in each of my languages—more assertive in English, more relaxed in French, more sentimental in Czech. Is it possible that, along with these differences, my moral compass also points in somewhat different directions depending on the language I’m using at the time?

Psychologists who study moral judgments have become very interested in this question. Several recent studies have focused on how people think about ethics in a non-native language—as might take place, for example, among a group of delegates at the United Nations using a lingua franca to hash out a resolution. The findings suggest that when people are confronted with moral dilemmas, they do indeed respond differently when considering them in a foreign language than when using their native tongue.

In a 2014 paper led by Albert Costa, volunteers were presented with a moral dilemma known as the “trolley problem”: imagine that a runaway trolley is careening toward a group of five people standing on the tracks, unable to move. You are next to a switch that can shift the trolley to a different set of tracks, thereby sparing the five people, but resulting in the death of one who is standing on the side tracks. Do you pull the switch?

Harold Wilson's curator insight, February 1, 2017 11:46 AM
Kierkegaard's idea of Subjectivism looms large here

What If Evolution Bred Reality Out Of Us?

Look around you. What do you see?

Other people going about their business? Rooms with tables and chairs? Nature with its sky, grass and trees?

All that stuff, it's really there, right? Even if you were to disappear right now — poof! — the rest of the world would still exist in all forms you're seeing now, right?

Or would it?

This kind of metaphysical question is something you'd expect in a good philosophy class — or maybe even a discussion of quantum physics. But most of us wouldn't expect an argument denying the reality of the objective world to come out of evolutionary biology. After all, doesn't evolution tell us we've been tuned to reality by billions of years of natural selection? It makes sense that creatures that can't tell a poisonous snake from a stick shouldn't last long and, therefore, shouldn't pass their genes on to the next generation.

That is certainly how the standard argument goes. But Donald Hoffman, a cognitive scientist, isn't buying it.

For decades, Hoffman, a professor at the University of California, Irvine, has been studying the links between evolution, perception and intelligence (both natural and machine). Based on that body of work, he thinks we've been missing something fundamental when it comes to fundamental reality.

Fundamentally, Hoffman argues, evolution and reality (the objective kind) have almost nothing to do with each other.

Hoffman's been making a lot of news in recent months with these claims. His March 2015 TED talk went viral, gaining more than 2 million views. After a friend sent me the video, I was keen to learn more. I called Dr. Hoffman, and he graciously set aside some time for us to talk. What followed was a delightful conversation with a guy who does, indeed, have a big radical idea. At the same time, Hoffman doesn't come off as someone with an ax to grind. He seems genuinely open and truly curious. At his core, Hoffman says, he's a scientist with a theory that must either live or die by data.

So, what exactly is Hoffman's big radical idea? He begins with a precisely formulated theorem:

"Given an arbitrary world and arbitrary fitness functions, an organism that sees reality as it is will never be more fit than an organism of equal complexity that sees none of reality but that is just tuned to fitness."
FastTFriend's curator insight, September 12, 2016 2:45 AM
"Instead, he claims, it's our interactions as conscious agents that give shape to the reality we experience. "I can take separate observers," he told Quanta Magazine, "put them together and create new observers, and keep doing this ad infinitum. It's conscious agents all the way down."

This has a huge impact on the interaction of agents: for one, in the absence of an objective 'outside' arbiter, the value complex is open and much faster...

Why Science Should Stay Clear of Metaphysics - Issue 40: Learning - Nautilus

Philosophers of science are not known for agreeing with each other—contrariness is part of the job description. But for thousands of years, from Aristotle to Thomas Kuhn, those who study what science is have roughly categorized themselves into two basic camps: “realists” and “anti-realists.”

In philosophical terms, “anti-realists” or “empiricists” understand science as investigating the properties of observable objects via experiments. Empirical theories are constrained by the experimental results. “Realists,” on the other hand, speculate more freely about the possible shape of the unobservable world, often designing mathematical explanations that cannot (yet) be tested. Isaac Newton was a realist, as are string theorists.

Most scientists do not lose sleep worrying about philosophical divides. But maybe they should; Albert Einstein certainly did, as did Niels Bohr and Erwin Schrödinger. In the 20th century, Kuhn’s cataloguing of the “paradigmatic” nature of scientific revolutions entered the scientific consciousness, as did Karl Popper’s requirement that only theories that can in principle be determined to be false are scientific. “God exists,” for example, is not falsifiable.

But outside the halls of the academy, the influential works of philosophers of science, such as Rudolf Carnap, Wilfrid Sellars, Paul Feyerabend, and Bas C. van Fraassen, to list but a few, are little known to many scientists and the public.

Whatever you think, you don’t necessarily know your own mind – Keith Frankish | Aeon Ideas

Do you think racial stereotypes are false? Are you sure? I’m not asking if you’re sure whether or not the stereotypes are false, but if you’re sure whether or not you think that they are. That might seem like a strange question. We all know what we think, don’t we?

Most philosophers of mind would agree, holding that we have privileged access to our own thoughts, which is largely immune from error. Some argue that we have a faculty of ‘inner sense’, which monitors the mind just as the outer senses monitor the world. There have been exceptions, however. The mid-20th-century behaviourist philosopher Gilbert Ryle held that we learn about our own minds, not by inner sense, but by observing our own behaviour, and that friends might know our minds better than we do. (Hence the joke: two behaviourists have just had sex and one turns to the other and says: ‘That was great for you, darling. How was it for me?’) And the contemporary philosopher Peter Carruthers proposes a similar view (though for different reasons), arguing that our beliefs about our own thoughts and decisions are the product of self-interpretation and are often mistaken.

Evidence for this comes from experimental work in social psychology. It is well established that people sometimes think they have beliefs that they don’t really have. For example, if offered a choice between several identical items, people tend to choose the one on the right. But when asked why they chose it, they confabulate a reason, saying they thought the item was a nicer colour or better quality. Similarly, if a person performs an action in response to an earlier (and now forgotten) hypnotic suggestion, they will confabulate a reason for performing it. What seems to be happening is that the subjects engage in unconscious self-interpretation. They don’t know the real explanation of their action (a bias towards the right, hypnotic suggestion), so they infer some plausible reason and ascribe it to themselves. They are not aware that they are interpreting, however, and make their reports as if they were directly aware of their reasons.

Many other studies support this explanation. For example, if people are instructed to nod their heads while listening to a tape (in order, they are told, to test the headphones), they express more agreement with what they hear than if they are asked to shake their heads. And if they are required to choose between two items they previously rated as equally desirable, they subsequently say that they prefer the one they had chosen. Again, it seems, they are unconsciously interpreting their own behaviour, taking their nodding to indicate agreement and their choice to reveal a preference.