Philosophy everywhere everywhen
 
Rescooped by Wildcat2030 from cognition onto Philosophy everywhere everywhen

The Internet is the God We Create


The Futurica Trilogy is a work of philosophy, sociology and futurology in three closely related movements. 


Via FastTFriend
Wildcat2030's insight:

The first volume, The Netocrats, deals with human history from the perspective of the new elite of Informationalism, the emerging society of information networks shaped by digital interactivity, and makes prophecies about the digital future of politics, culture, the economy, et cetera.

The second volume, The Global Empire, explores the near future of political globalization and the struggle to form new, functioning ideologies for a world where global decision-making is a necessity.

The third volume, The Body Machines, deals thoroughly with the demise of the Cartesian subject. It discusses the implications of a materialist image of humanity and explains how it relates to the new, emerging technological paradigm. It explains why all of us are body machines, and why this is actually good news.


Philosophy everywhere everywhen
First Law of Philosophy: For every philosopher, there is an equal and opposite philosopher. Second Law of Philosophy: They're both wrong.
Curated by Wildcat2030
Scooped by Wildcat2030

Don’t Be So Sure | Issue 121 | Philosophy Now

You may think you know what philosophical skepticism is. It’s commonly traced back to René Descartes, who in his Meditations (1641) asks whether there is anything of which he can be completely certain. Famously, he decides there is: he cannot doubt his own existence. But first he entertains radical skeptical scenarios, notably that he’s dreaming, or that an ‘evil demon’ may be inducing in him false beliefs that seem certain. Pop culture embraces this form of skepticism most famously in the Wachowskis’ film The Matrix (1999), which suggests that you may be a brain in a vat, or rather, Keanu Reeves in a vat. Cartesian skepticism sets a rigorous test for our beliefs, in the hope of finding at least some beliefs that can survive the challenge.

But don’t be so sure about this picture. For one thing, this kind of skepticism appears before Descartes. It is found, for instance, in the fourteenth-century thinker Nicholas of Autrecourt, who wanted to challenge the Aristotelian scholasticism of his day. He proposed that absolute certainty is possible, but only about a very limited range of things. The paradigm case of certainty would be the principle of non-contradiction, which says that a proposition and its precise denial cannot both be true. Nicholas inferred that all genuine knowledge would have to meet this standard of certainty. Thus, the only things you can know for sure are those whose falsehood would entail a contradiction. For instance, you can know that squares have four sides and that a human is an animal, since these things are true by definition; but you cannot know for sure that any square or human you are seeing is real, since no contradiction ensues from supposing them to be illusions. Nicholas ultimately concluded that for the most part, the best we can do is to find beliefs that are probable, not certain.
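One way to render Nicholas's standard in modern notation (an editorial gloss, not his own formulation): non-contradiction says that a proposition and its denial cannot both hold, and a proposition is certain just in case its denial entails a contradiction.

```latex
\neg\,(p \land \neg p)                         % the principle of non-contradiction
\text{Certain}(p) \iff (\neg p \vdash \bot)    % certainty: the denial of p entails absurdity
```

On this standard, ‘squares have four sides’ comes out certain, since its denial contradicts the definition of a square, while ‘the square I am seeing is real’ does not, since its denial is contradiction-free.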
Scooped by Wildcat2030

How did usury stop being a sin and become respectable finance? – Alex Mayyasi | Aeon Essays

‘A banker and a theologian’ sounds like the start of a bad joke. But for David Miller it’s merely a job description. After working in finance and business for 16 years, Miller turned to theology, and received his PhD from Princeton Theological Seminary in 2003. Now he’s a professor of business ethics and the director of Princeton University’s Faith and Work Initiative, where his research focuses on Christianity, Judaism and Islam. ‘How to Succeed without Selling Your Soul’ is the students’ popular nickname for his signature course.

In 2014, Citigroup called. The bank had been battered by successive scandals and a wave of public mistrust after the financial crisis, so they wanted to hire Miller as an on-call ethicist. He agreed. Rather than admonish bankers to follow the law – an approach that Miller thinks is inadequate – he talks to them about philosophy. Surprisingly, he hasn’t found bankers and business leaders to be a tough crowd. Many confess a desire to do good. ‘Often I have lunch with an executive, and they say: “You do this God stuff?”’ Miller told me. ‘And then we spend the next hour talking about ethics, purpose, meaning. So I know there’s interest.’ Miller wants people in finance to talk about ‘wisdom, whatever its source’. To ignore these traditions and thinkers, as the bulk of the industry tends to do, is equivalent to ‘putting on intellectual blinders’, he says.

Today, a banker listening to a theologian seems like a curiosity, a category error. But for most of history, this kind of dialogue was the norm. Hundreds of years ago, when modern finance arose in Europe, moneylenders moderated their behaviour in response to debates among the clergy about how to apply the Bible’s teachings to an increasingly complex economy. Lending money has long been regarded as a moral matter. So just when and how did most bankers stop seeing their work in moral terms?
Scooped by Wildcat2030

The purpose of life is to be a nobody

We all experience the world like we are at the center of reality.

We think and we feel in relation to how our senses absorb information and how this information mingles with our personal memories. The subjective perception created by these interactions provides the illusion of importance.

We forget that this perception only exists in our minds and that everyone near us is walking around under exactly the same psychological mindset.

In truth, we’re just one of billions, and over the course of history, everything about us is insignificant. Even people like Newton and Einstein, whom we revere for their contributions to humanity, are only slightly less insignificant.

Our universe contains one septillion stars (a one followed by 24 zeroes), and many of these stars host many, many more motes of dust that we call planets. If any of us ceased to exist tomorrow, little would change beyond the subjective emotional states of the people in our immediate circles.

Earth would continue its orbit, and the laws of physics would remain intact. We’re nothing more than a fraction of a ripple in an infinite sea of entropy.

Many of us don’t like hearing this. It conflicts with the story our mind tells.
FastTFriend's curator insight, September 16, 4:18 AM
Excellent! "The surest way to be unfulfilled is to walk around like you hold some sort of a privileged position in the universe. It’s not only a completely false and harmful illusion, but it also overlooks the fringe benefits of being a nobody".
Scooped by Wildcat2030

Consciousness is not a thing, but a process of inference – Karl Friston | Aeon Essays

I have a confession. As a physicist and psychiatrist, I find it difficult to engage with conversations about consciousness. My biggest gripe is that the philosophers and cognitive scientists who tend to pose the questions often assume that the mind is a thing, whose existence can be identified by the attributes it has or the purposes it fulfils.

But in physics, it’s dangerous to assume that things ‘exist’ in any conventional sense. Instead, the deeper question is: what sorts of processes give rise to the notion (or illusion) that something exists? For example, Isaac Newton explained the physical world in terms of massive bodies that respond to forces. However, with the advent of quantum physics, the real question turned out to be the very nature and meaning of the measurements upon which the notions of mass and force depend – a question that’s still debated today.

As a consequence, I’m compelled to treat consciousness as a process to be understood, not as a thing to be defined. Simply put, my argument is that consciousness is nothing more and nothing less than a natural process such as evolution or the weather. My favourite trick to illustrate the notion of consciousness as a process is to replace the word ‘consciousness’ with ‘evolution’ – and see if the question still makes sense. For example, the question ‘What is consciousness for?’ becomes ‘What is evolution for?’ Scientifically speaking, of course, we know that evolution is not for anything. It doesn’t perform a function or have reasons for doing what it does – it’s an unfolding process that can be understood only on its own terms. Since we are all the product of evolution, the same would seem to hold for consciousness and the self.

My view on consciousness resonates with that of the philosopher Daniel Dennett, who has spent his career trying to understand the origin of the mind. Dennett is concerned with how mindless, mere ‘causes’ (A leads to B) can give rise to the species of mindful ‘reasons’ as we know them (A happens so that B can happen). Dennett’s solution is what he calls ‘Darwin’s dangerous idea’: the insight that it’s possible to have design in the absence of a designer, competence in the absence of comprehension, and reasons (or ‘free-floating rationales’) in the absence of reasoners. A population of beetles that has outstripped another has probably done so for some ‘reason’ we can identify – a favourable mutation which produces a more camouflaging colour, for example. ‘Natural selection is thus an automatic reason-finder, which “discovers” and “endorses” and “focuses” reasons over many generations,’ Dennett writes in From Bacteria to Bach and Back: The Evolution of Minds (2017). ‘The scare quotes are to remind us that natural selection doesn’t have a mind, doesn’t itself have reasons, but is nevertheless competent to perform this “task” of design refinement.’
Scooped by Wildcat2030

Is Consciousness Fractal? - Issue 47: Consciousness - Nautilus

When the reclusive artist Jackson Pollock poured paint from cans onto vast canvases laid out across the floor of his barn in the late 1940s and early 1950s, he created splatters of paint that seemed completely random. Some interpretations saw them as a statement about the futility of World War II, others as a commentary on art as experience rather than representation. As Pollock refined his technique over the years, critics became increasingly receptive to his work, launching him into the public eye. “We have a deliberate disorder of hypothetical hidden orders,” one critic wrote, “or ‘multiple labyrinths.’ ”

In 1999, Richard Taylor, a physicist at the University of Oregon, expressed the “hidden orders” of Pollock’s work in a very different way. Taylor found that Pollock’s patterns were not random after all. They were fractal—and the complexity of those fractals steadily increased as Pollock’s technique matured.

Now, Pollock would not have known what a fractal was, nor would anyone else have at the time. It wasn’t until 1975 that the eminent mathematician Benoit Mandelbrot coined the term to describe patterns that are self-similar across different-sized scales, a “middle ground” between order and chaos. The “Nautilus” section of one famous fractal pattern named after Mandelbrot, for example, looks like a spiral, as does a magnified view of one of its sections, and so on.
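To see how simple the rule behind all that structure is, here is a minimal Python sketch of the standard escape-time test for the Mandelbrot set (an illustration of self-similarity's most famous source, not the fractal analysis Taylor applied to Pollock's canvases):

```python
# Escape-time membership test for the Mandelbrot set.
# A point c belongs to the set if the orbit of z -> z**2 + c stays bounded.
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:   # the orbit has escaped; c lies outside the set
            return False
    return True          # bounded after max_iter steps; c is (approximately) inside

# The orbit of -1 cycles 0, -1, 0, -1, ... so -1 is inside;
# the orbit of 1 runs 0, 1, 2, 5, 26, ... so 1 is outside.
print(in_mandelbrot(-1 + 0j))   # True
print(in_mandelbrot(1 + 0j))    # False
```

Zooming in anywhere along the set's boundary reveals spirals within spirals: the self-similarity across different-sized scales that Mandelbrot's term captures.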
Scooped by Wildcat2030

The psychological importance of wasting time

There will always be an endless list of chores to complete and work to do, and a culture of relentless productivity tells us to get to it right away and feel terribly guilty about any time wasted. But the truth is, a life spent dutifully responding to emails is a dull one indeed. And “wasted” time is, in fact, highly fulfilling and necessary.

Don’t believe me? Take it from the creator of “Inbox Zero.” As Oliver Burkeman reports in The Guardian, Merlin Mann was commissioned to write a book about his streamlined email system. Two years later, he abandoned the project and instead posted a (since deleted) blog post on how he’d spent so long focusing on how to spend time well that he’d ended up missing valuable moments with his daughter.

The problem comes when we spend so long frantically chasing productivity, we refuse to take real breaks. We put off sleeping in, or going for a long walk, or reading by the window—and, even if we do manage time away from the grind, it comes with a looming awareness of the things we should be doing, and so the experience is weighed down by guilt.

Instead, there’s a tendency to turn to the least fulfilling activity of them all: sitting at our desks, in front of our computers, browsing websites and contributing to neither our happiness nor our productivity.
Scooped by Wildcat2030

Massimo Pigliucci on Seneca’s Stoic philosophy of happiness – Massimo Pigliucci | Aeon Classics

Lucius Annaeus Seneca is a towering and controversial figure of antiquity. He lived from 4 BCE to 65 CE, was a Roman senator and political adviser to the emperor Nero, and experienced exile but came back to Rome to become one of the wealthiest citizens of the Empire. He tried to steer Nero toward good governance, but in the process became his indirect accomplice in murderous deeds. In the end, he was ‘invited’ to commit suicide by the emperor, and did so with dignity, in the presence of his friends.

Seneca wrote a number of tragedies that directly inspired William Shakespeare, but was also one of the main exponents of the Stoic school of philosophy, which has made a surprising comeback in recent years. Stoicism teaches us that the highest good in life is the pursuit of the four cardinal virtues of practical wisdom, temperance, justice and courage – because they are the only things that always do us good and can never be used for ill. It also tells us that the key to a serene life is the realisation that some things are under our control and others are not: under our control are our values, our judgments, and the actions we choose to perform. Everything else lies outside of our control, and we should focus our attention and efforts only on the first category.
Scooped by Wildcat2030

Are human rights anything more than legal conventions? – John Tasioulas | Aeon Ideas

We live in an age of human rights. The language of human rights has become ubiquitous, a lingua franca used for expressing the most basic demands of justice. Some are old demands, such as the prohibition of torture and slavery. Others are newer, such as claims to internet access or same-sex marriage. But what are human rights, and where do they come from? This question is made urgent by a disquieting thought. Perhaps people with clashing values and convictions can so easily appeal to ‘human rights’ only because, ultimately, they don’t agree on what they are talking about? Maybe the apparently widespread consensus on the significance of human rights depends on the emptiness of that very notion? If this is true, then talk of human rights is rhetorical window-dressing, masking deeper ethical and political divisions.

Philosophers have debated the nature of human rights since at least the 12th century, often under the name of ‘natural rights’. These natural rights were supposed to be possessed by everyone and discoverable with the aid of our ordinary powers of reason (our ‘natural reason’), as opposed to rights established by law or disclosed through divine revelation. Wherever there are philosophers, however, there is disagreement. Belief in human rights left open how we go about making the case for them – are they, for example, protections of human needs generally or only of freedom of choice? There were also disagreements about the correct list of human rights – should it include socio-economic rights, like the rights to health or work, in addition to civil and political rights, such as the rights to a fair trial and political participation?

But many now argue that we should set aside philosophical wrangles over the nature and origins of human rights. In the 21st century, they contend, human rights exist not in the nebulous ether of philosophical speculation, but in the black letter of law. Human rights are those laid down in The Universal Declaration of Human Rights (1948) and the various international and domestic laws that implement it. Some who adopt this line of thought might even invoke the 18th-century English philosopher Jeremy Bentham, who contemptuously dismissed the idea of natural rights existing independently of human-made laws as ‘rhetorical nonsense – nonsense upon stilts’.
Scooped by Wildcat2030

Why Foucault's work on power is more important than ever – Colin Koopman | Aeon Essays

Imagine you are asked to compose an ultra-short history of philosophy. Perhaps you’ve been challenged to squeeze the impossibly sprawling diversity of philosophy itself into just a few tweets. You could do worse than to search for the single word that best captures the ideas of every important philosopher. Plato had his ‘forms’. René Descartes had his ‘mind’ and John Locke his ‘ideas’. John Stuart Mill later had his ‘liberty’. In more recent philosophy, Jacques Derrida’s word was ‘text’, John Rawls’s was ‘justice’, and Judith Butler’s remains ‘gender’. Michel Foucault’s word, according to this innocent little parlour game, would certainly be ‘power’.

Foucault remains one of the most cited 20th-century thinkers and is, according to some lists, the single most cited figure across the humanities and social sciences. His two most referenced works, Discipline and Punish: The Birth of the Prison (1975) and The History of Sexuality, Volume One (1976), are the central sources for his analyses of power. Interestingly enough, however, Foucault was not always known for his signature word. He first gained his massive influence in 1966 with the publication of The Order of Things. The original French title gives a better sense of the intellectual milieu in which it was written: Les mots et les choses, or ‘Words and Things’. Philosophy in the 1960s was all about words, especially among Foucault’s contemporaries.

In other parts of Paris, Derrida was busily asserting that ‘there is nothing outside the text’, and Jacques Lacan turned psychoanalysis into linguistics by claiming that ‘the unconscious is structured like a language’. This was not just a French fashion. In 1967 Richard Rorty, surely the most infamous American philosopher of his generation, summed up the new spirit in the title of his anthology of essays, The Linguistic Turn. That same year, Jürgen Habermas, soon to become Germany’s leading philosopher, published his attempt at ‘grounding the social sciences in a theory of language’.
Scooped by Wildcat2030

If You Think You’re a Genius, You’re Crazy - Issue 46: Balance - Nautilus

When John Forbes Nash, the Nobel Prize-winning mathematician, schizophrenic, and paranoid delusional, was asked how he could believe that space aliens had recruited him to save the world, he gave a simple response. “Because the ideas I had about supernatural beings came to me the same way that my mathematical ideas did. So I took them seriously.”

Nash is hardly the only so-called mad genius in history. Suicide victims like painters Vincent van Gogh and Mark Rothko, novelists Virginia Woolf and Ernest Hemingway, and poets Anne Sexton and Sylvia Plath all offer prime examples. Even ignoring those great creators who did not kill themselves in a fit of deep depression, it remains easy to list persons who endured well-documented psychopathology, including the composer Robert Schumann, the poet Emily Dickinson, and Nash. Creative geniuses who have succumbed to alcoholism or other addictions are also legion.

Instances such as these have led many to suppose that creativity and psychopathology are intimately related. Indeed, the notion that creative genius might have some touch of madness goes back to Plato and Aristotle. But some recent psychologists argue that the whole idea is a pure hoax. After all, it is certainly no problem to come up with the names of creative geniuses who seem to have displayed no signs or symptoms of mental illness.
Scooped by Wildcat2030

How playing Wittgensteinian language-games can set us free – Sandy Grant | Aeon Ideas

We live out our lives amid a world of language, in which we use words to do things. Ordinarily we don’t notice this; we just get on with it. But the way we use language affects how we live and who we can be. We are as if bewitched by the practices of saying that constitute our ways of going on in the world. If we want to change how things are, then we need to change the way we use words. But can language-games set us free?

It was the maverick philosopher Ludwig Wittgenstein who coined the term ‘language-game’. He contended that words acquire meaning by their use, and wanted to see how their use was tied up with the social practices of which they are a part. So he used ‘language-game’ to draw attention not only to language itself, but to the actions into which it is woven. Consider the exclamations ‘Help!’ ‘Fire!’ ‘No!’ These do something with words: soliciting, warning, forbidding. But Wittgenstein wanted to expose how ‘words are deeds’, that we do something every time we use a word. Moreover, what we do, we do in a world with others.

This was not facile word-nerdery. Wittgenstein was intent on bringing out how ‘the “speaking” of language is part of an activity, or form of life’. In Philosophical Investigations (1953), he used the example of two builders. A brickie calls ‘Slab!’ and his helper brings it. What’s going on here? The helper who responds is not like a dog reacting to an order. We are humans, the ones who live together in language in the particular way that we do, a way that involves distinctive social practices.
Scooped by Wildcat2030

Why Tolkien's fantastic imaginary languages have had more impact than Esperanto

JRR Tolkien began writing The Fall of Gondolin while on medical leave from the first world war, 100 years ago this month. It is the first story in what would become his legendarium – the mythology that underpins The Lord of the Rings. But behind the fiction was his interest in another epic act of creation: the construction of imaginary languages.

That same year, on the other side of Europe, Ludwik Zamenhof died in his native Poland. Zamenhof had also been obsessed with language invention, and in 1887 brought out a book introducing his own creation. He published this under the pseudonym Doktoro Esperanto, which in time became the name of the language itself.

The construction of imaginary languages, or conlangs, has a long history, dating back to the 12th century. And Tolkien and Zamenhof are two of its most successful proponents. Yet their aims were very different, and in fact point to opposing views of what language itself actually is.

Zamenhof, a Polish Jew growing up in a country where cultural and ethnic animosity was rife, believed that the existence of a universal language was the key to peaceful co-existence. Although language is the “prime motor of civilisation”, he wrote, “difference of speech is a cause of antipathy, nay even of hatred, between people”. His plan was to devise something which was simple to learn, not tied to any one nation or culture, and could thus help unite rather than divide humanity.

As “international auxiliary languages” go, Esperanto has been very successful. At its peak, its speakers numbered in the millions, and although exact estimates are very difficult to make, even today up to a million people still use it. It has an expansive body of native literature, there’s a museum in China dedicated exclusively to it, while in Japan Zamenhof himself is even honoured as a god by one particular Shinto sect who use the language. Yet it never really came close to achieving his dreams of world harmony. And at his death, with World War I tearing Europe apart, the optimism he’d had for it had turned mostly to disillusion.
Scooped by Wildcat2030

Extraterrestrials May Be Robots Without Consciousness - Cosmos on Nautilus

Humans are probably not the greatest intelligences in the universe. Earth is a relatively young planet and the oldest civilizations could be billions of years older than us. But even on Earth, Homo sapiens may not be the most intelligent species for that much longer.

The world Go, chess, and Jeopardy champions are now all AIs. AI is projected to outmode many human professions within the next few decades. And given the rapid pace of its development, AI may soon advance to artificial general intelligence—intelligence that, like human intelligence, can combine insights from different topic areas and display flexibility and common sense. From there it is a short leap to superintelligent AI, which is smarter than humans in every respect, even those that now seem firmly in the human domain, such as scientific reasoning and social skills. Each of us alive today may be one of the last rungs on the evolutionary ladder that leads from the first living cell to synthetic intelligence.

What we are only beginning to realize is that these two forms of superhuman intelligence—alien and artificial—may not be so distinct. The technological developments we are witnessing today may have all happened before, elsewhere in the universe. The transition from biological to synthetic intelligence may be a general pattern, instantiated over and over, throughout the cosmos. The universe’s greatest intelligences may be postbiological, having grown out of civilizations that were once biological. (This is a view I share with Paul Davies, Steven Dick, Martin Rees, and Seth Shostak, among others.) To judge from the human experience—the only example we have—the transition from biological to postbiological may take only a few hundred years.
Scooped by Wildcat2030

Psychedelics work by violating our models of self and the world – Philip Gerrans & Chris Letheby | Aeon Essays

Psychedelic drugs are making a psychiatric comeback. After a lull of half a century, researchers are once again investigating the therapeutic benefits of psilocybin (‘magic mushrooms’) and LSD. It turns out that the hippies were on to something. There’s mounting evidence that psychedelic experiences can be genuinely transformative, especially for people suffering from intractable anxiety, depression and addiction. ‘It is simply unprecedented in psychiatry that a single dose of a medicine produces these kinds of dramatic and enduring results,’ Stephen Ross, the clinical director of the NYU Langone Center of Excellence on Addiction, told Scientific American in 2016.

Just what do these drugs do? Psychedelics reliably induce an altered state of consciousness known as ‘ego dissolution’. The term was invented, well before the tools of contemporary neuroscience became available, to describe sensations of self-transcendence: a feeling in which the mind is put in touch more directly and intensely with the world, producing a profound sense of connection and boundlessness.

How does all this help those with long-term psychiatric disorders? The truth is that no one quite knows how psychedelic therapy works. Some point to a lack of knowledge about the brain, but this is a half-truth. We actually know quite a lot about the neurochemistry of psychedelics. These drugs bind to a specific type of serotonin receptor in the brain (the 5-HT2A receptor), which precipitates a complex cascade of electrochemical signalling. What we don’t really understand, though, is the more complex relationship between the brain, the self and its world. Where does the subjective experience of being a person come from, and how is it related to the brute matter that we’re made of?

It’s here that we encounter a last frontier, metaphysically and medically. Some think the self is a real entity or phenomenon, implemented in neural processes, whose nature is gradually being revealed to us. Others say that cognitive science confirms the arguments of philosophers East and West that the self does not exist. The good news is that the mysteries of psychedelic therapy might be a hidden opportunity to finally start unravelling the controversy.
Scooped by Wildcat2030

Book Club: The Edge of Reason 6, the five characteristics of rational discourse

Julian Baggini’s The Edge of Reason, of which we have so far examined the first five chapters, ends its second part with a discussion of the distinguishing characteristics of objective rational discourse. He begins it by suggesting that the problem with the classic (Platonic, really) view of reason is that it treats reason as a heteronomous entity, something coming from the outside, imposed on us by eternal laws of logic. Instead, human reason is, well, human, i.e., autonomous, shaped from the inside, shaped by the characteristics and limitations of what it means to be human in the first place.

That said, Julian immediately qualifies, reason does have a component of heteronomy, in that it cannot simply be a self-serving instrument entirely detached from how the world actually is, but rather has to account for the brute facts of external reality. Reason, he says, is nothing if it doesn’t aspire to objectivity, and this brings him to propose a definition of rational argument: “the giving of objective reasons for belief.”

However, if you recall our previous discussions of Baggini’s book, you will immediately notice a tension here: he has been arguing for a somewhat deflated, human, view of reason, and now he’s going to ask for objectivity? Well, yes, but he puts forth a deflated view of objectivity itself, one that he derives from the philosopher Thomas Nagel.

Nagel wrote a famous book, The View from Nowhere (1986), in which he argued that objectivity is often conceived in terms that he summarized with the oxymoronic phrase ‘the view from nowhere’. In particular, science aspires to such a view, which both Nagel and Baggini see as hopelessly misguided.

Julian’s intriguing example of an attempt to achieve a view from nowhere is the famous plaques that were put onboard the two Pioneer spacecraft launched in the early ‘70s, and which are now outside the confines of the solar system. The plaques were designed by astronomer Carl Sagan as a symbolic attempt to communicate with possible alien beings. (Symbolic because there is pretty much no chance in hell that the Pioneers will ever actually reach another habitable world, given their speed and their cosmic trajectories.)
Scooped by Wildcat2030

Opinion | We Aren’t Built to Live in the Moment

We are misnamed. We call ourselves Homo sapiens, the “wise man,” but that’s more of a boast than a description. What makes us wise? What sets us apart from other animals? Various answers have been proposed — language, tools, cooperation, culture, tasting bad to predators — but none is unique to humans.

What best distinguishes our species is an ability that scientists are just beginning to appreciate: We contemplate the future. Our singular foresight created civilization and sustains society. It usually lifts our spirits, but it’s also the source of most depression and anxiety, whether we’re evaluating our own lives or worrying about the nation. Other animals have springtime rituals for educating the young, but only we subject them to “commencement” speeches grandly informing them that today is the first day of the rest of their lives.

A more apt name for our species would be Homo prospectus, because we thrive by considering our prospects. The power of prospection is what makes us wise. Looking into the future, consciously and unconsciously, is a central function of our large brain, as psychologists and neuroscientists have discovered — rather belatedly, because for the past century most researchers have assumed that we’re prisoners of the past and the present.

Behaviorists thought of animal learning as the ingraining of habit by repetition. Psychoanalysts believed that treating patients was a matter of unearthing and confronting the past. Even when cognitive psychology emerged, it focused on the past and present — on memory and perception.

But it is increasingly clear that the mind is mainly drawn to the future, not driven by the past. Behavior, memory and perception can’t be understood without appreciating the central role of prospection. We learn not by storing static records but by continually retouching memories and imagining future possibilities.
Scooped by Wildcat2030

What's the point of art?

One of the great paradoxes of human endeavour is why so much time and effort is spent on creating things and indulging in behaviour with no obvious survival value – behaviour otherwise known as art.

Attempting to shed light on this issue is problematic because first we must define precisely what art is. We can start by looking at how art, or the arts, were practised by early humans during the Upper Palaeolithic period, 40,000 to 12,000 years ago, and immediately thereafter.

This period is a far longer stretch of human history than the “modern” age and so how the arts were practised during it should serve as the starting point for any viable explanation. And while art in the modern world is often exploited as a means of expressing individualism, during most of cultural evolution it was utilised by small hunter-gatherer groups as a means of articulating social norms among most, if not all, members of a community.
The arts are special

Why should individuals engage in a preoccupation that requires significant effort, effort that could be better directed towards more immediately gainful activities, such as the search for food or other vital resources? One clue comes from the fact that art objects have special resonance because they come into being through human agency. This involves considerable emotional investment and, consequently, art acts as a crucial node in the complex web of things that make up a culture.

The time and effort committed to making art suggests such behaviour may be a means of signalling to other members of a group. Paradoxically, the very fact that art remains inscrutable and has little obvious practical value is precisely what makes it important for assessing whether a person making art can be regarded as a trustworthy member of a group. In short, art provided a “costly signal” (altruistic behaviour that indirectly benefits the individual by establishing a reputation) for monitoring group allegiance and managing a trust network that weeded out freeloaders.

When combined with ritual, which is often the case, art becomes an even more potent symbol. The notion that it can act as a vehicle for costly signalling is bolstered by the fact that art objects were regularly destroyed or defaced soon after being produced. This suggests that it was the process of making, rather than the final product, that was most significant.
Scooped by Wildcat2030

Roger Penrose On Why Consciousness Does Not Compute - Issue 47: Consciousness - Nautilus

Once you start poking around in the muck of consciousness studies, you will soon encounter the specter of Sir Roger Penrose, the renowned Oxford physicist with an audacious—and quite possibly crackpot—theory about the quantum origins of consciousness. He believes we must go beyond neuroscience and into the mysterious world of quantum mechanics to explain our rich mental life. No one quite knows what to make of this theory, developed with the American anesthesiologist Stuart Hameroff, but conventional wisdom goes something like this: Their theory is almost certainly wrong, but since Penrose is so brilliant (“One of the very few people I’ve met in my life who, without reservation, I call a genius,” physicist Lee Smolin has said), we’d be foolish to dismiss their theory out of hand.
Scooped by Wildcat2030

Seneca on Letting the Eminent Dead Guide You

"Cherish some man of high character, and keep him ever before your eyes, living as if he were watching you, and ordering all your actions as if he beheld them." Such, my dear Lucilius, is the counsel of Epicurus; he has quite properly given us a guardian and an attendant. We can get rid of most sins, if we have a witness who stands near us when we are likely to go wrong. The soul should have someone whom it can respect, – one by whose authority it may make even its inner shrine more hallowed. Happy is the man who can make others better, not merely when he is in their company, but even when he is in their thoughts! And happy also is he who can so revere a man as to calm and regulate himself by calling him to mind! One who can so revere another, will soon be himself worthy of reverence.

Choose therefore a Cato; or, if Cato seems too severe a model, choose some Laelius, a gentler spirit. Choose a master whose life, conversation, and soul-expressing face have satisfied you; picture him always to yourself as your protector or your pattern. For we must indeed have someone according to whom we may regulate our characters; you can never straighten that which is crooked unless you use a ruler.
Scooped by Wildcat2030

'Anumeric' people: What happens when a language has no words for numbers?

Numbers do not exist in all cultures. There are numberless hunter-gatherers embedded deep in Amazonia, living along branches of the world’s largest river tree. Instead of using words for precise quantities, these people rely exclusively on terms analogous to “a few” or “some.”

In contrast, our own lives are governed by numbers. As you read this, you are likely aware of what time it is, how old you are, your checking account balance, your weight and so on. The exact (and exacting) numbers we think with impact everything from our schedules to our self-esteem.

But, in a historical sense, numerically fixated people like us are the unusual ones. For the bulk of our species’ approximately 200,000-year lifespan, we had no means of precisely representing quantities. What’s more, the 7,000 or so languages that exist today vary dramatically in how they utilize numbers.

Speakers of anumeric, or numberless, languages offer a window into how the invention of numbers reshaped the human experience. In a new book, I explore the ways in which humans invented numbers, and how numbers subsequently played a critical role in other milestones, from the advent of agriculture to the genesis of writing.
Ivon Prefontaine, PhD's curator insight, April 27, 5:07 PM
Somewhere in this is a message about integrating quality and quantity.
Scooped by Wildcat2030

Hierarchies have a place even in societies built on equality – Stephen C Angle, Kwame Anthony Appiah, Julian Baggini and others | Aeon Essays

The modern West has placed a high premium on the value of equality. Equal rights are enshrined in law while old hierarchies of nobility and social class have been challenged, if not completely dismantled. Few would doubt that global society is all the better for these changes. But hierarchies have not disappeared. Society is still stratified according to wealth and status in myriad ways.

On the other hand, the idea of a purely egalitarian world in which there are no hierarchies at all would appear to be both unrealistic and unattractive. Nobody, on reflection, would want to eliminate all hierarchies, for we all benefit from the recognition that some people are more qualified than others to perform certain roles in society. We prefer to be treated by senior surgeons, not medical students, and to get financial advice from professionals, not interns. Good and permissible hierarchies are everywhere around us.

Yet hierarchy is an unfashionable thing to defend or to praise. British government ministers denounce experts as out of tune with popular feeling; both Donald Trump and Bernie Sanders built platforms on attacking Washington elites; economists are blamed for not predicting the 2008 crash; and even the best-established practices of medical experts, such as childhood vaccination, are treated with resistance and disbelief. We live in a time when no distinction is drawn between justified and useful hierarchies on the one hand, and self-interested, exploitative elites on the other.
Scooped by Wildcat2030

Materialism alone cannot explain the riddle of consciousness – Adam Frank | Aeon Essays

Materialism holds the high ground these days in debates over that most ultimate of scientific questions: the nature of consciousness. When tackling the problem of mind and brain, many prominent researchers advocate for a universe fully reducible to matter. ‘Of course you are nothing but the activity of your neurons,’ they proclaim. That position seems reasonable and sober in light of neuroscience’s advances, with brilliant images of brains lighting up like Christmas trees while test subjects eat apples, watch movies or dream. And aren’t all the underlying physical laws already known?

From this seemingly hard-nosed vantage, the problem of consciousness seems to be just one of wiring, as the American physicist Michio Kaku argued in The Future of the Mind (2014). In the very public version of the debate over consciousness, those who advocate that understanding the mind might require something other than a ‘nothing but matter’ position are often painted as victims of wishful thinking, imprecise reasoning or, worst of all, an adherence to a mystical ‘woo’.

It’s hard not to feel the intuitional weight of today’s metaphysical sobriety. Like Pickett’s Charge up the hill at Gettysburg, who wants to argue with the superior position of those armed with ever more precise fMRIs, EEGs and the other material artefacts of the materialist position? There is, however, a significant weakness hiding in the imposing-looking materialist redoubt. It is as simple as it is undeniable: after more than a century of profound explorations into the subatomic world, our best theory for how matter behaves still tells us very little about what matter is. Materialists appeal to physics to explain the mind, but in modern physics the particles that make up a brain remain, in many ways, as mysterious as consciousness itself.

When I was a young physics student I once asked a professor: ‘What’s an electron?’ His answer stunned me. ‘An electron,’ he said, ‘is that to which we attribute the properties of the electron.’ That vague, circular response was a long way from the dream that drove me into physics, a dream of theories that perfectly described reality. Like almost every student over the past 100 years, I was shocked by quantum mechanics, the physics of the micro-world. In place of a clear vision of little bits of matter that explain all the big things around us, quantum physics gives us a powerful yet seemingly paradoxical calculus. With its emphasis on probability waves, essential uncertainties and experimenters disturbing the reality they seek to measure, quantum mechanics made imagining the stuff of the world as classical bits of matter (or miniature billiard balls) all but impossible.
Scooped by Wildcat2030

Panpsychism is crazy, but it’s also most probably true – Philip Goff | Aeon Ideas

Common sense tells us that only living things have an inner life. Rabbits and tigers and mice have feelings, sensations and experiences; tables and rocks and molecules do not. Panpsychists deny this datum of common sense. According to panpsychism, the smallest bits of matter – things such as electrons and quarks – have very basic kinds of experience; an electron has an inner life.

The main objection made to panpsychism is that it is ‘crazy’ and ‘just obviously wrong’. It is thought to be highly counterintuitive to suppose that an electron has some kind of inner life, no matter how basic, and this is taken to be a very strong reason to doubt the truth of panpsychism. But many widely accepted scientific theories are also crazily counter to common sense. Albert Einstein tells us that time slows down at high speeds. According to standard interpretations of quantum mechanics, particles have determinate positions only when measured. And according to Charles Darwin’s theory of evolution, our ancestors were apes. All of these views are wildly at odds with our common-sense view of the world, or at least they were when they were first proposed, but nobody thinks this is a good reason not to take them seriously. Why should we take common sense to be a good guide to how things really are?

No doubt the willingness of many to accept special relativity, natural selection and quantum mechanics, despite their strangeness from the point of view of pre-theoretical common sense, is a reflection of their respect for the scientific method. We are prepared to modify our view of the world if we take there to be good scientific reason to do so. But in the absence of hard experimental proof, people are reluctant to attribute consciousness to electrons.
Scooped by Wildcat2030

How fashion moves philosophy forward – J Bradley Studemeyer | Aeon Ideas

The rise and fall of popular positions in the field of philosophy is not governed solely by reason. Philosophers are generally reasonable people but, as with the rest of the human species, their thoughts are heavily influenced by their social settings. Indeed, they are perhaps more influenced than thinkers in other fields, since popular or ‘big’ ideas in modern philosophy change more frequently than ideas in, say, chemistry or biology. Why?

The relative instability of philosophical positions is a result of how the discipline is practised. In philosophy, questions about methods and limitations are on the table in a way that they tend not to be in the physical sciences, for example. Scientists generally acknowledge a ‘gold standard’ for validity – the scientific method – and, for the most part, the way in which investigations are conducted is more or less settled. Falsifiability rules the scientific disciplines: almost all scientists are in agreement that, if a hypothesis isn’t testable, then it isn’t scientific. There is no counterpart to this in philosophy. Here, students and professors continue to ask: ‘Which questions can we ask?’ and ‘How can we ask, much less answer, those questions?’ There is no universally agreed-upon way in which to do philosophy.

Given that philosophy’s foundational questions and methods are still far from settled – they never will be – it’s natural that there is more flux, more volatility, in philosophy than in the physical sciences. But this volatility is not like the paradigm shifts described by the US historian of science Thomas Kuhn. A better analogy, in fact, would be changes of fashion.
Scooped by Wildcat2030

This Simple Philosophical Puzzle Shows How Difficult It Is to Know Something - Facts So Romantic - Nautilus

In the 1960s, the American philosopher Edmund Gettier devised a thought experiment that has become known as a “Gettier case.” It shows that something’s “off” about the way we understand knowledge. This puzzle is called the “Gettier problem,” and 50 years later, philosophers are still arguing about it. Jennifer Nagel, a philosopher of mind at the University of Toronto, sums up its appeal. “The resilience of the Gettier problem,” she says, “suggests that it is difficult (if not impossible) to develop any explicit reductive theory of knowledge.”

What is knowledge? Well, thinkers for thousands of years had more or less taken one definition for granted: Knowledge is “justified true belief.” The reasoning seemed solid: Just believing something that happens to be true doesn’t necessarily make it knowledge. If your friend says to you that she knows what you ate last night (say it’s veggie pizza), and happens to be right after guessing, that doesn’t mean she knew. That was just a lucky guess—a mere true belief. Your friend would know, though, if she said veggie pizza because she saw you eat it—that’s the “justification” part. Your friend, in that case, would have good reason to believe you ate it.
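In the notation epistemologists often use (a gloss on the traditional analysis, not Gettier's own symbolism), the justified-true-belief account says:

```latex
K(S, p) \iff p \,\land\, B(S, p) \,\land\, J(S, p)
% S knows that p iff p is true, S believes that p, and S is justified in believing that p.
```

Gettier's vignettes describe situations in which all three conjuncts hold and yet, intuitively, S does not know that p.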

The reason the Gettier problem is renowned is because Gettier showed, using little short stories, that this intuitive definition of knowledge was flawed. His 1963 paper, titled “Is Justified True Belief Knowledge?” resembles an undergraduate assignment. It’s just three pages long. But that’s all Gettier needed to revolutionize his field, epistemology, the study of the theory of knowledge.

The “problem” in a Gettier problem emerges in little, unassuming vignettes. Gettier had his, and philosophers have since come up with variations of their own. Try this version, from the University of Birmingham philosopher Scott Sturgeon:

Suppose I burgle your house, find two bottles of Newcastle Brown in the kitchen, drink and replace them. You remember purchasing the ale and come to believe there will be two bottles waiting for you at home. Your belief is justified and true, but you do not know what’s going on.