Philosophy everywhere everywhen
Scooped by Wildcat2030
Reza Negarestani
“What Philosophy Does to the Mind”
Tuesday, April 22, 7–9pm
Presented by Glass Bead | e-flux

By approaching the game of truths—that is, making sense of what is true and making it true—as a rule-based game of navigation, philosophy opens up a new evolutionary vista for the mind’s development. Within this evolutionary landscape, the mind is understood as a set of activities or practices required to navigate a terrain which lacks a given map and a given compass—a desert bereft of natural landmarks, with a perpetually shifting scenery and furnished with transitory mirages. The mind is forced to adapt to an environment where generic trajectories replace specific trajectories, and where the consequences of making a single move unfold as future ramifying paths that not only uproot the current position in the landscape but also fundamentally change the travel history and the address of the past itinerary. It is within this environment that philosophy instigates an epochal development of yet unexplored possibilities. By simulating the truth of the mind and forcing it to interact with its own navigational horizon, philosophy sets out the conditions for the emancipation of the mind from its contingently posited settings and limits of constructability. In liberating itself from its illusions of uniqueness and ineffability, and by conceiving itself as an upgradable armamentarium of practices or abilities, the mind realizes itself as an expanding, constructible edifice that effectuates a mind-only system. But this is a system that is no longer comprehensible within the traditional ambit of idealism, for it involves “mind” not as a theoretical object but rather as a practical project of socio-historical wisdom.

Throughout this presentation, we will lay out the minimal characteristics and procedures of the game of navigation by drawing on the works of Gilles Châtelet (the construction of a horizon), Guerino Mazzola (a dynamic theory of addresses), and Robert Brandom (the procedural system of commitments). We will subsequently unpack the consequences of playing this game in terms of the transition from self-conception to self-transformation of the mind, as outlined by the New Confucian philosophers Xiong Shili and Mou Zongsan.

Wildcat2030's insight:

If you're in NY.

First Law of Philosophy: For every philosopher, there is an equal and opposite philosopher. Second Law of Philosophy: They're both wrong.
Curated by Wildcat2030

Hilary Putnam and the mind of Aristotle | OUPblog

Few people have influenced contemporary philosophy of mind as profoundly as the late Hilary Putnam. One of his best known contributions was the formulation of functionalism. As he understood it, functionalism claims that mental states are functional states—postulates of abstract descriptions, like those employed in computer science, which ignore a system’s physical details and focus instead on the ways it correlates inputs with outputs. Psychological descriptions in particular focus on the ways a system correlates sensory inputs with behavioral outputs, and mental states are the internal states that correlate the two.

By the mid-1970s functionalism had become the dominant outlook in philosophy of mind. But Putnam, showing his characteristic independence of mind, became dissatisfied with the view. He did not retreat to substance dualism or idealism. He was convinced that we are physical beings whose capacities are essentially embodied in the physical mechanisms that compose us, yet he was also a committed antireductionist. He denied that physics, chemistry, and neuroscience could yield an exhaustive account of what we are and what we can do. In articulating a pro-physical yet anti-reductive view along these lines, Putnam found inspiration in a new source: Aristotle.
Aristotle’s ideas had been dismissed in many quarters of the philosophical world as expressions of a bygone pre-scientific age. But Putnam saw through the dismissive haze to the empirically and philosophically respectable core of Aristotle’s philosophy, ‘hylomorphism’.

Bertrand Russell and the case for 'Philosophy for Everyone'

One of the interesting questions we face as philosophers attempting to make philosophical ideas accessible to a general audience is whether or not everyone can or should ‘do philosophy’.

Some philosophers wish to keep philosophy within the academy or university setting, whereas others claim that the downfall of modern philosophy came in the late 19th century, when the subject was institutionalized within the research university. By treating philosophy as appropriate only for serious academic study, philosophers have lost much widespread support and public recognition of its value.

Philosophers working in the public arena, such as those contributing to The Conversation and the Cogito Philosophy Blog, defend the argument in favour of ‘philosophy for everyone’.
Bertrand Russell’s ‘Philosophy for Laymen’

In 1946 Bertrand Russell wrote an essay entitled ‘Philosophy for Laymen’, in which he defends the view that philosophy should be ‘a part of general education’. He proposes that,

even in the time that can easily be spared without injury to the learning of technical skills, philosophy can give certain things that will greatly increase the student’s value as a human being and as a citizen.

Clare Carlisle refers to Russell when she writes,

Russell revives an ancient conception of philosophy as a way of life in insisting that questions of cosmic meaning and value have an existential, ethical and spiritual urgency. (Of course, what we might mean by such terms is another issue for philosophers to grapple with.)

Janna Levin’s Theory of Doing Everything | Quanta Magazine

The astrophysicist and author Janna Levin has two main offices: one at Barnard College of Columbia University, where she is a professor, and a studio space at Pioneer Works, a “center for art and innovation” in Brooklyn where Levin works alongside artists and musicians in an ever-expanding role as director of sciences. Beneath the rafters on the third floor of the former ironworks factory that now houses Pioneer Works, her studio is decorated (with props from a film set) like a speakeasy. There’s a bar lined with stools, a piano, a trumpet and, on the wall that serves as Levin’s blackboard, a drink rail underlining a mathematical description of a black hole spinning in a magnetic field. Whether Levin is writing words or equations, she finds inspiration just outside her gallery window, where a giant cloth-and-paper tree trunk hangs from the ceiling almost to the factory floor three stories below.

“Science is just an absolutely intrinsic part of culture,” said Levin, who runs a residency program for scientists, holds informal “office hours” for the artists and other residents, and hosts Scientific Controversies — a discussion series with a disco vibe that attracts standing-room-only crowds. “We don’t see it as different.”

Levin lives in accordance with this belief. She conducted research on the question of whether the universe is finite or infinite, then penned a book about her life and this work (written as letters to her mother) at the start of her physics career. She has also studied the limits of knowledge, ideas that found their way into her award-winning novel about the mathematicians Alan Turing and Kurt Gödel.

Being happy in the present pays off in evolution - Futurity

Having a positive attitude could be an evolutionary advantage, say researchers.

The finding, from simulated generations of evolution in a computational model, supports ancient philosophical insights from China, Greece, and India that encourage cultivating long-term contentment—not the fleeting joys of instant gratification.

“In an evolutionary sense, you have to evaluate your life on the basis of more than what happened just now. Because usually what happens right now is you go hungry,” says Shimon Edelman, professor of psychology at Cornell University and a coauthor of the study in PLOS ONE.

Constructing the Modern Mind

Unlike any other empirical object in Nature, the mind's presence is immediately apparent to itself, but opaque to all external observers.
—George Makari, Soul Machine, 2015

My life, as well as this column, is dedicated to understanding the conscious mind and how it relates to the brain. This presupposes that you, the reader, and I have a precise sense of what is referred to by such seemingly innocent terms as “consciousness” and “mind.” And lest it be forgotten, the allied concept of “soul” (or spirit), banned from scientific discourse, continues to remain profoundly meaningful to vast throngs of humankind here and abroad.

But there's the rub! Unlike such material objects as “egg,” “dog” or “brain,” this triptych of intangible concepts is a historical construct, endowed with a universe of religious, metaphysical, cultural and scientific meaning, as well as an array of underlying assumptions, some clearly articulated, others wholly ignored. These meanings adapt over time as society changes in response to wars and revolutions, catastrophes, trade and treaties, invention and discovery. Psychiatrist and historian George Makari tries to illuminate this historical evolution in his Soul Machine: The Invention of the Modern Mind, published last November by W. W. Norton. His intellectual history masterfully describes how consciousness, mind and soul are shape-shifters that philosophers, theologians, scholars, scientists and physicians seek to tame, by conceptualizing, defining, reifying, denying and redefining these terms through the ages to come to grips with the mystery that is our inner life.

We might agree that death is bad – but why exactly? – Eric Olson | Aeon Essays

Most of us think it’s a bad thing to die. I certainly don’t want to die any time soon, and you probably don’t either. There are, of course, exceptions. Some people actively want to die. They might be unbearably lonely, or in chronic pain, or gradually sliding into senile dementia that will destroy their intellect without remainder. And there might be no prospect of improvement. They wake up every morning disappointed to find that they haven’t died in their sleep. In these cases, it might be better to die than to continue a life not worth living. But most of the time death is unwelcome, and we do all we can to avoid it.

Death is bad not only for those left behind. If I were to die today, my loved ones would be grief-stricken, my son would be orphaned, and my colleagues would have to mark my students’ exams. That would be terrible for them. But death would be terrible for me, too. Much as I care about my colleagues’ wellbeing, I have my own selfish reasons for staying alive. And this isn’t peculiar to me. When people die, we feel sorry for them, and not merely for ourselves at losing them – especially if death takes them when they’re young and full of promise. We consider it one of the worst things that can happen to someone.

This would be easy to understand if death were followed by a nasty time in the hereafter. It could be that death is not the end of us, but merely a transition from one sort of existence to another. We might somehow carry on in a conscious state after we die, in spite of the decay and dissolution that takes place in the grave. I might be doomed to eternal torment in hell. That would obviously be bad for me: it would make me worse off than I am now.
But what if there is no hereafter? What if death really is the end – we return to the dust from which we came and that’s it? Then death can’t make us worse off than we are now. Or at least not in the straightforward way that burning in hell could make us worse off. To be dead is not to exist at all, and there’s nothing unpleasant about that. No one minds being dead. The dead never complain, and not merely because their mouths have stopped working. They are simply no longer there to be unhappy.

Sci-fi still influences how society thinks about genes – it's time we caught up

“We used to think that our fate was in the stars. Now we know, in large measure, our fate is in our genes.”

When the Nobel laureate and co-discoverer of the DNA double helix James Watson made his famous statement in 1989, he was implying that access to a person’s genetic code allows you to predict the outcome of their life.

The troubling implications were not lost on people, of course. A few years later they were explored in the American film Gattaca, which depicted a civilisation from the near future that had embraced this kind of genetic determinism. It was a world in which most people are conceived in test tubes, and taken to term only if they passed genetic tests designed to prevent them from inheriting imperfections ranging from baldness to serious genetic diseases.

With its so-called “valids” – the dominant majority – the film was a warning about the dangers of our technological advancement. As it turns out, we were probably being optimistic about the potential of genetics. Yet too few people seem to have got that message, and this kind of mistaken thinking about the links between genes and traits is having unsettling consequences of its own.

150 years ago, a world-famous philosopher called busyness the sign of an unhappy person

If you’re reading this on your phone, rushing to the subway while hunting for your headphones, then you need to stop. At least, that’s what Søren Kierkegaard, the Danish philosopher who lived in the first half of the 19th century, would advise. Last week, Brain Pickings pointed out just how relevant Kierkegaard’s writings on busyness are to our lives today.

And indeed, as we race from the office to the gym to a dinner, proudly showing off our jam-packed schedules, it’s worth remembering Kierkegaard’s warnings about busyness from over 150 years ago. He wrote:

Of all ridiculous things the most ridiculous seems to me, to be busy—to be a man who is brisk about his food and his work… What, I wonder, do these busy folks get done?

Stephen Evans, a philosophy professor at Baylor University, explains that Kierkegaard saw busyness as a means of distracting oneself from truly important questions, such as who you are and what life is for. Busy people “fill up their time, always find things to do,” but they have no principle guiding their life. “Everything is important but nothing is important,” he adds.

Octopuses are super-smart ... but are they conscious?


The idea of non-human consciousness raises a host of philosophical questions.


Inky the wild octopus has escaped from the New Zealand National Aquarium. Apparently, he made it out of a small opening in his tank, and suction cup prints indicate he found his way to a drain pipe that emptied to the ocean. Nice job Inky. Your courage gives us the chance to reflect on just how smart cephalopods really are.

In fact, they are real smart. Octopus expert Jennifer Mather spent years studying them and found that they not only display the capacity to learn many features of their environment, they will transition from exploration to something approaching play if given the chance. For example, Mather recounts the way two octopuses repeatedly used their water jets to blow an object towards an opposing stream of water in their tank: what she describes as “the aquatic equivalent of bouncing a ball”. Further, as Mather explains, cephalopods are inventive problem solvers. When predating clams, for example, octopuses will use a variety of strategies to remove the meat from the shell, often cycling through strategies – pulling the shell open, chipping the shell’s margin, or drilling through the shell – in a trial-and-error way.

It’s not just cephalopods, of course: lots of non-humans are intelligent too. In their own kind of way, lots of machines are smart as well – some are better than the best humans at some of our most complicated games. You can probably sense the question coming next. Does this mean lots of non-humans – octopuses, crows, monkeys, machines – are conscious? And if so, what do we do about that? Such questions are attracting a lot of interest.
In the past month alone, leading primatologist Frans de Waal has written on anthropomorphism and consciousness in chimpanzees; philosophers and science writers have discussed consciousness in artificial intelligences and whether machines could become self-aware without us realising; and the neuroscientist Michael Graziano has argued that current theories of consciousness are “worse than wrong” while predicting that we’ll have built a conscious machine within 50 years.


Do people have a moral duty to have children if they can? — Richard Chappell — Aeon Essays

Many people want to have children. But they might wonder: is it ethical to bring a child into this broken world, where she might suffer – and partake in – various harms and injustices? Others prefer not to have children. This choice also raises ethical qualms: is it ‘selfish’ to refrain from procreating? Are non-parents failing to contribute to the future of humanity – to the building of the next generation – in a way that we all should if we can?

It is tempting to dismiss such questions on the grounds that whether or not you have kids is a personal matter. It is surely nobody else’s damn business. It’s not up to the government or society to tell me. This question falls securely within the ‘private sphere’ that, in a properly liberal society, other people must respect and leave well enough alone.

True enough. But the mere fact that it is a private matter, something that others have no business deciding for us, does not mean that morality is necessarily silent on the issue. We can each, individually, ask ourselves: what should I do? Are there ethical considerations that we should take into account here – considerations that might help guide us as we attempt to navigate these intensely important, intensely personal questions? And if we do undertake such ethical enquiry, the answers we reach might surprise us.

Is it fair to your would-be child to bring her into a life that will inevitably contain significant amounts of pain, discomfort, suffering and heartache? In his essay ‘On the Suffering of the World’ (1850), Arthur Schopenhauer asked:

If children were brought into the world by an act of pure reason alone, would the human race continue to exist? Would not a man rather have so much sympathy with the coming generation as to spare it the burden of existence? Or at any rate not take it upon himself to impose that burden in cold blood?

Why do we say 'sorry' so much?

Just Not Sorry is a new app that aims to draw attention to the use of apologetic language and the excessive use of “sorry”. People – and especially women, it has been claimed – need help to be more forthright and assertive in their emails. This raises the question: why do we say sorry? And is it necessarily a sign of weakness?

The word sorry goes right back to the earliest stages of the English language, as spoken by the Anglo-Saxons. Tracing its history from Old English to the present day reveals an interesting development, in which there is a marked change from the expression of genuine heartfelt sorrow and remorse to regret for minor inconvenience. The key shift occurs in the 19th century and is accompanied by the change from “I am sorry” to plain “sorry”, thereby creating a distancing effect, taking us a further step away from the apology as a statement of personal distress to a more formulaic use. In his history of English manners, Henry Hitchings links this to the 19th-century association of politeness with detachment and aloofness, and the emergence of the concept of the “stiff upper lip”.

Troubles with Three-ism: Body, Mind, and Soul - The Los Angeles...

WHEN I WAS a wee Catholic lad growing up in the New York City suburbs of the late 1950s and early 1960s, I learned that good people go to heaven after they die. This was consoling. But it made me wonder precisely which part of me would go to heaven: my body, my mind, or my soul. Thanks to dead hamsters and such, I understood that bodies die, decay, and disperse. There was talk in school and at church of the resurrection of the body on Judgment Day, but that event, I reckoned, might not happen for several million years, and surely I’d be well ensconced in heaven by then. My mother tentatively explained that the part of me that loved peanut butter and jelly sandwiches and chocolate ice cream sodas would most likely not go to heaven, or, if it did, would not need or want peanut butter and jelly sandwiches and chocolate ice cream sodas anymore — possibly, I speculated, because, in the heavenly state, I’d be able mentally to conjure those great pleasures without there being actual physical manifestations of me or them. I surmised that those perfectly good human desires would either be gone (because my body would be gone), or somehow be eternally satisfied.

So, which was it, my mind or my soul that would go to heaven? Or both? And how did they differ? I didn’t want to go to heaven without my personality and memories. I wanted to be in heaven with my brothers and sisters, parents and grandparents, if not bodily then at least mentally. But personality and memories were, in my little boy ontology, associated with mind, and there was talk that the part of me that would go to heaven was something more ethereal than my mind. It was my eternal soul. But my soul, unlike my mind, seemed a bit too vague and general to be “me.” I wanted to be in heaven with me as me myself. Such were the vicissitudes of boyhood. I was troubled by three-ism. I was not, and am not, alone.

Why it matters that you realize you’re in a computer simulation

What if our universe is something like a computer simulation, or a virtual reality, or a video game? The proposition that the universe is actually a computer simulation was furthered in a big way during the 1970s, when John Conway famously showed that if you take a binary system and subject that system to only a few rules (in the case of Conway’s experiment, four), then that system creates something rather peculiar.

What Conway’s rules produced were emergent complexities so sophisticated that they seemed to resemble the behaviors of life itself. He named his demonstration The Game of Life, and it helped lay the foundation for the Simulation Argument, its counterpart the Simulation Hypothesis, and Digital Mechanics. These fields have gone on to create a massive, decades-long discourse in science, philosophy, and popular culture around the idea that it actually makes logical, mathematical sense that our universe is indeed a computer simulation. To crib a summary from Morpheus, “The Matrix is everywhere”. But amongst the murmurs on various forums and Reddit threads pertaining to the subject, it isn’t uncommon to find a word or two devoted to caution: we, the complex intelligent lifeforms who are supposedly “inside” this simulated universe, would do well to play dumb that we are at all conscious of our circumstance.
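Conway's four rules are simple enough to fit in a few lines. Here is a minimal sketch in Python (the function name and the set-of-live-cells representation are my own choices, not anything from the article): a dead cell with exactly three live neighbours is born; a live cell with two or three live neighbours survives; every other cell dies, whether from under- or overpopulation.

```python
from collections import Counter

def step(live):
    """Apply Conway's Game of Life rules to a set of live (x, y) cells."""
    # Count how many live neighbours each candidate cell has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3; death otherwise.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(blinker) == {(1, 0), (1, 1), (1, 2)}   # flips to vertical
assert step(step(blinker)) == blinker              # back to horizontal
```

Even this toy run illustrates the excerpt's point: oscillators, gliders and self-reproducing patterns all emerge from nothing more than these birth-and-survival counts.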

The colloquial warning says we must not betray the knowledge that we have become aware of being mere bits in the bit kingdom. To have a tipping point population of players who realize that they are actually in something like a video game would have dire and catastrophic results. Deletion, reformatting, or some kind of biblical flushing of our entire universe (or maybe just our species), would unfold. Leave the Matrix alone! In fact, please pretend it isn’t even there.

Consciousness Isn’t a Mystery. It’s Matter.

Every day, it seems, some verifiably intelligent person tells us that we don’t know what consciousness is. The nature of consciousness, they say, is an awesome mystery. It’s the ultimate hard problem. The current Wikipedia entry is typical: Consciousness “is the most mysterious aspect of our lives”; philosophers “have struggled to comprehend the nature of consciousness.”

I find this odd because we know exactly what consciousness is — where by “consciousness” I mean what most people mean in this debate: experience of any kind whatever. It’s the most familiar thing there is, whether it’s experience of emotion, pain, understanding what someone is saying, seeing, hearing, touching, tasting or feeling. It is in fact the only thing in the universe whose ultimate intrinsic nature we can claim to know. It is utterly unmysterious.

The nature of physical stuff, by contrast, is deeply mysterious, and physics grows stranger by the hour. (Richard Feynman’s remark about quantum theory — “I think I can safely say that nobody understands quantum mechanics” — seems as true as ever.) Or rather, more carefully: The nature of physical stuff is mysterious except insofar as consciousness is itself a form of physical stuff. This point, which is at first extremely startling, was well put by Bertrand Russell in the 1950s in his essay “Mind and Matter”: “We know nothing about the intrinsic quality of physical events,” he wrote, “except when these are mental events that we directly experience.” In having conscious experience, he claims, we learn something about the intrinsic nature of physical stuff, for conscious experience is itself a form of physical stuff.

Half of Your Friends Probably Don’t Think of You As a Friend

Here’s a fun exercise: Take a minute and count up all your friends. Not just the close ones, or the ones you’ve seen recently — I mean every single person on this Earth that you consider a pal.

Got a number in your mind? Good. Now cut it in half.

Okay, yes, “fun” may have been a bit of a reach there. But this new, smaller number may actually be more accurate. As it turns out, we can be pretty terrible at knowing who our friends are: In what may be among the saddest pieces of social-psychology research published in quite some time, a study in the journal PLOS ONE recently made the case that as many as half the people we consider our friends don’t feel the same way.

The study authors gave a survey to 84 college students in the same class, asking each one to rate every other person in the study on a scale of zero (“I do not know this person”) to five (“One of my best friends”), with three as the minimum score needed to qualify for friendship. The participants also wrote down their guesses for how each person would rate them.

Are scientific theories really better when they are simpler? – Elliott Sober | Aeon Essays

Two of Barcelona’s architectural masterpieces are as different as different could be. The Sagrada Família, designed by Antoni Gaudí, is only a few miles from the German Pavilion, built by Mies van der Rohe. Gaudí’s church is flamboyant and complex. Mies’s pavilion is tranquil and simple. Mies, the apostle of minimalist architecture, used the slogan ‘less is more’ to express what he was after. Gaudí never said ‘more is more’, but his buildings suggest that this is what he had in mind.

One reaction to the contrast between Mies and Gaudí is to choose sides based on a conviction concerning what all art should be like. If all art should be simple or if all art should be complex, the choice is clear. However, both of these norms seem absurd. Isn’t it obvious that some estimable art is simple and some is complex? True, there might be extremes that are beyond the pale; we are alienated by art that is far too complex and bored by art that is far too simple. However, between these two extremes there is a vast space of possibilities. Different artists have had different goals. Artists are not in the business of trying to discover the uniquely correct degree of complexity that all artworks should have. There is no such timeless ideal.

Science is different, at least according to many scientists. Albert Einstein spoke for many when he said that ‘it can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience’. The search for simple theories, then, is a requirement of the scientific enterprise. When theories get too complex, scientists reach for Ockham’s Razor, the principle of parsimony, to do the trimming. This principle says that a theory that postulates fewer entities, processes or causes is better than a theory that postulates more, so long as the simpler theory is compatible with what we observe. But what does ‘better’ mean? It is obvious that simple theories can be beautiful and easy to understand, remember and test. The hard problem is to explain why the fact that one theory is simpler than another tells you anything about the way the world is.

More evidence that you’re a mindless robot with no free will | KurzweilAI

The results of two Yale University psychology experiments suggest that what we believe to be a conscious choice may actually be constructed, or confabulated, unconsciously after we act — to rationalize our decisions. A trick of the mind.

“Our minds may be rewriting history,” said Adam Bear, a Ph.D. student in the Department of Psychology and lead author of a paper published April 28 in the journal Psychological Science.

Bear and Paul Bloom performed two simple experiments to test how we experience choices. In one experiment, participants were told that five white circles would appear on the computer screen in front of them and, in rapid-fire sequence, one would turn red. They were asked to predict which one would turn red and mentally note this. After a circle turned red, participants then recorded by keystroke whether they had chosen correctly, had chosen incorrectly, or had not had time to complete their choice.

The circle that turned red was always selected randomly by the system, so by chance alone participants should have predicted the correct circle 20% of the time. But when they had only a fraction of a second to make a prediction, participants reported correct predictions significantly more often than 20% of the time.

In contrast, when participants had more time to make their guess — approaching a full second — the reported number of accurate predictions dropped back to expected levels of 20% success, suggesting that participants were not simply lying about their accuracy to impress the experimenters.

(In a second experiment to eliminate artifacts, participants chose one of two different-colored circles, with similar results.)
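As a sanity check on the 20% baseline in the design described above, here is a minimal Python sketch (function and parameter names are my own, not the experimenters') of the null model in which the participant's guess and the circle that turns red are independent uniform draws:

```python
import random

def simulate_guesses(n_trials=100_000, n_circles=5, seed=42):
    """Null model: guess and reddening circle are independent uniform
    draws over n_circles, so a hit occurs with probability 1/n_circles."""
    rng = random.Random(seed)
    hits = sum(
        rng.randrange(n_circles) == rng.randrange(n_circles)
        for _ in range(n_trials)
    )
    return hits / n_trials

print(f"chance accuracy over {100_000} trials: {simulate_guesses():.3f}")
```

With five circles the simulated hit rate settles very close to 0.2, which is the benchmark the reported above-chance accuracy is measured against.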

At a time of zealotry, Spinoza matters more than ever – Steven Nadler | Aeon Essays

In July 1656, the 23-year-old Bento de Spinoza was excommunicated from the Portuguese-Jewish congregation of Amsterdam. It was the harshest punishment of herem (ban) ever issued by that community. The extant document, a lengthy and vitriolic diatribe, refers to the young man’s ‘abominable heresies’ and ‘monstrous deeds’. The leaders of the community, having consulted with the rabbis and using Spinoza’s Hebrew name, proclaim that they hereby ‘expel, excommunicate, curse, and damn Baruch de Spinoza’. He is to be ‘cast out from all the tribes of Israel’ and his name is to be ‘blotted out from under heaven’.

Over the centuries, there have been periodic calls for the herem against Spinoza to be lifted. Even David Ben-Gurion, when he was prime minister of Israel, issued a public plea for ‘amending the injustice’ done to Spinoza by the Amsterdam Portuguese community. It was not until early 2012, however, that the Amsterdam congregation, at the insistence of one of its members, formally took up the question of whether it was time to rehabilitate Spinoza and welcome him back into the congregation that had expelled him with such prejudice. There was, though, one thing that they needed to know: should we still regard Spinoza as a heretic?

Unfortunately, the herem document fails to mention specifically what Spinoza’s offences were – at the time he had not yet written anything – and so there is a mystery surrounding this seminal event in the future philosopher’s life. And yet, for anyone who is familiar with Spinoza’s mature philosophical ideas, which he began putting in writing a few years after the excommunication, there really is no such mystery. By the standards of early modern rabbinic Judaism – and especially among the Sephardic Jews of Amsterdam, many of whom were descendants of converso refugees from the Iberian Inquisitions and who were still struggling to build a proper Jewish community on the banks of the Amstel River – Spinoza was a heretic, and a dangerous one at that.

The Evolutionary Argument Against Reality | Quanta Magazine

As we go about our daily lives, we tend to assume that our perceptions — sights, sounds, textures, tastes — are an accurate portrayal of the real world. Sure, when we stop and think about it — or when we find ourselves fooled by a perceptual illusion — we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it wasn’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.

Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.

Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.”

This Philosopher Helped Ensure There Was No Nobel for Relativity - Issue 35: Boundaries - Nautilus

On April 6, 1922, Einstein met a man he would never forget. He was one of the most celebrated philosophers of the century, widely known for espousing a theory of time that explained what clocks did not: memories, premonitions, expectations, and anticipations. Thanks to him, we now know that to act on the future one needs to start by changing the past. Why does one thing not always lead to the next? The meeting had been planned as a cordial and scholarly event. It was anything but that. The physicist and the philosopher clashed, each defending opposing, even irreconcilable, ways of understanding time. At the Société française de philosophie—one of the most venerable institutions in France—they confronted each other under the eyes of a select group of intellectuals. The “dialogue between the greatest philosopher and the greatest physicist of the 20th century” was dutifully written down. It was a script fit for the theater. The meeting, and the words they uttered, would be discussed for the rest of the century.

The philosopher’s name was Henri Bergson. In the early decades of the century, his fame, prestige, and influence surpassed that of the physicist—who, in contrast, is so well known today. Bergson was compared to Socrates, Copernicus, Kant, Simón Bolívar, and even Don Juan. The philosopher John Dewey claimed that “no philosophic problem will ever exhibit just the same face and aspect that it presented before Professor Bergson.” William James, the Harvard professor and famed psychologist, described Bergson’s Creative Evolution (1907) as “a true miracle,” marking the “beginning of a new era.”

How LSD helped us probe what the 'sense of self' looks like in the brain

Just where in the brain is our 'ego'?

Every single person is different. We all have different backgrounds, views, values and interests. And yet there is one universal feeling that we all experience at every single moment. Call it an “ego”, a “self” or just an “I” – it’s the idea that our thoughts and feelings are our own, and no one else has access to them in the same way. This may sound a bit like post-war French existentialism or psychoanalysis, but it’s actually a topic that’s being increasingly addressed by neuroscientists.

We were part of a team interested in finding out how this sense of self is expressed in the brain – and what happens when it dissolves. To do that, we used brain imaging and the psychedelic drug LSD.

Our sense of self is something so natural that we are not always fully aware of it. In fact, it is when it is disturbed that it becomes the most noticeable. This could be due to mental illnesses such as psychosis, when people might experience the delusional belief that their thoughts are no longer private, but can be accessed and even modified by other people. Or it could be due to the influence of psychedelic drugs such as LSD, when the user can feel that their ego is “dissolving” and they are becoming at one with the world. From a scientific point of view, these experiences of “ego death” or ego dissolution are also opportunities to search for this sense of self in the brain.

Our study, led by Enzo Tagliazucchi and published in Current Biology, set out to probe what is happening in the brain when our sense of self becomes altered by psychedelic drugs. We studied 15 healthy volunteers before and after taking LSD, which altered their normal sense of self and their relationship with the environment. These subjects were scanned while intoxicated and while receiving placebo using functional MRI, a technique that allows us to study the brain’s activity by measuring changes in blood flow. By contrasting the activity of the brain when receiving a placebo with its activity after taking LSD, we could start exploring the brain mechanisms involved in the normal experience of the self.


Forget mindfulness, stop trying to find yourself and start faking it

Why is the history of Chinese philosophy now the most popular course at Harvard? Top tips on how to become a better person according to Confucius and co
nukem777's curator insight, April 16, 5:04 AM
Food for the soul :)

Horizontal History - Wait But Why

Most of us have a pretty terrible understanding of history. Our knowledge is spotty, with large gaps all over the place, and the parts of history we do end up knowing a lot about usually depend on the particular teachers, parents, books, articles, and movies we happen to come across in our lives. Without a foundational, tree-trunk understanding of all parts of history, we often forget the things we do learn, leaving even our favorite parts of history a bit hazy in our heads. Raise your hand if you’d like to go on stage and debate a history buff on the nuances of a historical time period of your choosing. That’s what I thought.

The reason history is so hard is that it’s so soft. To truly, fully understand a time period, an event, a movement, or an important historical figure, you’d have to be there, and many times over. You’d have to be in the homes of the public living at the time to hear what they’re saying; you’d have to be a fly on the wall in dozens of secret, closed-door meetings and conversations; you’d need to be inside the minds of the key players to know their innermost thoughts and motivations. Even then, you’d be lacking context. To really have the complete truth, you’d need background—the cultural nuances and national psyches of the time, the way each of the key players was raised during childhood and the subtle social dynamics between those players, the impact of what was going on in other parts of the world, and an equally-thorough understanding of the many past centuries that all of these things grew out of.

That’s why even the most dedicated history buff can never fully understand history; indeed, not even the key people involved at the time could know the full story. History is a giant collective tangle of thousands of interwoven stories involving millions of characters, countless chapters, and many, many narrators.

Can Integrated Information Theory Explain Consciousness?

How does matter make mind? More specifically, how does a physical object generate subjective experiences like those you are immersed in as you read this sentence? How does stuff become conscious? This is called the mind-body problem, or, by philosopher David Chalmers, the “hard problem.”

In The End of Science, I expressed doubt that the hard problem can be solved, a position called mysterianism. I argue in a new edition that my pessimism has been justified by the recent popularity of panpsychism. This ancient doctrine holds that consciousness is a property not just of brains but of all matter, like my table and coffee mug.

Panpsychism strikes me as self-evidently foolish, but non-foolish people—notably Chalmers and neuroscientist Christof Koch—are taking it seriously. How can that be? What’s compelling their interest? Have I dismissed panpsychism too hastily?

These questions lured me to a two-day workshop on integrated information theory at New York University last month. Conceived by neuroscientist Giulio Tononi (who trained under the late, great Gerald Edelman), IIT is an extremely ambitious theory of consciousness. It applies to all forms of matter, not just brains, and it implies that panpsychism might be true. Koch and others are taking panpsychism seriously because they take IIT seriously.

At the workshop, Chalmers, Tononi, Koch and ten other speakers presented their views of IIT, which were then batted around by 30 or so other scientists and philosophers. I’m still mulling over the claims and counter-claims, some of which were dauntingly abstract and mathematical. In this post, I’ll try to assess IIT, based on the workshop and my readings. If I get some things wrong, which is highly likely, I trust workshoppers will let me know.

‘Battling the Gods: Atheism in the Ancient World,’ by Tim Whitmarsh

The philosopher Sydney Morgenbesser, beloved by generations of Columbia University students (including me), was known for lines of wit that yielded nuggets of insight. He kept up his instructive shtick until the end, remarking to a colleague shortly before he died: “Why is God making me suffer so much? Just because I don’t believe in him?” For Morgenbesser, nothing worth pondering, including disbelief, could be entirely de-­paradoxed.

The major thesis of Tim Whitmarsh’s excellent “Battling the Gods” is that atheism — in all its nuanced varieties, even Morgenbesserian — isn’t a product of the modern age but rather reaches back to early Western intellectual tradition in the ancient Greek world.

The period that Whitmarsh covers is roughly 1,000 years, during which the Greek-speaking population emerged from illiteracy and anomie, became organized into independent city-states that spawned a high-achieving culture, were absorbed into the Macedonian Empire and then into the Roman Empire, and finally became Christianized. These momentous political shifts are efficiently traced, with astute commentary on their reflection in religious attitudes.

But the best part of “Battling the Gods” is the Greek chorus of atheists themselves, who speak distinctively throughout each of the political transformations — until, that is, the last of them, when they go silent. If you’ve been paying attention to contemporary atheists you might be startled by the familiarity of the ancient positions.

So here is Democritus in the fifth century B.C. — he who coined the term “atom,” from the Greek for “indivisible,” speculating that reality consisted of nothing but fundamental particles swirling randomly around in the void — propounding an anthropological theory of the origins of religious beliefs. Talk of “the gods,” he argued, comes naturally to primitive people who, unable yet to grasp the laws of nature, resort to fantastical storytelling. The exact titles of his works remain in doubt, but his naturalist explanation of the origins of conventional religion might have made use of Daniel C. Dennett’s title “Breaking the Spell: Religion as a Natural Phenomenon.”
Wildcat2030's insight:

Book review: go read it.
