Philosophy everywhere everywhen
 
Rescooped by Wildcat2030 from cognition onto Philosophy everywhere everywhen

The Internet is the God We Create


The Futurica Trilogy is a work of philosophy, sociology and futurology in three closely related movements. 


Via FastTFriend
Wildcat2030's insight:

The first volume, The Netocrats, deals with human history from the perspective of the new elite of Informationalism, the emerging society of information networks shaped by digital interactivity, and makes prophecies about the digital future of politics, culture, the economy, et cetera.

The second volume, The Global Empire, explores the near future of political globalization and the struggle to form new, functioning ideologies for a world where global decision making is a necessity.

The third volume, The Body Machines, deals thoroughly with the demise of the Cartesian subject. It discusses the implications of a materialist image of humanity and explains how it relates to the new, emerging technological paradigm. It explains why we are, all of us, body machines, and why this is actually good news.


Philosophy everywhere everywhen
First Law of Philosophy: For every philosopher, there is an equal and opposite philosopher. The Second Law of Philosophy: They're both wrong
Curated by Wildcat2030
Scooped by Wildcat2030

We might agree that death is bad – but why exactly? – Eric Olson | Aeon Essays

Most of us think it’s a bad thing to die. I certainly don’t want to die any time soon, and you probably don’t either. There are, of course, exceptions. Some people actively want to die. They might be unbearably lonely, or in chronic pain, or gradually sliding into senile dementia that will destroy their intellect without remainder. And there might be no prospect of improvement. They wake up every morning disappointed to find that they haven’t died in their sleep. In these cases, it might be better to die than to continue a life not worth living. But most of the time death is unwelcome, and we do all we can to avoid it.

Death is bad not only for those left behind. If I were to die today, my loved ones would be grief-stricken, my son would be orphaned, and my colleagues would have to mark my students’ exams. That would be terrible for them. But death would be terrible for me, too. Much as I care about my colleagues’ wellbeing, I have my own selfish reasons for staying alive. And this isn’t peculiar to me. When people die, we feel sorry for them, and not merely for ourselves at losing them – especially if death takes them when they’re young and full of promise. We consider it one of the worst things that can happen to someone.

This would be easy to understand if death were followed by a nasty time in the hereafter. It could be that death is not the end of us, but merely a transition from one sort of existence to another. We might somehow carry on in a conscious state after we die, in spite of the decay and dissolution that takes place in the grave. I might be doomed to eternal torment in hell. That would obviously be bad for me: it would make me worse off than I am now.
But what if there is no hereafter? What if death really is the end – we return to the dust from which we came and that’s it? Then death can’t make us worse off than we are now. Or at least not in the straightforward way that burning in hell could make us worse off. To be dead is not to exist at all, and there’s nothing unpleasant about that. No one minds being dead. The dead never complain, and not merely because their mouths have stopped working. They are simply no longer there to be unhappy.
Scooped by Wildcat2030

Sci-fi still influences how society thinks about genes – it's time we caught up

We used to think that our fate was in the stars. Now we know, in large measure, our fate is in our genes.

When the Nobel laureate and co-discoverer of the DNA double helix James Watson made his famous statement in 1989, he was implying that access to a person’s genetic code allows you to predict the outcome of their life.

The troubling implications were not lost on people, of course. A few years later they were explored in the American film Gattaca, which depicted a civilisation from the near future that had embraced this kind of genetic determinism. It was a world in which most people are conceived in test tubes, and taken to term only if they passed genetic tests designed to prevent them from inheriting imperfections ranging from baldness to serious genetic diseases.

With these so-called “valids” – the dominant majority – the film was a warning about the dangers in our technological advancement. As it turns out, we were probably being optimistic about the potential of genetics. Yet too few people seem to have got that message, and this kind of mistaken thinking about the links between genes and traits is having unsettling consequences of its own.
Scooped by Wildcat2030

150 years ago, a world-famous philosopher called busyness the sign of an unhappy person

If you’re reading this on your phone, rushing to the subway while hunting for your headphones, then you need to stop. At least, that’s what Søren Kierkegaard, the Danish philosopher who lived in the first half of the 19th century, would advise. Last week, Brain Pickings pointed out just how relevant Kierkegaard’s writings on busyness are to our lives today.

And indeed, as we race from the office to the gym to a dinner, proudly showing off our jam-packed schedules, it’s worth remembering Kierkegaard’s warnings about busyness from centuries ago. He wrote:

Of all ridiculous things the most ridiculous seems to me, to be busy—to be a man who is brisk about his food and his work… What, I wonder, do these busy folks get done?

Stephen Evans, a philosophy professor at Baylor University, explains that Kierkegaard saw busyness as a means of distracting oneself from truly important questions, such as who you are and what life is for. Busy people “fill up their time, always find things to do,” but they have no principle guiding their life. “Everything is important but nothing is important,” he adds.
Scooped by Wildcat2030

Octopuses are super-smart ... but are they conscious?


The idea of non-human consciousness raises a host of philosophical questions.


Inky the wild octopus has escaped from the New Zealand National Aquarium. Apparently, he made it out of a small opening in his tank, and suction cup prints indicate he found his way to a drain pipe that emptied into the ocean. Nice job, Inky. Your courage gives us the chance to reflect on just how smart cephalopods really are.

In fact, they are really smart. Octopus expert Jennifer Mather spent years studying them and found that they not only display the capacity to learn many features of their environment, they will also transition from exploration to something approaching play if given the chance. For example, Mather recounts the way two octopuses repeatedly used their water jets to blow an object towards an opposing stream of water in their tank: what she describes as “the aquatic equivalent of bouncing a ball”. Further, as Mather explains, cephalopods are inventive problem solvers. When preying on clams, for example, octopuses will use a variety of strategies to remove the meat from the shell, often cycling through strategies – pulling the shell open, chipping the shell’s margin, or drilling through the shell – in a trial-and-error way.

It’s not just cephalopods, of course: lots of non-humans are intelligent too. In their own kind of way, lots of machines are smart as well – some are better than the best humans at some of our most complicated games. You can probably sense the question coming next. Does this mean lots of non-humans – octopuses, crows, monkeys, machines – are conscious? And if so, what do we do about that?

Such questions are attracting a lot of interest. In the past month alone, leading primatologist Frans de Waal has written on anthropomorphism and consciousness in chimpanzees; philosophers and science writers have discussed consciousness in artificial intelligences and whether machines could become self-aware without us realising; and the neuroscientist Michael Graziano has argued that current theories of consciousness are “worse than wrong” while predicting that we’ll have built a conscious machine within 50 years.

Scooped by Wildcat2030

Do people have a moral duty to have children if they can? — Richard Chappell — Aeon Essays

Many people want to have children. But they might wonder: is it ethical to bring a child into this broken world, where she might suffer – and partake in – various harms and injustices? Others prefer not to have children. This choice also raises ethical qualms: is it ‘selfish’ to refrain from procreating? Are non-parents failing to contribute to the future of humanity – to the building of the next generation – in a way that we all should if we can?

It is tempting to dismiss such questions on the grounds that whether or not you have kids is a personal matter. It is surely nobody else’s damn business. It’s not up to the government or society to tell me. This question falls securely within the ‘private sphere’ that, in a properly liberal society, other people must respect and leave well enough alone.

True enough. But the mere fact that it is a private matter, something that others have no business deciding for us, does not mean that morality is necessarily silent on the issue. We can each, individually, ask ourselves: what should I do? Are there ethical considerations that we should take into account here – considerations that might help guide us as we attempt to navigate these intensely important, intensely personal questions? And if we do undertake such ethical enquiry, the answers we reach might surprise us.

Is it fair to your would-be child to bring her into a life that will inevitably contain significant amounts of pain, discomfort, suffering and heartache? In his essay ‘On the Suffering of the World’ (1850), Arthur Schopenhauer asked:

If children were brought into the world by an act of pure reason alone, would the human race continue to exist? Would not a man rather have so much sympathy with the coming generation as to spare it the burden of existence? Or at any rate not take it upon himself to impose that burden in cold blood?
Scooped by Wildcat2030

Why do we say 'sorry' so much?

Just Not Sorry is a new app that aims to draw attention to the use of apologetic language and the excessive use of sorry. People, and especially women, it has been claimed, need help to be more forthright and assertive in their emails. This raises the question: why do we say sorry? And is it necessarily a sign of weakness?

The word sorry goes right back to the earliest stages of the English language, as spoken by the Anglo-Saxons. Tracing its history from Old English to the present day reveals an interesting development, in which there is a marked change from the expression of genuine heartfelt sorrow and remorse to regret for minor inconvenience. The key shift occurs in the 19th century and is accompanied by the change from “I am sorry” to plain “sorry”, thereby creating a distancing effect, taking us a further step away from the apology as a statement of personal distress to a more formulaic use. In his history of English manners, Henry Hitchings links this to the 19th-century association of politeness with detachment and aloofness, and the emergence of the concept of the “stiff upper lip”.
Scooped by Wildcat2030

Troubles with Three-ism: Body, Mind, and Soul - The Los Angeles...

WHEN I WAS a wee Catholic lad growing up in the New York City suburbs of the late 1950s and early 1960s, I learned that good people go to heaven after they die. This was consoling. But it made me wonder precisely which part of me would go to heaven: my body, my mind, or my soul. Thanks to dead hamsters and such, I understood that bodies die, decay, and disperse. There was talk in school and at church of the resurrection of the body on Judgment Day, but that event, I reckoned, might not happen for several million years, and surely I’d be well ensconced in heaven by then. My mother tentatively explained that the part of me that loved peanut butter and jelly sandwiches and chocolate ice cream sodas would most likely not go to heaven, or, if it did, would not need or want peanut butter and jelly sandwiches and chocolate ice cream sodas anymore — possibly, I speculated, because, in the heavenly state, I’d be able mentally to conjure those great pleasures without there being actual physical manifestations of me or them. I surmised that those perfectly good human desires would either be gone (because my body would be gone), or somehow be eternally satisfied.

So, which was it, my mind or my soul that would go to heaven? Or both? And how did they differ? I didn’t want to go to heaven without my personality and memories. I wanted to be in heaven with my brothers and sisters, parents and grandparents, if not bodily then at least mentally. But personality and memories were, in my little boy ontology, associated with mind, and there was talk that the part of me that would go to heaven was something more ethereal than my mind. It was my eternal soul. But my soul, unlike my mind, seemed a bit too vague and general to be “me.” I wanted to be in heaven with me as me myself. Such were the vicissitudes of boyhood. I was troubled by three-ism. I was not, and am not, alone.
Scooped by Wildcat2030

Why it matters that you realize you’re in a computer simulation

What if our universe is something like a computer simulation, or a virtual reality, or a video game? The proposition that the universe is actually a computer simulation was furthered in a big way during the 1970s, when John Conway famously showed that if you take a binary system and subject that system to only a few rules (in the case of Conway’s experiment, four), then that system creates something rather peculiar.
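
To make those few rules concrete, here is a minimal sketch of Conway's Game of Life in Python. The grid size, the glider starting pattern, and the function name are illustrative choices, not anything specified in the article: each cell is alive (1) or dead (0), and the rules reduce to counting live neighbours.

```python
# A minimal sketch of Conway's Game of Life (illustrative only; grid size and
# starting pattern are arbitrary choices, not taken from the article).

def step(grid):
    """Apply the rules once: a live cell survives with 2 or 3 live neighbours;
    a dead cell comes alive with exactly 3; everything else is dead."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight surrounding cells (cells on the edge just have fewer).
            live = sum(
                grid[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))
                if (rr, cc) != (r, c)
            )
            if grid[r][c] == 1:
                nxt[r][c] = 1 if live in (2, 3) else 0
            else:
                nxt[r][c] = 1 if live == 3 else 0
    return nxt

# A "glider" on a 6x6 grid: the same five-cell shape walks across the board.
grid = [[0] * 6 for _ in range(6)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):
    grid = step(grid)
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))
    print()
```

Run for a few steps, the glider drifts diagonally across the grid: a handful of local rules producing the kind of unexpectedly lifelike global behaviour the next paragraph describes as emergent complexity.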

What Conway’s rules produced were emergent complexities so sophisticated that they seemed to resemble the behaviors of life itself. He named his demonstration The Game of Life, and it helped lay the foundation for the Simulation Argument, its counterpart the Simulation Hypothesis, and Digital Mechanics. These fields have gone on to create a massive multi-decade long discourse in science, philosophy, and popular culture around the idea that it actually makes logical, mathematical sense that our universe is indeed a computer simulation. To crib a summary from Morpheus, “The Matrix is everywhere”. But amongst the murmurs on various forums and reddit threads pertaining to the subject, it isn’t uncommon to find a word or two devoted to caution: We, the complex intelligent lifeforms who are supposedly “inside” this simulated universe, would do well to play dumb that we are at all conscious of our circumstance.

The colloquial warning says we must not betray the knowledge that we have become aware of being mere bits in the bit kingdom. To have a tipping point population of players who realize that they are actually in something like a video game would have dire and catastrophic results. Deletion, reformatting, or some kind of biblical flushing of our entire universe (or maybe just our species), would unfold. Leave the Matrix alone! In fact, please pretend it isn’t even there.
Scooped by Wildcat2030

The lost art of getting lost - BBC News

Technology means maps and directions are constantly at hand, and getting lost is less likely than ever before. While for many this is a thing of joy, Stephen Smith asks if we may be missing out.

When was the last time you were well and truly lost? Chances are it's been a while.

Extraordinary gadgets like smartphones and satnavs let us pinpoint our location unerringly. Like the people in Downton Abbey, we all know our place.

However, the technology which delivers the world into the palms of our hands may be ushering in a kind of social immobility undreamt of even by Julian Fellowes's hidebound little Englanders.

Discovery used to mean going out and coming across stuff - now it seems to mean turning inwards and gazing at screens. We've become reliant on machines to help us get around, so much so that it's changing the way we behave, particularly among younger people who have no experience of a time before GPS.

We're raising an entire generation of men who will never know what it is to refuse to ask for directions.
Scooped by Wildcat2030

Boredom is not a problem to be solved. It's the last privilege of a free mind | Gayatri Devi

Confessing to boredom is confessing to a character flaw. Popular culture is littered with advice on how to shake it off: find like-minded people, take up a hobby, find a cause and work for it, take up an instrument, read a book, clean your house. And certainly don’t let your kids be bored: enroll them in swimming, soccer, dance, church groups – anything to keep them from assuaging their boredom by gravitating toward sex and drugs. To do otherwise is to admit that we’re not engaging with the world around us. Or that your cellphone has died.

But boredom is not tragic. Properly understood, boredom helps us understand time, and ourselves. Unlike fun or work, boredom is not about anything; it is our encounter with pure time as form and content. With ads and screens and handheld devices ubiquitous, we don’t get to have that experience that much anymore. We should teach young people to feel comfortable with time.

I live and teach in small-town Pennsylvania, and some of my students from bigger cities tell me that they always go home on Fridays because they are bored here.

You know the best antidote to boredom, I asked them? They looked at me expectantly, smartphones dangling from their hands. Think, I told them. Thinking is the best antidote to boredom. I am not kidding, kids. Thinking is the best antidote to boredom. Tell yourself, I am bored. Think about that. Isn’t that interesting? They looked at me incredulously. Thinking is not how they were brought up to handle boredom.

When you’re bored, time moves slowly. The German word for “boredom” expresses this: Langeweile, a compound made of “lange”, which means “long”, and “Weile”, meaning “a while”. And slow-moving time can feel torturous for people who can’t feel peaceful alone with their minds. Learning to do so is why learning to be bored is so crucial. It is a great privilege if you can do this without going to the psychiatrist.
Scooped by Wildcat2030

Walter Benjamin’s legacy, 75 years on

Like many a refugee in southern and central Europe today, Walter Benjamin was in flight from war and persecution 75 years ago, but was blocked at an intermediate border en route to the country chosen as his haven. He was part of a Jewish group which, hoping to escape occupied France, had hiked through a Pyrenean pass in autumn 1940 with a view to entering Franco’s Spain, crossing it to Portugal and then sailing to the US. However, in the words of Hannah Arendt, they arrived in the frontier village of Portbou “only to learn that Spain had closed the border that same day” and officials were not honouring American visas such as Benjamin’s. Faced with the prospect of returning to France and being handed over to the Nazis, he “took his own life” overnight on 26 September, whereupon the officials “allowed his companions to proceed to Portugal”.

For Arendt, who successfully reached New York via his intended route a few months later, this was a tragedy of misunderstanding, a poignant but fitting end for a brilliant but misfortune-prone older relative (her cousin by marriage) whom she writes about with a kind of affectionate exasperation.

Yet Edward Stourton, in Cruel Crossing: Escaping Hitler Across the Pyrenees, notes “there are all sorts of unanswered questions surrounding Benjamin’s death. His travelling companions remembered him carrying a heavy briefcase containing a manuscript he described as ‘more important than I am’. No such manuscript was found after his death … A Spanish doctor’s report gave the cause of death as a cerebral haemorrhage, not a drugs overdose. There has been persistent speculation that he was actually murdered, perhaps by a Soviet agent who had infiltrated his escaping party.”
Scooped by Wildcat2030

This free online encyclopedia has achieved what Wikipedia can only dream of

The Stanford Encyclopedia of Philosophy may be the most interesting website on the internet. Not because of the content—which includes fascinating entries on everything from ambiguity to zombies—but because of the site itself.

Its creators have solved one of the internet’s fundamental problems: How to provide authoritative, rigorously accurate knowledge, at no cost to readers. It’s something the encyclopedia, or SEP, has managed to do for two decades.
The internet is an information landfill. Somewhere in it—buried under piles of opinion, speculation, and misinformation—is virtually all of human knowledge. But sorting through the trash is difficult work. Even when you have something you think is valuable, it often turns out to be a cheap knock-off.

The story of how the SEP is run, and how it came to be, shows that it is possible to create a less trashy internet—or at least a less trashy corner of it. A place where actual knowledge is sorted into a neat, separate pile instead of being thrown into the landfill. Where the world can go to learn everything that we know to be true. Something that would make humans a lot smarter than the internet we have today.

go read..

Scooped by Wildcat2030

Watch: What Is Consciousness? We Now Have the Tools to Find Out - Singularity HUB

The question of consciousness is as old as philosophy. Most animals appear to get along just fine without a sense of ‘me-ness’. But human beings are different. (At least, as far as we know we are.) We’ve evolved a sense of self-awareness.

And while the exact nature of human consciousness is exceedingly difficult to pin down—that doesn’t stop us from trying. It's a puzzle that's preoccupied the world’s greatest philosophers for millennia and, in recent centuries, scientists too.

In the information age, we've begun to wonder if consciousness is a uniquely biological phenomenon or if it might arise elsewhere. Is the brain just a mushy computer running wetware—something we can replicate in hardware and software? Or is comparing the brain to a computer a misleading analogy and a vast oversimplification?

A fascinating new video from the Economist, featuring some of the brightest minds working the problem, brings us up to date on the debate and the latest thinking.
Scooped by Wildcat2030

The Evolutionary Argument Against Reality | Quanta Magazine

As we go about our daily lives, we tend to assume that our perceptions — sights, sounds, textures, tastes — are an accurate portrayal of the real world. Sure, when we stop and think about it — or when we find ourselves fooled by a perceptual illusion — we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it wasn’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.

Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.

Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.”
Scooped by Wildcat2030

This Philosopher Helped Ensure There Was No Nobel for Relativity - Issue 35: Boundaries - Nautilus

On April 6, 1922, Einstein met a man he would never forget. He was one of the most celebrated philosophers of the century, widely known for espousing a theory of time that explained what clocks did not: memories, premonitions, expectations, and anticipations. Thanks to him, we now know that to act on the future one needs to start by changing the past. Why does one thing not always lead to the next? The meeting had been planned as a cordial and scholarly event. It was anything but that. The physicist and the philosopher clashed, each defending opposing, even irreconcilable, ways of understanding time. At the Société française de philosophie—one of the most venerable institutions in France—they confronted each other under the eyes of a select group of intellectuals. The “dialogue between the greatest philosopher and the greatest physicist of the 20th century” was dutifully written down. It was a script fit for the theater. The meeting, and the words they uttered, would be discussed for the rest of the century.

The philosopher’s name was Henri Bergson. In the early decades of the century, his fame, prestige, and influence surpassed that of the physicist—who, in contrast, is so well known today. Bergson was compared to Socrates, Copernicus, Kant, Simón Bolívar, and even Don Juan. The philosopher John Dewey claimed that “no philosophic problem will ever exhibit just the same face and aspect that it presented before Professor Bergson.” William James, the Harvard professor and famed psychologist, described Bergson’s Creative Evolution (1907) as “a true miracle,” marking the “beginning of a new era.”
Scooped by Wildcat2030

How LSD helped us probe what the 'sense of self' looks like in the brain


Just where in the brain is our 'ego'?


Every single person is different. We all have different backgrounds, views, values and interests. And yet there is one universal feeling that we all experience at every single moment. Call it an “ego”, a “self” or just an “I” – it’s the idea that our thoughts and feelings are our own, and no one else has access to them in the same way. This may sound a bit like post-war French existentialism or psycho-analysis, but it’s actually a topic that’s being increasingly addressed by neuroscientists. We were part of a team interested in finding out how this sense of self is expressed in the brain – and what happens when it dissolves. To do that, we used brain imaging and the psychedelic drug LSD.

Our sense of self is something so natural that we are not always fully aware of it. In fact, it is when it is disturbed that it becomes the most noticeable. This could be due to mental illnesses such as psychosis, when people might experience the delusional belief that their thoughts are no longer private, but can be accessed and even modified by other people. Or it could be due to the influence of psychedelic drugs such as LSD, when the user can feel that their ego is “dissolving” and they are becoming at one with the world. From a scientific point of view, these experiences of “ego death” or ego dissolution are also opportunities to search for this sense of self in the brain.

Our study, led by Enzo Tagliazucchi and published in Current Biology, set out to probe what is happening in the brain when our sense of self becomes altered by psychedelic drugs. We studied 15 healthy volunteers before and after taking LSD, which altered their normal feelings of their selves and their relationship with the environment. These subjects were scanned while intoxicated and while receiving placebo using functional MRI, a technique which allows us to study the brain’s activity by measuring changes in blood flow. By contrasting the activity of the brain when receiving a placebo with its activity after taking LSD, we could start exploring the brain mechanisms involved in the normal experience of the self.

Scooped by Wildcat2030

Forget mindfulness, stop trying to find yourself and start faking it

Why is the history of Chinese philosophy now the most popular course at Harvard? Top tips on how to become a better person according to Confucius and co
nukem777's curator insight, April 16, 10:04 AM
Food for the soul :)
Scooped by Wildcat2030

Horizontal History - Wait But Why

Most of us have a pretty terrible understanding of history. Our knowledge is spotty, with large gaps all over the place, and the parts of history we do end up knowing a lot about usually depend on the particular teachers, parents, books, articles, and movies we happen to come across in our lives. Without a foundational, tree-trunk understanding of all parts of history, we often forget the things we do learn, leaving even our favorite parts of history a bit hazy in our heads. Raise your hand if you’d like to go on stage and debate a history buff on the nuances of a historical time period of your choosing. That’s what I thought.

The reason history is so hard is that it’s so soft. To truly, fully understand a time period, an event, a movement, or an important historical figure, you’d have to be there, and many times over. You’d have to be in the homes of the public living at the time to hear what they’re saying; you’d have to be a fly on the wall in dozens of secret, closed-door meetings and conversations; you’d need to be inside the minds of the key players to know their innermost thoughts and motivations. Even then, you’d be lacking context. To really have the complete truth, you’d need background—the cultural nuances and national psyches of the time, the way each of the key players was raised during childhood and the subtle social dynamics between those players, the impact of what was going on in other parts of the world, and an equally-thorough understanding of the many past centuries that all of these things grew out of.

That’s why not only can’t even the most perfect history buff fully understand history, but the key people involved at the time can’t ever know the full story. History is a giant collective tangle of thousands of interwoven stories involving millions of characters, countless chapters, and many, many narrators.
Scooped by Wildcat2030

Can Integrated Information Theory Explain Consciousness?

How does matter make mind? More specifically, how does a physical object generate subjective experiences like those you are immersed in as you read this sentence? How does stuff become conscious? This is called the mind-body problem, or, by philosopher David Chalmers, the “hard problem.”

I expressed doubt that the hard problem can be solved – a position called mysterianism – in The End of Science. I argue in a new edition that my pessimism has been justified by the recent popularity of panpsychism. This ancient doctrine holds that consciousness is a property not just of brains but of all matter, like my table and coffee mug.

Panpsychism strikes me as self-evidently foolish, but non-foolish people—notably Chalmers and neuroscientist Christof Koch—are taking it seriously. How can that be? What’s compelling their interest? Have I dismissed panpsychism too hastily?

These questions lured me to a two-day workshop on integrated information theory at New York University last month. Conceived by neuroscientist Giulio Tononi (who trained under the late, great Gerald Edelman), IIT is an extremely ambitious theory of consciousness. It applies to all forms of matter, not just brains, and it implies that panpsychism might be true. Koch and others are taking panpsychism seriously because they take IIT seriously.

At the workshop, Chalmers, Tononi, Koch and ten other speakers presented their views of IIT, which were then batted around by 30 or so other scientists and philosophers. I’m still mulling over the claims and counter-claims, some of which were dauntingly abstract and mathematical. In this post, I’ll try to assess IIT, based on the workshop and my readings. If I get some things wrong, which is highly likely, I trust workshoppers will let me know.
Scooped by Wildcat2030

‘Battling the Gods: Atheism in the Ancient World,’ by Tim Whitmarsh

The philosopher Sydney Morgenbesser, beloved by generations of Columbia University students (including me), was known for lines of wit that yielded nuggets of insight. He kept up his instructive shtick until the end, remarking to a colleague shortly before he died: “Why is God making me suffer so much? Just because I don’t believe in him?” For Morgenbesser, nothing worth pondering, including disbelief, could be entirely de-paradoxed.

The major thesis of Tim Whitmarsh’s excellent “Battling the Gods” is that atheism — in all its nuanced varieties, even Morgenbesserian — isn’t a product of the modern age but rather reaches back to early Western intellectual tradition in the ancient Greek world.

The period that Whitmarsh covers is roughly 1,000 years, during which the Greek-speaking population emerged from illiteracy and anomie, became organized into independent city-states that spawned a high-achieving culture, were absorbed into the Macedonian Empire and then into the Roman Empire, and finally became Christianized. These momentous political shifts are efficiently traced, with astute commentary on their reflection in religious attitudes.

But the best part of “Battling the Gods” is the Greek chorus of atheists themselves, who speak distinctively throughout each of the political transformations — until, that is, the last of them, when they go silent. If you’ve been paying attention to contemporary atheists you might be startled by the familiarity of the ancient positions.

So here is Democritus in the fifth century B.C. — he who coined the term “atom,” from the Greek for “indivisible,” speculating that reality consisted of nothing but fundamental particles swirling randomly around in the void — propounding an anthropological theory of the origins of religious beliefs. Talk of “the gods,” he argued, comes naturally to primitive people who, unable yet to grasp the laws of nature, resort to fantastical storytelling. The exact titles of his works remain in doubt, but his naturalist explanation of the origins of conventional religion might have made use of Daniel C. Dennett’s title “Breaking the Spell: Religion as a Natural Phenomenon.”
Wildcat2030's insight:

Book review, go read.

Scooped by Wildcat2030

How rivalry propels creative genius – Jacob Burak – Aeon

On 25 May 1832, John Constable was busy adding the final touches to his masterpiece, The Opening of Waterloo Bridge. One of England’s greatest 19th-century landscape artists, he had been working on the painting for more than 10 years and was finally set to reveal it to the world the next day, at the opening of the Royal Academy of Arts’ 64th annual exhibition. Next to his piece hung Helvoetsluys by J M W Turner, an artistic genius in his own right. Watching Constable’s last-minute efforts, Turner decided to add an extra brushstroke of his own: a red buoy floating on the water.

That single daub of red paint against a background of grey sky and sea was so arresting that visitors couldn’t take their eyes off it, certainly not to look at Constable’s painting. It was yet another landmark in the bitter rivalry between the two artists. A year earlier, Constable had used his position in an exhibition committee to have a Turner painting taken down and hung in a side room, replacing it with a painting of his own.
Turner and Constable are not alone in the pantheon of epic rivalries between creative giants. Isaac Newton and Gottfried Leibniz, two of the most brilliant mathematicians and thinkers of the 17th century, laid claim to the development of calculus, the mathematical study of change. Thomas Edison and Nikola Tesla both invented electrical systems in the 1880s. Steve Jobs and Bill Gates went head-to-head as pioneers of the computer age. If you Google almost any famous figure along with ‘rivalry’, you’ll find some interesting results.
Scooped by Wildcat2030

Leiter Reports: A Philosophy Blog: Should there be a Nobel Prize (or equivalent prize) in philosophy?

Russell Blackford (Newcastle) thinks so. If there were, I predict it would end up like the Nobel Prize for Literature: bizarre inclusions and exclusions that tell us more about fashions and politics than about literature. Part of the difficulty will be in deciding what counts as philosophy. Look at Blackford's gloss:

Philosophy is the reason-based, intellectually rigorous, investigation of deep questions that have always caused puzzlement and anxiety: Is there a god or an afterlife? Do we possess free will? What is a good life for a human being? What is the nature of a just society? Philosophy challenges obfuscation and orthodoxies, and extends into examining the foundations of inquiry itself.

Are these "deep questions that have always caused puzzlement and anxiety"? Doubtful. And it's doubtful that all "good" philosophy "challenges obfuscation and orthodoxies": lots of important philosophy just rationalizes orthodoxy (and sometimes contributes to obfuscation).

Would the later Wittgenstein be eligible for a Nobel Prize in philosophy by Blackford's criteria? Not clear at all.
Scooped by Wildcat2030

Meet The Man Who Invents Languages For A Living

If anyone has the credentials to write a book called The Art Of Language Invention, it's David J. Peterson.

He has two degrees in linguistics. He's comfortable speaking in eight languages (English, Spanish, French, German, Russian, Esperanto, Arabic and American Sign Language) — plus a long list of others he's studied but just hasn't tried speaking yet. He's also familiar with fictional languages — both famous ones like Klingon and deep cuts like Pakuni (the caveman language from Land Of The Lost).

And of course, he's crafted languages of his own — including full alphabets, vocabularies and grammars. Game of Thrones viewers, for instance, might recognize two of these languages: Dothraki, a guttural language of warrior horsemen, and High Valyrian, a language spoken in much of the fantasy world's eastern regions.

And he didn't rest there. Peterson actually created a third language for the show — just for a single giant.

"I didn't know beforehand that he was only going to have one line," he laughs. "I thought he was going to have a bunch of stuff, but whatever. I created a full language for the giant."

Peterson also invented Shiväisith for the Marvel blockbuster Thor: The Dark World, and four languages (and counting?) for the SyFy show Defiance.

In a new book, The Art Of Language Invention, Peterson details the languages he's invented, pairing them with lots of (often technical) advice about how readers can create some of their own.
Scooped by Wildcat2030

Home | History of Philosophy without any gaps

Peter Adamson, Professor of Philosophy at the LMU in Munich and at King's College London, takes listeners through the history of philosophy, "without any gaps."
Berta Civera's curator insight, September 28, 2015 8:03 AM

Peter Adamson's History of Philosophy, in English

Scooped by Wildcat2030

How did French thought end up in crisis? – Sudhir Hazareesingh – Aeon

There are many things we have come to regard as quintessentially French: Coco Chanel’s little black dress, the love of fine wines and gastronomy, the paintings of Auguste Renoir, the smell of burnt rubber in the Paris Métro. Equally distinctive is the French mode and style of thinking, which the Irish political philosopher Edmund Burke described in 1790 as ‘the conquering empire of light and reason’. He meant this as a criticism of the French Revolution, but this expression would undoubtedly have been worn as a badge of honour by most French thinkers from the Enlightenment onwards.

Indeed, the notion that rationality is the defining quality of humankind was first celebrated by the 17th-century thinker René Descartes, the father of modern French philosophy. His skeptical method of reasoning led him to conclude that the only certainty was the existence of his own mind: hence his ‘cogito ergo sum’ (‘I think, therefore I am’). This French rationalism was also expressed in a fondness for abstract notions and a preference for deductive reasoning, which starts with a general claim or thesis and eventually works its way towards a specific conclusion – thus the consistent French penchant for grand theories. As the essayist Emile Montégut put it in 1858: ‘There is no people among whom abstract ideas have played such a great role, and whose history is rife with such formidable philosophical tendencies.’