David Chalmers, who coined the phrase “Hard Problem of consciousness,” is arguably the leading modern advocate for the possibility that physical reality needs to be augmented by some kind of additional ingredient in order to explain consciousness—in particular, to account for the kinds of inner mental experience pinpointed by the Hard Problem. One of his favorite tools has been yet another thought experiment: the philosophical zombie.
Unlike undead zombies, which seek out brains and generate movie franchises, philosophical zombies look and behave exactly like ordinary human beings. Indeed, they are perfectly physically identical to non‐zombie people. The difference is that they are lacking in any inner mental experience. We can ask, and be puzzled about, what it is like to be a bat, or another person. But by definition, there is no “what it is like” to be a zombie. Zombies don’t experience.
Perhaps for athletes, a genius is an Olympic medalist. In entertainment, a genius could be defined as an EGOT winner, someone who has won an Emmy, Grammy, Oscar and Tony award. For Mensa, the exclusive international society comprising members of "high intelligence," someone who scores at or above the 98th percentile on an IQ or other standardized intelligence test could be considered genius.
The most common definition of genius falls in line with Mensa's approach: someone with exceptional intelligence.
In his new science series "Genius" on PBS, Stephen Hawking is testing out the idea that anyone can "think like a genius." By posing big questions — for instance, "Can we travel through time?" — to people with average intelligence, the famed theoretical physicist aims to find the answers through the sheer power of the human mind.
"It's a fun show that tries to find out if ordinary people are smart enough to think like the greatest minds who ever lived," Hawking said in a statement. "Being an optimist, I think they will."
Optimism aside, answering a genius-level question does not a genius make — at least, not according to psychologist Frank Lawlis, supervisory testing director for American Mensa.
"The geniuses ask questions. They don't know the answers, but they know a lot of questions and their curiosity takes them into their fields," Lawlis told Live Science. "[They're] somebody that has the capacity to inquire at that high level and to be curious to pursue that high level of understanding and then be able to communicate it to the rest of us."
The importance of loving yourself is a common catchphrase among feel-good gurus and the subject of countless self-help books.
But Harvard University’s Michael Puett argues that loving yourself—and all your flaws—can actually be quite harmful. Puett, who earlier this year published a book on what Chinese philosophy can teach us about the good life, suggests that ancient Chinese philosophers would strongly disapprove of today’s penchant for self-affirmation.
Quartz spoke to Puett as part of an occasional series that attempts to apply serious thinking from the world of philosophy to everyday life. What can great thinkers teach us about how to navigate our career paths? Do people ever really change? Can philosophy inform our search for true love? The Chinese philosophy Puett studies raises questions about whether we should accept and celebrate ourselves as we are or strive to change and improve upon our fundamental nature. And, for that matter, does our “fundamental nature” even exist?
“The common assumption most of us make about the self is that our goal as individuals is to look within, find our true selves, and try to be as authentic and true to ourselves as we can be,” Puett says. “But this assumes we have a stable self.”
By contrast, much of the Chinese philosophical tradition derived from Confucius envisions “the self” as more of a messy product of habit than a clearly-defined inner essence. “From a very young age, we’ll form patterns of responding to the world. Those patterns will harden and become what we mistakenly call a personality,” adds Puett.
Philosophy is often the whipping boy of the supposedly “easy degrees”. No one takes you seriously if you study it, and people assume you can easily get a 2:1 just by turning up.
This couldn’t be further from the truth. Unlike with Geography, where if you bring your 24 pack of Crayola you’re pretty much set, I have to try and decide whether the crayons are actually real or not.
I’m often accused of spending £9,000 a year to sit around and think, and this is largely accurate. Maybe you haven’t tried it for a while, but thinking is actually really hard. With subjects like Economics and History you are spoon-fed theories and facts to learn, with your lecturer and seminar tutor holding your hand all the way. Well done! You did really well in that exam where you wrote down everything you were told to. The difference between your average student and a trained monkey is that the monkey probably dresses slightly better.
In Philosophy you are faced with dilemmas like the trolley problem; there is no economic model that you can plug the information into to get an answer. You have to figure it all out for yourself. As an inherently flawed 21-year-old male, I can barely make myself breakfast in the morning, and am shocked when I make it to the end of each day in one piece. In what sort of world can I be expected to offer anything coherent on the ethical dilemmas that are asked of me in essays and exams? How am I supposed to get a good mark when I am faced with modules such as ‘The Philosophy of Time’?
People who study other degrees are lucky: only in Philosophy are you subjected to an existential crisis every time you set foot in a seminar room.
A poet, somewhere in Siberia, or the Balkans, or West Africa, some time in the past 60,000 years, recites thousands of memorised lines in the course of an evening. The lines are packed with fixed epithets and clichés. The bard is not concerned with originality, but with intonation and delivery: he or she is perfectly attuned to the circumstances of the day, and to the mood and expectations of his or her listeners.
If this were happening 6,000-plus years ago, the poet’s words would in no way have been anchored in visible signs, in text. For the vast majority of the time that human beings have been on Earth, words have had no worldly reality other than the sound made when they are spoken.
As the theorist Walter J Ong pointed out in Orality and Literacy: Technologizing the Word (1982), it is difficult, perhaps even impossible, now to imagine how differently language would have been experienced in a culture of ‘primary orality’. There would be nowhere to ‘look up a word’, no authoritative source telling us the shape the word ‘actually’ takes. There would be no way to affirm the word’s existence at all except by speaking it – and this necessary condition of survival is important for understanding the relatively repetitive nature of epic poetry. Say it over and over again, or it will slip away. In the absence of fixed, textual anchors for words, there would be a sharp sense that language is charged with power, almost magic: the idea that words, when spoken, can bring about new states of affairs in the world. They do not so much describe, as invoke.
As a consequence of the development of writing, first in the ancient Near East and soon after in Greece, old habits of thought began to die out, and certain other, previously latent, mental faculties began to express themselves. Words were now anchored and, though spellings could change from one generation to another, or one region to another, there were now physical traces that endured, which could be transmitted, consulted and pointed to in settling questions about the use or authority of spoken language.
I remember my grandfather commenting—wry amusement tinged with grim resignation—that what made him finally feel old was seeing his children reach middle age. I was a child then. Now I see my own children, not quite middle aged, starting to have children of their own.
Becoming a grandparent is quite lovely, an affirmation of continuity and a front-row-seat to watch (and even, on occasion, participate) as life itself is conveyed into the future. But aging is also our most undeniable memento mori, a reminder not so much of life as one’s own eventual death. My grandfather’s death frightened me as few things have since, except for the recurring recognition (usually at night, alone, in the dark) that his life, everyone’s life, even—astoundingly—my own, is short indeed.
All things, especially living ones, are marinating in the river of time. We see and understand that our bodies will wear out and we will die. At least that’s how it looks through the lens of Western science, where all things come to an end, winding down in a final surrender to entropy. But there’s another perspective, surprisingly in harmony with science, that helps us revisit that huge and ancient terror—fear of time itself—in a new and perhaps even reassuring way. And that is the perspective offered by Buddhism.
For Buddhists, the “center cannot hold,” as the poet W.B. Yeats pointed out, because it doesn’t exist as something rigidly separate from everything else. Nothing is permanent and unchanging, ourselves included. Attempting to cling to a solid, immutable core of a self is a fool’s errand because time not only creates anarchy, it provides the unavoidable matrix within which everything—animate and inanimate, sentient and insensate—ebbs and flows.
Every day, it seems, some verifiably intelligent person tells us that we don’t know what consciousness is. The nature of consciousness, they say, is an awesome mystery. It’s the ultimate hard problem. The current Wikipedia entry is typical: Consciousness “is the most mysterious aspect of our lives”; philosophers “have struggled to comprehend the nature of consciousness.”
I find this odd because we know exactly what consciousness is — where by “consciousness” I mean what most people mean in this debate: experience of any kind whatever. It’s the most familiar thing there is, whether it’s experience of emotion, pain, understanding what someone is saying, seeing, hearing, touching, tasting or feeling. It is in fact the only thing in the universe whose ultimate intrinsic nature we can claim to know. It is utterly unmysterious.
The nature of physical stuff, by contrast, is deeply mysterious, and physics grows stranger by the hour. (Richard Feynman’s remark about quantum theory — “I think I can safely say that nobody understands quantum mechanics” — seems as true as ever.) Or rather, more carefully: The nature of physical stuff is mysterious except insofar as consciousness is itself a form of physical stuff. This point, which is at first extremely startling, was well put by Bertrand Russell in the 1950s in his essay “Mind and Matter”: “We know nothing about the intrinsic quality of physical events,” he wrote, “except when these are mental events that we directly experience.” In having conscious experience, he claims, we learn something about the intrinsic nature of physical stuff, for conscious experience is itself a form of physical stuff.
Here’s a fun exercise: Take a minute and count up all your friends. Not just the close ones, or the ones you’ve seen recently — I mean every single person on this Earth that you consider a pal.
Got a number in your mind? Good. Now cut it in half.
Okay, yes, “fun” may have been a bit of a reach there. But this new, smaller number may actually be more accurate. As it turns out, we can be pretty terrible at knowing who our friends are: In what may be among the saddest pieces of social-psychology research published in quite some time, a study in the journal PLoS One recently made the case that as many as half the people we consider our friends don’t feel the same way.
The study authors gave a survey to 84 college students in the same class, asking each one to rate every other person in the study on a scale of zero (“I do not know this person”) to five (“One of my best friends”), with three as the minimum score needed to qualify for friendship. The participants also wrote down their guesses for how each person would rate them.
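The reciprocity check at the heart of the study can be sketched in a few lines. This is a toy computation with invented names and ratings, not the study's actual data; it only shows the mechanics of counting claimed versus mutual friendships at the threshold of three:

```python
# Hypothetical ratings: ratings[a][b] is a's 0-5 rating of b.
# A score of 3 or higher counts as claiming b as a friend.
ratings = {
    "ana":  {"ben": 5, "cara": 4, "dev": 1},
    "ben":  {"ana": 2, "cara": 1, "dev": 4},
    "cara": {"ana": 4, "ben": 0, "dev": 3},
    "dev":  {"ana": 1, "ben": 1, "cara": 0},
}

THRESHOLD = 3

# Every directed claim of friendship...
claimed = [(a, b) for a, rated in ratings.items()
           for b, score in rated.items() if score >= THRESHOLD]

# ...and the subset where the feeling is mutual.
reciprocated = [(a, b) for (a, b) in claimed
                if ratings[b][a] >= THRESHOLD]

print(f"{len(reciprocated)} of {len(claimed)} claimed friendships are mutual")
```

With these invented numbers, only two of five claimed friendships survive the mutuality check, which is roughly the proportion the study reports.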
Two of Barcelona’s architectural masterpieces are as different as different could be. The Sagrada Família, designed by Antoni Gaudí, is only a few miles from the German Pavilion, built by Mies van der Rohe. Gaudí’s church is flamboyant and complex. Mies’s pavilion is tranquil and simple. Mies, the apostle of minimalist architecture, used the slogan ‘less is more’ to express what he was after. Gaudí never said ‘more is more’, but his buildings suggest that this is what he had in mind.
One reaction to the contrast between Mies and Gaudí is to choose sides based on a conviction concerning what all art should be like. If all art should be simple or if all art should be complex, the choice is clear. However, both of these norms seem absurd. Isn’t it obvious that some estimable art is simple and some is complex? True, there might be extremes that are beyond the pale; we are alienated by art that is far too complex and bored by art that is far too simple. However, between these two extremes there is a vast space of possibilities. Different artists have had different goals. Artists are not in the business of trying to discover the uniquely correct degree of complexity that all artworks should have. There is no such timeless ideal.
Science is different, at least according to many scientists. Albert Einstein spoke for many when he said that ‘it can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience’. The search for simple theories, then, is a requirement of the scientific enterprise. When theories get too complex, scientists reach for Ockham’s Razor, the principle of parsimony, to do the trimming. This principle says that a theory that postulates fewer entities, processes or causes is better than a theory that postulates more, so long as the simpler theory is compatible with what we observe. But what does ‘better’ mean? It is obvious that simple theories can be beautiful and easy to understand, remember and test. The hard problem is to explain why the fact that one theory is simpler than another tells you anything about the way the world is.
The results of two Yale University psychology experiments suggest that what we believe to be a conscious choice may actually be constructed, or confabulated, unconsciously after we act — to rationalize our decisions. A trick of the mind.
“Our minds may be rewriting history,” said Adam Bear, a Ph.D. student in the Department of Psychology and lead author of a paper published April 28 in the journal Psychological Science.
Bear and Paul Bloom performed two simple experiments to test how we experience choices. In one experiment, participants were told that five white circles would appear on the computer screen in front of them and, in rapid-fire sequence, one would turn red. They were asked to predict which one would turn red and mentally note this. After a circle turned red, participants then recorded by keystroke whether they had chosen correctly, had chosen incorrectly, or had not had time to complete their choice.
The circle that turned red was always selected by the system randomly, so probability dictates that participants should predict the correct circle 20% of the time. But when they only had a fraction of a second to make a prediction, these participants were likely to report that they correctly predicted which circle would change color more than 20% of the time.
In contrast, when participants had more time to make their guess — approaching a full second — the reported number of accurate predictions dropped back to expected levels of 20% success, suggesting that participants were not simply lying about their accuracy to impress the experimenters.
(In a second experiment to eliminate artifacts, participants chose one of two different-colored circles, with similar results.)
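The 20% chance baseline is easy to verify with a quick simulation. This is only a sketch of pure guessing among five circles, not the experimenters' analysis:

```python
import random

# With five circles and a uniformly random target, a pure guess
# should match the red circle about one time in five.
random.seed(0)

TRIALS = 100_000
hits = sum(
    random.randrange(5) == random.randrange(5)  # guess vs. randomly chosen red circle
    for _ in range(TRIALS)
)

print(f"chance accuracy: {hits / TRIALS:.3f}")  # close to 0.20
```

Any self-reported accuracy reliably above this baseline, as in the short-deadline condition, therefore needs an explanation, such as the after-the-fact rewriting the authors propose.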
In July 1656, the 23-year-old Bento de Spinoza was excommunicated from the Portuguese-Jewish congregation of Amsterdam. It was the harshest punishment of herem (ban) ever issued by that community. The extant document, a lengthy and vitriolic diatribe, refers to the young man’s ‘abominable heresies’ and ‘monstrous deeds’. The leaders of the community, having consulted with the rabbis and using Spinoza’s Hebrew name, proclaim that they hereby ‘expel, excommunicate, curse, and damn Baruch de Spinoza’. He is to be ‘cast out from all the tribes of Israel’ and his name is to be ‘blotted out from under heaven’.
Over the centuries, there have been periodic calls for the herem against Spinoza to be lifted. Even David Ben-Gurion, when he was prime minister of Israel, issued a public plea for ‘amending the injustice’ done to Spinoza by the Amsterdam Portuguese community. It was not until early 2012, however, that the Amsterdam congregation, at the insistence of one of its members, formally took up the question of whether it was time to rehabilitate Spinoza and welcome him back into the congregation that had expelled him with such prejudice. There was, though, one thing that they needed to know: should we still regard Spinoza as a heretic?
Unfortunately, the herem document fails to mention specifically what Spinoza’s offences were – at the time he had not yet written anything – and so there is a mystery surrounding this seminal event in the future philosopher’s life. And yet, for anyone who is familiar with Spinoza’s mature philosophical ideas, which he began putting in writing a few years after the excommunication, there really is no such mystery. By the standards of early modern rabbinic Judaism – and especially among the Sephardic Jews of Amsterdam, many of whom were descendants of converso refugees from the Iberian Inquisitions and who were still struggling to build a proper Jewish community on the banks of the Amstel River – Spinoza was a heretic, and a dangerous one at that.
As we go about our daily lives, we tend to assume that our perceptions — sights, sounds, textures, tastes — are an accurate portrayal of the real world. Sure, when we stop and think about it — or when we find ourselves fooled by a perceptual illusion — we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it wasn’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.
Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.
Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.”
On April 6, 1922, Einstein met a man he would never forget. He was one of the most celebrated philosophers of the century, widely known for espousing a theory of time that explained what clocks did not: memories, premonitions, expectations, and anticipations. Thanks to him, we now know that to act on the future one needs to start by changing the past. Why does one thing not always lead to the next? The meeting had been planned as a cordial and scholarly event. It was anything but that. The physicist and the philosopher clashed, each defending opposing, even irreconcilable, ways of understanding time. At the Société française de philosophie—one of the most venerable institutions in France—they confronted each other under the eyes of a select group of intellectuals. The “dialogue between the greatest philosopher and the greatest physicist of the 20th century” was dutifully written down. It was a script fit for the theater. The meeting, and the words they uttered, would be discussed for the rest of the century.

The philosopher’s name was Henri Bergson. In the early decades of the century, his fame, prestige, and influence surpassed that of the physicist—who, in contrast, is so well known today. Bergson was compared to Socrates, Copernicus, Kant, Simón Bolívar, and even Don Juan. The philosopher John Dewey claimed that “no philosophic problem will ever exhibit just the same face and aspect that it presented before Professor Bergson.” William James, the Harvard professor and famed psychologist, described Bergson’s Creative Evolution (1907) as “a true miracle,” marking the “beginning of a new era.”
Bryan Magee (1930 – ) has had a multifaceted career as a philosophy professor, music and theater critic, BBC broadcaster, public intellectual, and member of Parliament. He has starred in two acclaimed television series about philosophy: Men of Ideas (1978) and The Great Philosophers (1987). He is best known as a popularizer of philosophy. His easy-to-read books, which have been translated into more than twenty languages, include:
Confessions of a Philosopher: A Personal Journey Through Western Philosophy from Plato to Popper; The Great Philosophers: An Introduction to Western Philosophy; Talking Philosophy: Dialogues with Fifteen Leading Philosophers; Philosophy and the Real World: An Introduction to Karl Popper; The Story of Philosophy: 2,500 Years of Great Thinkers from Socrates to the Existentialists and Beyond; Men of Ideas.
Now, at age 86, he has written Ultimate Questions, a summary of a lifetime of thinking about “the fundamentals of the human condition.” Its basic theme is that we know little about the human condition, since reality comes to us filtered through the senses and the limitations of our intellect and language. And the most honest response to this predicament is agnosticism.
Magee begins by considering that “What we call civilization has existed for something like six thousand years.” If you remember that there have always been some individuals who have lived a hundred years, this means that “the whole of civilization has occurred within the successive lifetimes of sixty people …” Furthermore, “most people are as provincial in time as they are in space: they huddle down into their time and regard it as their total environment…” They don’t think about the little sliver of time and space that they occupy. Thus begins this meditation on agnosticism.
Furthermore, we are ignorant of our ultimate nature: “We, who do not know what we are, have to fashion lives for ourselves in a universe of which we know little and understand less.” Yet this situation doesn’t lead Magee to despair. Instead he calls for “an active agnosticism,” which is “a positive principle of procedure, an openness to the fact that we do not know, followed by intellectually honest enquiry in full receptivity of mind.” If he had to choose a tag, he says, it would be “the agnostic.”
Many people cheat on taxes—no mystery there. But many people don’t, even if they wouldn’t be caught—now, that’s weird. Or is it? Psychologists are deeply perplexed by human moral behavior, because it often doesn’t seem to make any logical sense. You might think that we should just be grateful for it. But if we could understand these seemingly irrational acts, perhaps we could encourage more of them.
It’s not as though people haven’t been trying to fathom our moral instincts; it is one of the oldest concerns of philosophy and theology. But what distinguishes the project today is the sheer variety of academic disciplines it brings together: not just moral philosophy and psychology, but also biology, economics, mathematics, and computer science. They do not merely contemplate the rationale for moral beliefs, but study how morality operates in the real world, or fails to. David Rand of Yale University epitomizes the breadth of this science, ranging from abstract equations to large-scale societal interventions. “I’m a weird person,” he says, “who has a foot in each world, of model-making and of actual experiments and psychological theory building.”
It was going to be the biggest presentation of my life — my first appearance on the TED Conference main stage — and I had already thrown out seven drafts. Searching for a new direction, I asked colleagues and friends for suggestions. “The most important thing,” the first one said, “is to be yourself.” The next six people I asked gave me the same tip.
We are in the Age of Authenticity, where “be yourself” is the defining advice in life, love and career. Authenticity means erasing the gap between what you firmly believe inside and what you reveal to the outside world. As Brené Brown, a research professor at the University of Houston, defines it, authenticity is “the choice to let our true selves be seen.”
We want to live authentic lives, marry authentic partners, work for an authentic boss, vote for an authentic president. In university commencement speeches, “Be true to yourself” is one of the most common themes (behind “Expand your horizons,” and just ahead of “Never give up”).
“I certainly had no idea that being your authentic self could get you as rich as I have become,” Oprah Winfrey said jokingly a few years ago. “If I’d known that, I’d have tried it a lot earlier.”
But for most people, “be yourself” is actually terrible advice.
If I can be authentic for a moment: Nobody wants to see your true self. We all have thoughts and feelings that we believe are fundamental to our lives, but that are better left unspoken.
A decade ago, the author A. J. Jacobs spent a few weeks trying to be totally authentic. He announced to an editor that he would try to sleep with her if he were single and informed his nanny that he would like to go on a date with her if his wife left him. He informed a friend’s 5-year-old daughter that the beetle in her hands was not napping but dead. He told his in-laws that their conversation was boring. You can imagine how his experiment worked out.
“Deceit makes our world go round,” he concluded. “Without lies, marriages would crumble, workers would be fired, egos would be shattered, governments would collapse.”
How much you aim for authenticity depends on a personality trait called self-monitoring. If you’re a high self-monitor, you’re constantly scanning your environment for social cues and adjusting accordingly. You hate social awkwardness and desperately want to avoid offending anyone.
But if you’re a low self-monitor, you’re guided more by your inner states, regardless of your circumstances. In one fascinating study, when a steak landed on their plates, high self-monitors tasted it before pouring salt, whereas low self-monitors salted it first. As the psychologist Brian Little explains, “It is as though low self-monitors know their salt personalities very well.”
When I began studying how animals swim, I didn’t feel much like a physicist. I’d just finished my bachelor’s in physics during which time I’d been taught that physicists work on one of a handful of buzzwords: quantum mechanics, cosmology, gauge theory, and so on. To see if graduate school was right for me, I shadowed a friendly research group at the University of California, San Diego—but they didn’t study any of these buzzwords. They used high-powered mathematics to understand things like the locomotion of snails, worms, and microorganisms.
I was grateful for the opportunity, and I thought the problems they studied were beautiful and interesting—just not fundamental physics. As I became more involved in the group, this distinction grew into an identity crisis. Theoretical physicists are kind of like artists, or athletes: If you feel yourself drifting further from Klee or Peyton Manning, it can seem like a catastrophe. I thought I could feel Einstein and Feynman looking down at me and frowning as I took a turn down the wrong path.
It would take some impressive feats by microorganisms to convince me that they were as sexy as smashing atoms together—and they did not fail to deliver. Some are capable of shooting small needles, or even segments of their DNA, which accelerate about 1,000 times faster than a space shuttle launch; others share genetic information with their unrelated neighbors, forming an Internet many millennia older than ours; many outlive us, despite being 1 million times smaller. Even more interesting was that microorganisms do not obey Newton’s laws, which govern basic motion and are a pillar of classical physics.
These remarkable facts changed not only how I perceived bacteria, but how I defined what it meant to be a physicist.
Few people have influenced contemporary philosophy of mind as profoundly as the late Hilary Putnam. One of his best known contributions was the formulation of functionalism. As he understood it, functionalism claims that mental states are functional states—postulates of abstract descriptions, like those employed in computer science, which ignore a system’s physical details and focus instead on the ways it correlates inputs with outputs. Psychological descriptions in particular focus on the ways a system correlates sensory inputs with behavioral outputs, and mental states are the internal states that correlate the two.
By the mid-1970s functionalism had become the dominant outlook in philosophy of mind. But Putnam, showing his characteristic independence of mind, became dissatisfied with the view. He did not retreat to substance dualism or idealism. He was convinced that we are physical beings whose capacities are essentially embodied in the physical mechanisms that compose us, yet he was also a committed antireductionist. He denied that physics, chemistry, and neuroscience could yield an exhaustive account of what we are and what we can do. In articulating a pro-physical yet anti-reductive view along these lines, Putnam found inspiration in a new source: Aristotle. Aristotle’s ideas had been dismissed in many quarters of the philosophical world as expressions of a bygone pre-scientific age. But Putnam saw through the dismissive haze to the empirically and philosophically-respectable core of Aristotle’s philosophy, ‘hylomorphism’.
One of the interesting questions we face as philosophers who are attempting to make philosophical ideas accessible for a general audience, is whether or not everyone can or should ‘do philosophy’.
Some philosophers wish to leave philosophy in the academy or university setting, whereas others claim that the downfall of modern philosophy came in the late 19th century, when the subject was institutionalized within the research university. By treating philosophy as appropriate only for serious academic study, philosophers have lost much widespread support and public recognition of its value.
Philosophers working in the public arena, such as those contributing to The Conversation and Cogito Philosophy Blog, will defend the argument in favour of ‘philosophy for everyone’.

Bertrand Russell’s ‘Philosophy for Laymen’
In 1946 Bertrand Russell wrote an essay entitled ‘Philosophy for Laymen’, in which he defends the view that philosophy should be ‘a part of general education’. He proposes that,
even in the time that can easily be spared without injury to the learning of technical skills, philosophy can give certain things that will greatly increase the student’s value as a human being and as a citizen.
Clare Carlisle refers to Russell when she writes,
Russell revives an ancient conception of philosophy as a way of life in insisting that questions of cosmic meaning and value have an existential, ethical and spiritual urgency. (Of course, what we might mean by such terms is another issue for philosophers to grapple with.)
The astrophysicist and author Janna Levin has two main offices: One at Barnard College of Columbia University, where she is a professor, and a studio space at Pioneer Works, a “center for art and innovation” in Brooklyn where Levin works alongside artists and musicians in an ever-expanding role as director of sciences. Beneath the rafters on the third floor of the former ironworks factory that now houses Pioneer Works, her studio is decorated (with props from a film set) like a speakeasy. There’s a bar lined with stools, a piano, a trumpet and, on the wall that serves as Levin’s blackboard, a drink rail underlining a mathematical description of a black hole spinning in a magnetic field. Whether Levin is writing words or equations, she finds inspiration just outside her gallery window, where a giant cloth-and-paper tree trunk hangs from the ceiling almost to the factory floor three stories below.
“Science is just an absolutely intrinsic part of culture,” said Levin, who runs a residency program for scientists, holds informal “office hours” for the artists and other residents, and hosts Scientific Controversies — a discussion series with a disco vibe that attracts standing-room-only crowds. “We don’t see it as different.”
Levin lives in accordance with this belief. She conducted research on the question of whether the universe is finite or infinite, then penned a book about her life and this work (written as letters to her mother) at the start of her physics career. She has also studied the limits of knowledge, ideas that found their way into her award-winning novel about the mathematicians Alan Turing and Kurt Gödel.
Having a positive attitude could be an evolutionary advantage, say researchers.
The finding, from simulated generations of evolution in a computational model, supports ancient philosophical insights from China, Greece, and India that encourage cultivating long-term contentment—not the fleeting joys of instant gratification.
“In an evolutionary sense, you have to evaluate your life on the basis of more than what happened just now. Because usually what happens right now is you go hungry,” says Shimon Edelman, professor of psychology at Cornell University and a coauthor of the study in PLOS ONE.
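The study’s actual model is not reproduced here, but the core idea can be illustrated with a toy evolutionary simulation. In this sketch (all parameters, the foraging rule, and the selection scheme are invented for illustration, not taken from the paper), agents who judge their situation by a long-run view keep foraging through frequent failures, while agents driven by the most recent outcome give up after a single bad result; fitness-proportional selection then favours the long-term evaluators.

```python
import random

random.seed(0)

FOOD_P = 0.1          # chance any single foraging attempt succeeds ("usually you go hungry")
ATTEMPTS = 50         # foraging opportunities per lifetime
POP = 200             # population size
GENERATIONS = 30

def lifetime_food(long_term: bool) -> int:
    """Food gathered over one lifetime.

    'Instant' agents quit after the first failure (their mood tracks only
    the latest outcome); 'long-term' agents keep foraging because they
    judge life by more than what happened just now.
    """
    food = 0
    for _ in range(ATTEMPTS):
        if random.random() < FOOD_P:
            food += 1
        elif not long_term:
            break  # instant-gratification agent gives up after one failure
    return food

# Start from a 50/50 population and apply fitness-proportional selection.
pop = [True] * (POP // 2) + [False] * (POP // 2)  # True = long-term evaluator
for _ in range(GENERATIONS):
    scored = [(lifetime_food(agent) + 0.01, agent) for agent in pop]  # +0.01 avoids zero weights
    weights = [w for w, _ in scored]
    agents = [a for _, a in scored]
    pop = random.choices(agents, weights=weights, k=POP)

share = sum(pop) / POP
print(f"long-term evaluators after {GENERATIONS} generations: {share:.0%}")
```

Under these (arbitrary) settings the long-term evaluators quickly come to dominate the population, which is the qualitative pattern the researchers describe: selection rewards agents whose evaluation of life extends beyond the last moment.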
Unlike any other empirical object in Nature, the mind's presence is immediately apparent to itself, but opaque to all external observers. —George Makari, Soul Machine, 2015
My life, as well as this column, is dedicated to understanding the conscious mind and how it relates to the brain. This presupposes that you, the reader, and I have a precise sense of what is referred to by such seemingly innocent terms as “consciousness” and “mind.” And lest it be forgotten, the allied concept of “soul” (or spirit), banned from scientific discourse, continues to remain profoundly meaningful to vast throngs of humankind here and abroad.
But there's the rub! Unlike such material objects as “egg,” “dog” or “brain,” this triptych of intangible concepts is a historical construct, endowed with a universe of religious, metaphysical, cultural and scientific meaning, as well as an array of underlying assumptions, some clearly articulated, others wholly ignored. These meanings adapt over time as society changes in response to wars and revolutions, catastrophes, trade and treaties, invention and discovery. Psychiatrist and historian George Makari tries to illuminate this historical evolution in his Soul Machine: The Invention of the Modern Mind, published last November by W. W. Norton. His intellectual history masterfully describes how consciousness, mind and soul are shape-shifters that philosophers, theologians, scholars, scientists and physicians seek to tame, by conceptualizing, defining, reifying, denying and redefining these terms through the ages to come to grips with the mystery that is our inner life.
Most of us think it’s a bad thing to die. I certainly don’t want to die any time soon, and you probably don’t either. There are, of course, exceptions. Some people actively want to die. They might be unbearably lonely, or in chronic pain, or gradually sliding into senile dementia that will destroy their intellect without remainder. And there might be no prospect of improvement. They wake up every morning disappointed to find that they haven’t died in their sleep. In these cases, it might be better to die than to continue a life not worth living. But most of the time death is unwelcome, and we do all we can to avoid it.
Death is bad not only for those left behind. If I were to die today, my loved ones would be grief-stricken, my son would be orphaned, and my colleagues would have to mark my students’ exams. That would be terrible for them. But death would be terrible for me, too. Much as I care about my colleagues’ wellbeing, I have my own selfish reasons for staying alive. And this isn’t peculiar to me. When people die, we feel sorry for them, and not merely for ourselves at losing them – especially if death takes them when they’re young and full of promise. We consider it one of the worst things that can happen to someone.
This would be easy to understand if death were followed by a nasty time in the hereafter. It could be that death is not the end of us, but merely a transition from one sort of existence to another. We might somehow carry on in a conscious state after we die, in spite of the decay and dissolution that takes place in the grave. I might be doomed to eternal torment in hell. That would obviously be bad for me: it would make me worse off than I am now. But what if there is no hereafter? What if death really is the end – we return to the dust from which we came and that’s it? Then death can’t make us worse off than we are now. Or at least not in the straightforward way that burning in hell could make us worse off. To be dead is not to exist at all, and there’s nothing unpleasant about that. No one minds being dead. The dead never complain, and not merely because their mouths have stopped working. They are simply no longer there to be unhappy.
We used to think that our fate was in the stars. Now we know, in large measure, our fate is in our genes.
When the Nobel laureate and co-discoverer of the DNA double helix James Watson made his famous statement in 1989, he was implying that access to a person’s genetic code allows you to predict the outcome of their life.
The troubling implications were not lost on people, of course. A few years later they were explored in the American film Gattaca, which depicted a near-future civilisation that had embraced this kind of genetic determinism. It was a world in which most people are conceived in test tubes, and taken to term only if they pass genetic tests designed to prevent them from inheriting imperfections ranging from baldness to serious genetic diseases.
In a world dominated by these so-called “valids”, the film was a warning about the dangers of our technological advancement. As it turns out, it was probably too optimistic about the predictive power of genetics. Yet too few people seem to have got that message, and this kind of mistaken thinking about the links between genes and traits is having unsettling consequences of its own.