Professor Stephen Hawking has warned that the creation of powerful artificial intelligence will be “either the best, or the worst thing, ever to happen to humanity”, and praised the creation of an academic institute dedicated to researching the future of intelligence as “crucial to the future of our civilisation and our species”.
Hawking was speaking at the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, a multi-disciplinary institute that will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research.
“We spend a great deal of time studying history,” Hawking said, “which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”
Instead of going to traditional psychotherapists for advice and support, growing numbers of people are turning to philosophical counselors for particularly wise guidance. These counselors work much like traditional psychotherapists. But instead of offering solutions based solely on their understanding of mental health or psychology, philosophical counselors offer solutions and guidance drawn from the writings of great thinkers.
Millennia of philosophical studies can provide practical advice for those experiencing difficulties: There’s an entire field of philosophy that explores moral issues; stoic philosophers show us how to weather hardship; the existentialists advise on anxiety; and Aristotle was one of the first thinkers to question what makes a “good life.” All these topics make up a good chunk of any therapy session, philosophical or otherwise.
Philosophical counseling has been available since the early 1990s, when Elliot Cohen came up with the idea and founded the National Philosophical Counseling Association (NPCA) with around 20 counselors. The NPCA’s website suggests writer’s block, job loss, procrastination, and rejection are all appropriate subjects for philosophical guidance. (However, counselors will refer clients to a psychiatrist if they think they’re suffering from a serious mental health issue.) Clients pay about $100 a session for philosophically guided advice, and each session lasts roughly an hour.
“I saw so many people who had all these problems of living that seemed to be amenable to the thinking that students do in Philosophy 101 and Introduction to Logic,” Cohen says. He often draws on French existentialist Jean-Paul Sartre, who believed that you are nothing more than your own actions. “If you don’t act, you don’t define yourself and you don’t become anything but a disappointed dream or expectation,” he adds.
For us, two professors, the opening words of Goethe’s Faust have always been slightly disturbing, but only recently, as we’ve grown older, have they come to haunt us.
Faust sits in his dusty library, surrounded by tomes, and laments the utter inadequacy of human knowledge. He was no average scholar but a true savant — a master in the liberal arts of philosophy and theology and the practical arts of jurisprudence and medicine. In the medieval university, those subjects were the culminating moments of a lifetime of study in rhetoric, logic, grammar, arithmetic, geometry, music, and astronomy.
In other words, Faust knows everything worth knowing. And still, after all his careful bookwork, he arrives at the unsettling realization that none of it has really mattered. His scholarship has done pitifully little to unlock the mystery of human life.
Are we and our students in that same situation? Are we teaching them everything without teaching them anything regarding the big questions that matter most? Is there a curriculum that addresses why we are here? And why we live only to suffer and die?
Those questions are at the root of every great myth and wisdom tradition: the Katha Upanishad, the opening lines of the Bhagavad Gita, Sophocles’ Ajax, and the Book of Job among them. Job cries to the heavens, entreating God to clarify the tortuous perplexity of being human. But God does not oblige, and Job is left in a whirlwind, in the dark, just like Faust at the beginning of Goethe’s modern remake of the ancient biblical story.
Even if you’re not fluent in other languages, you may still recognise some of their words. The German for water is ‘Wasser’, the Dutch is ‘water’ and the Serbian is ‘voda’: similar sounds and letters form the word across languages.
Looking into this phenomenon, researchers at Cornell’s Cognitive Neuroscience Lab in the US have found that we use similar sounds in words for common objects and ideas, suggesting that humans may, in some sense, speak the same language.
By analysing around 40-100 basic vocabulary words in around 3,700 languages, approximately 62 per cent of the world’s current languages, the researchers concluded that common sounds recur in words for basic concepts such as body parts or aspects of the natural world. The research was published in the journal Proceedings of the National Academy of Sciences.
Body parts in particular stood out. The word ‘nose’ was likely to include the sound ‘neh’ or the ‘oo’ sound, as in ‘ooze’. The words ‘knee’, ‘bone’ and ‘breasts’ were also similar across the language spectrum, and the word for tongue was likely to include an ‘l’, as in the French ‘langue’.
The words ‘red’ and ‘round’ were more likely to include the ‘r’ sound, while ‘leaf’ tended to include the sounds ‘l’, ‘b’ or ‘p’. The words ‘bite’, ‘dog’, ‘star’ and ‘water’ also stood out for their strongly similar sounds across languages.
Certain words were also found to avoid specific sounds. Words for ‘I’, for example, were unlikely to include the sounds ‘b’, ‘l’, ‘p’, ‘r’, ‘s’, ‘t’ or ‘u’.
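The method can be pictured with a tiny sketch. The word list below is a hypothetical stand-in (the real study analysed word lists across roughly 3,700 languages and compared sounds, not spellings), but it shows the kind of per-concept sound-frequency counting involved:

```python
from collections import Counter

# Hypothetical words for 'nose' in a handful of languages; letters
# are a crude stand-in for the sounds the study actually compared.
nose_words = ['nose', 'nase', 'neus', 'nez', 'nos', 'anf']

def sound_frequencies(words):
    """Fraction of words in which each symbol appears at least once."""
    counts = Counter()
    for word in words:
        counts.update(set(word))  # count each symbol once per word
    return {sound: n / len(words) for sound, n in counts.items()}

freqs = sound_frequencies(nose_words)
# A sound common to every word (here 'n') gets a frequency of 1.0.
# Comparing such frequencies against chance, concept by concept,
# is the kind of signal the researchers looked for.
```

The actual analysis, of course, had to control for language relatedness and borrowing; this only illustrates the counting step.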
Some days, you might feel like a pretty substantial person. Maybe you have a lot of friends, or an important job, or a really big car.
But it might humble you to know that all of those things – your friends, your office, your really big car, you yourself, and even everything in this incredible, vast Universe – are almost entirely (99.9999999 percent) empty space.
Here’s the deal. As I previously wrote in a story for the particle physics publication Symmetry, the size of an atom is governed by the average location of its electrons: how much space there is between the nucleus and the atom’s amorphous outer shell.
Nuclei are around 100,000 times smaller than the atoms they’re housed in.
If the nucleus were the size of a peanut, the atom would be about the size of a baseball stadium. If we lost all the dead space inside our atoms, we would each be able to fit into a particle of dust, and the entire human species would fit into the volume of a sugar cube.
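The scaling above can be checked with a few lines of arithmetic. The peanut size used here is an illustrative assumption; only the 100,000:1 linear ratio comes from the text:

```python
# Linear size ratio from the article: the nucleus is ~100,000 times
# smaller than the atom that houses it.
nucleus_to_atom = 1 / 100_000

# Volume scales as the cube of linear size, so the nucleus fills only
# about one part in 10^15 of the atom's volume; the rest is "empty".
volume_fraction = nucleus_to_atom ** 3

# Peanut-to-stadium check: scale a ~1 cm "nucleus" up by the same
# linear ratio. The result, about a kilometre, matches the article's
# stadium comparison to order of magnitude.
scaled_atom_metres = 0.01 * 100_000
```

The same cube-law reasoning is what lets the whole species, stripped of its empty space, fit into a sugar cube.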
So then where does all our mass come from?
Energy! At a pretty basic level, we’re all made of atoms, which are made of electrons, protons, and neutrons.
And at an even more basic, or perhaps the most basic level, those protons and neutrons, which hold the bulk of our mass, are made of a trio of fundamental particles called quarks.
But, as I explained in Symmetry, the mass of these quarks accounts for just a tiny fraction of the mass of the protons and neutrons. And gluons, which hold these quarks together, are completely massless.
A lot of scientists think that almost all the mass of our bodies comes from the kinetic energy of the quarks and the binding energy of the gluons.
So if all of the atoms in the Universe are almost entirely empty space, why does anything feel solid?
The idea of empty atoms huddling together, composing our bodies and buildings and trees might be a little confusing.
If our atoms are mostly space, why can’t we pass through things like weird ghost people in a weird ghost world? Why don’t our cars fall through the road, through the centre of the Earth, and out the other side of the planet? Why don’t our hands glide through other hands when we give out high fives? It’s time to reexamine what we mean by empty space. Because as it turns out, space is never truly empty. It’s actually full of a whole fistful of good stuff, including wave functions and invisible quantum fields.
What defines who we are? Our habits? Our aesthetic tastes? Our memories? If pressed, I would answer that if there is any part of me that sits at my core, that is an essential part of who I am, then surely it must be my moral center, my deep-seated sense of right and wrong.
And yet, like many other people who speak more than one language, I often have the sense that I’m a slightly different person in each of my languages—more assertive in English, more relaxed in French, more sentimental in Czech. Is it possible that, along with these differences, my moral compass also points in somewhat different directions depending on the language I’m using at the time?
Psychologists who study moral judgments have become very interested in this question. Several recent studies have focused on how people think about ethics in a non-native language—as might take place, for example, among a group of delegates at the United Nations using a lingua franca to hash out a resolution. The findings suggest that when people are confronted with moral dilemmas, they do indeed respond differently when considering them in a foreign language than when using their native tongue.
In a 2014 paper led by Albert Costa, volunteers were presented with a moral dilemma known as the “trolley problem”: imagine that a runaway trolley is careening toward a group of five people standing on the tracks, unable to move. You are next to a switch that can shift the trolley to a different set of tracks, thereby sparing the five people, but resulting in the death of one who is standing on the side tracks. Do you pull the switch?
Other people going about their business? Rooms with tables and chairs? Nature with its sky, grass and trees?
All that stuff, it's really there, right? Even if you were to disappear right now — poof! — the rest of the world would still exist in all forms you're seeing now, right?
Or would it?
This kind of metaphysical question is something you'd expect in a good philosophy class — or maybe even a discussion of quantum physics. But most of us wouldn't expect an argument denying the reality of the objective world to come out of evolutionary biology. After all, doesn't evolution tell us we've been tuned to reality by billions of years of natural selection? It makes sense that creatures that can't tell a poisonous snake from a stick shouldn't last long and, therefore, shouldn't pass their genes on to the next generation.
That is certainly how the standard argument goes. But Donald Hoffman, a cognitive scientist, isn't buying it.
For decades, Hoffman, a professor at the University of California, Irvine, has been studying the links between evolution, perception and intelligence (both natural and machine). Based on that body of work, he thinks we've been missing something fundamental when it comes to fundamental reality.
Fundamentally, Hoffman argues, evolution and reality (the objective kind) have almost nothing to do with each other.
Hoffman's been making a lot of news in recent months with these claims. His March 2015 TED talk went viral, gaining more than 2 million views. After a friend sent me the video, I was keen to learn more. I called Dr. Hoffman, and he graciously set aside some time for us to talk. What followed was a delightful conversation with a guy who does, indeed, have a big radical idea. At the same time, Hoffman doesn't come off as someone with an ax to grind. He seems genuinely open and truly curious. At his core, Hoffman says, he's a scientist with a theory that must either live or die by data.
So, what exactly is Hoffman's big radical idea? He begins with a precisely formulated theorem:
"Given an arbitrary world and arbitrary fitness functions, an organism that sees reality as it is will never be more fit than an organism of equal complexity that sees none of reality but that is just tuned to fitness."
Philosophers of science are not known for agreeing with each other—contrariness is part of the job description. But for thousands of years, from Aristotle to Thomas Kuhn, those who study what science is have roughly categorized themselves into two basic camps: “realists” and “anti-realists.”
In philosophical terms, “anti-realists” or “empiricists” understand science as investigating the properties of observable objects via experiments. Empirical theories are constrained by the experimental results. “Realists,” on the other hand, speculate more freely about the possible shape of the unobservable world, often designing mathematical explanations that cannot (yet) be tested. Isaac Newton was a realist, as are string theorists.
Most scientists do not lose sleep worrying about philosophical divides. But maybe they should; Albert Einstein certainly did, as did Niels Bohr and Erwin Schrödinger. In the 20th century, Kuhn’s cataloguing of the “paradigmatic” nature of scientific revolutions entered the scientific consciousness, as did Karl Popper’s requirement that only theories that can in principle be shown to be false count as scientific. “God exists,” for example, is not falsifiable.
But outside the halls of the academy, the influential works of philosophers of science, such as Rudolf Carnap, Wilfrid Sellars, Paul Feyerabend, and Bas C. van Fraassen, to list but a few, are little known to many scientists and the public.
Do you think racial stereotypes are false? Are you sure? I’m not asking if you’re sure whether or not the stereotypes are false, but if you’re sure whether or not you think that they are. That might seem like a strange question. We all know what we think, don’t we?
Most philosophers of mind would agree, holding that we have privileged access to our own thoughts, which is largely immune from error. Some argue that we have a faculty of ‘inner sense’, which monitors the mind just as the outer senses monitor the world. There have been exceptions, however. The mid-20th-century behaviourist philosopher Gilbert Ryle held that we learn about our own minds, not by inner sense, but by observing our own behaviour, and that friends might know our minds better than we do. (Hence the joke: two behaviourists have just had sex and one turns to the other and says: ‘That was great for you, darling. How was it for me?’) And the contemporary philosopher Peter Carruthers proposes a similar view (though for different reasons), arguing that our beliefs about our own thoughts and decisions are the product of self-interpretation and are often mistaken.
Evidence for this comes from experimental work in social psychology. It is well established that people sometimes think they have beliefs that they don’t really have. For example, if offered a choice between several identical items, people tend to choose the one on the right. But when asked why they chose it, they confabulate a reason, saying they thought the item was a nicer colour or better quality. Similarly, if a person performs an action in response to an earlier (and now forgotten) hypnotic suggestion, they will confabulate a reason for performing it. What seems to be happening is that the subjects engage in unconscious self-interpretation. They don’t know the real explanation of their action (a bias towards the right, hypnotic suggestion), so they infer some plausible reason and ascribe it to themselves. They are not aware that they are interpreting, however, and make their reports as if they were directly aware of their reasons.
Many other studies support this explanation. For example, if people are instructed to nod their heads while listening to a tape (in order, they are told, to test the headphones), they express more agreement with what they hear than if they are asked to shake their heads. And if they are required to choose between two items they previously rated as equally desirable, they subsequently say that they prefer the one they had chosen. Again, it seems, they are unconsciously interpreting their own behaviour, taking their nodding to indicate agreement and their choice to reveal a preference.
“The question of being is the darkest in all philosophy.” So concluded William James in thinking about that most basic of riddles: how did something come from nothing? The question infuriates, James realized, because it demands an explanation while denying the very possibility of explanation. “From nothing to being there is no logical bridge,” he wrote.
In science, explanations are built of cause and effect. But if nothing is truly nothing, it lacks the power to cause. It’s not simply that we can’t find the right explanation—it’s that explanation itself fails in the face of nothing.
Imagine a Martian zoologist, visiting Earth and observing Homo sapiens for the first time. He, she or it would see a species of primate that differs from the others in many ways, all of them involving our complex cultural, intellectual, linguistic, symbolic and technological lifestyles. But looking at us through a zoologist’s lens, our observer wouldn’t be especially impressed. To be sure, we have some distinctive anatomical traits (mostly hairless, bipedal, big brains, non-prognathic jaws, unimpressive teeth, and so forth) but being unique isn’t itself unique. Every species is special in its own way.
Among our catalogue of not-so-special traits would be the fact that men are on the whole larger than women: about 7 per cent taller and 15 per cent heavier, with this difference somewhat greater when it comes to muscularity. Also notable: men outnumber women when it comes to lethal violence by a factor of roughly 10:1, a differential found not only cross-culturally among adults, but even recognisable among young children (as a proclivity for violence).
Given these facts, our zoologist would strongly suspect that these humans are paradigmatic harem-holding mammals, notwithstanding the fact that, in the Western world at least, monogamy is the designated standard. In our sexual dimorphism (physical and behavioural male-female differences), we fit the normal polygynous profile for all other animal species. This profile arises as a result of sexual selection, whereby males compete with other males, and more fit males garner a payoff of enhanced reproductive success via an increased number of sexual partners.
This diagnosis of polygyny would be enhanced if the observer visited a high school: girls are physically and socially more mature than same-age boys (to the consternation of both). This pattern, known as sexual bimaturism, is also a polygyny give-away, if rather a counter-intuitive one. In order to reproduce, women undergo considerably more physiological stress than do men; they must nourish an embryo in utero, give birth and then lactate. By contrast, men need only produce a few cubic centimetres of semen. One might expect that males would mature sexually earlier than females since so much less is required of them, but this is not the case. In polygynous species, males must participate in fierce same-sex competition if they are to reproduce at all. Woe betide a male who enters the reproductive arena when too young, small, weak and inexperienced. Just as the degree of sexual dimorphism maps very closely upon the degree of polygyny (average harem size) in a species, the extent of sexual bimaturism is also strongly correlated with the extent to which males compete with each other for access to females. Humans fall into the moderate polygynous part of that spectrum.
There is a well-documented organ shortage throughout the world. In the United Kingdom, for example, 3,000 kidney transplants were performed last year, but that still left 5,000 people on the waiting list at the end of the period. A lucrative trade in organs has grown up, and transplant tourism has become relatively common. While politicians wring their hands about sensible solutions to the shortage, including the nudge of opt-out donation, scientists using genetic manipulations have been making significant progress in growing transplantable organs inside pigs.
Scientists in the United States are creating so-called ‘human-pig chimeras’: animals that combine human and pig characteristics and will be capable of growing the much-needed organs for transplant into humans. In this respect they are like mules. A mule is the offspring of a male donkey (jack) and a female horse (mare); horses and donkeys are different species with different numbers of chromosomes, but they can breed together.
In this case, the scientists take a skin cell from a human and from it derive stem cells capable of producing any cell or tissue in the body, known as ‘induced pluripotent stem cells’. They then inject these into a pig embryo to make a human-pig chimera. To create the desired organ, they use the gene-editing technique CRISPR to knock out the pig genes in the embryo that produce, for example, the pancreas. The human stem cells then form an almost entirely human pancreas in the resulting chimera, with just the blood vessels remaining porcine. Using this controversial technology, a human skin cell, pre-treated and injected into a genetically edited pig embryo, could grow a new liver, heart, pancreas or lung as required.
This is a technique with wider possibilities, too: other US teams are working on a chimera-based treatment, this time for Parkinson’s disease, which will use chimeras to create human neurones. CRISPR is also credited with enhancing the safety of this technique: last year, a team from Harvard was able to use the new and revolutionary technique to remove copies of a pig retrovirus.
Safety is always a major concern when science allows new medical developments. But even if a sufficient guarantee of safety could be achieved, there are further ethical problems that should concern us.
A chimera is a genetic mix. This means that, although the aim might be to isolate only certain organs to express human genetic material, the whole chimera will in fact comprise the genetic material of both humans and pigs. It is not a pig with a human pancreas inserted into it – it is a human-animal chimera, whose pancreas resembles a human’s, and whose other organs are a blend of pig and human. This could affect the chimera’s brain. Pablo Ross, the lead researcher in the pig experiment, is quoted by the BBC as saying: ‘We think there is very low potential for a human brain to grow.’ Even if in this particular case he is correct, given that some of this kind of research is indeed focused on neurons, it is possible that some future chimeras will develop human or human-like brains.
Where the genetic material of humans and animals is mixed, the result might be characteristics that we usually think of as having moral relevance. ‘Moral status’ is the standing or position of a being within a hierarchical framework of moral obligations. The moral status of a chimera entails relevant obligations to treat it in certain ways while it is alive, in virtue of its nature, and has implications for whether it is wrong to kill it.
It's impressive enough that our human brains are made up of the same 'star stuff' that forms the Universe, but new research suggests that this might not be the only thing the two have in common.
Just like the Universe, our brains might be programmed to maximise disorder - similar to the principle of entropy - and our consciousness could simply be a side effect.
The quest to understand human consciousness - our ability to be aware of ourselves and our surroundings - has been going on for centuries. Although consciousness is a crucial part of being human, researchers still don't truly understand where it comes from, and why we have it.
But a new study, led by researchers from France and Canada, puts forward a new possibility: what if consciousness arises naturally as a result of our brains maximising their information content? In other words, what if consciousness is a side effect of our brain moving towards a state of entropy?
Entropy is basically the term used to describe the progression of a system from order to disorder. Picture an egg: when it's all perfectly separated into yolk and white, it has low entropy, but when you scramble it, it has high entropy - it's the most disordered it can be.
This is what many physicists believe is happening to our Universe. After the Big Bang, the Universe has gradually been moving from a state of low entropy to high entropy, and because the second law of thermodynamics states that entropy can only increase in a system, it could explain why the arrow of time only ever moves forwards.
So researchers decided to apply the same thinking to the connections in our brains, and investigate whether they show any patterns in the way they choose to order themselves while we're conscious.
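The egg intuition can be made slightly more concrete with a toy calculation in the spirit of the study. The numbers and the simple pairwise-connection model below are illustrative assumptions, not the paper's actual method; they just show why "half the connections active" is the most disordered, highest-entropy state:

```python
from math import comb, log

def connectivity_entropy(n_pairs, n_active):
    """Boltzmann-style entropy: log of the number of distinct ways
    n_active connections can be arranged among n_pairs possible pairs."""
    return log(comb(n_pairs, n_active))

n = 20  # hypothetical number of possible pairwise connections
entropies = [connectivity_entropy(n, k) for k in range(n + 1)]

# Fully ordered states (no connections, or all of them) can be arranged
# in only one way, so their entropy is zero; the entropy peaks when
# about half the possible connections are active.
peak = max(range(n + 1), key=lambda k: entropies[k])
```

On this picture, a brain exploring many connectivity configurations sits near that entropy peak, which is roughly the pattern the researchers reported for conscious states.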
Bob Dylan’s Nobel prize win and the ensuing debate as to whether a musician should have been considered is a striking comment on the seemingly glib question of what literature actually is. And with the Man Booker prize also just around the corner, how and why literature matters are topics currently animating plenty of cultural debate.
Assessing the literary merit of Dylan’s work is nothing new. Christopher Ricks, a former Professor of Poetry at the University of Oxford, published a book on Dylan back in 2003, and a Cambridge Companion to Bob Dylan was released a few years later. But others have argued that his Nobel award snubs those who write “literature” — as in, in books.
The Nobel Prize in Literature is awarded to a writer who has produced for the field of literature, in Alfred Nobel’s words, “the most outstanding work in an ideal direction”. Dylan won the prize for having “created new poetic expressions”. The UK’s former Poet Laureate Andrew Motion commented that Dylan’s songs are “often the best words in the best order”. And Professor Sara Danius, Permanent Secretary of the Swedish Academy, spoke of Dylan’s “pictorial thinking”. The week before Dylan’s win, David Szalay’s All That Man Is, shortlisted for the Man Booker, won the Gordon Burn prize. The judges said the novel “subtly changes the way you look at the contemporary world”.
But what is an “ideal direction” for literature? And how exactly does literature change our relationship with “the contemporary world”?
The rise in popularity of Donald Trump and the emotionally-charged Brexit referendum have led many observers to proclaim that we are in an era of post-truth politics. Leave campaigner and Conservative MP Michael Gove spoke to this new reality when he declared that people “have had enough of experts”.
Truth doesn’t seem to matter so much as truthiness – the quality of something seeming to be true even if it’s false.
Have we lost faith in reason? Philosopher Julian Baggini asks this question in a well-timed and cogently argued new book, The Edge of Reason: A Rational Skeptic in an Irrational World. Baggini looks at why rationality gets a bad rap these days. In many fields, the experts have let us down. Science has arguably over-reached. And religion – something billions of people continue to hold dear – is frequently portrayed in secular society as incompatible with intellectual coherence if not sanity.
A related phenomenon is the modern penchant for reducing reason to just logic or scientific reasoning. This would seem to rule out the possibility of there being moral truths, something humanity should be slow to surrender.
As Baggini puts it, “reason is not only disinterested reason, acting independently of anything other than value-free facts and logic”. Providing today’s “Unthinkable” idea, he adds: “We cannot look to disinterested reason to provide the basis of morality.”
You say reason is under fire: in what way?
Julian Baggini: “Let me count the ways! First and most recently, the widely documented loss of faith in experts and elites assumes that having greater knowledge and experience in thinking about issues counts for nothing and could even get in the way of a superior common sense. The brain is seen as having failed us and so the gut is trusted instead.
“Second, reason is associated with a dry, scientific world view that has no place for emotion, intuition or faith. Logic is for robots, we are not robots, therefore logic is not for us, which is ironically an attempt at arguing logically.
“Third, reason is routinely dismissed as merely a means of rationalising our prejudices and instinctive beliefs. ‘We all know’ that psychology has shown that the rational mind is not in charge and that the unconscious, driven by emotion and automatic processing, rules.
I have published widely on Islamic political thought, including an encyclopedia entry on the topic. Reading the Quran, Islamic jurisprudence (fiqh), philosophy (falsafa) and Ibn Khaldun’s history of the premodern world, the Muqaddimah (1377), has enriched my life and thought. Yet I disagree with the call, made by Jay L Garfield and Bryan W Van Norden in The New York Times, for philosophy departments to diversify and immediately incorporate courses in African, Indian, Islamic, Jewish, Latin American and Native American ‘philosophy’ into their curriculums. It might seem broadminded to call for philosophy professors to teach ancient Asian scholars such as Confucius and Candrakīrti in addition to dead white men such as David Hume and Immanuel Kant. However, this approach undermines what is distinct about philosophy as an intellectual tradition, and pays other traditions the dubious compliment of saying that they are just like ours. Furthermore, this demand fuels the political campaign to defund academic philosophy departments.
Philosophy originates in Plato’s Republic. It is a restless pursuit for truth through contentious dialogue. It takes place among ordinary human beings in cities, not sages and disciples on mountaintops, and it requires the fearless use of reason even in the face of established traditions or religious commitments. Plato’s book is the first text of philosophy and a reference point for texts as diverse as Aristotle’s Politics, Augustine’s City of God, al-Fārābī’s The Political Regime, and the French philosopher Alain Badiou’s book Plato’s Republic (2013). The British philosopher Alfred North Whitehead once said that the history of philosophy is a series of footnotes to Plato. Even philosophers who do not mention Plato directly still use his words – including ‘ideas’ – and his general orientation that prioritises truth over piety. Philosophy is the love of wisdom rather than the love of blood or country. It is in principle open to everybody, and people all around the world heed Plato’s call to live an examined life.
I am wary of the argument, however, that all serious reflection upon fundamental questions ought to be called philosophy. Philosophy is one among many ways to think about questions such as the origin of the Universe, the nature of justice, or the limits of knowledge. Philosophy, at its best, aims to be a dialogue between people of different viewpoints, but, again, it is a love of wisdom, rather than the possession of wisdom. This restless character has often made it the enemy of religion and tradition.
Since the dawn of anthropology, sociology and psychology, religion has been an object of fascination. Founding figures such as Sigmund Freud, Émile Durkheim and Max Weber all attempted to dissect it, taxonomise it, and explore its psychological and social functions. And long before the advent of the modern social sciences, philosophers such as Xenophanes, Lucretius, David Hume and Ludwig Feuerbach pondered the origins of religion.
In the century since the founding of the social sciences, interest in religion has not waned – but confidence in grand theorising about it has. Few would now endorse Freud’s insistence that the origins of religion are entwined with Oedipal sexual desires towards mothers. Weber’s linkage of a Protestant work ethic and the origins of capitalism might remain influential, but his broader comparisons between the religion and culture of the occidental and oriental worlds are now rightly regarded as historically inaccurate and deeply Euro-centric.
Today, such sweeping claims about religion are looked upon skeptically, and a circumscribed relativism has instead become the norm. However, a new empirical approach to examining religion – dubbed the cognitive science of religion (CSR) – has recently perturbed the ghosts of theoretical grandeur by offering explanations for religious beliefs and practices that are informed by theories of evolution and that involve cognitive processes thought to be prevalent, if not universal, among human beings.
This approach, like its Victorian predecessors, offers the possibility of discovering universal commonalities among the many idiosyncrasies in religious concepts, beliefs and practices found across history and culture. But unlike previous efforts, modern researchers largely eschew any attempt to provide a single monocausal explanation for religion, arguing that to do so is as meaningless as searching for a single explanation for art or science. These categories are just too broad for such an analysis. Instead, as the cognitive anthropologist Harvey Whitehouse at the University of Oxford puts it, a scientific study of religion must begin by ‘fractionating’ the concept of religion, breaking down the category into specific features that can be individually explored and explained, such as the belief in moralistic High Gods or participation in collective rituals.
For critics of the cognitive science of religion, this approach repeats the mistakes of the old grand theorists, just dressed up in trendy theoretical garb. The charge is that researchers are guilty of reifying the concept of religion as a universal, an ethnocentric approach that fails to appreciate the cultural diversity of the real world. Perhaps ironically, it is scholars in the Study of Religions discipline that now express the most skepticism about the usefulness of the term ‘religion’. They argue that it is inextricably Western and therefore loaded with assumptions related to the Abrahamic religious institutions that dominate in the West. For instance, the religious studies scholar Russell McCutcheon at the University of Alabama argues in Manufacturing Religion (1997) that scholars treating religion as a natural category have produced analyses that are ‘ahistorical, apolitical [and] fetishised’.
We live with six rescued dogs. With the exception of one, who was born in a rescue for pregnant dogs, they all came from very sad situations, including circumstances of severe abuse. These dogs are non-human refugees with whom we share our home. Although we love them very much, we strongly believe that they should not have existed in the first place.
We oppose domestication and pet ownership because these violate the fundamental rights of animals.
The term ‘animal rights’ has become largely meaningless. Anyone who thinks that we should give battery hens a small increase in cage space, or that veal calves should be housed in social units rather than in isolation before they are dragged off and slaughtered, is articulating what is generally regarded as an ‘animal rights’ position. This is attributable in large part to Peter Singer, author of Animal Liberation (1975), who is widely considered the ‘father of the animal rights movement’.
The problem with this attribution of paternity is that Singer is a utilitarian who rejects moral rights altogether, and supports any measure that he thinks will reduce suffering. In other words, the ‘father of the animal rights movement’ rejects animal rights altogether and has given his blessing to cage-free eggs, crate-free pork, and just about every ‘happy exploitation’ measure promoted by almost every large animal welfare charity. Singer does not promote animal rights; he promotes animal welfare. He does not reject the use of animals by humans per se. He focuses only on their suffering. In an interview with The Vegan magazine in 2006, he said, for example, that he could ‘imagine a world in which people mostly eat plant foods, but occasionally treat themselves to the luxury of free-range eggs, or possibly even meat from animals who live good lives under conditions natural for their species, and are then humanely killed on the farm’.
We use the term ‘animal rights’ in a different way, similar to the way that ‘human rights’ is used when the fundamental interests of our own species are concerned. For example, if we say that a human has a right to her life, we mean that her fundamental interest in continuing to live will be protected even if using her as a non-consenting organ donor would result in saving the lives of 10 other humans. A right is a way of protecting an interest; it protects interests irrespective of consequences. The protection is not absolute; it may be forfeited under certain circumstances. But the protection cannot be abrogated for consequential reasons alone.
You’ve been cheated of your birthright: a complete education. In the words of Martin Luther King Jr., written when he was 18, your age, a "complete education" gives "not only power of concentration, but worthy objectives upon which to concentrate."
But now your education is in your own hands. And my advice is: Don’t let yourself be cheated anymore, and do not cheat yourself. Take advantage of the autonomy and opportunities that college permits by approaching it in the spirit of the 16th century. You’ll become capable of a level of precision, inventiveness, and empathy worthy to be called Shakespearean.
Building a bridge to the 16th century must seem like a perverse prescription for today’s ills. I’m the first to admit that English Renaissance pedagogy was rigid and rightly mocked for its domineering pedants. Few of you would be eager to wake up before 6 a.m. to say mandatory prayers, or to be lashed for tardiness, much less translate Latin for hours on end every day of the week. Could there be a system more antithetical to our own contemporary ideals of student-centered, present-focused, and career-oriented education?
An orangutan named Rocky is using “wookies” to reveal new insights into the origins of language.
In experiments conducted by a researcher at Amsterdam University, Rocky learned and recited a basic vocabulary of sounds, producing vocalizations no orangutan is known to make. By learning to mimic his human instructor, this talkative primate is lending support to one of the leading theories of language evolution.
Repeat After Me
Adriano Lameira, now a professor in the department of anthropology at Durham University, used food rewards to train Rocky to mimic the sounds a human was making. The sounds, called “wookies”, differ from vocalizations naturally produced by orangutans, termed “grumphs.”
Over time, Rocky got better at producing the wookies, learning to modulate his vocal folds — thin curtains of tissue that vibrate when air is passed over them — and other components of sound production to match the human enunciations. Rocky’s abilities prove that primates can manipulate their vocal folds at a fine scale to create distinct sounds, a key component for building up and using a complex vocabulary.
Language Evolved Gradually
Theories about how proto-languages first came to be are widespread, and cover a pretty broad spectrum. Some say that language emerged from instinctive vocalizations that our ancestors uttered when experiencing strong emotions. Others hold that language emerged from the rhythmic “songs” and vocalizations of early hominins. Another theory holds that language is simply a natural progression from gesture-based communication, which is limited by sight lines and darkness.
The findings lend credence to the idea that language developed slowly, growing more complex over time. The findings were published Wednesday in Scientific Reports.
Wherever language came from, it has two essential components: physical and cognitive capabilities. We need to have both the mental faculties to form and communicate ideas and the bodily structures necessary to produce gestures or sounds.
Signing gorillas can communicate via gestures, proving that the mental abilities to do so exist. Now, Rocky has shown that primates can learn to produce new sounds as well, illustrating that the physical underpinnings of language go back millions of years.
If there is a subtext to the principle of selection, it lies in an idealised notion of American national values, as showcased by Hollywood films for more than a century. Across generations, millions have laughed at the Marx Brothers’ comedy Duck Soup and sung along with The Sound of Music.
These, and others such as Citizen Kane and Casablanca, make the NASA list as much more than remarkable films. They are cultural articulations of the ethos of America as its establishment seeks to portray it. This ethos includes the championing of its national power, as well as its much publicised ability to introspect on a national scale through films such as 12 Years A Slave or To Kill a Mockingbird, which also feature in the selection.
The American dream
The great narrative arc of Hollywood film fundamentally reinforces the belief that, its blemishes notwithstanding, there is no country quite like America. Predictable selections, therefore, include films such as The Wizard of Oz and It’s a Wonderful Life, which celebrate conservative ideas about American values, reiterating that “there’s no place like home”. The Seven Samurai, later remade in Hollywood as The Magnificent Seven, remains a rare example of a non-English language film on the list.
In the fictional world, the film hero is the most prominent saviour of the Western way of life, and a fascination with all things heroic underlies the selection. Consequently, James Bond appears many times in the list, as does Die Hard’s incorrigible movie cop John McClane and his television equivalent, the indefatigable Jack Bauer in 24.
Imagine a world in which most people worked only 15 hours a week. They would be paid as much as, or even more than, they now are, because the fruits of their labor would be distributed more evenly across society. Leisure would occupy far more of their waking hours than work. It was exactly this prospect that John Maynard Keynes conjured up in a little essay published in 1930 called "Economic Possibilities for Our Grandchildren." Its thesis was simple. As technological progress made possible an increase in the output of goods per hour worked, people would have to work less and less to satisfy their needs, until in the end they would have to work hardly at all. Then, Keynes wrote, "for the first time since his creation man will be faced with his real, his permanent problem—how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well." He thought this condition might be reached in about 100 years—that is, by 2030.
Peter Singer is considered a founding father of the modern animal rights movement. He is a vegan who gives away a third of his income to charity. So why has he been described as the most dangerous man on the planet?