Some days, you might feel like a pretty substantial person. Maybe you have a lot of friends, or an important job, or a really big car.
But it might humble you to know that all of those things – your friends, your office, your really big car, you yourself, and even everything in this incredible, vast Universe – are almost entirely, 99.9999999 percent empty space.
Here’s the deal. As I previously wrote in a story for the particle physics publication Symmetry, the size of an atom is governed by the average location of its electrons: how much space there is between the nucleus and the atom’s amorphous outer shell.
Nuclei are around 100,000 times smaller than the atoms they’re housed in.
If the nucleus were the size of a peanut, the atom would be about the size of a baseball stadium. If we lost all the dead space inside our atoms, we would each be able to fit into a particle of dust, and the entire human species would fit into the volume of a sugar cube.
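A back-of-envelope sketch shows where all those nines come from. The numbers below are rough order-of-magnitude illustrations based on the 100,000:1 linear ratio quoted above, not precise measurements; note that by volume the emptiness is even more extreme than the linear figure suggests, since volume scales with the cube of size:

```python
# Rough sketch: how empty is an atom, given that the nucleus is
# about 100,000 times smaller than the atom in linear size?
# (Illustrative order-of-magnitude numbers, not measurements.)

linear_ratio = 1e5                                # atom size / nucleus size
volume_fraction_filled = (1 / linear_ratio) ** 3  # volume scales as size cubed
percent_empty = (1 - volume_fraction_filled) * 100

print(f"fraction of the atom's volume taken up by the nucleus: {volume_fraction_filled:.0e}")
print(f"percent empty space: {percent_empty}")
```

Only about one part in a thousand trillion of the atom's volume is nucleus, on this crude estimate.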
So then where does all our mass come from?
Energy! At a pretty basic level, we’re all made of atoms, which are made of electrons, protons, and neutrons.
And at an even more basic, or perhaps the most basic level, those protons and neutrons, which hold the bulk of our mass, are made of a trio of fundamental particles called quarks.
But, as I explained in Symmetry, the mass of these quarks accounts for just a tiny fraction of the mass of the protons and neutrons. And the gluons, which hold these quarks together, are completely massless.
A lot of scientists think that almost all the mass of our bodies comes from the kinetic energy of the quarks and the binding energy of the gluons.
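To put rough numbers on that claim (the figures below are approximate textbook values supplied here for illustration; the article itself quotes none), the rest masses of a proton's three quarks add up to only about one per cent of the proton's mass:

```python
# Approximate rest masses in MeV/c^2 (rough textbook values,
# supplied here for illustration only).
up_quark = 2.2
down_quark = 4.7
proton = 938.3

# A proton is two up quarks and one down quark (uud).
quark_rest_mass = 2 * up_quark + down_quark
fraction = quark_rest_mass / proton

print(f"combined quark rest mass: {quark_rest_mass:.1f} MeV of {proton} MeV")
print(f"that is about {fraction:.1%} of the proton's mass")
# The missing ~99% is quark kinetic energy plus gluon binding
# energy, carried as mass via E = mc^2.
```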
So if all of the atoms in the Universe are almost entirely empty space, why does anything feel solid?
The idea of empty atoms huddling together, composing our bodies and buildings and trees, might be a little confusing.
If our atoms are mostly space, why can’t we pass through things like weird ghost people in a weird ghost world? Why don’t our cars fall through the road, through the centre of the Earth, and out the other side of the planet? Why don’t our hands glide through other hands when we exchange high fives? It’s time to reexamine what we mean by empty space. Because as it turns out, space is never truly empty. It’s actually full of a whole fistful of good stuff, including wave functions and invisible quantum fields.
What defines who we are? Our habits? Our aesthetic tastes? Our memories? If pressed, I would answer that if there is any part of me that sits at my core, that is an essential part of who I am, then surely it must be my moral center, my deep-seated sense of right and wrong.
And yet, like many other people who speak more than one language, I often have the sense that I’m a slightly different person in each of my languages—more assertive in English, more relaxed in French, more sentimental in Czech. Is it possible that, along with these differences, my moral compass also points in somewhat different directions depending on the language I’m using at the time?
Psychologists who study moral judgments have become very interested in this question. Several recent studies have focused on how people think about ethics in a non-native language—as might take place, for example, among a group of delegates at the United Nations using a lingua franca to hash out a resolution. The findings suggest that when people are confronted with moral dilemmas, they do indeed respond differently when considering them in a foreign language than when using their native tongue.
In a 2014 paper led by Albert Costa, volunteers were presented with a moral dilemma known as the “trolley problem”: imagine that a runaway trolley is careening toward a group of five people standing on the tracks, unable to move. You are next to a switch that can shift the trolley to a different set of tracks, thereby sparing the five people, but resulting in the death of one who is standing on the side tracks. Do you pull the switch?
Other people going about their business? Rooms with tables and chairs? Nature with its sky, grass and trees?
All that stuff, it's really there, right? Even if you were to disappear right now — poof! — the rest of the world would still exist in all the forms you're seeing now, right?
Or would it?
This kind of metaphysical question is something you'd expect in a good philosophy class — or maybe even a discussion of quantum physics. But most of us wouldn't expect an argument denying the reality of the objective world to come out of evolutionary biology. After all, doesn't evolution tell us we've been tuned to reality by billions of years of natural selection? It makes sense that creatures that can't tell a poisonous snake from a stick shouldn't last long and, therefore, shouldn't pass their genes on to the next generation.
That is certainly how the standard argument goes. But Donald Hoffman, a cognitive scientist, isn't buying it.
For decades, Hoffman, a professor at the University of California, Irvine, has been studying the links between evolution, perception and intelligence (both natural and machine). Based on that body of work, he thinks we’ve been missing something fundamental about the nature of reality.

Hoffman argues that evolution and reality (the objective kind) have almost nothing to do with each other.
Hoffman's been making a lot of news in recent months with these claims. His March 2015 TED talk went viral, gaining more than 2 million views. After a friend sent me the video, I was keen to learn more. I called Dr. Hoffman, and he graciously set aside some time for us to talk. What followed was a delightful conversation with a guy who does, indeed, have a big radical idea. At the same time, Hoffman doesn't come off as someone with an ax to grind. He seems genuinely open and truly curious. At his core, Hoffman says, he's a scientist with a theory that must either live or die by data.
So, what exactly is Hoffman's big radical idea? He begins with a precisely formulated theorem:
"Given an arbitrary world and arbitrary fitness functions, an organism that sees reality as it is will never be more fit than an organism of equal complexity that sees none of reality but that is just tuned to fitness."
Philosophers of science are not known for agreeing with each other—contrariness is part of the job description. But for thousands of years, from Aristotle to Thomas Kuhn, those who study what science is have roughly categorized themselves into two basic camps: “realists” and “anti-realists.”
In philosophical terms, “anti-realists” or “empiricists” understand science as investigating the properties of observable objects via experiments. Empirical theories are constrained by the experimental results. “Realists,” on the other hand, speculate more freely about the possible shape of the unobservable world, often designing mathematical explanations that cannot (yet) be tested. Isaac Newton was a realist, as are string theorists.
Most scientists do not lose sleep worrying about philosophical divides. But maybe they should; Albert Einstein certainly did, as did Niels Bohr and Erwin Schrödinger. In the 20th century, Kuhn’s cataloguing of the “paradigmatic” nature of scientific revolutions entered the scientific consciousness, as did Karl Popper’s requirement that only theories that can in principle be shown to be false are scientific. “God exists,” for example, is not falsifiable.
But outside the halls of the academy, the influential works of philosophers of science, such as Rudolf Carnap, Wilfrid Sellars, Paul Feyerabend, and Bas C. van Fraassen, to list but a few, are little known to many scientists and the public.
Do you think racial stereotypes are false? Are you sure? I’m not asking if you’re sure whether or not the stereotypes are false, but if you’re sure whether or not you think that they are. That might seem like a strange question. We all know what we think, don’t we?
Most philosophers of mind would agree, holding that we have privileged access to our own thoughts, which is largely immune from error. Some argue that we have a faculty of ‘inner sense’, which monitors the mind just as the outer senses monitor the world. There have been exceptions, however. The mid-20th-century behaviourist philosopher Gilbert Ryle held that we learn about our own minds, not by inner sense, but by observing our own behaviour, and that friends might know our minds better than we do. (Hence the joke: two behaviourists have just had sex and one turns to the other and says: ‘That was great for you, darling. How was it for me?’) And the contemporary philosopher Peter Carruthers proposes a similar view (though for different reasons), arguing that our beliefs about our own thoughts and decisions are the product of self-interpretation and are often mistaken.
Evidence for this comes from experimental work in social psychology. It is well established that people sometimes think they have beliefs that they don’t really have. For example, if offered a choice between several identical items, people tend to choose the one on the right. But when asked why they chose it, they confabulate a reason, saying they thought the item was a nicer colour or better quality. Similarly, if a person performs an action in response to an earlier (and now forgotten) hypnotic suggestion, they will confabulate a reason for performing it. What seems to be happening is that the subjects engage in unconscious self-interpretation. They don’t know the real explanation of their action (a bias towards the right, hypnotic suggestion), so they infer some plausible reason and ascribe it to themselves. They are not aware that they are interpreting, however, and make their reports as if they were directly aware of their reasons.
Many other studies support this explanation. For example, if people are instructed to nod their heads while listening to a tape (in order, they are told, to test the headphones), they express more agreement with what they hear than if they are asked to shake their heads. And if they are required to choose between two items they previously rated as equally desirable, they subsequently say that they prefer the one they had chosen. Again, it seems, they are unconsciously interpreting their own behaviour, taking their nodding to indicate agreement and their choice to reveal a preference.
“The question of being is the darkest in all philosophy.” So concluded William James in thinking about that most basic of riddles: how did something come from nothing? The question infuriates, James realized, because it demands an explanation while denying the very possibility of explanation. “From nothing to being there is no logical bridge,” he wrote.
In science, explanations are built of cause and effect. But if nothing is truly nothing, it lacks the power to cause. It’s not simply that we can’t find the right explanation—it’s that explanation itself fails in the face of nothing.
Imagine a Martian zoologist, visiting Earth and observing Homo sapiens for the first time. He, she or it would see a species of primate that differs from the others in many ways, all of them involving our complex cultural, intellectual, linguistic, symbolic and technological lifestyles. But looking at us through a zoologist’s lens, our observer wouldn’t be especially impressed. To be sure, we have some distinctive anatomical traits (mostly hairless, bipedal, big brains, non-prognathic jaws, unimpressive teeth, and so forth) but being unique isn’t itself unique. Every species is special in its own way.
Among our catalogue of not-so-special traits would be the fact that men are on the whole larger than women: about 7 per cent taller and 15 per cent heavier, with this difference somewhat greater when it comes to muscularity. Also notable: men commit lethal violence at roughly ten times the rate of women, a differential found not only cross-culturally among adults, but even recognisable among young children (as a proclivity for violence).
Given these facts, our zoologist would strongly suspect that these humans are paradigmatic harem-holding mammals, notwithstanding the fact that, in the Western world at least, monogamy is the designated standard. In our sexual dimorphism (physical and behavioural male-female differences), we fit the normal polygynous profile for all other animal species. This profile arises as a result of sexual selection, whereby males compete with other males, and more fit males garner a payoff of enhanced reproductive success via an increased number of sexual partners.
This diagnosis of polygyny would be enhanced if the observer visited a high school: girls are physically and socially more mature than same-age boys (to the consternation of both). This pattern, known as sexual bimaturism, is also a polygyny give-away, if rather a counter-intuitive one. In order to reproduce, women undergo considerably more physiological stress than do men; they must nourish an embryo in utero, give birth and then lactate. By contrast, men need only produce a few cubic centimetres of semen. One might expect that males would mature sexually earlier than females since so much less is required of them, but this is not the case. In polygynous species, males must participate in fierce same-sex competition if they are to reproduce at all. Woe betide a male who enters the reproductive arena when too young, small, weak and inexperienced. Just as the degree of sexual dimorphism maps very closely upon the degree of polygyny (average harem size) in a species, the extent of sexual bimaturism is also strongly correlated with the extent to which males compete with each other for access to females. Humans fall into the moderate polygynous part of that spectrum.
There is a well-documented organ shortage throughout the world. For example, 3,000 kidney transplants were performed last year in the United Kingdom, but that still left 5,000 people on the waiting list at the end of the year. A lucrative trade in organs has grown up, and transplant tourism has become relatively common. While politicians wring their hands about sensible solutions to the shortage, including the nudge of opt-out donation, scientists using genetic manipulations have been making significant progress in growing transplantable organs inside pigs.
Scientists in the United States are creating so-called ‘human-pig chimeras’, which will be capable of growing the much-needed organs. These chimeras are animals that combine human and pig characteristics. In one respect they are like mules: a mule is the offspring of a male donkey (a jack) and a female horse (a mare), and although horses and donkeys are different species with different numbers of chromosomes, they can breed together.
In this case, the scientists take a skin cell from a human and from this make stem cells capable of producing any cell or tissue in the body, known as ‘induced pluripotent stem cells’. They then inject these into a pig embryo to make a human-pig chimera. In order to create the desired organ, they use the gene-editing tool CRISPR to knock out the pig genes in the embryo that produce, for example, the pancreas. The human stem cells then make an almost entirely human pancreas in the resulting human-pig chimera, with just the blood vessels remaining porcine. Using this controversial technology, a human skin cell, pre-treated and injected into a genetically edited pig embryo, could grow a new liver, heart, pancreas or lung as required.
This is a technique with wider possibilities, too: other US teams are working on a chimera-based treatment for Parkinson’s disease, which will use chimeras to create human neurones. CRISPR is also credited with enhancing the safety of the technique: last year, a team from Harvard used it to remove copies of a pig retrovirus from the pig genome.
Safety is always a major concern when science allows new medical developments. But even if a sufficient guarantee of safety could be achieved, there are further ethical problems that should concern us.
A chimera is a genetic mix. This means that, although the aim might be to isolate only certain organs to express human genetic material, the whole chimera will in fact comprise the genetic material of both humans and pigs. It is not a pig with a human pancreas inserted into it – it is a human-animal chimera, whose pancreas resembles a human’s, and whose other organs are a blend of pig and human. This could affect the chimera’s brain. Pablo Ross, the lead researcher in the pig experiment, is quoted by the BBC as saying: ‘We think there is very low potential for a human brain to grow.’ Even if in this particular case he is correct, given that some of this kind of research is indeed focused on neurons, it is possible that some future chimeras will develop human or human-like brains.
Where the genetic material of humans and animals is mixed, this might result in characteristics that we usually think of as having moral relevance. ‘Moral status’ is the standing or position of a being within a hierarchical framework of moral obligations. The moral status of a chimera entails relevant obligations to treat it in certain ways while it is alive, in virtue of its nature, and has implications for whether it is wrong to kill it.
Where is your mind? Where does your thinking occur? Where are your beliefs? René Descartes thought that the mind was an immaterial soul, housed in the pineal gland near the centre of the brain. Nowadays, by contrast, we tend to identify the mind with the brain. We know that mental processes depend on brain processes, and that different brain regions are responsible for different functions. However, we still agree with Descartes on one thing: we still think of the mind as (in a phrase coined by the philosopher of mind Andy Clark) brainbound, locked away in the head, communicating with the body and wider world but separate from them. And this might be quite wrong. I’m not suggesting that the mind is non-physical or doubting that the brain is central to it; but it could be that (as Clark and others argue) the mind extends beyond the brain.
An astronomer, mathematician, philosopher, and active public figure, Hypatia played a leading role in Alexandrian civic affairs. Her public lectures were popular, and her technical contributions to geometry, astronomy, number theory, and philosophy made Hypatia a highly regarded teacher and scholar.
Philosophy is a remarkably un-diverse discipline. Compared with other scholars who read, interpret and assign texts, philosophers in the United States typically choose a much higher percentage of their sources (often, 100 per cent) from Europe and countries settled by Europeans. Philosophy teachers, too, look homogeneous: 86 per cent of new PhD researchers in philosophy are white, and 72 per cent are male. In the whole country, only about 30 African-American women work as philosophy professors.
In The New York Times’ philosophy blog ‘The Stone’, Jay L Garfield and Bryan W Van Norden recently wrote: ‘No other humanities discipline demonstrates this systematic neglect of most of the civilizations in its domain.’ They urge philosophy departments to diversify their curricula – and, if they can’t or won’t, to rename themselves departments of ‘Anglo-European Philosophical Studies’.
In my own view, philosophy must become more diverse in order to make progress on its fundamental questions. But cultural diversity means something different in philosophy, compared with other humanities disciplines.
The humanities primarily seek to understand what other people mean. Interpretation requires sensitivity, empathy and openness. A humanistic discipline should range across all of human experience, past and present, investigating and learning what people have thought throughout the world.
In contrast, many scientists see themselves as part of one global, culturally neutral, 21st-century community devoted to understanding nature itself. For instance, the physicist Freeman Dyson wrote recently that Fang Lizhi, the late Chinese astrophysicist and dissident, ‘believed passionately in science, not only as an intellectual pursuit of understanding of nature, but also as an international enterprise in which people of diverse cultures and traditions could work together. Scientists throughout the world speak a common language and find it easy to collaborate.’
Bryan Magee (1930 – ) has had a multifaceted career as a professor of philosophy, a music and theater critic, a BBC broadcaster, a public intellectual and a member of Parliament. He has starred in two acclaimed television series about philosophy: Men of Ideas (1978) and The Great Philosophers (1987). He is best known as a popularizer of philosophy. His easy-to-read books, which have been translated into more than twenty languages, include:
Confessions of a Philosopher: A Personal Journey Through Western Philosophy from Plato to Popper; The Great Philosophers: An Introduction to Western Philosophy; Talking Philosophy: Dialogues with Fifteen Leading Philosophers; Philosophy and the Real World: An Introduction to Karl Popper; The Story of Philosophy: 2,500 Years of Great Thinkers from Socrates to the Existentialists and Beyond; Men of Ideas.
Now, at age 86, he has written Ultimate Questions, a summary of a lifetime of thinking about “the fundamentals of the human condition.” Its basic theme is that we know little about the human condition, since reality comes to us filtered through the senses and the limitations of our intellect and language. And the most honest response to this predicament is agnosticism.
Magee begins by considering that “What we call civilization has existed for something like six thousand years.” If you remember that there have always been some individuals who have lived a hundred years, this means that “the whole of civilization has occurred within the successive lifetimes of sixty people …” Furthermore, “most people are as provincial in time as they are in space: they huddle down into their time and regard it as their total environment…” They don’t think about the little sliver of time and space that they occupy. Thus begins this meditation on agnosticism.
Furthermore, we are ignorant of our ultimate nature: “We, who do not know what we are, have to fashion lives for ourselves in a universe of which we know little and understand less.” Yet this situation doesn’t lead Magee to despair. Instead he calls for “an active agnosticism,” which is “a positive principle of procedure, an openness to the fact that we do not know, followed by intellectually honest enquiry in full receptivity of mind.” If he had to choose a tag, he says, it would be “the agnostic.”
I have published widely on Islamic political thought, including an encyclopedia entry on the topic. Reading the Quran, Islamic jurisprudence (fiqh), philosophy (falsafa) and Ibn Khaldun’s history of the premodern world, the Muqaddimah (1377), has enriched my life and thought. Yet I disagree with the call, made by Jay L Garfield and Bryan W Van Norden in The New York Times, for philosophy departments to diversify and immediately incorporate courses in African, Indian, Islamic, Jewish, Latin American and Native American ‘philosophy’ into their curriculums. It might seem broadminded to call for philosophy professors to teach ancient Asian scholars such as Confucius and Candrakīrti in addition to dead white men such as David Hume and Immanuel Kant. However, this approach undermines what is distinct about philosophy as an intellectual tradition, and pays other traditions the dubious compliment of saying that they are just like ours. Furthermore, this demand fuels the political campaign to defund academic philosophy departments.
Philosophy originates in Plato’s Republic. It is a restless pursuit of truth through contentious dialogue. It takes place among ordinary human beings in cities, not among sages and disciples on mountaintops, and it requires the fearless use of reason even in the face of established traditions or religious commitments. Plato’s book is the first text of philosophy and a reference point for texts as diverse as Aristotle’s Politics, Augustine’s City of God, al-Fārābī’s The Political Regime, and the French philosopher Alain Badiou’s book Plato’s Republic (2013). The British philosopher Alfred North Whitehead once said that the history of philosophy is a series of footnotes to Plato. Even philosophers who do not mention Plato directly still use his words – including ‘ideas’ – and his general orientation that prioritises truth over piety. Philosophy is the love of wisdom rather than the love of blood or country. It is in principle open to everybody, and people all around the world heed Plato’s call to live an examined life.
I am wary of the argument, however, that all serious reflection upon fundamental questions ought to be called philosophy. Philosophy is one among many ways to think about questions such as the origin of the Universe, the nature of justice, or the limits of knowledge. Philosophy, at its best, aims to be a dialogue between people of different viewpoints, but, again, it is a love of wisdom, rather than the possession of wisdom. This restless character has often made it the enemy of religion and tradition.
Since the dawn of anthropology, sociology and psychology, religion has been an object of fascination. Founding figures such as Sigmund Freud, Émile Durkheim and Max Weber all attempted to dissect it, taxonomise it, and explore its psychological and social functions. And long before the advent of the modern social sciences, philosophers such as Xenophanes, Lucretius, David Hume and Ludwig Feuerbach pondered the origins of religion.
In the century since the founding of the social sciences, interest in religion has not waned – but confidence in grand theorising about it has. Few would now endorse Freud’s insistence that the origins of religion are entwined with Oedipal sexual desires towards mothers. Weber’s linkage of a Protestant work ethic and the origins of capitalism might remain influential, but his broader comparisons between the religion and culture of the occidental and oriental worlds are now rightly regarded as historically inaccurate and deeply Euro-centric.
Today, such sweeping claims about religion are looked upon skeptically, and a circumscribed relativism has instead become the norm. However, a new empirical approach to examining religion – dubbed the cognitive science of religion (CSR) – has recently perturbed the ghosts of theoretical grandeur by offering explanations for religious beliefs and practices that are informed by evolutionary theory and that involve cognitive processes thought to be prevalent, if not universal, among human beings.
This approach, like its Victorian predecessors, offers the possibility of discovering universal commonalities among the many idiosyncrasies in religious concepts, beliefs and practices found across history and culture. But unlike previous efforts, modern researchers largely eschew any attempt to provide a single monocausal explanation for religion, arguing that to do so is as meaningless as searching for a single explanation for art or science. These categories are just too broad for such an analysis. Instead, as the cognitive anthropologist Harvey Whitehouse at the University of Oxford puts it, a scientific study of religion must begin by ‘fractionating’ the concept of religion, breaking down the category into specific features that can be individually explored and explained, such as the belief in moralistic High Gods or participation in collective rituals.
For critics of the cognitive science of religion, this approach repeats the mistakes of the old grand theorists, just dressed up in trendy theoretical garb. The charge is that researchers are guilty of reifying the concept of religion as a universal, an ethnocentric approach that fails to appreciate the cultural diversity of the real world. Perhaps ironically, it is scholars in the Study of Religions discipline who now express the most skepticism about the usefulness of the term ‘religion’. They argue that it is inextricably Western and therefore loaded with assumptions related to the Abrahamic religious institutions that dominate in the West. For instance, the religious studies scholar Russell McCutcheon at the University of Alabama argues in Manufacturing Religion (1997) that scholars treating religion as a natural category have produced analyses that are ‘ahistorical, apolitical [and] fetishised’.
We live with six rescued dogs. With the exception of one, who was born in a rescue for pregnant dogs, they all came from very sad situations, including circumstances of severe abuse. These dogs are non-human refugees with whom we share our home. Although we love them very much, we strongly believe that they should not have existed in the first place.
We oppose domestication and pet ownership because these violate the fundamental rights of animals.
The term ‘animal rights’ has become largely meaningless. Anyone who thinks that we should give battery hens a small increase in cage space, or that veal calves should be housed in social units rather than in isolation before they are dragged off and slaughtered, is articulating what is generally regarded as an ‘animal rights’ position. This is attributable in large part to Peter Singer, author of Animal Liberation (1975), who is widely considered the ‘father of the animal rights movement’.
The problem with this attribution of paternity is that Singer is a utilitarian who rejects moral rights altogether, and supports any measure that he thinks will reduce suffering. In other words, the ‘father of the animal rights movement’ rejects animal rights altogether and has given his blessing to cage-free eggs, crate-free pork, and just about every ‘happy exploitation’ measure promoted by almost every large animal welfare charity. Singer does not promote animal rights; he promotes animal welfare. He does not reject the use of animals by humans per se. He focuses only on their suffering. In an interview with The Vegan magazine in 2006, he said, for example, that he could ‘imagine a world in which people mostly eat plant foods, but occasionally treat themselves to the luxury of free-range eggs, or possibly even meat from animals who live good lives under conditions natural for their species, and are then humanely killed on the farm’.
We use the term ‘animal rights’ in a different way, similar to the way that ‘human rights’ is used when the fundamental interests of our own species are concerned. For example, if we say that a human has a right to her life, we mean that her fundamental interest in continuing to live will be protected even if using her as a non-consenting organ donor would result in saving the lives of 10 other humans. A right is a way of protecting an interest; it protects interests irrespective of consequences. The protection is not absolute; it may be forfeited under certain circumstances. But the protection cannot be abrogated for consequential reasons alone.
You’ve been cheated of your birthright: a complete education. In the words of Martin Luther King Jr (written when he was 18, your age), a "complete education" gives "not only power of concentration, but worthy objectives upon which to concentrate."
But now your education is in your own hands. And my advice is: Don’t let yourself be cheated anymore, and do not cheat yourself. Take advantage of the autonomy and opportunities that college permits by approaching it in the spirit of the 16th century. You’ll become capable of a level of precision, inventiveness, and empathy worthy to be called Shakespearean.
Building a bridge to the 16th century must seem like a perverse prescription for today’s ills. I’m the first to admit that English Renaissance pedagogy was rigid and rightly mocked for its domineering pedants. Few of you would be eager to wake up before 6 a.m. to say mandatory prayers, or to be lashed for tardiness, much less translate Latin for hours on end every day of the week. Could there be a system more antithetical to our own contemporary ideals of student-centered, present-focused, and career-oriented education?
An orangutan named Rocky is using “wookies” to reveal new insights into the origins of language.
In experiments conducted by a researcher at Amsterdam University, Rocky learned and recited a basic vocabulary of sounds, producing vocalizations no orangutan is known to make. By learning to mimic his human instructor, this talkative primate is lending support to one of the leading theories of language evolution.

Repeat After Me
Adriano Lameira, now a professor in the department of anthropology at Durham University, used food rewards to train Rocky to mimic the sounds a human was making. The sounds, called “wookies”, differ from vocalizations naturally produced by orangutans, termed “grumphs.”
Over time, Rocky got better at producing the wookies, learning to modulate his vocal folds — thin curtains of tissue that vibrate when air is passed over them — and other components of sound production to match the human enunciations. Rocky’s abilities prove that primates can manipulate their vocal folds at a fine scale to create distinct sounds, a key component for building up and using a complex vocabulary.

Language Evolved Gradually
Theories about how proto-languages first came to be are widespread, and cover a pretty broad spectrum. Some say that language emerged from instinctive vocalizations that our ancestors uttered when experiencing strong emotions. Others hold that language emerged from the rhythmic “songs” and vocalizations of early hominins. Another theory holds that language is simply a natural progression from gesture-based communication, which is limited by sight lines and darkness.
The findings lend credence to the idea that language developed slowly, growing more complex over time. The findings were published Wednesday in Scientific Reports.
Wherever language came from, it has two essential components: physical and cognitive capabilities. We need to have both the mental faculties to form and communicate ideas and the bodily structures necessary to produce gestures or sounds.
Signing gorillas can communicate via gestures, proving that the mental abilities to do so exist. Now, Rocky has shown that primates can learn to produce new sounds as well, illustrating that the physical underpinnings of language go back millions of years.
If there is a subtext to the principle of selection, it lies in an idealised notion of American national values, as showcased by Hollywood films for more than a century. Across generations, millions have laughed with the Marx Brothers comedy Duck Soup and sung along with The Sound of Music.
These, and others such as Citizen Kane and Casablanca, make the NASA list as much more than remarkable films. They are cultural articulations of the ethos of America as its establishment seeks to portray it. This ethos includes the championing of its national power, as well as its much-publicised ability to introspect on a national scale through films such as 12 Years A Slave or To Kill a Mockingbird, which also feature in the selection.

The American dream
The great narrative arc of Hollywood film fundamentally reinforces the belief that, its blemishes notwithstanding, there is no country quite like America. Predictable selections, therefore, include films such as The Wizard of Oz and It’s a Wonderful Life which celebrate conservative ideas about American values, reiterating that “there’s no place like home”. The Seven Samurai, later remade as The Magnificent Seven in Hollywood, remains a rare example of a non-English language film on the list.
In the fictional world, the film hero is the most prominent saviour of the Western way of life, and a fascination for all things heroic underlies the selection. Consequently, James Bond appears many times in the list, as does Die Hard’s incorrigible movie cop John McClane and his television equivalent, the indefatigable Jack Bauer in 24.
Imagine a world in which most people worked only 15 hours a week. They would be paid as much as, or even more than, they now are, because the fruits of their labor would be distributed more evenly across society. Leisure would occupy far more of their waking hours than work. It was exactly this prospect that John Maynard Keynes conjured up in a little essay published in 1930 called "Economic Possibilities for Our Grandchildren." Its thesis was simple. As technological progress made possible an increase in the output of goods per hour worked, people would have to work less and less to satisfy their needs, until in the end they would have to work hardly at all. Then, Keynes wrote, "for the first time since his creation man will be faced with his real, his permanent problem—how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well." He thought this condition might be reached in about 100 years—that is, by 2030.
Peter Singer is considered a founding father of the modern animal rights movement. He is a vegan who gives away a third of his income to charity. So why has he been described as the most dangerous man on the planet?
Frisson. What a strange word. It evokes that peculiar intermingling of excitement and fear that can attend momentous events. The spark of electricity when you lock eyes with someone who is yet unknown to you, but who might just be ‘the one.’ The queasy sensation of anxious adrenaline when a big news story breaks. The fearful joy as you plunge downhill on a vertiginous rollercoaster. The word ‘thrill’ perhaps comes close. But not quite. As such, realising that all near-equivalents in English are imperfect, we gladly alight upon the French loanword. And as we do, our existence feels just a little richer and more nuanced.
Almost the entirety of Western literature fits neatly into just six story arcs, according to a new data-mining study.
From the panoply of novels that Western society has produced, distinct narrative patterns emerge, and many attempts have been made to pin down the shape of a story and categorize a protagonist’s journey. French writer Georges Polti claimed there are 36 different types of dramatic situations, while others have counted seven narrative arcs, or 20.
But new research from the University of Vermont utilizing data-mining techniques suggests that the majority of the Western canon falls into one of six basic categories.

A Story’s Path
Researchers from the Computational Story Lab looked at over 1,700 books from Project Gutenberg for their study, winnowing out books such as dictionaries or those with fewer than 150 downloads. They analyzed the content of each book by taking samples of text, what they called “windows”, from throughout the story. They used the aptly named “hedonometer”, also developed by the Computational Story Lab, to compile a list of over 10,000 words and rate them on a spectrum of positive to negative using Amazon’s Mechanical Turk service. They published their results last month on arXiv.org.
Adding up these windows over the course of a whole book produced graphs of characters’ fortunes — the highs and lows — throughout a given novel, and generated a broad visualization of the arc the story takes. According to the researchers, these are the six story arcs that appear time and time again in Western literature: “Rags to riches” (the story gets better over time); “Man in a hole” (fortunes fall, but the protagonist bounces back); “Cinderella” (an initial rise in good fortune, followed by a setback, but a happy ending); “Tragedy”, or “riches to rags” (things only get worse); “Oedipus” (bad luck, followed by promise, ending in a final fall); and “Icarus” (opens with good fortune, but doomed to fail).
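The windowed-scoring idea described above can be sketched in a few lines of Python. This is a toy illustration only, not the researchers’ code: the word scores below are invented for the example, whereas the real hedonometer draws on a crowd-rated list of more than 10,000 words.

```python
# Hypothetical happiness scores on a 1 (negative) to 9 (positive) scale.
# These values are made up for illustration.
HAPPINESS = {
    "joy": 8.2, "love": 8.4, "home": 7.0, "hope": 7.5,
    "loss": 2.8, "death": 1.5, "fear": 2.5, "alone": 3.0,
}

def window_scores(words, window_size):
    """Slide a fixed-size window over the text and average the
    happiness of the scored words inside each window."""
    scores = []
    for start in range(0, len(words) - window_size + 1, window_size):
        window = words[start:start + window_size]
        rated = [HAPPINESS[w] for w in window if w in HAPPINESS]
        if rated:
            scores.append(sum(rated) / len(rated))
    return scores

# A tiny "man in a hole" text: fortunes fall, then recover.
text = "joy love home fear loss death alone fear hope love joy home"
arc = window_scores(text.split(), window_size=4)
print(arc)  # falls, then recovers
```

Plotting such per-window averages across a whole novel is what yields the arc shapes the study describes; the six categories emerge when many of these curves are clustered by shape.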
Scientists working on animal cognition often dwell on their desire to talk to the animals. Oddly enough, this particular desire must have passed me by, because I have never felt it. I am not waiting to hear what my animals have to say about themselves, taking the rather Wittgensteinian position that their message might not be all that enlightening. Even with respect to my fellow humans, I am dubious that language tells us what is going on in their heads. I am surrounded by colleagues who study members of our species by presenting them with questionnaires. They trust the answers they receive and have ways, they assure me, of checking their veracity. But who says that what people say about themselves reveals actual emotions and motivations?
This might be true for simple attitudes free from moralisations (‘What is your favourite music?’), but it seems almost pointless to ask people about their love life, eating habits, or treatment of others (‘Are you pleasant to work with?’). It is far too easy to invent post-hoc reasons for one’s behaviour, to be silent about one’s sexual habits, to downplay excessive eating or drinking, or to present oneself as more admirable than one really is.
No one is going to admit to murderous thoughts, stinginess or being a jerk. People lie all the time, so why would they stop in front of a psychologist who writes down everything they say? In one study, female college students reported more sex partners when they were hooked up to a fake lie-detector machine, demonstrating that they had been lying when interviewed without the lie-detector. I am in fact relieved to work with subjects that don’t talk. I don’t need to worry about the truth of their utterances. Instead of asking them how often they engage in sex, I just count the occasions. I am perfectly happy being an animal watcher.
David Chalmers, who coined the phrase “Hard Problem of consciousness,” is arguably the leading modern advocate for the possibility that physical reality needs to be augmented by some kind of additional ingredient in order to explain consciousness—in particular, to account for the kinds of inner mental experience pinpointed by the Hard Problem. One of his favorite tools has been yet another thought experiment: the philosophical zombie.
Unlike undead zombies, which seek out brains and generate movie franchises, philosophical zombies look and behave exactly like ordinary human beings. Indeed, they are perfectly physically identical to non‐zombie people. The difference is that they are lacking in any inner mental experience. We can ask, and be puzzled about, what it is like to be a bat, or another person. But by definition, there is no “what it is like” to be a zombie. Zombies don’t experience.