A cowboy walks into a saloon. He removes his dusty hat, orders a whiskey, and sinks wearily onto a stool. He downs the whiskey, looks around, and notices that an attractive woman has joined him at the bar. She looks him over and asks, “Are you a real cowboy?” The cowboy pauses to consider the question. He orders another whiskey. “Well,” he says, “I wake at dawn, climb into a saddle, and herd cattle all day. I eat by a campfire and pitch my bedroll under the stars. Yep, I reckon I am a cowboy.” He tosses back the second whiskey and reciprocates: “You a cowgirl?”
“Oh, no,” the woman replies, “I’m a lesbian.” The cowboy looks puzzled. “How d’ya reckon?” he asks. “Well, I wake up in the morning thinking about girls. I think about ‘em all day long. Then at night, I dream about girls.” The cowboy ponders this revelation in silence. The situation grows awkward. He pays for his drinks, mumbles a goodbye, and heads for the door. Unhitching his horse outside, the cowboy is approached by a tourist. “You really a cowboy?” the tourist asks. “I thought I was,” replies the cowboy, “Turns out I’m a lesbian.”
Our cowboy’s grasp of the concept “lesbian” is a bit shaky. If we set that aside, though, we have a story about someone learning a new concept, realizing that it applies to him, and in the process, discovering something important about himself.
Such discoveries happen, and they can be transformational. The right concept can connect a person to a group, project or cause larger than himself — and thereby afford him (or her) a sense of purpose, belonging, and identity. Did our cowboy find his true self in the community of lesbians? Probably not. A similar epiphany, though, could have resulted in a profoundly meaningful discovery. “The secret to happiness,” writes philosopher Daniel Dennett, “is to find something more important than you are and dedicate your life to it.”
I agree with that statement. So do the vast majority of today’s scientists; neurology and psychology journals increasingly define free will as “an illusion… a figment of our imagination.”
In his 1932 “My Credo,” Albert Einstein wrote, “I do not believe in free will.” In the best-seller Free Will, Sam Harris declares the notion “incoherent.” Neuro-philosopher Garrett Merriam opines in an IEET interview: “the notion of ‘free will’… [is a] useless concept… I have high hopes that neuroscience will… eliminate [it]…”
We don’t have free will because human physiology isn’t wired that way. In 1983 Benjamin Libet published research in Brain proving that our motor cortex initiates action before the “I” is informed about it. Gary Weber, PhD, agrees: “The research is conclusive; the brain determines what you will do, well before you are aware that you will do it. What does your ‘free will’ mean? We no more initiate events ‘consciously’ than we cause our hearts to beat, or our stomachs to digest our lunch.”
A couple of years ago, I got to fly in the ultra-luxurious business class of an especially high-end airline; and now all lesser air travel – which means all other air travel, basically – is ruined for me forever. I’m not expecting an outpouring of sympathy for my plight. But I did feel a flicker of vindication when I read, via Scientific American, about a new study on the psychology of restaurant diners: serve them a really delicious appetizer followed by a mediocre main course, it seems, and they’ll rate the main course much more negatively than if it had been preceded by something equally mediocre.
The researchers – whose results were published in the appropriately titled journal Food Quality and Preference – gave participants a boring pasta dish, preceded by an appetizer of bruschetta, made either with excellent fresh ingredients, or uninspiring dried ones. The resulting difference in their assessments of the pasta illustrates a phenomenon known as “hedonic contrast”, and it’s a familiar one to food psychologists and restaurateurs alike: what counts as tasty depends on what came before. If you’re planning to dine at Olive Garden, don’t pop into Nobu for a quick amuse-bouche first.
Unbeknownst to many people, our emotions, cognition, behavior, and mental health are influenced by a large number of entities that reside in our bodies while pursuing their own interests, which need not coincide with ours. Such selfish entities include microbes, viruses, foreign human cells, and imprinted genes regulated by viruslike elements. This article provides a broad overview, aimed at a wide readership, of the consequences of our coexistence with these entities. Its aim is to show that we are not unitary individuals in control of ourselves but rather “holobionts” or superorganisms—meant here as collections of human and nonhuman elements that are to varying degrees integrated and, in an incessant struggle, jointly define who we are.
Since I first read it in a high school Spanish class, I’ve been fascinated by the theory of language implicit in Borges’s “The Library of Babel.” The story describes a universal library containing, in 410-page volumes, every possible permutation of twenty-two letters, spaces, commas, and periods—every book that’s ever been written and every book that ever could be, drowned out by endless pages of gibberish. Its librarians are addicted to the search for certain master texts, the complete catalog of the library, or the future history of one’s own life, but their quest inevitably ends in failure, despair, even suicide.
Perhaps I was obsessed by the same desire for revelation, or haunted by the same subversion of all rational pursuit. In either case, fifteen years later the idea came to me one night of using the vast calculative capacities of a computer to re-create the Library of Babel as a Web site. For those interested in experiencing the futile hope of Borges’s bibliotecarios, I’ve made libraryofbabel.info, which now contains anything we ever have written or ever will write, including these sentences I struggle to compose now.
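Borges’s combinatorial conceit is easy to sketch in code: treat each page as a number written in a base equal to the alphabet size, so that every possible page has exactly one address and every address yields exactly one page. The alphabet and page length below are illustrative choices for a toy demo, not the encoding scheme actually used by libraryofbabel.info.

```python
# Sketch: enumerate "pages" of a Borges-style library by treating each
# page as a fixed-length numeral in base 25 (22 letters plus space,
# comma, and period). Illustrative only; the real site uses 410-page
# volumes and its own addressing scheme.

ALPHABET = "abcdefghijklmnopqrstuv ,."  # 22 letters + space, comma, period
BASE = len(ALPHABET)                    # 25 symbols
PAGE_LENGTH = 40                        # a drastically shortened "page"

def index_to_page(n: int, length: int = PAGE_LENGTH) -> str:
    """Map a non-negative integer to a unique fixed-length page."""
    chars = []
    for _ in range(length):
        n, digit = divmod(n, BASE)      # peel off base-25 digits
        chars.append(ALPHABET[digit])
    return "".join(chars)

def page_to_index(page: str) -> int:
    """Invert the mapping: recover a page's unique address."""
    n = 0
    for ch in reversed(page):
        n = n * BASE + ALPHABET.index(ch)
    return n
```

Every string over the alphabet appears at some address below `BASE ** PAGE_LENGTH`, which is the sense in which such a library contains “every book that ever could be” — almost all of it gibberish.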
What do biologists want? If, unlike their counterparts in physics, biologists are generally wary of a grand, overarching theory, at what kinds of explanation do biologists aim? How will we know when we have “made sense” of life? Such questions, Evelyn Fox Keller suggests, offer no simple answers. Explanations in the biological sciences are typically provisional and partial, judged by criteria as heterogeneous as their subject matter. It is Keller’s aim in this bold and challenging book to account for this epistemological diversity—particularly in the discipline of developmental biology.
In particular, Keller asks, what counts as an “explanation” of biological development in individual organisms? Her inquiry ranges from physical and mathematical models to more familiar explanatory metaphors to the dramatic contributions of recent technological developments, especially in imaging, recombinant DNA, and computer modeling and simulations.
A history of the diverse and changing nature of biological explanation in a particularly charged field, Making Sense of Life draws our attention to the temporal, disciplinary, and cultural components of what biologists mean, and what they understand, when they propose to explain life.
A warm welcome to the latest Digest post, dear reader. You won’t find it hard work – my editor made some small changes, eliminating any sour notes to ensure a light read.
Did you notice how the metaphor phrases scattered through my previous sentences each relate to a sense – touch, sight, taste? This is common to many popular phrases, and to understand why, a new article draws on a combination of the Google Books dataset and a series of lab experiments. The research reveals that sensory metaphors owe their cultural success to the fact that we find sensory information easier to process and recall.
The first study, by Ezgi Akpinar and Jonah Berger, identified a set of 32 sensory metaphors in the adjective + noun structure I used above, each matched to three non-metaphorical equivalent phrases (e.g. "warm welcome" versus "friendly welcome", "kind welcome", "sincere welcome"). The researchers used Google Books’ frequency data on 5 million books to track the popularity of all these phrases since 1800, finding that sensory metaphors enjoyed a steeper rise in popularity than their non-metaphorical equivalents.
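The trend comparison at the heart of this design can be sketched as a least-squares slope fitted to each phrase’s per-decade frequency. The counts below are invented for illustration; the study used actual Google Books Ngram data from 1800 onward.

```python
# Toy version of the frequency-trend comparison: fit a line to each
# phrase's per-decade counts and compare slopes. Counts are made up.
decades = list(range(1800, 1900, 10))
freq = {
    "warm welcome":     [1, 2, 4, 5, 8, 11, 13, 16, 20, 23],
    "friendly welcome": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
}

def slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

for phrase, counts in freq.items():
    print(phrase, round(slope(decades, counts), 3))
```

A steeper slope for the sensory metaphor than for its literal equivalent is the pattern Akpinar and Berger report across their 32 phrase sets.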
To explore why, the researchers gave a subset of the metaphorical phrases, together with their non-metaphorical equivalents, to 229 students. The students then rated each phrase on two criteria: how strongly did it relate to the senses? And did it have many or few associations with other ideas? After a filler task, the students tried to recall the phrases; they remembered only 18 per cent of the non-metaphorical phrases but 28 per cent of the sensory metaphors, which also received higher ratings for sensory quality and associations. The higher its ratings, the better a phrase was remembered and, critically, the steeper the rise in that phrase’s popularity in the Google Books dataset.
This fits Akpinar and Berger’s hypothesis that cultural success stories are in debt to their psychological convenience. In their account, we will favour well-remembered concepts and phrases: those that are processed more automatically – as sensory knowledge is known to be – and that come to mind more easily. Sensory metaphors can be triggered by real-world phenomena: for example, bright future from the sight of a morning, torch or star. These small gains in popularity then snowball, as we lean on better-known phrases rather than the obscure, so that our listeners can understand us.
One note of caution is that the researchers may simply be backing winners, as sensory metaphors that were true failures would be unknown, and wouldn’t make it into their set of stimuli. To address this, the next study included each and every sensory metaphor found in the corpus of Jane Austen – 226 in all – including such gems as "clamorous happiness" and "delicious harmony", to see what characterised the phrases that succeeded versus those that did not. One hundred and thirty-five students studied, rated and recalled these metaphors, and those that enjoyed a rise to popularity in the Google dataset were, again, those rated higher in sensory quality and associations, and more easily recalled by participants.
It’s great to see research leveraging big cultural data and marrying it with experimental technique. “We study how the senses shape language,” the authors begin their general discussion, and given the clear evidence they present, it’s hard to disagree.
Metaphor is not the sole preserve of Shakespearean scholarship or high literary endeavour but has governed how we think about and describe our daily lives for centuries, according to researchers at Glasgow University.
Experts have now created the world’s first online Metaphor Map, which contains more than 14,000 metaphorical connections sourced from 4m pieces of lexical data, some of which date back to 700AD.
While it is impossible to pinpoint the oldest use of metaphor in English, because some may have been adopted from earlier languages such as Germanic, the map reveals that the still popular link between sheep and timidity dates back to Old English. Likewise, we do not always recognise modern use of metaphor: for example, the word “comprehend” comes from Latin, where it meant to physically grasp an object.
The three-year-long project to map the use of metaphor across the entire history of the English language, undertaken by researchers at the School of Critical Studies, was based on data contained in the Historical Thesaurus of English, which spans 13 centuries.
Dr Wendy Anderson, the project’s principal investigator, said that the findings supported the view that metaphor is pervasive in language and is also a major mechanism of meaning-change.
It has been well established that people have a “bias blind spot,” meaning that they are less likely to detect bias in themselves than others. However, it hasn’t been clear how blind we are to our own actual degree of bias, and how many of us think we are less biased than others.
Researchers have developed a tool to measure the bias blind spot, and their findings reveal that believing you are less biased than your peers has detrimental consequences for judgments and behaviors, such as impairing the ability to judge accurately whether advice is useful.
“When physicians receive gifts from pharmaceutical companies, they may claim that the gifts do not affect their decisions about what medicine to prescribe because they have no memory of the gifts biasing their prescriptions.
“However, if you ask them whether a gift might unconsciously bias the decisions of other physicians, most will agree that other physicians are unconsciously biased by the gifts, while continuing to believe that their own decisions are not. This disparity is the bias blind spot, and occurs for everyone, for many different types of judgments and decisions,” says Erin McCormick, an author of the study and PhD student in behavioral decision research at Carnegie Mellon University.
IN SEEING THINGS AS THEY ARE, John Searle turns his attention to perception — visual perception, to be precise. Perception is both the basic way that minds connect with configurations of objects and attributes in a local environment, and an epicenter for sensory feeling and experience. That is, perception is a site of both representation and phenomenology. And since the capacities for representation and phenomenology have long been taken by philosophers to be characteristic marks of the mental, philosophical questions about perception provide a window into philosophical questions about minds more generally.
When it comes to the long tradition of thinking and writing about perception, Searle takes the situation to be rather bleak. He believes that the entirety of philosophical work on perception since Descartes has been bewitched by what he calls “the Bad Argument” and, as a consequence, is unnecessary and incoherent. Yet Searle wants to not just bury philosophical theories of perception but also praise them. In particular, he believes that once the bad argument is identified and diagnosed, nothing will prevent us from endorsing a form of direct realism about perception, of the sort Searle himself developed in his 1983 classic Intentionality. According to this form of direct realism, we do not perceive external objects by way of first perceiving intermediate ideas, impressions, or sense-data; instead, perception serves to provide us with immediate presentations of external objects and attributes themselves. In short, our perceptual capacities enable us to see things as they are in the local environments in which we find ourselves, and this fact should serve as the backbone, rather than an optional add-on, to philosophical reflection on minds and their epistemological condition.
Editor's note: The structure of the living cell is defined by the difference between what’s inside and what’s not. Biologists have taken great pains over the years to document the minute workings of the openings in cell membranes that allow hydrogen, sodium, calcium and other ions to make their way inside across the barrier that envelops the cell and its contents.
Five scholars of the brain have built upon these observations to suggest that these activities may provide a foundation for a badly needed theory to understand consciousness and some of the cognitive processes that underlie it. They contend that when animal cells open and close themselves to the outside world, these actions can be construed as more than just responses to external stimuli. In fact, they constitute the basis for perception, cognition and movement in the animal kingdom—and may underlie consciousness itself.
The five authors are NYU neurology professor Oliver Sacks; Antonio Damasio and Gil B. Carvalho from the University of Southern California; Norman D. Cook from Kansai University in Osaka, Japan; and Harry T. Hunt from Brock University in Ontario. They have framed their ideas in the form of an open letter to Christof Koch, president of the Allen Institute for Brain Science, Scientific American MIND columnist (Consciousness Redux), and member of Scientific American’s board of advisers. Read what the five have to say and then continue to Koch’s reply.
Cases like this one have long puzzled philosophers. In everyday speech, it seems perfectly correct to say that a corporation can “intend,” “know,” “believe,” “want” or “decide.” Yet, when we begin thinking the matter over from a more theoretical standpoint, it may seem that there is something deeply puzzling here. What could people possibly mean when they talk about corporations in this way?
One possible approach would be to try to dismiss this whole issue as merely a misleading figure of speech. Sure, people sometimes describe a corporation using words like “decides” or “knows,” but they don’t necessarily mean this literally. Maybe all they really mean to say is that it is able to take in certain information and then use that information to adjust its plans and policies.
Then again, maybe the way people talk about corporations is getting at something more fundamental. One of our most basic psychological capacities is our ability to think about things as having mental states, such as intentions and beliefs. Researchers refer to this capacity as “theory of mind.” Our capacity for theory of mind appears to be such a fundamental aspect of our way of understanding the world that we apply it even to completely inanimate entities.
The immense success of writers such as Richard David Precht, festivals of ideas, and philosophy magazines has made thinking hip again. But is this legitimate philosophy, or more of a lifestyle trend?
As a European cultural center, Cologne is used to being overrun. During Carnival the city doubles in population and a bevy of landmark festivals, fairs and fiestas hosted in the western German city cater to interest groups of every stripe. While the attendees of the third phil.Cologne, which opened this week and runs until June 3, may not be sporting striking costumes, their numbers are impressive. Organizers expect 10,000 visitors to attend the festival where people come to listen to intellectual discourse.
The public image of philosophy had long been in crisis. The last philosophical schools to prove a social sensation were Existentialism and the Frankfurt School, both originating in the 1940s. After a brief public explosion during the student protests of the 1960s, the discipline of thinking withdrew once again to its ivory tower. At the end of the 1960s, the German news weekly "Der Spiegel" asked: "What is philosophy today?"
From the ivory tower to the masses
In recent years there has been a noticeable paradigm shift. Videos from the international "ideas lectures" such as the TED Talks are certified YouTube hits and frequently go viral on social networks, alongside the flood of cat videos. Philosophy in 2015 has little to do with the hermit-like tendencies of Martin Heidegger - today it's more about ideas for everyday use rather than esoteric evaluations and complex concepts.
When was the last time you read a snappy op ed by another philosopher surgically dismantling a topical issue or slamming a public figure for shoddy reasoning?
Yeah, I can’t remember either.
This is not to say you aren’t writing occasional missives about the “real world”. But you’re largely doing so in specialist journals or on personal blogs. You’re effectively writing to each other and a tiny self-selected choir rather than to the public at large.
Yet, if philosophy has a practical value (and I feel strongly it has many), it is as a force for clarity. Philosophy can carve the world into thinkable chunks, it can clarify concepts, highlight assumptions and presuppositions that lurk beneath our notice, and tease out sloppy thinking where it taints our reasoning.
In a world informed by popular media brimming with partisan pundits and professional opinionators, philosophy can offer a breath of reason that can bring genuine progress to stalled debates.
Yet philosophers are largely absent from the great popular debates of our times. Issues such as inequality, the future of work, our relationship to nature or religion or technology, understanding cultural identity and ideological conflict, or even just understanding our values and ensuring they actually direct our behaviour, all these are fundamentally philosophical issues.
In less than two years Slack Technologies has become one of the most glistening of tech’s ten-digit “unicorn” startups, boasting 1.1 million users and a private market valuation of $2.8 billion. If you’ve used Slack’s team-based messaging software, you know that one of its catchiest innovations is Slackbot, a helpful little avatar that pops up periodically to provide tips so jaunty that it seems human.
Such creativity can’t be programmed. Instead, much of it is minted by one of Slack’s 180 employees, Anna Pickard, the 38-year-old editorial director. She earned a theater degree from Britain’s Manchester Metropolitan University before discovering that she hated the constant snubs of auditions that didn’t work out. After dabbling in blogging, videogame writing and cat impersonations, she found her way into tech, where she cooks up zany replies to users who type in “I love you, Slackbot.” It’s her mission, Pickard explains, “to provide users with extra bits of surprise and delight.” The pay is good; the stock options, even better.
What kind of boss hires a thwarted actress for a business-to-business software startup? Stewart Butterfield, Slack’s 42-year-old cofounder and CEO, whose estimated double-digit stake in the company could be worth $300 million or more. He’s the proud holder of an undergraduate degree in philosophy from Canada’s University of Victoria and a master’s degree from Cambridge in philosophy and the history of science.
“Studying philosophy taught me two things,” says Butterfield, sitting in his office in San Francisco’s South of Market district, a neighborhood almost entirely dedicated to the cult of coding. “I learned how to write really clearly. I learned how to follow an argument all the way down, which is invaluable in running meetings. And when I studied the history of science, I learned about the ways that everyone believes something is true–like the old notion of some kind of ether in the air propagating gravitational forces–until they realized that it wasn’t true.”
I teach an undergraduate class on Nietzsche, a philosopher who has a reputation for captivating young minds. After one class, a student came to see me. There was something bothering her. "Is it OK to be changed by reading a philosopher?" she asked. "I mean, do you get inspired by Nietzsche? Do you use him in your life?"
You have to be careful about questions like that, and not only because the number of murderers claiming Nietzsche as their inspiration is higher than I would like. What the student usually means is: "Nietzsche mocks careful scholarship: Can I, in his spirit, write my paper however the hell I want and still get a good grade?" In this case, though, the student knew perfectly well how to write a scholarly paper. She wanted to do something else too: Be Nietzschean!
Here’s my line, for what it’s worth: You can do whatever you want in life — take inspiration from The Smurfs, for all I care — but I’m here to teach you how to read a philosopher, slowly and carefully, which is not an easy thing to do. If you want to be inspired by Nietzsche, you have to read him precisely, to make sure that it is Nietzsche who inspires you, not a preconception or a misappropriation or a scholarly reading, mine or anybody else’s, which is vulnerable to the interpreter’s peculiar agenda or the fashions of the hour. And what if, when you read him carefully, you find that he actually wrote things you think are false, wrong-headed, racist or sexist? Don’t choose between inspiration and careful scholarship, I say: Choose both.
It can be strangely appealing to be very down about the future of humanity. When we think of the future, apocalyptic scenarios come so naturally: flooded cities, energy crises, civil wars. We’re afraid of being naive. After all, we were promised jetpacks and won’t be so easily taken in again.
We like to tell ourselves we’re too intelligent to be excited by the future. But there’s another possible explanation. We’re oddly a bit resentful: it can be painful to imagine all the things we’re going to miss out on.
Imagine going back in time to 14th-century Europe and confronting people with the solutions which – a few centuries later – became readily available.
One might meet a woman whose six-year-old daughter has just died of scarlet fever. She’d be hysterical with grief. One would then explain that in 744 years’ time, there would be (on the very spot where she was standing) a chemist where – for £2.50 – she could have got the antibiotics that would have saved her beloved child.
Or imagine describing Heathrow Airport and the Boeing Dreamliner to a man who had starved, suffered seasickness and been taken prisoner on a three-year-round pilgrimage from Berkshire to Jerusalem.
Or telling a guy with toothache about the anesthetics which he wouldn’t be able to get for another 623 years.
Or confronting Jamshīd al-Kāshī – a 15th-century mathematician who spent years of his life doing vastly complicated calculations to work out the ratio between the diameter and circumference of a circle – with the fact that his sums would one day be do-able with a small plastic box picked up next to the express checkout at the supermarket.
We could imagine such people coming to deeply resent news about the future. Instead of seeming inevitable, our miseries can come to appear like cruel, temporary accidents. We stand to realise that in the broad span of history, people don’t have to suffer many of the things we’re going through – we just happen to be condemned to them because we’ve been born at the wrong time.
I’m pleased to be able to welcome readers to this Living Book titled Biosemiotics: Nature/Culture/Science/Semiosis. Biosemiotics – as its name suggests – is committed to science-humanities interdisciplinarity. As readers of these Living Books will doubtless know, this kind of interdisciplinarity is no mean task, but we have come a long way since C. P. Snow complained that humanities scholars knew nothing of the Second Law of Thermodynamics (Snow, 1998: 15). The sciences of modernity developed their methodological strengths and practical successes on the basis of ‘objective’ observation and measurement, drawing on forms of description (preferentially mathematical models) as far removed as possible (which may not be that far (Pimm, 1981: 47-50; Manin, 2007; Lakoff & Núñez, 2000)) from the poetic, metaphor-rich and intersubjective language and the hermeneutical assumptions of the humanities. Although natural and cultural evolution (and, in the latter, the arts and humanities and the sciences) equally depend on continuities as well as what Thomas Kuhn called ‘revolutionary’ alterations, in the end both the practice of science and judgments concerning radical revisions of theory belong (as Kuhn noted in his 1969 ‘Postscript’) to the relevant scientific community (Kuhn, 1996).
A programme to teach young children the basics of philosophical thinking in UK schools has been shown to help them progress in maths and reading. A new study evaluated the use of the Philosophy for Children (P4C) programme in which primary school children are guided through discussions of questions such as “Should a healthy heart be donated to a person who has not looked after themselves?” or “Is it acceptable for people to wear their religious symbols at work places?” The programme is intended to help children become more willing and able to question, reason, construct arguments and collaborate.
A randomised controlled trial in 48 primary schools compared more than 1,500 pupils who took philosophy lessons over the course of a year with a further 1,500 who didn’t, but then took the lessons the following year. The children who had the philosophy lessons first improved their maths and reading by around an extra two months of progress compared to the children who weren’t yet taking part. And the poorest children made the most progress of all.
You ask advice: ah, what a very human and very dangerous thing to do! For to give advice to a man who asks what to do with his life implies something very close to egomania. To presume to point a man to the right and ultimate goal — to point with a trembling finger in the RIGHT direction is something only a fool would take upon himself.
I am not a fool, but I respect your sincerity in asking my advice. I ask you though, in listening to what I say, to remember that all advice can only be a product of the man who gives it. What is truth to one may be disaster to another. I do not see life through your eyes, nor you through mine. If I were to attempt to give you specific advice, it would be too much like the blind leading the blind.
“To be, or not to be: that is the question: Whether ’tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles … ” (Shakespeare)
And indeed, that IS the question: whether to float with the tide, or to swim for a goal. It is a choice we must all make consciously or unconsciously at one time in our lives. So few people understand this! Think of any decision you’ve ever made which had a bearing on your future: I may be wrong, but I don’t see how it could have been anything but a choice however indirect — between the two things I’ve mentioned: the floating or the swimming.
But why not float if you have no goal? That is another question. It is unquestionably better to enjoy the floating than to swim in uncertainty. So how does a man find a goal? Not a castle in the stars, but a real and tangible thing. How can a man be sure he’s not after the “big rock candy mountain,” the enticing sugar-candy goal that has little taste and no substance?
Pixar’s new animated feature, Inside Out, is based on a rather straightforward premise: emotions situated in 11-year-old Riley’s brain control her behaviours, help organise her memories and, overall, seek to maintain her well-being.
Which emotions are these? Joy, Sadness, Fear, Anger and Disgust. A few minutes into the film it becomes clear that Joy has the power. The other emotions defer to her, and increasingly so, particularly as Riley’s life is disrupted by a cross-country move. The sole goal appears to be: keep Riley happy!
At the outset of the film, it’s unclear why the negative emotions are even there (aside from comic relief and story arc). Why not just have Joy up there, holding the reins?
Given that lead emotion researcher Dacher Keltner advised the film team, it’s perhaps not surprising that a diversity of emotions was represented.
We are thus introduced to a new principle of relativity, which holds that all observers are not led by the same physical evidence to the same picture of the universe, unless their linguistic backgrounds are similar, or can in some way be calibrated. …The relativity of all conceptual systems, ours included, and their dependence upon language stand revealed (1956, p. 214f, italics added).
We dissect nature along lines laid down by our native languages. The categories and types that we isolate from the world of phenomena we do not find there because they stare every observer in the face; on the contrary, the world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds--and this means largely by the linguistic systems in our minds (p. 213).
…no individual is free to describe nature with absolute impartiality but is constrained to certain modes of interpretation even while he thinks himself most free (p. 214).
Here is an outline of the argument for the urgent need to bring about a revolution in the aims and methods of academic inquiry, its whole character and structure, so that it takes up its proper task of promoting wisdom rather than just acquiring knowledge.
Academia as it exists today is the product of two past great intellectual revolutions.
The first is the scientific revolution of the 16th and 17th centuries, associated with Galileo, Kepler, Descartes, Boyle, Newton and many others, which in effect created modern science. A method was discovered for the progressive acquisition of knowledge, the famous empirical method of science.
The second revolution is that of the Enlightenment, especially the French Enlightenment, in the 18th century. Voltaire, Diderot, Condorcet and the other philosophes had the profoundly important idea that it might be possible to learn from scientific progress how to achieve social progress towards an enlightened world.
They did not just have the idea: they did everything they could to put the idea into practice in their lives. They fought dictatorial power, superstition, and injustice with weapons no more lethal than those of argument and wit. They gave their support to the virtues of tolerance, openness to doubt, readiness to learn from criticism and from experience. Courageously and energetically they laboured to promote reason and enlightenment in personal and social life.
The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) in Geneva is the world’s largest and most powerful particle accelerator and possibly the most complex scientific instrument ever built. The LHC has just begun its second three-year run, at the record-breaking collision energy of 13 TeV. About 10,000 scientists from 60 countries will search for new phenomena beyond the Standard Model of particle physics, in pursuit of a simple, beautiful and all-encompassing theory of nature. The sheer complexity of the LHC experiments, and of LHC data acquisition and processing, poses tremendous challenges and affects the way knowledge is acquired. Since any analysis and interpretation of LHC data involves theoretical models and computer simulations, can one still consider such experimental results indisputable facts of nature? And given the strong theoretical bias in data selection, can the LHC explore unknown territory, or are we limited to searching for the “known unknowns”? And what if we look in the wrong places? Maybe our idea of a simple and yet all-encompassing theory is flawed, and new physics could show up in some unexpected way? Could one not perform searches for new physics at the LHC in a model-independent way?