Scientists are using hypnosis to make healthy people delusional. To find out why, David Robson discovered what it’s like to lose control of his mind
I am lying on my back, trapped in a gleaming white tunnel, the surface barely six inches from my nose. There is a strange mechanical rumbling in the background, and I hear footsteps padding around the room beyond. In my mounting claustrophobia, I ask myself why I am here – but there is no way out now. A few moments later, the light dims, and as the man speaks, my thoughts begin to fade.
“The engineer has developed a way of taking control of your thoughts from the inside. He does this because he is fascinated by mind control, and wants to apply the most direct method of controlling your thoughts. He is doing this to advance his research into mind control. You will soon be aware of the engineer inserting his thoughts.”
A strange serenity descends as I realise that soon, my will won’t be my own. Then the experiment begins. I am about to be possessed.
The man who will soon take control of my thoughts is Eamonn Walsh, a psychologist who uses hypnosis to investigate psychosis at the Institute of Psychiatry in London. The idea is to turn healthy subjects into ‘virtual patients’ suffering full-on delusions, such as being possessed by a paranormal entity, allowing the scientists to understand the underlying illness in a new way and potentially to find treatments.
The scientists are understandably keen to distance themselves from stage hypnotists. “It’s not flaky, it’s not for entertainment – we’ve got carefully specific research goals,” says Mitul Mehta, who collaborates with Walsh on these studies. It’s a bold idea, but can it possibly work? And what does it feel like to lose total control of your mind?
Ever wondered what that chair you saw at your friend’s place was called? Ever wanted to know all about dance, and looked for its compact visual summary?
A team of scientists from the University of Washington and the Allen Institute for Artificial Intelligence is developing a program that teaches itself all there is to know about a concept and presents the findings in pictures and phrases. The program is called Learning Everything About Anything, or LEVAN.
Try LEVAN! It is a fully-automated system that learns everything visual about any concept, by processing lots of books and images on the web. It acts as a visual encyclopedia for you, helping you explore and understand any topic that you are curious about, in great detail.
The activities of users of Twitter and other social media services were recorded and analysed as part of a major project funded by the US military, in a program that covers ground similar to Facebook’s controversial experiment into how to control emotions by manipulating news feeds.
Research funded directly or indirectly by the US Department of Defense’s military research department, known as Darpa, has involved users of some of the internet’s largest destinations, including Facebook, Twitter, Pinterest and Kickstarter, for studies of social connections and how messages spread.
While some elements of the multi-million dollar project might raise a wry smile – research has included analysis of the tweets of celebrities such as Lady Gaga and Justin Bieber, in an attempt to understand influence on Twitter – others have resulted in the buildup of massive datasets of tweets and other types of social media posts.
Several of the DoD-funded studies went further than merely monitoring what users were communicating on their own, instead messaging unwitting participants in order to track and study how they responded.
The COLING conference, organised under the auspices of the International Committee on Computational Linguistics (ICCL), is held every two years and regularly attracts more than 700 delegates. The first conference was held in New York in 1965; since then, COLING has developed into one of the premier Natural Language Processing conferences worldwide. The last five conferences were held in Geneva (COLING 2004), Sydney (COLING-ACL 2006), Manchester (COLING 2008), Beijing (COLING 2010) and Mumbai (COLING 2012).
Disasters lead to devastating structural damage not only to buildings and transport infrastructure, but also to other critical infrastructure, such as the power grid and communication backbones. Following such an event, the availability of minimal communication services is nevertheless crucial to allow efficient and coordinated disaster response, to enable timely public information, and to provide individuals in need with a default mechanism to post emergency messages. The Internet of Things consists of the massive deployment of heterogeneous devices, most of them battery-powered and interconnected via wireless network interfaces. Typical IoT communication architectures enable such IoT devices not only to connect to the communication backbone (i.e. the Internet) using an infrastructure-based wireless network paradigm, but also to communicate with one another autonomously, without the help of any infrastructure, using a spontaneous wireless network paradigm. In this paper, we argue that the vast deployment of IoT-enabled devices could bring benefits in terms of data network resilience in the face of disaster. Leveraging their spontaneous wireless networking capabilities, IoT devices could enable minimal communication services (e.g. emergency micro-message delivery) while the conventional communication infrastructure is out of service. We identify the main challenges that must be addressed in order to realize this potential in practice. These challenges concern various technical aspects, including physical connectivity requirements, network protocol stack enhancements, data traffic prioritization schemes, as well as social and political aspects.
This paper presents an agent-based artificial cryptocurrency market in which heterogeneous agents buy or sell cryptocurrencies, in particular Bitcoins. In this market, there are two types of agents, Random Traders and Chartists, which interact with each other by trading Bitcoins. Each agent is initially endowed with a finite amount of crypto and/or fiat cash and issues buy and sell orders according to her strategy and resources. The number of Bitcoins increases over time at a rate proportional to the real one, even though the mining process is not explicitly modelled.
The model proposed is able to reproduce some of the real statistical properties of the price absolute returns observed in the Bitcoin real market. In particular, it is able to reproduce the autocorrelation of the absolute returns, and their cumulative distribution function. The simulator has been implemented using object-oriented technology, and could be considered a valid starting point to study and analyse the cryptocurrency market and its future evolutions.
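The flavor of such a model can be sketched in a few lines. This is a deliberately minimal illustration, not the authors' simulator: two agent types set a net demand and the price moves with excess demand; all names and parameters (population sizes, price-impact factor, chartist window) are invented for the example.

```python
import random

def simulate_market(steps=200, n_random=50, n_chartist=50, seed=1):
    """Minimal agent-based price sketch: random traders issue unbiased
    buy/sell orders; chartists follow the recent price trend.
    All parameters are illustrative, not taken from the paper."""
    random.seed(seed)
    prices = [100.0]
    for _ in range(steps):
        # Random traders: net demand is a sum of coin flips.
        demand = sum(random.choice((-1, 1)) for _ in range(n_random))
        # Chartists: buy if the price rose over the last window, sell if it fell.
        if len(prices) > 5:
            trend = prices[-1] - prices[-6]
            demand += n_chartist * (1 if trend > 0 else -1 if trend < 0 else 0)
        # Price impact: the price moves proportionally to excess demand.
        prices.append(max(0.01, prices[-1] * (1 + 0.001 * demand)))
    return prices

prices = simulate_market()
# Absolute returns, the statistic whose autocorrelation the paper studies.
returns = [abs(prices[i + 1] - prices[i]) / prices[i] for i in range(len(prices) - 1)]
```

Even this toy version produces bursts of volatility when chartists pile onto a trend, which is the qualitative mechanism behind the fat-tailed absolute returns the abstract mentions.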
I don’t know about other disciplines, but academic writing in the humanities has become notorious for its jargon-laden wordiness, tangled constructions, and seemingly deliberate vagueness and obscurity. A popular demonstration of this comes via the University of Chicago’s academic sentence generator, which lets one plug in a number of stock phrases, verbs, and “-tion” words to produce corkers like “The reification of post-capitalist hegemony is always already participating in the engendering of print culture” or “The discourse of the gaze gestures toward the linguistic construction of the gendered body”. The point, of course, is that the language of academia has become so meaningless that randomly generated sentences closely resemble, and make as much sense as, those pulled from the average journal article (a point well made by the so-called “Sokal hoax”).
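The mechanics of such a generator are trivial to reproduce: pick one entry from each pool of stock phrases and glue them together. The sketch below is a hypothetical reconstruction in the spirit of the UChicago tool; its word pools are invented here, not copied from the site.

```python
import random

# Hypothetical stock-phrase pools in the spirit of the generator;
# the actual site's word lists are not reproduced here.
OPENERS = ["The reification of", "The discourse of", "The linguistic construction of"]
NOUNS = ["post-capitalist hegemony", "the gaze", "print culture", "the gendered body"]
VERBS = ["is always already participating in", "gestures toward", "recapitulates"]

def academic_sentence(rng=random):
    """Assemble one random 'academic' sentence from the stock pools."""
    return " ".join([rng.choice(OPENERS), rng.choice(NOUNS),
                     rng.choice(VERBS), rng.choice(NOUNS)]) + "."
```

That four-slot template is the entire trick: any combination parses grammatically, which is precisely why the output is indistinguishable from the prose it parodies.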
The personal information that your smart phone can collect about you is increasingly detailed. Apps can record your location, your level of exercise, the phone calls that you make and receive, the photographs that you take and who you share them with and so on. Various studies have shown that this data provides a detailed and comprehensive insight into an individual’s habits and lifestyle, information that advertisers and marketers dearly love to have.
Indeed, this information can be surprisingly useful. The Google Now smartphone app uses information such as your location to provide details it thinks you might find useful, such as directions home or nearby restaurants.
But this service isn’t entirely altruistic. Google knows perfectly well that it can use this information to sell adverts and other services.
Understanding the psychology behind the way we tick might help us to tick even better.
A great deal of research has been invested in the how and why behind our everyday actions and interactions. The results are revealing. If you are looking for a way to supercharge your personal development, understanding the psychology behind our actions is an essential first step.
Fortunately, knowing is half the battle. When you realize all the many ways in which our minds create perceptions, weigh decisions, and subconsciously operate, you can see the psychological advantages start to take shape. It’s like a backstage pass to the way we work, and being backstage, you have an even greater understanding of what it takes to succeed.
Recordings from the brain’s surface are giving scientists unprecedented views into how the brain controls speech.
Could a person who is paralyzed and unable to speak, like physicist Stephen Hawking, use a brain implant to carry on a conversation?
That’s the goal of an expanding research effort at U.S. universities, which over the last five years has proved that recording devices placed under the skull can capture brain activity associated with speaking.
While results are preliminary, Edward Chang, a neurosurgeon at the University of California, San Francisco, says he is working toward building a wireless brain-machine interface that could translate brain signals directly into audible speech using a voice synthesizer.
The effort to create a speech prosthetic builds on the success of experiments in which paralyzed volunteers have used brain implants to manipulate robotic limbs with their thoughts (see “The Thought Experiment”). That technology works because scientists are able to roughly interpret the firing of neurons inside the brain’s motor cortex and map it to arm or leg movements.
Animals must react quickly to objects and events in the environment to survive, especially when their decisions could result in a reward or punishment. Based on this fact, scientists have assumed that the speed of decision-making during behavioral tasks is affected by motivational salience—the extent to which an object or event predicts important behavioral outcomes. Neurons in a brain region called the basal forebrain (BF) respond to motivationally salient stimuli, but the influence of these BF neurons on decision-making speed has been unclear.
In a study published this month in PLOS Biology, Irene Avila and Shih-Chieh Lin of the National Institute on Aging at the National Institutes of Health provide new insights into how motivational salience not only speeds up reaction times but also reduces variability in decision-making speed. Avila and Lin's findings suggest that the activity of BF neurons determines the speed of rats' decisions in response to motivationally salient stimuli, providing a possible neural explanation for the slower decision-making speeds seen in conditions ranging from depression to dementia.
To examine the relationship between motivational salience and decision-making speed, Avila and Lin trained rats to stick their nose through a port in a Plexiglas chamber and wait for a noise that signaled a reward. White noise indicated that the rats would receive a large reward of four drops of water, whereas a clicking sound signaled a small reward of only one drop of water. During some trials, no noise or reward was presented.
Humans have been shown to combine noisy sensory information with previous experience (priors), in qualitative and sometimes quantitative agreement with the statistically-optimal predictions of Bayesian integration. However, when the prior distribution becomes more complex than a simple Gaussian, such as skewed or bimodal, training takes much longer and performance appears suboptimal. It is unclear whether such suboptimality arises from an imprecise internal representation of the complex prior, or from additional constraints in performing probabilistic computations on complex distributions, even when accurately represented. Here we probe the sources of suboptimality in probabilistic inference using a novel estimation task in which subjects are exposed to an explicitly provided distribution, thereby removing the need to remember the prior. Subjects had to estimate the location of a target given a noisy cue and a visual representation of the prior probability density over locations, which changed on each trial. Different classes of priors were examined (Gaussian, unimodal, bimodal). Subjects' performance was in qualitative agreement with the predictions of Bayesian Decision Theory although generally suboptimal. The degree of suboptimality was modulated by statistical features of the priors but was largely independent of the class of the prior and level of noise in the cue, suggesting that suboptimality in dealing with complex statistical features, such as bimodality, may be due to a problem of acquiring the priors rather than computing with them. We performed a factorial model comparison across a large set of Bayesian observer models to identify additional sources of noise and suboptimality. Our analysis rejects several models of stochastic behavior, including probability matching and sample-averaging strategies. 
Instead we show that subjects' response variability was mainly driven by a combination of a noisy estimation of the parameters of the priors, and by variability in the decision process, which we represent as a noisy or stochastic posterior.
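For the simplest case studied here, a Gaussian prior and a Gaussian-noisy cue, the Bayesian ideal observer has a closed form: the optimal estimate is a precision-weighted average of the cue and the prior mean. A minimal sketch, with illustrative values only:

```python
def bayes_estimate(cue, cue_sd, prior_mean, prior_sd):
    """Optimal (Bayesian) location estimate for a Gaussian prior and a
    noisy Gaussian cue: the precision-weighted average of cue and prior."""
    w_cue = 1.0 / cue_sd ** 2      # precision (reliability) of the cue
    w_prior = 1.0 / prior_sd ** 2  # precision of the prior
    return (w_cue * cue + w_prior * prior_mean) / (w_cue + w_prior)

# Equally reliable cue and prior: the estimate falls exactly halfway.
est = bayes_estimate(cue=10.0, cue_sd=2.0, prior_mean=0.0, prior_sd=2.0)
```

For skewed or bimodal priors no such closed form exists, which is why the paper must compare subjects against numerically computed Bayesian predictions instead.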
Human mobility is a key component of large-scale spatial-transmission models of infectious diseases. Correctly modeling and quantifying human mobility is critical for improving epidemic control, but may be hindered by data incompleteness or unavailability. Here we explore the opportunity of using proxies for individual mobility to describe commuting flows and predict the diffusion of an influenza-like-illness epidemic. We consider three European countries and the corresponding commuting networks at different resolution scales, obtained from (i) official census surveys, (ii) proxy mobility data extracted from mobile phone call records, and (iii) the radiation model calibrated with census data. Metapopulation models defined on these countries and integrating the different mobility layers are compared in terms of epidemic observables. We show that commuting networks from mobile phone data capture the empirical commuting patterns well, accounting for more than 87% of the total fluxes. The distributions of commuting fluxes per link from mobile phones and census sources are similar and highly correlated; however, a systematic overestimation of commuting traffic in the mobile phone data is observed. When the mobile phone commuting network is used in the epidemic model, this overestimation leads to epidemics that spread faster than on census commuting networks, while preserving to a high degree the order of infection of newly affected locations. Proxies' calibration affects the arrival times' agreement across different models, and the observed topological and traffic discrepancies among mobility sources alter the resulting epidemic invasion patterns. Results also suggest that proxies perform differently in approximating commuting patterns for disease spread at different resolution scales, with the radiation model showing higher accuracy than mobile phone data when the seed is central in the network, the opposite being observed for peripheral locations.
Proxies should therefore be chosen in light of the desired accuracy for the epidemic situation under study.
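A toy version of such a metapopulation model makes the role of the commuting layer concrete. The sketch below couples two SIR patches through a single hypothetical commuting fraction; all parameters are illustrative, not the paper's calibrated values.

```python
def metapop_sir(steps=300, beta=0.3, gamma=0.1, commute=0.05):
    """Two-patch SIR sketch: a fraction `commute` of each patch's force of
    infection comes from the other patch's prevalence, standing in for the
    commuting network. All parameters are illustrative."""
    N = [1e5, 1e5]
    S = [N[0] - 10, N[1]]          # epidemic seeded with 10 cases in patch 0
    I = [10.0, 0.0]
    R = [0.0, 0.0]
    history = []
    for _ in range(steps):
        for k in (0, 1):
            other = 1 - k
            # Force of infection mixes local and commuter-imported prevalence.
            lam = beta * ((1 - commute) * I[k] / N[k] + commute * I[other] / N[other])
            new_inf = lam * S[k]
            new_rec = gamma * I[k]
            S[k] -= new_inf
            I[k] += new_inf - new_rec
            R[k] += new_rec
        history.append(tuple(I))
    return history

hist = metapop_sir()
```

Inflating `commute` (as an overestimating mobility proxy would) makes the epidemic reach the second patch earlier, which is the qualitative effect the abstract reports for the mobile phone network.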
Scientists studying consciousness are attempting to identify correlations between measurements of consciousness and the physical world. Consciousness can only be measured through first-person reports, which raises problems about the accuracy of first-person reports, the possibility of non-reportable consciousness and the causal closure of the physical world. Many of these issues could be resolved by assuming that consciousness is entirely physical or functional. However, this would sacrifice the theory-neutrality that is a key attraction of a correlates-based approach to the study of consciousness. This paper puts forward a different solution that uses a framework of definitions and assumptions to explain how consciousness can be measured. This addresses the problems associated with first-person reports and avoids the issues with the causal closure of the physical world. This framework is compatible with most of the current theories of consciousness and it leads to a distinction between two types of correlates of consciousness.
Earth's magnetic field, which protects the planet from huge blasts of deadly solar radiation, has been weakening over the past six months, according to data collected by a European Space Agency (ESA) satellite array called Swarm.
The biggest weak spots in the magnetic field — which extends 370,000 miles (600,000 kilometers) above the planet's surface — have sprung up over the Western Hemisphere, while the field has strengthened over areas like the southern Indian Ocean, according to the magnetometers onboard the Swarm satellites — three separate satellites floating in tandem.
The scientists who conducted the study are still unsure why the magnetic field is weakening, but one likely reason is that Earth's magnetic poles are getting ready to flip, said Rune Floberghagen, the ESA's Swarm mission manager. In fact, the data suggest magnetic north is moving toward Siberia.
In fact, over the past 20 million years, our planet has settled into a pattern of a pole reversal about every 200,000 to 300,000 years; as of 2012, however, it has been more than twice that long since the last reversal. These reversals aren't split-second flips; instead, they occur over hundreds or thousands of years. During this lengthy stint, the magnetic poles start to wander away from the region around the spin poles (the axis around which our planet spins), and eventually end up switched around, according to Cornell University astronomers.
ALIFE 14, the Fourteenth International Conference on the Synthesis and Simulation of Living Systems, presents the current state of the art of Artificial Life—the highly interdisciplinary research area on artificially constructed living systems, including mathematical, computational, robotic, and biochemical ones. The understanding and application of such generalized forms of life, or “life as it could be,” have been producing significant contributions to various fields of science and engineering.
This volume contains papers that were accepted through rigorous peer reviews for presentation at the ALIFE 14 conference. The topics covered in this volume include: Evolutionary Dynamics; Artificial Evolutionary Ecosystems; Robot and Agent Behavior; Soft Robotics and Morphologies; Collective Robotics; Collective Behaviors; Social Dynamics and Evolution; Boolean Networks, Neural Networks and Machine Learning; Artificial Chemistries, Cellular Automata and Self-Organizing Systems; In-Vitro and In-Vivo Systems; Evolutionary Art, Philosophy and Entertainment; and Methodologies.
We study fifteen months of human mobility data for one and a half million individuals and find that human mobility traces are highly unique. In fact, in a dataset where the location of an individual is specified hourly, and with a spatial resolution equal to that given by the carrier's antennas, four spatio-temporal points are enough to uniquely identify 95% of the individuals. We coarsen the data spatially and temporally to find a formula for the uniqueness of human mobility traces given their resolution and the available outside information. This formula shows that the uniqueness of mobility traces decays approximately as the 1/10 power of their resolution. Hence, even coarse datasets provide little anonymity. These findings represent fundamental constraints to an individual's privacy and have important implications for the design of frameworks and institutions dedicated to protect the privacy of individuals.
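The paper's uniqueness test can be mimicked on toy data: sample k spatio-temporal points from a trace and count how many traces in the dataset contain all of them. The sketch below uses invented (hour, antenna) traces and is not the authors' pipeline.

```python
import random

def fraction_unique(traces, k, trials=200, seed=0):
    """Estimate the fraction of traces uniquely identified by k random
    spatio-temporal points drawn from them (toy version of the test)."""
    rng = random.Random(seed)
    unique = 0
    for trace in traces:
        hits = 0
        for _ in range(trials):
            points = set(rng.sample(trace, k))
            # How many traces in the dataset contain all k sampled points?
            matches = sum(1 for t in traces if points <= set(t))
            hits += (matches == 1)
        # Count a trace as unique if most point-samples single it out.
        unique += (hits / trials > 0.5)
    return unique / len(traces)

# Toy traces of (hour, antenna_id) pairs; distinctive enough that a
# handful of points pins down each individual, as in the real data.
traces = [[(h, (i * h) % 11) for h in range(24)] for i in range(10)]
```

On real data the authors find that four such points identify 95% of 1.5 million people; the toy dataset above is small and regular enough that four points identify everyone.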
We report a summary of our interdisciplinary research project "Evolutionary Perspective on Collective Decision Making" that was conducted through close collaboration between computational, organizational and social scientists at Binghamton University. We redefined collective human decision making and creativity as evolution of ecologies of ideas, where populations of ideas evolve via continual applications of evolutionary operators such as reproduction, recombination, mutation, selection, and migration of ideas, each conducted by participating humans. Based on this evolutionary perspective, we generated hypotheses about collective human decision making using agent-based computer simulations. The hypotheses were then tested through several experiments with real human subjects. Throughout this project, we utilized evolutionary computation (EC) in non-traditional ways: (1) as a theoretical framework for reinterpreting the dynamics of idea generation and selection, (2) as a computational simulation model of collective human decision making processes, and (3) as a research tool for collecting high-resolution experimental data of actual collaborative design and decision making from human subjects. We believe our work demonstrates untapped potential of EC for interdisciplinary research involving human and social dynamics.
The way we navigate in cities has been revolutionized in the last few years by the advent of GPS mapping programs. Enter your start and end location and these will give you the shortest route from A to B.
That’s usually the best bet when driving, but walking is a different matter. Often, pedestrians want the quietest or the most beautiful route, but if they turn to a mapping application, they’ll get little help.
That could change now thanks to the work of Daniele Quercia at Yahoo Labs in Barcelona, Spain, and a couple of pals. These guys have worked out how to measure the “beauty” of specific locations within cities and then designed an algorithm that automatically chooses a route between two locations in a way that maximizes the beauty along it. “The goal of this work is to automatically suggest routes that are not only short but also emotionally pleasant,” they say.
Quercia and co begin by creating a database of images of various parts of the center of London taken from Google Street View and Geograph, both of which have reasonably consistent standards of images. They then crowdsourced opinions about the beauty of each location using a website called UrbanGems.org.
Each visitor to UrbanGems sees two photographs and chooses the one which shows the more beautiful location. That gives the team a crowdsourced opinion about the beauty of each location. They then plot each of these locations and their beauty score on a map which they use to provide directions.
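One way such directions could be computed is to run a shortest-path search over edge costs that discount length by beauty. The sketch below is an assumption-laden illustration, not the authors' actual formulation: the graph, the beauty scores in [0, 1], and the trade-off parameter `alpha` are all invented.

```python
import heapq

def pleasant_route(graph, start, goal, alpha=0.5):
    """Dijkstra over a combined edge cost: length discounted by beauty
    (kept positive so Dijkstra stays valid). graph maps a node to a list
    of (neighbour, length, beauty) tuples; higher beauty lowers cost."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, length, beauty in graph.get(node, []):
            nd = d + length * (1.0 - alpha * beauty)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Two routes A->D: a short ugly one via B, a slightly longer lovely one via C.
graph = {
    "A": [("B", 1.0, 0.1), ("C", 1.2, 0.9)],
    "B": [("D", 1.0, 0.1)],
    "C": [("D", 1.2, 0.9)],
}
```

With `alpha=0` the search reduces to plain shortest-path and picks the ugly route; with `alpha=0.5` the beautiful detour wins, which is the trade-off the researchers describe.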
People may not realize it, but Microsoft has more than twenty years of experience in creating machine learning systems and applying them to real problems. That experience long predates the recent buzz around Big Data and Deep Learning, and it gives us a good perspective on a variety of technologies and on what it takes to actually deploy ML in production.
The story of ML at Microsoft started in 1992. We started working with Bayesian Networks, language modeling, and speech recognition. By 1993, Eric Horvitz, David Heckerman, and Jack Breese started the Decision Theory Group in Research and XD Huang started the Speech Recognition Group. In the 90s, we found that many problems, such as text categorization and email prioritization, were solvable through a combination of linear classification and Bayes networks. That work produced the first content-based spam detector and a number of other prototypes and products.
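The core of an early content-based spam detector of this sort can be captured in a few lines of naive Bayes, one of the simple probabilistic classifiers of that era. This is a generic textbook sketch on toy data, not Microsoft's actual implementation.

```python
import math
from collections import Counter

def train(messages):
    """Fit a multinomial naive Bayes model from (label, text) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for label, text in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, totals, vocab

def classify(model, text):
    """Pick the label with the highest log-posterior for the text."""
    counts, totals, vocab = model
    best, best_score = None, -math.inf
    for label in ("spam", "ham"):
        score = math.log(totals[label] / sum(totals.values()))  # prior
        n = sum(counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing keeps unseen words from zeroing the score.
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

model = train([
    ("spam", "win cash now"), ("spam", "free cash offer"),
    ("ham", "meeting at noon"), ("ham", "lunch at noon tomorrow"),
])
```

The same word-count-plus-prior machinery also underlies the text categorization and email prioritization work mentioned above; what changed over the years was mainly the scale of the data and the sophistication of the features.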
As we were working on solving specific problems for Microsoft products, we also wanted to get our tools directly into the hands of our customers. Making usable tools requires more than just clever algorithms: we need to consider the end-to-end user experience. We added predictive analytics to the Commerce Server product in order to provide recommendation service to our customers. We shipped the SQL Server Data Mining product in 2005, which allowed customers to build analytics on top of our SQL Server product.
As our algorithms became more sophisticated, we started solving tougher problems in fields related to ML, such as information retrieval, computer vision, and speech recognition. We blended the best ideas from ML and from these fields to make substantial forward progress. As I mentioned in my previous post, there are a number of such examples. Jamie Shotton, Antonio Criminisi, and others used decision forests to perform pixel-wise classification, both for human pose estimation and for medical imaging. Li Deng, Frank Seide, Dong Yu, and colleagues applied deep learning to speech recognition.
Use these strategies to increase click-through rates and purchases when presenting choices.
Imagine the following scenario: All your friends have long been involved in relationships, and you are tired of being the third wheel at every social gathering. After a few failed matchmaking attempts, you decide to try a dating website. Soon you discover a new and exciting world that had previously been unknown to you. Suddenly you have many suitors and your dating possibilities become endless.
This experience makes you feel truly attractive and desired and, without even noticing, you become addicted. But make no mistake: you are not addicted to love or dating; you are addicted to the idea of having many possibilities available to you.
The above scenario exemplifies a basic human trait: People love to have many options, even if they only exist in theory. When asked, who wouldn’t prefer to choose from a list of five different items over a list of only two?
Intuitively, people feel that the more options they have, the greater their chances are of finding the choice that will perfectly satisfy their needs. But this intuitive assumption turns out to be an illusion: the more options we have, the less likely we are to make a decision at all.
Understanding the cognitive and neural processes that underlie human decision making requires the successful prediction of how, but also of when, people choose. Sequential sampling models (SSMs) have greatly advanced the decision sciences by assuming decisions to emerge from a bounded evidence accumulation process so that response times (RTs) become predictable. Here, we demonstrate a difficulty of SSMs that occurs when people are not forced to respond at once but are allowed to sample information sequentially: The decision maker might decide to delay the choice and terminate the accumulation process temporarily, a scenario not accounted for by the standard SSM approach. We developed several SSMs for predicting RTs from two independent samples of an electroencephalography (EEG) and a functional magnetic resonance imaging (fMRI) study. In these studies, participants bought or rejected fictitious stocks based on sequentially presented cues and were free to respond at any time. Standard SSM implementations did not describe RT distributions adequately. However, by adding a mechanism for postponing decisions to the model we obtained an accurate fit to the data. Time-frequency analysis of EEG data revealed alternating states of de- and increasing oscillatory power in beta-band frequencies (14–30 Hz), indicating that responses were repeatedly prepared and inhibited and thus lending further support for the existence of a decision not to decide. Finally, the extended model accounted for the results of an adapted version of our paradigm in which participants had to press a button for sampling more information. Our results show how computational modeling of decisions and RTs support a deeper understanding of the hidden dynamics in cognition.
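The paper's key extension, a mechanism for postponing decisions, can be illustrated on top of a standard bounded-accumulation model. The sketch below is a toy random-walk SSM with an invented pause mechanism; none of the parameters are fitted values from the study.

```python
import random

def ssm_with_delay(drift=0.05, bound=1.0, noise=0.3,
                   p_pause=0.02, pause_len=20, max_t=5000, seed=0):
    """Bounded evidence accumulation (a standard SSM ingredient) extended
    with a 'decide not to decide' mechanism: accumulation occasionally
    halts for a fixed pause before resuming. Parameters are illustrative."""
    rng = random.Random(seed)
    x, t = 0.0, 0
    while t < max_t:
        if rng.random() < p_pause:
            t += pause_len  # choice postponed; evidence frozen
            continue
        x += drift + rng.gauss(0, noise)  # one noisy evidence sample
        t += 1
        if abs(x) >= bound:
            return ("buy" if x > 0 else "reject"), t  # choice and RT
    return None, max_t  # no decision within the deadline

choice, rt = ssm_with_delay()
```

Each pause shifts probability mass into the long right tail of the response-time distribution, which is how an extended model of this kind can fit RT distributions that a standard, uninterrupted accumulator cannot.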
Marketing has steadily evolved toward more precise and scientific methods. Focus groups and surveys are bowing to crowdsourcing and social listening, while manual data collection is fading in favor of big data and sophisticated analytics.
But somewhere between all of the data collection, number-crunching, and magical algorithms lies another, less obvious, marketing tool: neuroscience.
Simply put, neuroscience detects how the brain and body respond to messages. The concept is based on the study of sensorimotor, cognitive, and affective responses to stimuli. It uses equipment such as electrodes, sensors, and biometric measures of heart rate, skin response, and eye movement to understand how a person reacts to images, audio, and other sensory information. So-called neuromarketing is already used by a number of companies, including CBS (CBS), Coca-Cola (KO), Frito-Lay (PEP), Google (GOOGL), Hyundai, Microsoft (MSFT), and PayPal (EBAY).