"The regular General Assemblies are where all constituents gather to listen and contribute to the discussions using the methodology of the ‘stack’: anyone seeking to propose a group or report on current activities joins a queue and takes their turn to speak. This gives each person the chance to vocalise and articulate their ideas for all to hear and vote on. In a ‘leaderless’ holarchic society, a self-organising infrastructure to support the intrinsic momentum, and a forum to voice the fomenting processes of each member, are both vital components. What is being revealed here is the desire for a new way of building community, responsive to those who have been inspired to collaborate, as working groups become the lifeblood of the movement."
Via june holley, Howard Rheingold
In a world defined by rapid change, the search for solutions to societal and environmental challenges has become more complex. While market systems have become interconnected and supply chains have become supply webs, public policy and industry norms are not changing as fast. As a result, they are increasingly inadequate [...]
Each scientist needs to be aware of the complexity of cellular life and of the available modeling possibilities in order to reconstruct, analyze, and simulate biological systems. Bioinformatics modeling, analysis, and simulation are highly interdisciplinary, drawing on techniques and concepts from computer science, statistics, mathematics, chemistry, biology, biochemistry, genetics, and physics, among others. Without knowledge of these research topics, it is almost impossible to produce good theoretical models that can be used for hypothesis testing. Therefore, this chapter gives an impression of what can be modeled from the bioinformatics and biological points of view, and introduces biological networks, common analysis techniques from graph theory, and possibilities for reconstructing, simulating, and sharing biological networks based on database content.
Biological Network Modeling and Analysis
Sebastian Jan Janowski, Barbara Kaltschmidt, Christian Kaltschmidt
Approaches in Integrative Bioinformatics, 2014, pp. 203-244
http://dx.doi.org/10.1007/978-3-642-41281-3_8
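The graph-theoretic analysis techniques the chapter refers to can be illustrated with a minimal sketch: a biological network represented as an adjacency map, with two of the most common measures, node degree and shortest-path length. The node names below are illustrative gene symbols, not data from any particular database.

```python
from collections import deque

# Toy protein-interaction network as an adjacency map
# (node names are illustrative, not drawn from a real database).
network = {
    "TP53": {"MDM2", "ATM", "BRCA1"},
    "MDM2": {"TP53"},
    "ATM": {"TP53", "BRCA1"},
    "BRCA1": {"TP53", "ATM", "RAD51"},
    "RAD51": {"BRCA1"},
}

def degree(graph, node):
    """Number of interaction partners of a node (its degree)."""
    return len(graph[node])

def shortest_path_length(graph, source, target):
    """Breadth-first search: fewest interaction steps between two nodes."""
    visited = {source}
    queue = deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None  # nodes lie in disconnected components

print(degree(network, "TP53"))                         # a hub with 3 partners
print(shortest_path_length(network, "MDM2", "RAD51"))  # MDM2 -> TP53 -> BRCA1 -> RAD51
```

High-degree nodes (hubs) and short path lengths are exactly the kinds of topological features such analyses use to single out biologically central molecules.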
Via Complexity Digest
A trio of researchers at Tohoku University in Japan, led by Masahiro Hotta, has proposed a new way to teleport energy that allows for doing so over long distances. In their paper published in Physical Review A, the team describes a theory they have developed that takes advantage of the properties of squeezed light, or squeezed vacuum states, to "teleport" information about an energy state so that the energy can be put to use elsewhere: in essence, teleporting energy over long distances.

On television shows such as Star Trek, people are moved from one location to another via teleportation, where the people (or objects) are not literally sent; instead, their essence is re-established in another locale, giving the illusion of movement. In real life, nothing like that exists, though scientists have begun using the term teleportation to describe the results of entanglement experiments, in which two entangled particles are joined somehow despite no apparent connection between them: changes to one particle happen automatically to the other. Scientists have broadened their experiments to include light and matter and, more recently, energy.

Back in 2008, Hotta, with another team, first devised a theory for teleporting energy based on taking advantage of vacuum states. Theory suggests these are not truly empty; rather, particles in them pop in and out of existence, some of which are entangled. While interesting, the theory implied that energy could be teleported only over very short distances. In this new effort, Hotta et al. have found a way to increase the teleportation distance by making use of a property known as squeezed light, which is tied to a squeezed vacuum state.

The laws of quantum mechanics limit the ways that values in a system (such as a vacuum) can be measured. Physicists have found, however, that increasing the uncertainty of one value decreases the uncertainty of another, a sort of squeezing effect. When applied to light, theory suggests, this leads to more pairs traveling together through a vacuum, which in turn leads to more of them being entangled, and that, the team suggests, should allow for teleporting energy over virtually any distance.

The researchers suggest their theory could be put to the test in a lab, and Hotta hints that he and another partner are in the process of doing just that.

Reference: Physical Review A and arXiv
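The "squeezing" trade-off described above can be made concrete with the textbook uncertainty relation for the two quadratures of the light field (a standard quantum-optics result, not a formula taken from the paper itself). For a squeezed vacuum with squeeze parameter $r$:

```latex
\Delta X_1 \,\Delta X_2 \;\ge\; \tfrac{1}{4},
\qquad
\Delta X_1 = \tfrac{1}{2}\,e^{-r},
\quad
\Delta X_2 = \tfrac{1}{2}\,e^{+r}
```

For $r > 0$, the uncertainty in one quadrature drops below its ordinary vacuum value of $\tfrac{1}{2}$ while the other grows by the same factor, so the product stays pinned at the minimum the uncertainty principle allows. This is the sense in which reducing one uncertainty "squeezes" the other.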
Via Dr. Stefan Gruenwald
“When, in November, the publisher Stewart Brand was asked about who carries the flag of counterculture today, he pointed to the maker movement. The maker era might not be upon us yet, but the maker movement has arrived.” In January of 1903, the small Boston magazine Handicraft ran an essay by the Harvard professor Denman W. Ross, who argued that the American Arts and Crafts movement was in deep crisis. The movement was concerned with promoting good taste and self-fulfillment through the creation and the appreciation of beautiful objects; its more radical wing also sought to advance worker autonomy. The problem was that no one in America seemed to need its products. The solution, according to Ross, was to provide technical education to the critics and the consumers of art alike. This would stimulate demand for high-quality objects and encourage more workers to take up craftsmanship. The cause of the Arts and Crafts movement would be achieved, he maintained, only “when the philosopher goes to work and the working man becomes a philosopher.” ...
Via Aurelie Ghalim, Jacques Urbanska
Can gaming cure disease? By creating games like EteRNA for RNA folding and nano-engineering, Adrien Treuille and his colleagues are outsourcing research, each week scoring and then actually synthesizing top players' work. By studying players' strategies, scientists can improve their computer modeling while also creating new ways to fight disease.
The human brain makes predictions by finding similarities between the patterns in recent sensory inputs and previous experiences stored in its vast memory. The same process is now perfectly feasible for those engaged in promoting economic development.
“My graduate school supervisor taught me all I know about professional email etiquette. Vague language? Poor form. Typos? Nothing worse. Run-on paragraphs? A big no-no. Spelling your recipient’s name wrong?”
Via Laura Brown
We propose a bare-bones stochastic model that takes into account both the geographical distribution of people within a country and their complex network of connections. The model, which is designed to give rise to a scale-free network of social connections and to visually resemble the geographical spread seen in satellite pictures of the Earth at night, gives rise to a power-law distribution for the ranking of cities by population size (but for the largest cities) and reflects the notion that highly connected individuals tend to live in highly populated areas. It also yields some interesting insights regarding Gibrat’s law for the rates of city growth (by population size), in partial support of the findings in a recent analysis of real data [Rozenfeld et al., Proc. Natl. Acad. Sci. U.S.A. 105, 18702 (2008)]. The model produces a nontrivial relation between city population and city population density and a superlinear relationship between social connectivity and city population, both of which seem quite in line with real data.
Spatially Distributed Social Complex Networks
Gerald F. Frasco, Jie Sun, Hernán D. Rozenfeld, and Daniel ben-Avraham
Phys. Rev. X 4, 011008 (2014)
http://dx.doi.org/10.1103/PhysRevX.4.011008
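The scale-free ingredient of such models is usually generated by preferential attachment: new members of the network connect preferentially to already well-connected individuals. The sketch below shows only that one mechanism, stripped of the paper's geographical component; it is a minimal illustration, not the authors' actual model.

```python
import random

def preferential_attachment(n, m=2, seed=42):
    """Grow a network of n nodes where each new node links to m existing
    nodes chosen with probability proportional to their current degree
    (the mechanism behind scale-free degree distributions)."""
    rng = random.Random(seed)
    # Start from a small complete core of m+1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Degree-weighted pool: each node appears once per edge endpoint,
    # so uniform sampling from it is sampling proportional to degree.
    targets = [node for edge in edges for node in edge]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for old in chosen:
            edges.append((new, old))
            targets.extend([new, old])
    return edges

edges = preferential_attachment(1000)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

# The heavy tail shows up as a few hubs with degree far above the minimum m.
print(max(degree.values()))
```

The resulting degree distribution has a small number of strongly connected hubs, mirroring the abstract's observation that highly connected individuals cluster, while most nodes keep only the minimum m links.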
Via Complexity Digest
Thomas Piketty’s new book, “Capital in the Twenty-First Century,” described by one French newspaper as “a political and theoretical bulldozer,” defies left and right orthodoxy by arguing that worsening inequality is an inevitable outcome of free market capitalism.
Via jean lievens
“Scientists begin to unravel how neurons recognize specific language sounds.”

The sounds that make up speech, built from slight variations in vowels and consonants, trigger specific responses in the part of the brain responsible for speech processing, researchers report today in Science. Phonemes, such as the 'buh' sound in 'bad' or the 'duh' in 'dad', are thought to be the smallest linguistic elements that change a word's meaning. But the study suggests that the brain's superior temporal gyrus can recognize even smaller bits of speech, called features, that may be common across languages.

“We’ve known for a pretty long time now what area of the brain is really important for processing speech sounds,” says lead author Edward Chang, a neuroscientist at the University of California, San Francisco. “What we haven’t known is the details about how individual sounds are processed.”

Chang's team made the discovery by working with six patients who were preparing to undergo brain surgery to treat epilepsy. An array of electrodes was implanted in the brain of each person as part of pre-surgical testing. Each volunteer then listened to speech samples comprising 500 sentences spoken by 400 people that covered the entire inventory of American English phonetic sounds.

When the researchers compared the electrode data with the different phonemes heard by the volunteers, they found that phonemes with similar features seemed to elicit characteristic electrical responses in neurons located within each patient's superior temporal gyrus.

Chang sees this as the starting point for understanding the mechanism that underlies the brain's seemingly effortless decoding of a stream of speech. “One of the things that happens in speech and language is that we transform sounds into meaning,” he says. A set of feature units in some combination gives rise to a phoneme; phonemes combine to create a word; and together, groups of words create meaning.

Josef Rauschecker, a neuroscientist at Georgetown University in Washington DC, notes that monkeys are known to have neurons that respond to phonetic features. The discovery of a similar capability in the human brain opens the door to studying the evolution of speech recognition, he says.

Identifying the neural mechanisms that make up normal phonetic coding in the brain can lead to a better understanding of abnormalities, says Mitchell Steinschneider, a neuroscientist at Albert Einstein College of Medicine of Yeshiva University in New York. For people with hearing loss, for instance, this might mean the development of more sophisticated processors to aid artificial hearing, he adds.
Via Dr. Stefan Gruenwald