Systems Theory
theoretical aspects of (social) systems theory
Curated by Ben van Lier
Scooped by Ben van Lier

Blockchain, distributed ledgers and learning machines

Ben van Lier's insight:
Weblog post published under the title ‘Blockchain, distributed ledgers and learning machines’, the third part in a series on blockchain.
Scooped by Ben van Lier

China Is Building a Robot Army of Model Workers

Can China reboot its manufacturing industry—and the global economy—by replacing millions of workers with machines?
Scooped by Ben van Lier

Europe Plans Giant Billion-Euro Quantum Technologies Project

Third European Union flagship will be similar in size and ambition to graphene and human brain initiatives
Scooped by Ben van Lier

Frontiers | Hybrid Societies: Challenges and Perspectives in the Design of Collective Behavior in Self-organizing Systems | Computational Intelligence

Hybrid societies are self-organizing, collective systems composed of different components, for example, natural and artificial parts (bio-hybrid) or human beings interacting with and through technical systems (socio-technical). Many different disciplines investigate methods and systems closely related to the design of hybrid societies. A stronger collaboration between these disciplines could allow for re-use of methods and create significant synergies. We identify three main areas of challenges in the design of self-organizing hybrid societies. First, we identify the formalization challenge. There is an urgent need for a generic model that allows a description and comparison of collective hybrid societies. Second, we identify the system design challenge. Starting from the formal specification of the system, we need to develop an integrated design process. Third, we identify the challenge of interdisciplinarity. Current research on self-organizing hybrid societies stretches over many different fields and hence requires the re-use and synthesis of methods at intersections between disciplines. We then conclude by presenting our perspective for future approaches with high potential in this area.
Scooped by Ben van Lier

Bots Need to Learn Some Manners, and It’s on Us to Teach Them

Suddenly the whole tech industry is knee-deep in AI-powered assistants. Now if we could just get them to be less asshole-y.
Rescooped by Ben van Lier from Amazing Science

Machines are becoming more creative than humans


Can machines be creative? Recent successes in AI have shown that machines can now perform at human levels in many tasks that, just a few years ago, were considered to be decades away, like driving cars, understanding spoken language, and recognizing objects. But these are all tasks where we know what needs to be done, and the machine is just imitating us. What about tasks where the right answers are not known? Can machines be programmed to find solutions on their own, and perhaps even come up with creative solutions that humans would find difficult?

 

The answer is a definite yes! There are branches of AI focused precisely on this challenge, including evolutionary computation and reinforcement learning. Like the popular deep learning methods, which are responsible for many of the recent AI successes, these branches of AI have benefitted from the million-fold increase in computing power we’ve seen over the last two decades. There are now antennas in spacecraft so complex they could only be designed through computational evolution. There are game playing agents in Othello, Backgammon, and most recently in Go that have learned to play at the level of the best humans, and in the case of AlphaGo, even beyond the ability of the best humans. There are non-player characters in Unreal Tournament that have evolved to be indistinguishable from humans, thereby passing the Turing test, at least for game bots. And in finance, there are computational traders in the stock market evolved to make real money.

 

These AI agents are different from those commonly seen in robotics, vision, and speech processing in that they were not taught to perform specific actions. Instead, they learned the best behaviors on their own by exploring possible behaviors and determining which ones lead to the best outcomes. Many such methods are modeled after similar adaptation in biology. For instance, evolutionary computation takes concepts from biological evolution. The idea is to encode candidate solutions (such as videogame players) in such a way that it is possible to recombine and mutate them to get new solutions. Then, given a large population of candidates with enough variation, a parallel search method is run to find a candidate that actually solves the problem. The most promising candidates are selected for mutation and recombination in order to construct even better candidates as offspring. In this manner, only an extremely tiny fraction of the entire group of possible candidates needs to be searched to find one that actually solves the problem, e.g. plays the game really well.
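As a rough illustration of the evolutionary loop described above (a toy sketch, not code from the article; the bit-string fitness function merely stands in for "plays the game really well"):

```python
import random

def fitness(candidate):
    # Toy stand-in for evaluating a candidate solution, e.g. how well it plays a game.
    return sum(candidate)

def mutate(candidate, rate=0.02):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

def recombine(a, b):
    # One-point crossover: splice two promising parents into an offspring.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(genome_length=64, population_size=100, generations=200):
    population = [[random.randint(0, 1) for _ in range(genome_length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Select the most promising candidates as parents...
        population.sort(key=fitness, reverse=True)
        parents = population[:population_size // 5]
        # ...and build the next generation by recombination and mutation.
        population = [mutate(recombine(random.choice(parents), random.choice(parents)))
                      for _ in range(population_size)]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

Even in this toy version the search typically reaches a near-optimal candidate after evaluating only a vanishingly small fraction of the 2^64 possible bit strings, which is exactly the point made above.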

 

We can apply the same approach to many domains where it is possible to evaluate the quality of candidates computationally. It applies to many design domains, including the design of the space antenna mentioned above, the design of a control system for a finless rocket, or the design of a multilegged, walking robot. Often evolution comes up with solutions that are truly unexpected but still effective — in other words, creative. For instance, when working on a controller that would navigate a robotic arm around obstacles, we accidentally disabled its main motor. It could no longer reach targets far away, because it could not turn around its vertical axis. What the controller evolved to do instead was slowly turn the arm away from the target, using its remaining motors, and then swing it back really hard, turning the whole robot towards the target through inertia!


Via Levin Chin, Dr. Stefan Gruenwald
Scooped by Ben van Lier

Interconnectivity and the IoT revolution - raconteur.net

A global network of interconnected devices linked to the internet is about to revolutionise the way we live and work
Rescooped by Ben van Lier from Amazing Science

The momentous advance in artificial intelligence demands a new set of ethics


Let us all raise a glass to AlphaGo and mark another big moment in the advance of artificial intelligence (AI) and then perhaps start to worry. AlphaGo, Google DeepMind’s Go-playing AI, just bested the best Go-playing human currently alive, the renowned Lee Sedol. This was not supposed to happen. At least, not for a while. An artificial intelligence capable of beating the best humans at the game was predicted to be 10 years away.

 

But as we drink to its early arrival, we should also begin trying to understand what the surprise means for the future – with regard, chiefly, to the ethics and governance implications that stretch far beyond a game.

 

As AlphaGo and AIs like it become more sophisticated – commonly outperforming us at tasks once thought to be uniquely human – will we feel pressured to relinquish control to the machines?

 

The number of possible moves in a game of Go is so massive that, in order to win against a player of Lee’s calibre, AlphaGo was designed to adopt an intuitive, human-like style of gameplay. Relying exclusively on more traditional brute-force programming methods was not an option. Designers at DeepMind made AlphaGo more human-like than traditional AI by using a relatively recent development – deep learning.

 

Deep learning uses large data sets, “machine learning” algorithms and deep neural networks – artificial networks of “nodes” that are meant to mimic neurons – to teach the AI how to perform a particular set of tasks. Rather than programming complex Go rules and strategies into AlphaGo, DeepMind designers taught AlphaGo to play the game by feeding it data based on typical Go moves. Then, AlphaGo played against itself, tirelessly learning from its own mistakes and improving its gameplay over time. The results speak for themselves.
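The article does not spell out the training loop, and AlphaGo's actual system combines deep neural networks with tree search; but the core self-play idea can be shown with a deliberately tiny, tabular stand-in (this is my illustration, not DeepMind's code): an agent that learns the "take 1 to 3 sticks, whoever takes the last one wins" game purely by playing against itself and updating value estimates from the outcomes.

```python
import random
from collections import defaultdict

# Tabular self-play for a toy game: players alternately take 1-3 sticks from a pile
# of 21; whoever takes the last stick wins. No strategy is hard-coded: the agent
# only plays against itself and learns position values from the results.
values = defaultdict(float)   # estimated value of a pile size for the player to move
ALPHA, EPSILON = 0.1, 0.1     # learning rate and exploration rate (arbitrary choices)

def choose(pile):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < EPSILON:
        return random.choice(moves)           # occasionally explore
    # A move is good for us if it leaves the opponent in a low-value position.
    return min(moves, key=lambda m: values[pile - m])

def self_play_episode():
    pile, history = 21, []
    while pile > 0:
        history.append(pile)
        pile -= choose(pile)
    # The player who just moved took the last stick and wins; walking the game
    # backwards, positions alternate between the winner's and the loser's turn.
    reward = 1.0
    for position in reversed(history):
        values[position] += ALPHA * (reward - values[position])
        reward = -reward

for _ in range(20000):
    self_play_episode()

print(sorted((p, round(v, 2)) for p, v in values.items()))
```

After enough episodes the learned values single out the pile sizes that are losing for the player to move (the multiples of four), a strategy nobody programmed in explicitly; the same principle, scaled up with deep networks and search, is roughly what "learning from its own mistakes" means for AlphaGo.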

 

Possessing a more intuitive approach to problem-solving allows artificial intelligence to succeed in highly complex environments. For example, actions with high levels of unpredictability – talking, driving, serving as a soldier – which were previously unmanageable for AI are now considered technically solvable, thanks in large part to deep learning.


Via Dr. Stefan Gruenwald
Leonardo Wild's curator insight, March 27, 11:20 PM
The subject matter of one of my so-far unpublished novels, the third book in the Unemotion series (Yo Artificial, in Spanish). It's starting to happen and we think Climate Change is big.
Scooped by Ben van Lier

Baidu Translate: The Inside Story | Slator

Exclusive look behind the scenes at China’s most advanced Natural Language Processing program supporting 100m daily translation requests
Rescooped by Ben van Lier from Amazing Science

Craig Venter has just created a synthetic living cell with the smallest known genome


Genomics entrepreneur Craig Venter has created a synthetic cell that contains the smallest genome of any known, independent organism. Functioning with 473 genes, the cell is a milestone in his team’s 20-year quest to reduce life to its bare essentials and, by extension, to design life from scratch.

 

Venter, who has co-founded a company that seeks to harness synthetic cells for making industrial products, says that the feat heralds the creation of customized cells to make drugs, fuels and other products. But an explosion in powerful ‘gene-editing’ techniques, which enable relatively easy and selective tinkering with genomes, raises a niggling question: why go to the trouble of making life from scratch when you can simply tweak it?

 

Unlike the first synthetic cells made in 2010, in which Venter’s team at the J. Craig Venter Institute in La Jolla, California, copied an existing bacterial genome and transplanted it into another cell, the genome of the minimal cells is like nothing in nature. Venter says that the cell, which is described in a paper released on 24 March in Science, constitutes a brand new, artificial species.

 

Microbiologists were just starting to characterize the bacterial immune system that scientists would eventually co-opt and name CRISPR when Venter’s team began its effort to whittle life down to its bare essentials. In a 1995 Science paper, Venter’s team sequenced the genome of Mycoplasma genitalium, a sexually transmitted microbe with the smallest genome of any known free-living organism, and mapped its 470 genes. By inactivating genes one by one and testing to see whether the bacterium could still function, the group slimmed this list down to 375 genes that seemed essential.

 

One way to test this hypothesis is to make an organism that contains just those genes. So Venter, together with his close colleagues Clyde Hutchison and Hamilton Smith and their team, set out to build a minimal genome from scratch, by joining together chemically synthesized DNA segments. The effort required the development of new technologies, but by 2008, they had used this method to make what was essentially an exact copy of the M. genitalium genome that also included dozens of non-functional snippets of DNA ‘watermarks’.

 

But the sluggish growth of natural M. genitalium cells prompted them to switch to the more prolific Mycoplasma mycoides. This time, they not only synthesized its genome and watermarked it with their names and with famous quotes, but also implanted it into another bacterium that had been emptied of its own genome.

 

The resulting ‘JCVI-syn1.0’ cells were unveiled in 2010 and hailed — hyperbolically, many say — as the dawn of synthetic life. (The feat prompted US President Barack Obama to launch a bioethics review, and the Vatican to question Venter’s claim that he had created life.) However, the organism’s genome was built by copying an existing plan and not through design — and its bloated genome of more than 1 million DNA bases was anything but minimal.

 

“The idea of building whole genomes is one of the dreams and promises of synthetic biology,” says Paul Freemont, a synthetic biologist at Imperial College London, who is not involved in the work.

 

In an attempt to complete its long-standing goal of designing a minimal genome, Venter’s team designed and synthesized a 483,000-base, 471-gene M. mycoides chromosome from which it had removed genes responsible for the production of nutrients that could be provided externally, and other genetic ‘flotsam’. But this did not produce a viable organism.

 

So, in a further move, the team developed a ‘design-build-and-test’ cycle. It broke the M. mycoides genome into eight DNA segments and mixed and matched these to see which combinations produced viable cells; lessons learned from each cycle informed which genes were included in the next design. This process highlighted DNA sequences that do not encode proteins but that are still needed because they direct the expression of essential genes, as well as pairs of genes that perform the same essential task — when such genes are deleted one at a time, both mistakenly seem to be dispensable.
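A toy way to see why those one-at-a-time deletions mislead (this is only an illustration of the logic, not the JCVI pipeline; the viability rule is invented):

```python
from itertools import combinations

GENOME = {"A", "B", "C", "D"}

def viable(genes):
    # Invented rule: one essential task can be done by A or B (they are redundant),
    # another strictly needs C; D is genuinely dispensable.
    return ("A" in genes or "B" in genes) and "C" in genes

# Knocking genes out one at a time makes A, B and D all look dispensable.
dispensable = [g for g in sorted(GENOME) if viable(GENOME - {g})]
print("look dispensable alone:", dispensable)

# Testing combinations exposes the hidden redundancy between A and B.
for pair in combinations(dispensable, 2):
    if not viable(GENOME - set(pair)):
        print("lethal when removed together:", pair)
```

Only the combinatorial test reveals that A and B cannot both be dropped, which is the kind of lesson the mix-and-match segment designs fed back into the next round of genome design.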

 

Eventually, the team hit on the 531,000-base, 473-gene design that became known as JCVI-syn3.0 (syn2.0 was a less streamlined intermediary). Syn3.0 has a respectable doubling time of 3 hours, compared with, for instance, 1 hour for M. mycoides and 18 hours for M. genitalium.

 

“This old Richard Feynman quote, ‘what I cannot create, I do not understand’, this principle is now served,” says Martin Fussenegger, a synthetic biologist at the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland. “You can add in genes and see what happens.”

 

With nearly all of its nutrients supplied through growth media, syn3.0’s essential genes tend to be those involved in cellular chores such as making proteins, copying DNA and building cellular membranes. Astoundingly, Venter says that his team could not identify the function of 149 of the genes in syn3.0’s genome, many of which are found in other life forms, including humans. “We don’t know about a third of essential life, and we’re trying to sort that out now,” he says. 


Via Dr. Stefan Gruenwald
Rescooped by Ben van Lier from Amazing Science

Learning to Program Cellular Memory

Combining synthetic biology approaches with time-lapse movies, a team led by Caltech biologists has determined how some proteins shape a cell's ability to remember particular states of gene expression.

 

What if we could program living cells to do what we would like them to do in the body? Having such control—a major goal of synthetic biology—could allow for the development of cell-based therapies that might one day replace traditional drugs for diseases such as cancer. In order to reach this long-term goal, however, scientists must first learn to program many of the key things that cells do, such as communicate with one another, change their fate to become a particular cell type, and remember the chemical signals they have encountered.

 

Now a team of researchers led by Caltech biologists Michael Elowitz, Lacramioara Bintu, and John Yong (PhD '15) has taken an important step toward being able to program that kind of cellular memory using tools that cells have evolved naturally. By combining synthetic biology approaches with time-lapse movies that track the behaviors of individual cells, they determined how four members of a class of proteins known as chromatin regulators establish and control a cell's ability to maintain a particular state of gene expression—to remember it—even once the signal that established that state is gone.

 

The researchers reported their findings in the February 12 issue of the journal Science.


Via Dr. Stefan Gruenwald
Scooped by Ben van Lier

Claude Shannon’s information theory built the foundation for the digital era

Claude Shannon, born 100 years ago, devised the mathematical representation of information that made the digital era possible.
Rescooped by Ben van Lier from Amazing Science

The Atom Without Properties

The microscopic world is governed by the rules of quantum mechanics, where the properties of a particle can be completely undetermined and yet strongly correlated with those of other particles. Physicists from the University of Basel have observed these so-called Bell correlations for the first time between hundreds of atoms. Their findings are published in the scientific journal Science.

 

Everyday objects possess properties independently of each other and regardless of whether we observe them or not. Einstein famously asked whether the moon still exists if no one is there to look at it; we answer with a resounding yes. This apparent certainty does not exist in the realm of small particles. The location, speed or magnetic moment of an atom can be entirely indeterminate and yet still depend greatly on the measurements of other distant atoms.

 

With the (false) assumption that atoms possess their properties independently of measurements and independently of each other, a so-called Bell inequality can be derived. If it is violated by the results of an experiment, it follows that the properties of the atoms must be interdependent. This is described as Bell correlations between atoms, which also imply that each atom takes on its properties only at the moment of the measurement. Before the measurement, these properties are not only unknown -- they do not even exist.
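The article does not write the inequality out. In its best-known two-particle form (the CHSH inequality, with two measurement settings a, a' and b, b' per side and outcomes ±1), the assumption of independent, pre-existing properties bounds the correlation functions E:

\[
S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad |S| \le 2 \ \text{(local hidden variables)}, \qquad |S| \le 2\sqrt{2} \ \text{(quantum mechanics)}.
\]

Any measured |S| > 2 therefore rules out the assumption that the particles carried predetermined, mutually independent properties. (The Basel experiment with hundreds of atoms relies on a many-particle generalization of this idea rather than the two-particle form shown here.)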

 

A team of researchers led by professors Nicolas Sangouard and Philipp Treutlein from the University of Basel, along with colleagues from Singapore, have now observed these Bell correlations for the first time in a relatively large system, specifically among 480 atoms in a Bose-Einstein condensate. Earlier experiments showed Bell correlations with a maximum of four light particles or 14 atoms. The results mean that these peculiar quantum effects may also play a role in larger systems.

 

In order to observe Bell correlations in systems consisting of many particles, the researchers first had to develop a new method that does not require measuring each particle individually – which would require a level of control beyond what is currently possible. The team succeeded in this task with the help of a Bell inequality that was only recently discovered. The Basel researchers tested their method in the lab with small clouds of ultracold atoms cooled with laser light down to a few billionths of a degree above absolute zero. The atoms in the cloud constantly collide, causing their magnetic moments to become slowly entangled. When this entanglement reaches a certain magnitude, Bell correlations can be detected. Author Roman Schmied explains: “One would expect that random collisions simply cause disorder. Instead, the quantum-mechanical properties become entangled so strongly that they violate classical statistics.”


Via Dr. Stefan Gruenwald
Scooped by Ben van Lier

Ray Kurzweil Predicts Three Technologies Will Define Our Future

Over the last several decades, the digital revolution has changed nearly every aspect of our lives. The pace of progress in computers has been accelerating, and today, computers and networks...
Rescooped by Ben van Lier from Amazing Science

How the brain produces consciousness in “time slices”


EPFL scientists propose a new way of understanding how the brain processes unconscious information into our consciousness. According to the model, consciousness arises only in time intervals of up to 400 milliseconds, with gaps of unconsciousness in between. The driver ahead suddenly stops, and you find yourself stomping on your brakes before you even realize what is going on. We would call this a reflex, but the underlying reality is much more complex, forming a debate that goes back centuries: Is consciousness a constant, uninterrupted stream or a series of discrete bits – like the 24 frames-per-second of a movie reel? Scientists from EPFL and the universities of Ulm and Zurich now put forward a new model of how the brain processes unconscious information, suggesting that consciousness arises only in intervals up to 400 milliseconds, with no consciousness in between. The work is published in PLoS Biology.

 

Consciousness seems to work as a continuous stream: one image or sound or smell or touch smoothly follows the other, providing us with a continuous image of the world around us. As far as we are concerned, it seems that sensory information is continuously translated into conscious perception: we see objects move smoothly, we hear sounds continuously, and we smell and feel without interruption. However, another school of thought argues that our brain collects sensory information only at discrete time-points, like a camera taking snapshots. Even though there is a growing body of evidence against "continuous" consciousness, it also looks like the "discrete" theory of snapshots is too simple to be true.

 

Michael Herzog at EPFL and Frank Scharnowski at the University of Zurich have now developed a new paradigm, or "conceptual framework", of how consciousness might actually work. They did this by reviewing data from previously published psychological and behavioral experiments that aim to determine whether consciousness is continuous or discrete. Such experiments can involve showing a person two images in rapid succession and asking them to distinguish between them while monitoring their brain activity.

 

The new model proposes a two-stage processing of information. First comes the unconscious stage: The brain processes specific features of objects, e.g. color or shape, and analyzes them quasi-continuously and unconsciously with a very high time-resolution. However, the model suggests that there is no perception of time during this unconscious processing. Even time features, such as duration or color change, are not perceived during this period. Instead, the brain represents its duration as a kind of “number”, just as it does for color and shape.

 

Then comes the conscious stage: Unconscious processing is completed, and the brain simultaneously renders all the features conscious. This produces the final “picture”, which the brain finally presents to our consciousness, making us aware of the stimulus.
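A deliberately crude way to picture the two-stage model (my sketch, not the authors' code): features are sampled quasi-continuously, but only collapsed into a single conscious "picture" at discrete intervals, with duration and change reported as summary numbers rather than experienced as flow.

```python
SAMPLE_MS = 10      # resolution of unconscious feature processing (hypothetical value)
WINDOW_MS = 400     # upper bound on the gap between conscious updates (from the article)

def stimulus(t_ms):
    # Hypothetical input: a colour that switches partway through the first window.
    return "red" if t_ms < 300 else "green"

buffer = []
for t in range(0, 800, SAMPLE_MS):
    buffer.append(stimulus(t))              # stage 1: continuous, unconscious analysis
    if (t + SAMPLE_MS) % WINDOW_MS == 0:
        # Stage 2: everything buffered is integrated and rendered at once.
        percept = {"dominant": max(set(buffer), key=buffer.count),
                   "changes": sum(a != b for a, b in zip(buffer, buffer[1:]))}
        print(f"conscious update at {t + SAMPLE_MS} ms:", percept)
        buffer.clear()
```

Each window yields one integrated percept; the colour change inside the first window surfaces only as a counted "change", not as something experienced while it happened, which is the model's central claim.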


Via Dr. Stefan Gruenwald
Philippe Vallat's curator insight, April 14, 8:29 AM

Well, the title does not describe the content: the text says how the brain processes information, not how it "produces" consciousness

Scooped by Ben van Lier

Second Chinese team reports gene editing in human embryos

Study used CRISPR technology to introduce HIV-resistance mutation into embryos.
Scooped by Ben van Lier

Geoffrey Hinton, the 'godfather' of deep learning, on AlphaGo

The scientist who helped develop the neural networks behind Google's AlphaGo, which beat grandmaster Lee Sedol, on the past, present and future of AI
Scooped by Ben van Lier

Systems Theory and Postmodernism

I've been listening to a lecture by Luhmann titled  "Systems Theory and Postmodernism." Early in the lecture, Luhmann mentions a conversation he had with Lyotard in which Lyotard said "The Postmodern Condition was not one of his better books." Luhmann...
Rescooped by Ben van Lier from Amazing Science

Neutrinos can flip between different states effortlessly, hinting at a new type of physics


Neutrino mutation would not be possible if it weren’t for the particle’s minuscule mass. Because each of the three known mass states is so small and its associated quantum wavelength is so long, the waves corresponding to each state can remain largely in sync, with only small offsets, over cosmic distances. This allows neutrinos to flicker between different flavors in an ephemeral state of multiplicity.

If their masses were larger and their wavelengths shorter, the waves would quickly become so out of phase that this knife-edge balance between different flavors would collapse, forcing the neutrinos into one type or the other. “The different flavors would separate from each other,” says de Gouvêa. “They would have a very binary behavior.” The fact that neutrinos don’t, thanks to their puny mass states, makes sense according to the rules of quantum mechanics, but it is still mind-bending, says neutrino researcher Jason Koskinen of the University of Copenhagen. “I still haven’t wrapped my head around this,” he admits.
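The article stays qualitative; the standard two-flavour oscillation formula (not quoted in the piece) makes the role of the tiny mass-squared splitting explicit. For mixing angle θ, mass-squared difference Δm², baseline L and energy E,

\[
P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^{2}\,L}{4E}\right) \approx \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^{2}[\mathrm{eV}^2]\;L[\mathrm{km}]}{E[\mathrm{GeV}]}\right).
\]

Because Δm² is minuscule, the relative phase between the mass states grows very slowly with distance, which is why the flavours stay "largely in sync" over long baselines; larger mass splittings would scramble the phase quickly and suppress the flickering described above.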

There is just one snag: Neutrinos weren’t supposed to have any mass at all. “We built our standard model around the idea that neutrinos are massless,” says Janet Conrad of the Massachusetts Institute of Technology (MIT).

The fact that they have mass, however small, is a big problem. The standard model is physicists’ best idea of how particles and forces interact—a spectacularly strong edifice whose construction was completed in 2012 with the discovery of its last missing particle, the Higgs boson. “Neutrino oscillation is the only confirmed physics right now that can be done outside the standard model,” says Koskinen.

The reason that neutrino mass is so tricky has to do with how any particle gets its mass. Other elementary particles with mass come in two mirror versions—one left- and one right-handed—that correspond to the direction of their spin. Each version can interact with a different force of nature, and both “hands” seem to be required to give particles mass, thanks to their interaction with an invisible quantum “ether” that suffuses all of space: the Higgs field, whose signature particle is the Higgs boson.

The Higgs field acts a bit like a mirror, turning a particle with one spin into its mirror opposite. “The idea is that every once in a while, a left-handed particle will hit the Higgs field and convert to a right-handed particle,” says de Gouvêa. “The net effect is that it looks like a particle with mass.”
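Written out (this is the standard textbook expression, not something the article spells out), the Higgs coupling ties the two chiralities together, and once the Higgs field settles at its vacuum value v it reduces to a Dirac mass term:

\[
\mathcal{L}_{\text{mass}} = -\frac{y\,v}{\sqrt{2}}\left(\bar{\psi}_{L}\psi_{R} + \bar{\psi}_{R}\psi_{L}\right) = -\,m\,\bar{\psi}\psi, \qquad m = \frac{y\,v}{\sqrt{2}}.
\]

The term exists only if both a left-handed and a right-handed component are available, which is why the standard model, built with only left-handed neutrinos, predicted them to be exactly massless.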


Via Dr. Stefan Gruenwald
Scooped by Ben van Lier

How Google Plans to Solve Artificial Intelligence

Mastering Go is just the beginning for Google DeepMind, which hopes to create human-like AI.
Scooped by Ben van Lier

Microsoft’s Quantum Mechanics

Can an aging corporation’s adventures in fundamental physics research open a new era of unimaginably powerful computers?
Scooped by Ben van Lier

Genome Discovery Holds Key to Designer Organisms

Scientists are homing in on the fewest genes needed for an organism to survive.