Systems Theory
theoretical aspects of (social) systems theory
Curated by Ben van Lier
Rescooped by Ben van Lier from Amazing Science!

The Next Wearable Technology Could Be Your Skin


Technology can be awkward. Our pockets are weighed down with ever-larger smartphones that are a pain to pull out when we’re in a rush. And attempts to make our devices more easily accessible with smartwatches have so far fallen flat. But what if a part of your body could become your computer, with a screen on your arm and maybe even a direct link to your brain?


Artificial electronic skin (e-skin) could one day make this a possibility. Researchers are developing flexible, bendable and even stretchable electronic circuits that can be applied directly to the skin. As well as turning your skin into a touchscreen, this could also help replace feeling if you’ve suffered burns or problems with your nervous system.


The simplest version of this technology is essentially an electronic tattoo. In 2004, researchers in the US and Japan unveiled a pressure sensor circuit made from pre-stretched thinned silicon strips that could be applied to the forearm. But inorganic materials such as silicon are rigid and the skin is flexible and stretchy. So researchers are now looking to electronic circuits made from organic materials (usually special plastics or forms of carbon such as graphene that conduct electricity) as the basis of e-skin.


Typical e-skin consists of a matrix of different electronic components — flexible transistors, organic LEDs, sensors and organic photovoltaic (solar) cells — connected to each other by stretchable or flexible conductive wires. These devices are often built up from very thin layers of material that are sprayed or evaporated onto a flexible base, producing a large (up to tens of cm2) electronic circuit in a skin-like form.
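The row-by-row read-out such a sensor matrix implies can be sketched in a few lines of Python. This is purely illustrative: the grid, readings and threshold are invented, not taken from the research described above.

```python
# Illustrative sketch: scanning a small "e-skin" pressure-sensor matrix.
# Grid values and the threshold are hypothetical.
PRESSURE_THRESHOLD = 0.5  # arbitrary units

def scan_matrix(grid, threshold=PRESSURE_THRESHOLD):
    """Return (row, col) coordinates whose sensor reading exceeds threshold."""
    touches = []
    for r, row in enumerate(grid):
        for c, reading in enumerate(row):
            if reading > threshold:
                touches.append((r, c))
    return touches

# A 4x4 patch with two simulated touch points.
patch = [
    [0.0, 0.1, 0.0, 0.0],
    [0.0, 0.9, 0.0, 0.0],  # touch at (1, 1)
    [0.0, 0.0, 0.0, 0.7],  # touch at (2, 3)
    [0.0, 0.0, 0.1, 0.0],
]
print(scan_matrix(patch))  # → [(1, 1), (2, 3)]
```

A real e-skin would multiplex rows and columns electrically, but the logic of locating touches is the same.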

Via Anna Hu, TechinBiz, Dr. Stefan Gruenwald
Anna Hu's curator insight, June 30, 2016 7:55 PM
How cool is this





Rescooped by Ben van Lier from Amazing Science!

The Current State of Machine Intelligence


A few years ago, investors and startups were chasing “big data”. Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or collectively “machine intelligence”. The Bloomberg Beta fund, which is focused on the future of work, has been investing in these approaches.


Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus).

Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape.
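As a taste of the simplest of those problem types, classification, here is a minimal perceptron in plain Python. It is a toy stand-in for the methods on the landscape (SVMs, deep belief networks), and the data points and labels are invented for illustration.

```python
# Minimal perceptron classifier on a tiny, linearly separable toy dataset.
# All data and parameters are invented; this is illustrative only.

def train_perceptron(points, labels, epochs=20):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified
                w[0] += y * x1
                w[1] += y * x2
                b += y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

points = [(2, 1), (1, 2), (-1, -2), (-2, -1)]
labels = [1, 1, -1, -1]
w, b = train_perceptron(points, labels)
print([predict(w, b, p) for p in points])  # → [1, 1, -1, -1]
```

Every classifier on the landscape, however sophisticated, is ultimately learning a decision rule like this one from labelled examples.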

What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence).

We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence and documenting its enabling factors. Kevin Kelly, for example, chalks it up to cheap parallel computing, large datasets, and better algorithms.

Machine intelligence is enabling applications we already expect like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying ’80s classic video games.

Big companies have a disproportionate advantage, especially those that build consumer products. The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects) — and they have the most to gain from the low hanging fruit that machine intelligence bears.
Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty pleasure sitcom).
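The core of an item recommendation can be sketched with nothing more than cosine similarity over user ratings. This is a hypothetical miniature (users, items and scores all invented), nothing like the scale or sophistication of the systems named above.

```python
# Toy user-based recommendation via cosine similarity over ratings.
# Users, items and scores are invented for illustration only.
import math

ratings = {                      # user -> {item: rating}
    "ann": {"sitcom": 5, "drama": 1, "docu": 4},
    "bob": {"sitcom": 4, "drama": 1, "docu": 5},
    "eve": {"sitcom": 1, "drama": 5},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(user):
    """Suggest the unseen item best liked by the most similar other user."""
    others = sorted(
        (cosine(ratings[user], ratings[o]), o)
        for o in ratings if o != user
    )
    _, nearest = others[-1]
    unseen = {i: r for i, r in ratings[nearest].items()
              if i not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("eve"))  # the nearest-neighbour pick for "eve"
```

Production recommenders add matrix factorization, implicit feedback and heavy engineering, but the neighbourhood intuition is the same.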
Now they are all competing in a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state-of-the-art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “Firefly”), and dynamic question-answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this).

Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements, so they are focusing their attention more on knowledge representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses.
Via Dr. Stefan Gruenwald
John Vollenbroek's curator insight, April 25, 2015 2:53 AM

I like this overview

pbernardon's curator insight, April 26, 2015 2:33 AM

A clear and very interesting infographic and map of artificial intelligence and the uses it creates, which organizations will have to make their own.



Rescooped by Ben van Lier from Tracking the Future!

How do you build a large-scale quantum computer?


Physicists led by ion-trapper Christopher Monroe at the JQI have proposed a modular quantum computer architecture that promises scalability to much larger numbers of qubits. The components of this architecture have individually been tested and are available, making it a promising approach. In the paper, the authors present expected performance and scaling calculations, demonstrating that their architecture is not only viable, but in some ways, preferable when compared to related schemes.

Via Szabolcs Kósa
Andreas Pappas's curator insight, March 28, 2014 4:40 AM

This article shows how scientists can increase the scale of quantum machines while still making them behave quantum mechanically, by reading the qubits with lasers instead of conventional wiring.

Rescooped by Ben van Lier from Tracking the Future!

Beyond Moore's Law: Nanocomputing using nanowire tiles


An interdisciplinary team of scientists and engineers from The MITRE Corporation and Harvard University have taken key steps toward ultra-small electronic computer systems that push beyond the imminent end of Moore's Law, which states that the device density and overall processing power for computers will double every two to three years.

The ultra-small, ultra-low-power control processor—termed a nanoelectronic finite-state machine or "nanoFSM"—is smaller than a human nerve cell. It is composed of hundreds of nanowire transistors, each of which is a switch about ten-thousand times thinner than a human hair. The nanowire transistors use very little power because they are "nonvolatile." That is, the switches remember whether they are on or off, even when no power is supplied to them.
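A finite-state machine, the "FSM" in "nanoFSM", is just a transition table: the next state depends only on the current state and the input symbol. A software sketch, with invented states and inputs rather than anything from the MITRE/Harvard device:

```python
# A finite-state machine as a transition table (states/inputs are invented).
# The nanoFSM realises this kind of logic in nonvolatile nanowire switches.

TRANSITIONS = {
    ("idle",    "start"): "running",
    ("running", "pause"): "paused",
    ("paused",  "start"): "running",
    ("running", "stop"):  "idle",
    ("paused",  "stop"):  "idle",
}

def run_fsm(state, inputs):
    """Feed a sequence of input symbols through the machine."""
    for symbol in inputs:
        state = TRANSITIONS.get((state, symbol), state)  # unknown input: stay
    return state

print(run_fsm("idle", ["start", "pause", "start", "stop"]))  # → idle
```

The nonvolatility described above means the hardware equivalent of `state` survives power loss for free.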

Via Szabolcs Kósa
James Jandebeur's curator insight, February 1, 2014 12:57 PM

It mentions that the processors can now be made smaller than a neuron; I wonder how their power compares. Still, quite a breakthrough if it works out.

Christian Verstraete's curator insight, February 3, 2014 1:29 AM

Will this address our needs when we reach the physical limits of our current chip technology?

Rescooped by Ben van Lier from Tracking the Future!

Processors That Work Like Brains Will Accelerate Artificial Intelligence


A new breed of computer chips that operate more like the brain may be about to narrow the gulf between artificial and natural computation—between circuits that crunch through logical operations at blistering speed and a mechanism honed by evolution to process and act on sensory input from the real world. Advances in neuroscience and chip technology have made it practical to build devices that, on a small scale at least, process data the way a mammalian brain does. These “neuromorphic” chips may be the missing piece of many promising but unfinished projects in artificial intelligence, such as cars that drive themselves reliably in all conditions, and smartphones that act as competent conversational assistants.
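The workhorse model behind many neuromorphic designs is the leaky integrate-and-fire neuron: a membrane voltage leaks away over time, accumulates input current, and emits a spike when it crosses a threshold. A sketch with invented parameters, not any particular chip's circuit:

```python
# Leaky integrate-and-fire neuron (parameters are illustrative).

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Integrate input with leak; emit a spike and reset at threshold.

    Returns the list of time steps at which the neuron spiked.
    """
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = v * leak + current      # leaky integration
        if v >= threshold:
            spikes.append(t)        # spike...
            v = 0.0                 # ...and reset
    return spikes

# Constant drive of 0.3 per step for 20 steps yields a regular spike train.
print(simulate_lif([0.3] * 20))  # → [3, 7, 11, 15, 19]
```

Stronger input makes the neuron fire faster, which is how such chips encode analog quantities in spike rates.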

Via Szabolcs Kósa
Rescooped by Ben van Lier from Tracking the Future!

Paul Allen and the Machines: Teaching the next generation of artificial intelligence


Microsoft co-founder Paul Allen has been pondering artificial intelligence since he was a kid. In the late '60s, eerily intelligent computers were everywhere, whether it was 2001's HAL or Star Trek's omnipresent Enterprise computer. As Allen recalls in his memoir, "machines that behaved like people, even people gone mad, were all the rage back then." He would tag along to his father's job at the library, overwhelmed by the information, and daydream about "the sci-fi theme of a dying or threatened civilization that saves itself by finding a trove of knowledge." What if you could collect all the world's information in a single computer mind, one capable of intelligent thought, and be able to communicate in simple human language? 

Forty years later, with nearly 9 billion dollars to Allen's name, that idea is beginning to seem like more than just fantasy. Much of the technology is already here. We talk to our phones and aren't surprised when they talk back. A web search can answer nearly any question, undergirded by a semantic understanding of the structure of online information. But while the tools are powerful, the processes behind them are still fairly basic. Siri only understands a small subset of questions, and she can't reason, or do anything you might call thinking. Even Watson, IBM's Jeopardy champ, can only handle simple questions with unambiguous phrasing. Already, Google is looking to the Star Trek computer as a guiding light for its voice search — but it's still a long way off. If technology is going to get there, we'll need computers that are better at talking and, more crucially, better at reasoning.

Via Szabolcs Kósa
Roger Ellman's curator insight, October 28, 2013 5:48 AM

Food, or at least a snack, for thought.

Rescooped by Ben van Lier from Tracking the Future!

Quantum Computers And The End Of Security


Quantum computing and quantum communications were conceived only about 30 years ago; before that, scientific journals refused publications on these subjects because they looked more like science fiction. Nowadays, quantum systems really do exist, with some of them reaching the stage of commercial sales. Quantum computers raise and answer new questions in the security field, primarily in cryptography.
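The cryptographic worry comes largely from Shor's algorithm, which factors a number N by finding the period r of a^x mod N; the quantum part finds r exponentially faster, but the surrounding arithmetic is classical. A slow, classical illustration for N = 15:

```python
# Classical core of Shor's factoring: period-finding, done by brute force here.
# A quantum computer would find the period exponentially faster.
from math import gcd

def find_period(a, n):
    """Smallest r > 0 with a**r % n == 1 (brute force; assumes gcd(a, n) == 1)."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_with_period(a, n):
    r = find_period(a, n)
    if r % 2:
        return None                      # need an even period
    half = pow(a, r // 2, n)
    p, q = gcd(half - 1, n), gcd(half + 1, n)
    return sorted({p, q}) if 1 < p < n else None

print(factor_with_period(7, 15))  # period of 7 mod 15 is 4 → factors [3, 5]
```

For RSA-sized N the period search is infeasible classically, which is exactly the gap a large quantum computer would close.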

Via Szabolcs Kósa
IT's curator insight, October 11, 2013 11:46 PM

Where will people be with quantum technology in 30 years? ...

Rescooped by Ben van Lier from Amazing Science!

Can Life Evolve from Wires and Plastic?


In a laboratory tucked away in a corner of the Cornell University campus, Hod Lipson’s robots are evolving. He has already produced a self-aware robot that is able to gather information about itself as it learns to walk.


Hod Lipson reports: "We wrote a trivial 10-line algorithm, ran it on a big gaming simulator, put it in a big computer and waited a week. In the beginning we got piles of junk. Then we got beautiful machines. Crazy shapes. Eventually a motor connected to a wire, which caused the motor to vibrate. Then a vibrating piece of junk moved infinitely better than any other… eventually we got machines that crawl. The evolutionary algorithm came up with a design, blueprints that worked for the robot."
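The recipe Lipson describes, random variation plus selection, can be sketched in a few lines. Here the "machine" is just a bit string and the fitness is the number of 1-bits, a standard toy problem, not the robot simulator from the article.

```python
# Minimal evolutionary algorithm: elitist selection + random bit-flip mutation.
# The genome and fitness are a standard toy (OneMax), invented for illustration.
import random

random.seed(42)          # deterministic for the example
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(genome):
    return sum(genome)   # count of 1-bits; higher is "fitter"

def mutate(genome):
    return [bit ^ (random.random() < 0.05) for bit in genome]  # 5% flip rate

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]            # keep the better half
    children = [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))
```

Swap the bit string for a simulated body plan and the fitness for "distance crawled" and you have, conceptually, the experiment described above.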


The computer-bound creature transferred from the virtual domain to our world by way of a 3D printer. And then it took its first steps. Was this arrangement of rods and wires the machine-world’s equivalent of the primordial cell? Not quite: Lipson’s robot still couldn’t operate without human intervention. ‘We had to snap in the battery,’ he told me, ‘but it was the first time evolution produced physical robots. Eventually, I want to print the wires, the batteries, everything. Then evolution will have so much freedom. Evolution will not be constrained.’


Not many people would call creatures bred of plastic, wires and metal beautiful. Yet to see them toddle deliberately across the laboratory floor, or bend and snap as they pick up blocks and build replicas of themselves, brings to mind the beauty of evolution and animated life.


One could imagine Lipson’s electronic menagerie lining the shelves at Toys R Us, if not the CIA, but they have a deeper purpose. Lipson hopes to illuminate evolution itself. Just recently, his team provided some insight into modularity—the curious phenomenon whereby biological systems are composed of discrete functional units.


Though inherently newsworthy, the fruits of the Creative Machines Lab are just small steps along the road towards new life. Lipson, however, maintains that some of his robots are alive in a rudimentary sense. ‘There is nothing more black or white than alive or dead,’ he said, ‘but beneath the surface it’s not simple. There is a lot of grey area in between.’


The robots of the Creative Machines Lab might fulfill many criteria for life, but they are not completely autonomous—not yet. They still require human handouts for replication and power. These, though, are just stumbling blocks, conditions that could be resolved some day soon—perhaps by way of a 3D printer, a ready supply of raw materials, and a human hand to flip the switch just the once.


According to Lipson, an evolvable system is ‘the ultimate artificial intelligence, the most hands-off AI there is, which means a double edge. All you feed it is power and computing power. It’s both scary and promising.’ What if the solution to some of our present problems requires the evolution of artificial intelligence beyond anything we can design ourselves? Could an evolvable program help to predict the emergence of new flu viruses? Could it create more efficient machines? And once a truly autonomous, evolvable robot emerges, how long before its descendants make a pilgrimage to Lipson’s lab, where their ancestor first emerged from a primordial soup of wires and plastic to take its first steps on Earth?

Via Dr. Stefan Gruenwald
Rescooped by Ben van Lier from Tracking the Future!

Global Information Technology Report 2013


The Global Information Technology Report 2013, the 12th in the series, analyses the impact and influence of ICTs on economic growth and jobs in a hyperconnected world. Read the full news release for more information.
At the core of the report, the Networked Readiness Index (NRI) measures the preparedness of an economy to use ICT to boost competitiveness and well-being.
The report highlights the lack of progress in bridging the new digital divide – not only in terms of developing ICT infrastructure but also in economic and social impact. Despite rapid adoption of mobile telephony, most developing economies lag behind advanced economies due to environments that are insufficiently conducive to innovation and competitiveness. On the other hand, the report shows the progress that countries are making to fully use ICT to boost higher productivity, economic growth and quality jobs in the current economic environment. Finally, the report reveals an apparent investment threshold in ICT, skills and innovation beyond which return on investment increases significantly.
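Composite indicators like the NRI are typically weighted averages of pillar scores. A sketch with invented pillars, scores and weights, not the report's actual methodology:

```python
# Toy composite index: a weighted average of pillar scores.
# Pillar names, scores and equal weights are invented for illustration.
pillars = {"environment": 4.2, "readiness": 5.1, "usage": 3.8, "impact": 3.5}
weights = {"environment": 0.25, "readiness": 0.25, "usage": 0.25, "impact": 0.25}

nri_score = sum(pillars[p] * weights[p] for p in pillars)
print(round(nri_score, 2))  # → 4.15
```

The real index nests sub-indices and dozens of indicators, but each level aggregates in essentially this way.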

Via Szabolcs Kósa
Rescooped by Ben van Lier from Man and Machine!

10 Body Hacks That Will Be Available By 2025


In the year 2000, conceiving of a device that worked simultaneously as a handheld computer, portable MP3 player, satellite radio, GPS, and phone seemed like science fiction against the then-current backdrop of shiny new, brick-like flip phones. As witnessed with today’s success of the iPhone, technology advances quickly and without much advance notice if driven by market demand and commercial backing.
The next wave of the future could go beyond the technology we’re holding in our hands and extend to what’s embedded inside our hands. There is experimentation with bio-technological hacks going on today both in the lab and in an unsanctioned underground of fanatics that could result in body implant “upgrades” being as ubiquitous in 2025 as smartphones are now.

Via Szabolcs Kósa, trendspotter, Martin Talks
Rescooped by Ben van Lier from Amazing Science!

GM and Lyft Are Teaming Up to Build a Network of Self-Driving Cars


General Motors and Lyft are teaming up to create a national network of self-driving cars, the companies jointly announced this morning.

GM will invest $500 million in Lyft and take a seat on the ride-sharing startup’s board of directors. It will also become a preferred provider of cars for short-term use to Lyft drivers.


GM, America’s biggest automaker, has been working on autonomous technology since it first collaborated with Carnegie Mellon University in 2007, for an autonomous vehicle competition sponsored by DARPA. Next year, it plans to finally put a related product on the market: “Super Cruise,” a semi-autonomous feature that will let a car handle itself on the highway, will be available on the 2017 Cadillac CT6.

The partnership with Lyft, though, signifies ambitions far beyond Super Cruise. While we have no details on the proposed “network of on-demand autonomous vehicles”—such as how it will work or when it will arrive—we can assume it will require a far more advanced take on autonomous driving than Super Cruise will offer. Lyft, like other ride-sharing services, does the bulk of its work in cities, which are devilishly hard for robots to navigate. Urban areas are full of complicated intersections, pedestrians, cyclists, and other hard-to-predict variables.

Via Dr. Stefan Gruenwald
Rescooped by Ben van Lier from Amazing Science!

Man vs. Machine: Will Computers Soon Become More Intelligent Than Us?


Computers might soon become more intelligent than us. Some of the best brains in Silicon Valley are now trying to work out what happens next.

Nate Soares, a former Google engineer, is weighing up the chances of success for the project he is working on. He puts them at only about 5 per cent. But the odds he is calculating aren’t for some new smartphone app. Instead, Soares is talking about something much more arresting: whether programmers like him will be able to save mankind from extinction at the hands of its own most powerful creation.

The object of concern – both for him and the Machine Intelligence Research Institute (Miri), whose offices these are – is artificial intelligence (AI). Super-smart machines with malicious intent are a staple of science fiction, from the soft-spoken Hal 9000 to the scarily violent Skynet. But the AI that people like Soares believe is coming mankind’s way, very probably before the end of this century, would be much worse.

Besides Soares, there are probably only four computer scientists in the world currently working on how to programme the super-smart machines of the not-too-distant future to make sure AI remains “friendly”, says Luke Muehlhauser, Miri’s director. It isn’t unusual to hear people express big thoughts about the future in Silicon Valley these days – though most of the technology visions are much more benign. It sometimes sounds as if every entrepreneur, however trivial the start-up, has taken a leaf from Google’s mission statement and is out to “make the world a better place”.

Warnings have lately grown louder. Astrophysicist Stephen Hawking, writing earlier this year, said that AI would be “the biggest event in human history”. But he added: “Unfortunately, it might also be the last.”

Elon Musk – whose successes with electric cars (through Tesla Motors) and private space flight (SpaceX) have elevated him to almost superhero status in Silicon Valley – has also spoken up. Several weeks ago, he advised his nearly 1.2 million Twitter followers to read Superintelligence, a book about the dangers of AI, which has made him think the technology is “potentially more dangerous than nukes”. Mankind, as Musk sees it, might be like a computer program whose usefulness ends once it has started up a more complex piece of software. “Hope we’re not just the biological boot loader for digital superintelligence,” he tweeted. “Unfortunately, that is increasingly probable.”

Via Dr. Stefan Gruenwald
Rescooped by Ben van Lier from Tracking the Future!

Graphene nanoribbons could be the savior of Moore’s Law


With each new generation of microchips, transistors are being placed closer and closer together. This can only go on so long before there’s no more room to improve, or something revolutionary has to come along to change everything. One of the materials that might be the basis of that revolution is none other than graphene. Researchers at the University of California at Berkeley are hot on the trail of a form of so-called nanoribbon graphene that could increase the density of transistors on a computer chip by as much as 10,000 times.
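Rough arithmetic puts that 10,000-fold figure in Moore's-Law terms: since density scales with the square of feature size, it corresponds to about a 100-fold linear shrink, and to roughly 13 density doublings.

```python
# Back-of-envelope arithmetic on a 10,000x transistor-density increase.
import math

density_gain = 10_000
linear_shrink = math.sqrt(density_gain)  # area scales as length squared
doublings = math.log2(density_gain)      # Moore's-Law doublings
years = doublings * 2                    # at one doubling every ~2 years

print(linear_shrink)        # → 100.0: each feature ~100x smaller per dimension
print(round(doublings, 1))  # → 13.3 doublings
print(round(years), "years' worth of Moore's Law in one step")
```

In other words, if the nanoribbon claim held up, it would compress more than two decades of conventional scaling into a single material change.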

Via Szabolcs Kósa
Thierry Bodhuin's curator insight, February 18, 2014 4:10 AM

Moore's law may continue ... 


Yaroslav Writtle's curator insight, February 18, 2014 6:44 AM

Interesting stuff - wonder what could this mean for computing capacity 10 years down the line?

Benjamin Rees's curator insight, March 27, 2015 8:06 AM

For the past few decades, the concept of Moore's Law has proven to be relatively accurate in saying that the density of transistors able to be placed on an integrated circuit roughly doubles every two years. However, as transistors are manufactured to be placed increasingly close together, it can be foreseen that there will soon be no more room for improvement using current methods and materials. Recent developments in graphene technology may allow for more spatially efficient circuits in the future, thereby continuing this trend of doubling transistor density.

Rescooped by Ben van Lier from Tracking the Future!

Integrated quantum circuit is most complex ever


Researchers in the UK, Japan and the Netherlands have fabricated the most functionally complex integrated quantum circuit ever from a single material, capable of generating photons and entangling them at the same time. The circuit consists of two photon sources on a silicon chip that interfere quantum mechanically. Its inventors say that it could be used in quantum information processing applications and in complex on-chip quantum optics experiments.
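The photon interference at the heart of such circuits can be worked through with textbook beamsplitter amplitudes. This is the generic two-photon (Hong-Ou-Mandel) effect, not the specific chip described here: two indistinguishable photons entering opposite ports of a 50/50 beamsplitter never exit separately, because the transmit-transmit and reflect-reflect amplitudes cancel.

```python
# Textbook two-photon interference at a 50/50 beamsplitter.
# Convention: transmission amplitude t, reflection amplitude r = i*t (unitary).
t = 1 / 2 ** 0.5           # transmission amplitude
r = 1j * t                 # reflection amplitude

# Both photons enter different input ports.
# Coincidence (one photon out of each port): transmit/transmit + reflect/reflect.
coincidence_amp = t * t + r * r
p_coincidence = abs(coincidence_amp) ** 2

# Both photons "bunch" into the same output port (bosonic sqrt(2) enhancement).
bunch_amp = 2 ** 0.5 * t * r
p_same_port = abs(bunch_amp) ** 2        # probability per output port

print(round(p_coincidence, 12))  # → 0.0: the photons never exit separately
print(round(p_same_port, 12))    # → 0.5 for each of the two output ports
```

This cancellation only happens for quantum-mechanically indistinguishable photons, which is why generating and interfering photons on a single chip, as in this work, matters.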

Via Szabolcs Kósa
Rescooped by Ben van Lier from The Beinghood Times!

Where Does Consciousness Come From?


This Science Daily article discusses how scientists have taken a significant step toward understanding conscious perception by showing how single neurons in the human brain react to certain images. From this they believe consciousness arises as an emergent property of the human mind.

Via Allen David Reed
Allen David Reed's curator insight, December 3, 2013 3:16 AM


Science can be a good thing or a bad thing.  Good in that it explores the mysteries of life, bad in that it believes it has a lock on the truth of things once a new event horizon in understanding has been crossed.  This is the blessing and the curse of the limited rational mind that operates at the Logic-Based (Orange) Worldview of science.


Like climbing a ladder, every hundred years or so the collective awareness (and science as its reflection) undergoes a shift in its perception of what consciousness is and where it comes from.  In the latest such leap, science has now decided that consciousness arises as a result of our brain.  Indeed, both scientists and technologists are now running ahead with this theory, thinking they can out-perform the brain and create the holy grail of consciousness inside a super fast computer.


This urge to ‘replicate’ nature and even ‘outperform’ nature comes from the primary fear that underlies the Orange Worldview and drives much technological innovation in our world: the fear of nature, the fear of the feminine, the fear of life.  Fully enlightened Beings (those who have expanded into the ‘Infinity-Based’ (Coral) or ‘Void-Based’ (Teal) Worldviews) see this ‘Fear-Based’ scientific view of consciousness as a wholly limited perception, much like attempting to describe the universe using mathematical equations while sitting in a darkened box.


What has failed to be understood by current science is that the brain is the ‘effect’ of consciousness, not its cause.  Bruce Lipton, author of “The Biology of Belief”, gave a most excellent presentation on this point of non-locality at the Uplift Festival in Byron Bay, Australia.  Bruce takes us far beyond science's limited belief that consciousness is the result of the brain, explaining how our brain is in fact an ‘antenna’ into the ‘Field’.  We humans are quantum, non-local phenomena.


Yet even this ‘epigenetic’ quantum-Field-view of consciousness can be much further expanded.  One of the best recent examples of this was the near-death experience of Mellen-Thomas Benedict, during which he experienced the infinity of consciousness along with its power of rebooting the hologram that is our body (the cancer he died from was instantly gone).  He describes what the ultimate transcendental state of consciousness, of BEINGHOOD, really is: stepping into the Void.





**To be in the planet's "other" conversation, include the hashtag #BEINGHOOD in your comments.  To learn more about the 10 Worldviews, go to:

Rescooped by Ben van Lier from Tracking the Future!

Biology Confronts Data Complexity


New technologies have launched the life sciences into the age of big data. Biologists must now make sense of their informational windfall.

Via Szabolcs Kósa
Gary Bamford's curator insight, October 21, 2013 1:53 AM

The very definition of 'complexity'!

Germán Morales's curator insight, October 22, 2013 11:26 AM

Treating life as an accumulation of data... what do I know... that's where we're heading.

tatiyana fuentes's curator insight, October 24, 2013 8:49 AM

It was difficult to sequence the human genome, but thanks to new technologies it is now comparatively simple to compare the genomes of the microorganisms living in our bodies, the ocean, the soil, and everywhere else. Life scientists are embarking on countless other big data projects, including efforts to analyze the genomes of many cancers, to map the human brain, and to develop better biofuels and other crops. Compared to fields like physics, astronomy and computer science, which have been dealing with the challenges of massive datasets for decades, the big data revolution in biology has been quick, leaving little time to adapt. Biologists must overcome a number of hurdles, from storing and moving data to integrating and analyzing it, which will require a substantial cultural shift.

Rescooped by Ben van Lier from Peer2Politics!

Putting Big Data in Context | Innovation Insights |


While futurist Ray Kurzweil and Moore’s Law get all the headlines, over the years there has been a lot of interesting research and creative thought given to the idea of technological innovation and its implications for the need for human involvement in complex decision-making. In the era of Big Data, when social networks capture our conversations, likes and ideas like never before and sensor networks, or the Internet of Things, index more of the world around us, the fastest systems have access to more of the raw fuel, in zettabytes of new data, needed to make increasingly complex decisions. But does that mean smart systems will soon replace human decision-making?

Via jean lievens
Rescooped by Ben van Lier from Tracking the Future!

Human extinction warning from Oxford


What are the greatest global threats to humanity? Are we on the verge of our own unexpected extinction?

An international team of scientists, mathematicians and philosophers at Oxford University's Future of Humanity Institute is investigating the biggest dangers.

And they argue in a research paper, Existential Risk as a Global Priority, that international policymakers must pay serious attention to the reality of species-obliterating risks.

Last year there were more academic papers published on snowboarding than on human extinction.

The Swedish-born director of the institute, Nick Bostrom, says the stakes couldn't be higher. If we get it wrong, this could be humanity's final century.

Via Szabolcs Kósa
Rescooped by Ben van Lier from Tracking the Future!

Why the Frontiers of Biology Might Be Inside a Computer Chip


When David Harel started the experiment, the petri dish of mouse cells looked just like any other. Genes were being expressed, proteins were being made, and the tissue was being perfused with oxygen-rich blood.
But then things started to change. First one cell changed position and moved across the plate, followed quickly by another. Eventually, through migration and other changes in cell functionality and signaling, the cells had differentiated, with the lucky ones becoming fully fledged thymic T cells. And it all happened in a fraction of the time that biologists would have expected based on several decades of physiological and developmental studies; after all, this experiment was happening inside a computer, in virtual organs modeled by complicated diagrams, simulating their real-world counterparts.
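Harel is best known for inventing statecharts, and state machines are a natural way to picture cells stepping through differentiation in response to signals. A toy sketch follows; the stages and signals are invented, not the actual thymus model.

```python
# Toy state-machine view of cell differentiation (stages/signals invented).
# Each signal may advance the cell to its next stage; anything else is ignored.

STAGES = {
    ("progenitor", "chemokine"): "migrating",
    ("migrating",  "notch"):     "committed",
    ("committed",  "selection"): "t_cell",
}

def differentiate(signals, stage="progenitor"):
    """Return the sequence of stages visited as signals arrive."""
    history = [stage]
    for signal in signals:
        stage = STAGES.get((stage, signal), stage)   # unrecognised: no change
        history.append(stage)
    return history

print(differentiate(["chemokine", "notch", "selection"])[-1])  # → t_cell
```

Simulations like Harel's run thousands of such machines concurrently, coupled through signaling, which is what makes whole virtual organs feasible.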

Via Szabolcs Kósa