Computational Intelligence
 
Rescooped by Paulo Fazendeiro from Social Foraging
onto Computational Intelligence

Bioengineers Build Open Source Language for Programming Cells


Drew Endy wants to build a programming language for the body.

 

Endy is the co-director of the International Open Facility Advancing Biotechnology — BIOFAB, for short — where he’s part of a team that’s developing a language that will use genetic data to actually program biological cells. That may seem like the stuff of science fiction, but the project is already underway, and the team intends to open source the language, so that other scientists can use it and modify it and perfect it.

 

The effort is part of a sweeping movement to grab hold of our genetic data and directly improve the way our bodies behave — a process known as bioengineering. With the Supreme Court exploring whether genes can be patented, the bioengineering world is at a crossroads, but scientists like Endy continue to push this technology forward.

 

Genes contain information that defines the way our cells function, and some parts of the genome express themselves in much the same way across different types of cells and organisms. This would allow Endy and his team to build a language scientists could use to carefully engineer gene expression – what they call “the layer between the genome and all the dynamic processes of life.”


Via Ashish Umre
Paulo Fazendeiro's insight:

Could this be the inception of a new CI paradigm?

Harshal Hayatnagarkar's curator insight, April 22, 2013 3:43 AM

Ok that's how artificial selection will be driven. Cool !

luiy's curator insight, April 22, 2013 4:50 AM

Nonetheless, this is what Endy is shooting for — right down to Sun’s embrace of open source software. The BIOFAB language will be freely available to anyone, and it will be a collaborative project.

 

Progress is slow — but things are picking up. At this point, the team can get cells to express up to ten genes at a time with “very high reliability.” A year ago, it took them more than 700 attempts to coax the cells to make just one. With the right programming language, he says, this should expand to about a hundred or more by the end of the decade. The goal is to make that language insensitive to the output genes so that cells will express whatever genes a user wants, much like the print function in a program works regardless of what set of characters you feed it.

What does he say to those who fear the creation of Frankencells — biological nightmares that will wreak havoc on our world? “It could go wrong. It could hurt people. It could be done irresponsibly. Assholes could misuse it. Any number of things are possible. But note that we’re not operating in a vacuum,” he says. “There’s history of good applications being developed and regulations being practical and being updated as the technology advances. We need to be vigilant as things continue to change. It’s the boring reality of progress.”

 

He believes this work is not only essential, but closer to reality than the world realizes. “Our entire civilization depends on biology. We need to figure out how to partner better with nature to make the things we need without destroying the environment,” Endy says. “It’s a little bit of a surprise to me that folks haven’t come off the sidelines from other communities and helped more directly and started building out this common language for programming life. It kind of matters.”

Rescooped by Paulo Fazendeiro from Amazing Science

Google is using machine learning to teach robots intelligent reactive behaviors


Using your hand to grasp a pen that’s lying on your desk doesn’t exactly feel like a chore, but for robots, that’s still a really hard thing to do. So to teach robots how to better grasp random objects, Google’s research team dedicated 14 robots to the task. The standard way to solve this problem would be for the robot to survey the environment, create a plan for how to grasp the object, then execute on it. In the real world, though, lots of things can change between formulating that plan and executing on it.

 

Google is now using these robots to train a deep convolutional neural network (a technique that’s all the rage in machine learning right now) to help its robots predict the outcome of their grasps based on the camera input and motor commands. It’s basically hand-eye coordination for robots.
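
To make the idea concrete, here is a minimal sketch of such a grasp-outcome predictor, written in PyTorch. The layer sizes, the 7-dimensional motor command, and the names are illustrative assumptions rather than Google's actual architecture; the point is simply that one network maps a camera frame plus a candidate motor command to a probability that the grasp will succeed.

    # Hedged sketch: NOT Google's model. Shapes and names are assumptions.
    import torch
    import torch.nn as nn

    class GraspSuccessPredictor(nn.Module):
        def __init__(self, motor_dim=7):
            super().__init__()
            # Convolutional trunk that encodes the camera image.
            self.vision = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Fuse image features with the candidate motor command and
            # output the probability that this grasp will succeed.
            self.head = nn.Sequential(
                nn.Linear(64 + motor_dim, 128), nn.ReLU(),
                nn.Linear(128, 1), nn.Sigmoid(),
            )

        def forward(self, image, motor_cmd):
            feats = self.vision(image)
            return self.head(torch.cat([feats, motor_cmd], dim=1))

    # Usage: score several candidate commands and pick the most promising one.
    model = GraspSuccessPredictor()
    image = torch.randn(1, 3, 64, 64)      # dummy camera frame
    candidates = torch.randn(8, 7)         # 8 candidate motor commands
    scores = model(image.expand(8, -1, -1, -1), candidates)
    best_command = candidates[scores.argmax()]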

 

The team says that it took about 3,000 hours of practice (and 800,000 grasp attempts) before it saw “the beginnings of intelligent reactive behaviors.”

 

“The robot observes its own gripper and corrects its motions in real time. It also exhibits interesting pre-grasp behaviors, like isolating a single object from a group,” the team writes. “All of these behaviors emerged naturally from learning, rather than being programmed into the system.”

 

Google’s researchers say the average failure rate without training was 34 percent on the first 30 picking attempts. After training, that number was down to 18 percent. Still not perfect, but the next time a robot comes running after you and tries to grab you, remember that it now has a better than 80 percent chance of succeeding.


Via Dr. Stefan Gruenwald
Rescooped by Paulo Fazendeiro from Social Foraging

King - Man + Woman = Queen: The Marvelous Mathematics of Computational Linguistics

Computational linguistics has dramatically changed the way researchers study and understand language. The ability to number-crunch vast quantities of words for the first time has led to entirely new ways of thinking about words and their relationship to one another.

This number-crunching shows exactly how often a word appears close to other words, an important factor in how they are used. So the word Olympics might appear close to words like running, jumping, and throwing but less often next to words like electron or stegosaurus.  This set of relationships can be thought of as a multidimensional vector that describes how the word Olympics is used within a language, which itself can be thought of as a vector space.  

And therein lies this massive change. This new approach allows languages to be treated like vector spaces with precise mathematical properties. Now the study of language is becoming a problem of vector space mathematics.

Today, Timothy Baldwin at the University of Melbourne in Australia and a few pals explore one of the curious mathematical properties of this vector space: that adding and subtracting vectors produces another vector in the same space.

The question they address is this: what do these composite vectors mean? And in exploring this question they find that the difference between vectors is a powerful tool for studying language and the relationship between words.
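
As a toy illustration of the arithmetic in the headline (hand-made 3-dimensional vectors standing in for the high-dimensional vectors learned from corpus co-occurrence counts; this is not the authors' code), the analogy is answered by adding and subtracting word vectors and then taking the nearest remaining word by cosine similarity:

    # Toy word-analogy arithmetic; the vectors below are invented for illustration.
    import numpy as np

    vectors = {
        "king":  np.array([0.9, 0.8, 0.1]),
        "queen": np.array([0.9, 0.1, 0.8]),
        "man":   np.array([0.1, 0.9, 0.1]),
        "woman": np.array([0.1, 0.1, 0.9]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # king - man + woman should land closest to queen.
    target = vectors["king"] - vectors["man"] + vectors["woman"]
    answer = max((w for w in vectors if w not in {"king", "man", "woman"}),
                 key=lambda w: cosine(vectors[w], target))
    print(answer)  # -> queen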

Via Ashish Umre
Rescooped by Paulo Fazendeiro from Social Foraging

US military studied how to influence Twitter users in Darpa-funded research


The activities of users of Twitter and other social media services were recorded and analysed as part of a major project funded by the US military, in a program that covers ground similar to Facebook’s controversial experiment into how to control emotions by manipulating news feeds.

 

Research funded directly or indirectly by the US Department of Defense’s military research department, known as Darpa, has involved users of some of the internet’s largest destinations, including Facebook, Twitter, Pinterest and Kickstarter, for studies of social connections and how messages spread.

 

While some elements of the multi-million dollar project might raise a wry smile – research has included analysis of the tweets of celebrities such as Lady Gaga and Justin Bieber, in an attempt to understand influence on Twitter – others have resulted in the buildup of massive datasets of tweets and additional types of social media posts.

Several of the DoD-funded studies went further than merely monitoring what users were communicating on their own, instead messaging unwitting participants in order to track and study how they responded.


Via Ashish Umre
Rescooped by Paulo Fazendeiro from Amazing Science

Programming matter by folding: Shape-shifting robots

Self-folding sheets of a plastic-like material point the way to robots that can assume any conceivable 3-D structure.

 

Programmable matter is a material whose properties can be programmed to achieve specific shapes or stiffnesses upon command. This concept requires constituent elements to interact and rearrange intelligently in order to meet the goal. This research considers achieving programmable sheets that can form themselves into different shapes autonomously by folding. Past approaches to creating transforming machines have been limited by the small feature sizes, the large number of components, and the associated complexity of communication among the units. We seek to mitigate these difficulties through the unique concept of self-folding origami with universal crease patterns.


This approach exploits a single sheet composed of interconnected triangular sections. The sheet is able to fold into a set of predetermined shapes using embedded actuation. To implement this self-folding origami concept, we have developed a scalable end-to-end planning and fabrication process. Given a set of desired objects, the system computes an optimized design for a single sheet and multiple controllers to achieve each of the desired objects. The material, called programmable matter by folding, is an example of a system capable of achieving multiple shapes for multiple functions.


As director of the Distributed Robotics Laboratory at the Computer Science and Artificial Intelligence Laboratory (CSAIL), Professor Daniela Rus researches systems of robots that can work together to tackle complicated tasks. One of the big research areas in distributed robotics is what’s called “programmable matter,” the idea that small, uniform robots could snap together like intelligent Legos to create larger, more versatile robots.

 

The U.S. Defense Department’s Defense Advanced Research Projects Agency (DARPA) has a Programmable Matter project that funds a good deal of research in the field and specifies “particles … which can reversibly assemble into complex 3D objects.” But that approach turns out to have drawbacks, Rus says. “Most people are looking at separate modules, and they’re really worried about how these separate modules aggregate themselves and find other modules to connect with to create the shape that they’re supposed to create,” Rus says. But, she adds, “actively gathering modules to build up a shape bottom-up, from scratch, is just really hard given the current state of the art in our hardware.”

 

So Rus has been investigating alternative approaches, which don’t require separate modules to locate and connect to each other before beginning to assemble more complex shapes. Fortunately, also at CSAIL is Erik Demaine, who joined the MIT faculty at age 20 in 2001, becoming the youngest professor in MIT history. One of Demaine’s research areas is the mathematics of origami, and he and Rus hatched the idea of a flat sheet of material with tiny robotic muscles, or actuators, which could fold itself into useful objects. In principle, flat sheets with flat actuators should be much easier to fabricate than three-dimensional robots with enough intelligence that they can locate and attach to each other.


So they designed yet another set of algorithms that, given sequences of folds for several different shapes, would determine the minimum number of actuators necessary to produce all of them. Then they set about building a robot that could actually assume multiple origami shapes. Their prototype, made from glass-fiber and hydrocarbon materials, with an elastic plastic at the creases, is divided into 16 squares about a centimeter across, each of which is further divided into two triangles. The actuators consist of a shape-memory alloy — a metal that changes shape when electricity is applied to it. Each triangle also has a magnet in it, so that it can attach to its neighbors once the right folds have been performed.
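
A hedged sketch of the actuator-minimization idea mentioned above, under the assumption that each target shape can be reached through one of several alternative crease sets; the shapes, crease names, and brute-force search are invented for illustration and are not the authors' algorithm. Choosing one fold option per shape so that the union of creases is smallest gives the smallest set of actuators the sheet has to carry.

    # Illustrative only: pick one fold option per shape to minimize total actuators.
    from itertools import product

    # Hypothetical alternatives: shape -> list of crease sets that fold it.
    fold_options = {
        "boat":  [{"c1", "c2", "c3"}, {"c2", "c4"}],
        "plane": [{"c2", "c4", "c5"}, {"c1", "c5"}],
    }

    best_creases = None
    for choice in product(*fold_options.values()):
        creases = set().union(*choice)          # actuators needed for this combination
        if best_creases is None or len(creases) < len(best_creases):
            best_creases = creases

    print(sorted(best_creases))  # smallest crease/actuator set covering both shapes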


Via Dr. Stefan Gruenwald
Keith Wayne Brown's curator insight, June 2, 2014 9:04 PM

Transformers--more than meets the eye!

Tekrighter's curator insight, June 3, 2014 8:30 AM

Awesome! This is right up there with 3-D printing as the technological advance of the decade...

Rescooped by Paulo Fazendeiro from Social Foraging

Europe Wants a Supercomputer Made From Smartphones - IEEE Spectrum


A European private-public consortium wants to make supercomputers using smartphone and tablet CPUs. And not just any supercomputers.

They’re shooting for the moon—aiming for exaflops (10¹⁸, or quintillions of floating-point operations per second), roughly twentyfold faster than the top of today’s high-performance heap.

 

Supercomputing has always offered a kind of turboboosted reflection of everyday computing. In the 1970s and ’80s, Cray supercomputers and their ilk were like supercharged mainframes, with just handfuls of processors that had each been designed for speed. In the 1990s and 2000s, as PCs and then laptops predominated, supercomputers became agglomerations of hundreds, thousands, and now even millions of PC and server cores. (The world’s fastest supercomputer today is China’s Tianhe-2, powered by 3.1 million Intel Xeon cores but capable of only about 5 percent of an exaflop.)

 

So in the tablet and smartphone age, it was probably only a matter of time before someone decided to make supercomputers out of the engines of present-day digital life. The thinking goes that because ARM cores are designed to run on small smartphone and tablet batteries, a supercomputer built around them could yield more speed with less power. In an age when high-performance computing, or HPC, is often constrained by heat production and electricity consumption, that could mean a more scalable machine.

 


Via Ashish Umre
Scooped by Paulo Fazendeiro

Why Nvidia thinks it can power the AI revolution - GigaOM

Netflix uses them (in the Amazon Web Services cloud) to power its recommendation engine, Russian search engine Yandex uses GPUs to power its search engine, and IBM uses them to run clustering...
Rescooped by Paulo Fazendeiro from Social Foraging

Mastering the game of Go with deep neural networks and tree search

The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
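
To illustrate how the two networks can steer a tree search, here is a small sketch of a PUCT-style selection rule of the kind such programs use (assumed here for illustration; it is not DeepMind's published code): each candidate move is scored by its mean value estimate plus an exploration bonus weighted by the policy network's prior probability.

    # Illustrative PUCT-style move selection; the statistics below are made up.
    import math

    def select_move(node_stats, c_puct=1.0):
        """node_stats maps move -> (visit count N, total value W, policy prior P)."""
        total_visits = sum(n for n, _, _ in node_stats.values())

        def score(item):
            _, (n, w, p) = item
            q = w / n if n else 0.0                             # mean value estimate
            u = c_puct * p * math.sqrt(total_visits) / (1 + n)  # prior-guided exploration
            return q + u

        return max(node_stats.items(), key=score)[0]

    # A move the policy network favours (high prior) but that has few visits
    # still gets explored ahead of a well-visited, merely decent move.
    stats = {"D4": (10, 6.0, 0.2), "Q16": (1, 0.4, 0.6)}
    print(select_move(stats))  # -> Q16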

Via Ashish Umre
Rescooped by Paulo Fazendeiro from Amazing Science

Hebb's Rule Shown: Researchers have for the first time directly created and destroyed neural connections


Researchers from UCSD have for the first time directly created and destroyed neural connections that connect high level sensory input and high level behavioral responses. 

 

Donald Hebb in 1949 was one of the first to seize upon this observation.  He proposed that on the biological level, neurons were rewired so that coordinated inputs and outputs get wired together.  As such, were there a nausea neuron and a boat neuron, through the effects of association, the two would get wired together so that the “boat” itself fires up pathways in the “nausea” part of the brain.

 

In the field of neural networks, this has a name: Hebbian learning. Pavlov of course also described this phenomenon and tested it in animals, giving it the name “conditioned response”.
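
A minimal sketch of the Hebbian rule named above, with made-up numbers: when the auditory input and the fear response are repeatedly active at the same time, the weight linking them grows, so that eventually the tone (or laser-driven signal) alone evokes the response. Weakening the input, as in the depression experiment described below, would shrink the weight again.

    # Toy Hebbian conditioning: co-active units strengthen their connection.
    w = 0.0       # strength of the auditory -> fear connection
    eta = 0.1     # learning rate

    for trial in range(20):
        pre = 1.0                  # auditory input active (tone / laser-driven signal)
        post = 1.0                 # fear response, driven here by the paired shock
        w += eta * pre * post      # Hebb's rule: delta_w = eta * pre * post

    tone_alone_response = w * 1.0  # after conditioning, the tone alone drives fear
    print(f"weight = {w:.1f}, response to tone alone = {tone_alone_response:.1f}")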

 

Until now the wiring of neural inputs and outputs was a theory with good but indirect evidence.  At UCSD, neuroscientists teamed up with molecular biologists to engineer a mouse whose neurons can be directly controlled for forming and losing connections.

 

They did this by injecting an engineered virus into the auditory nerve cells. The viruses, largely harmless, carry a light-responsive molecular switch (actually a membrane protein “channel”) which gets inserted into cells of the auditory region. Using laser light of certain frequencies it is possible to either “potentiate” or “depress” the auditory nerve cells.

 

The upshot is that the researchers could directly make the auditory nerve cells increase or decrease their signal strength to other nerve cells, without needing a real, external noise. In effect, they’ve short-circuited the noise input. In experiments, they used a mild electrical pulse to shock mice while simultaneously stimulating the auditory input with the laser-activated switch.

 

Basically they flashed the laser light at the ear of the mouse. Over time, the mouse began to associate the laser-induced nerve signal with the electrical shock. The mice were conditioned to exhibit fear even when there was no shock.

 

The crux of the experiment is what happened when the scientists flashed the laser in a way to weaken the auditory nerve.  Now the mouse stopped responding in fear to the laser auditory stimulus.

The experiments showed for the first time that associative learning was indeed the wiring together of sensory and response neurons.  The study was published in Nature.

 

Nature (2014) doi:10.1038/nature13294


Via Dr. Stefan Gruenwald
Rescooped by Paulo Fazendeiro from Social Foraging

Exploring Function Prediction in Protein Interaction Networks via Clustering Methods


Complex networks have recently become the focus of research in many fields. Their structure reveals crucial information for the nodes, how they connect and share information. In our work we analyze protein interaction networks as complex networks for their functional modular structure and later use that information in the functional annotation of proteins within the network. We propose several graph representations for the protein interaction network, each having a different level of complexity and inclusion of the annotation information within the graph. We aim to explore what the benefits and the drawbacks of these proposed graphs are when they are used in the function prediction process via clustering methods. For making this cluster-based prediction, we adopt well established approaches for cluster detection in complex networks, using recent representative algorithms that have proven efficient for the task at hand. The experiments are performed using a purified and reliable Saccharomyces cerevisiae protein interaction network, which is then used to generate the different graph representations. Each of the graph representations is later analysed in combination with each of the clustering algorithms, which have been modified and implemented where necessary to fit the specific graph. We evaluate the results in terms of biological validity and function prediction performance. Our results indicate that the novel ways of representing the complex graph improve the prediction process, although the computational complexity should be taken into account when deciding on a particular approach.
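
An illustrative sketch of cluster-based function prediction on an interaction graph, assuming a generic modularity-based clustering from networkx and a majority vote over annotated proteins in each detected module; the example graph and annotations are placeholders, not the paper's data or pipeline.

    # Illustrative only: module detection, then majority-vote annotation transfer.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities
    from collections import Counter

    G = nx.karate_club_graph()   # stand-in for a protein interaction network
    known = {0: "kinase", 1: "kinase", 32: "transport", 33: "transport"}  # annotated nodes

    predictions = {}
    for module in greedy_modularity_communities(G):
        labels = Counter(known[p] for p in module if p in known)
        if labels:
            majority = labels.most_common(1)[0][0]
            for p in module:
                if p not in known:
                    predictions[p] = majority   # unannotated protein inherits module label

    print(len(predictions), "proteins annotated by module majority vote")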

 


Via Ashish Umre
Scooped by Paulo Fazendeiro

Finally, Robots That Turn Into Furniture

Get ready for self-constructing, shape-shifting, moving-on-command furniture — or as its creators like to call them, Roombots. There are a lot of things we imagine our future robot...
Rescooped by Paulo Fazendeiro from Social Foraging

First Time Ever: Artificial Intelligence Nominated As A Board Member


Some days ago Hong Kong-based Deep Knowledge Ventures appointed, for the first time ever, an Artificial Intelligence (A.I.) as an official and equal board member.

Yep! Not a joke! You're not in Star Trek, nor experiencing another sequel of Terminator, or an encounter with Optimus Prime and Megatron of The Transformers.

Although robotics and A.I. are already popular in manufacturing, finance, and the military, this marketing stunt makes you wonder how and when A.I. will further develop and become more and more a part of our lives. Scary? Exciting? Both?

In this case the A.I. is a sophisticated machine learning program, dubbed VITAL (Validating Investment Tool for Advancing Life Sciences), capable of making investment recommendations in the life science sector. It basically uses machine learning to analyze financing trends in databases of life science companies and predict successful investments. Aging Analytics, a UK research agency focused on the life science market, licensed VITAL to Deep Knowledge Ventures. VITAL will report its findings to the board. In addition, it will have its own equal vote on the company's board when making important investment decisions. The company expects that its opinion will be considered the most relevant one.


Via Ashish Umre
Paulo Fazendeiro's insight:

Sooner than anticipated?

Scooped by Paulo Fazendeiro

Brain in a Dish Controls Power Grid - Discovery News

In thinking about ways to optimize the electrical power grid, Venayagamoorthy looked for a system that could monitor, forecast, plan, learn, and make decisions, he told LiveScience.com.
Paulo Fazendeiro's insight:

It is just me or do you also foresee ethical issues?

Scooped by Paulo Fazendeiro

Study shows tax cheats clustering by location; data could mean more audits in ... - The Denver Channel

Paulo Fazendeiro's insight:

The IRS is using clustering. I'm wondering if it is hard or fuzzy clustering. Well, if you have to pay, it sure is hard...
