omnia mea mecum fero
Everything that is mine I carry with me
Curated by pa3geo
Rescooped by pa3geo from Amazing Science

Printing the Human Body: How It Works and Where It Is Headed


The rise of 3D printing is one of the most ground-breaking technological feats happening right now. The most exciting part, though, has nothing to do with printing electronics or fancy furniture, but with producing human tissue, otherwise known as bioprinting. While the field is still in its infancy, the future of bioprinting looks very bright: it should eventually deliver major advances for society while also saving billions now spent on research and development.


Via Dr. Stefan Gruenwald
Peter Phillips's curator insight, November 27, 2013 1:55 PM

I can't see this saving money, but it will save lives. The technology to print exists; the question is how to develop stem cells into the required tissue types and then how to link these with the body's complex control systems (nervous, circulatory and immune). In the best-case scenario a grown organ will be recognised as self and the body's systems will grow into it. However, organs are not toasters. Researchers are concentrating on easier targets like skin grafts and ears at present, but as with nanoelectronics, the future is full of potential and questions.

Steve Kingsley's curator insight, November 27, 2013 9:27 PM

Will HP buy Organovo, which invented and produces the NovoGen bioprinter?

Pamela D Lloyd's curator insight, November 29, 2013 5:46 PM

Such astonishingly wonderful ways to use the new 3D printing technology.

Rescooped by pa3geo from Amazing Science

Magnetic switches could use 10,000 times less power than current silicon transistors

New research from UC Berkeley provides a proof of concept for a magnetic switch that could make computers thousands of times more energy-efficient, and provide some amazing new abilities, too.

 

Computer engineering, and mobile computer engineering in particular, is all about playing a zero-sum game with yourself. Power and efficiency are constantly undercutting one another, creating confounding incentives for designers looking to set records for both talk time and processing speed. At this point it seems obvious that both speed and battery life are limited by the old process of laying down increasingly dense little fields of silicon transistors; whether it's a quantum computer or a graphene chip, getting more computing power for less electrical power will require a fundamental shift in how we build computers.

 

A new study from UC Berkeley hopes to provide the basis for just such an advance, laying out a silicon replacement that the researchers say uses up to 10,000 times less power than prior solutions. They have designed a system that uses magnetic switches in place of transistors, negating the need for a constant electric current. The idea of a magnetic transistor has been discussed since the early 1990s, but its downfall has always been the need to create a strong magnetic field to orient the magnets for easy switching; all or most of the power saved by the magnets is spent creating the field needed to actually use them.

 

This new study, published in Nature, uses a wire made of tantalum, a somewhat rare element used to make capacitors in everything from Blu-ray players to mobile phones. Tantalum is a good, lightweight conductor, but it has one particularly odd property that has made it uniquely useful for magnetic applications: when a current flows through a tantalum wire, clockwise-spinning electrons migrate to one side of the wire and counter-clockwise-spinning electrons to the other. The physical movement of these electrons creates a polarization in the system, the same sort of polarization prior researchers have had to create with an expensive magnetic field.

 

If this approach proves successful and practical, we could begin to capitalize on the shared benefits of all magnetic computing strategies, the most glaring of which is that magnetic switches do not require a constant current to maintain their state. Much like a pixel in an e-ink display, a magnetic transistor will maintain its assigned state until actively flipped. This means that a theoretical magnetic processor could use far less energy than a semiconducting silicon one, accruing savings whenever it is not actively doing work. And since tantalum is a fairly well-known material, its incorporation into the manufacturing process shouldn't prove too difficult.
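To make the idle-power argument concrete, here is a purely illustrative energy model; the power figures below are invented placeholders, not measurements from the study:

```python
# Illustrative-only comparison: a silicon switch that leaks power while idle
# versus a magnetic switch that draws power only while actively flipping.
# All power figures are made-up placeholders chosen to show the shape of
# the argument, not numbers from the Nature paper.

def energy_joules(active_s, idle_s, p_active_w, p_idle_w):
    """Total energy = active power * active time + idle power * idle time."""
    return p_active_w * active_s + p_idle_w * idle_s

active, idle = 1.0, 99.0  # a switch doing real work 1% of the time

silicon = energy_joules(active, idle, p_active_w=1e-6, p_idle_w=1e-7)
magnetic = energy_joules(active, idle, p_active_w=1e-9, p_idle_w=0.0)

print(f"silicon : {silicon:.2e} J")   # leakage dominates at low duty cycle
print(f"magnetic: {magnetic:.2e} J")  # no idle draw: state is retained for free
```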


Via Dr. Stefan Gruenwald
Rescooped by pa3geo from Amazing Science

Harvard scientists invent the synaptic transistor that learns while it computes


It doesn't take a Watson to realize that even the world's best supercomputers are staggeringly inefficient and energy-intensive machines.

 

Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits but also continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.

 

Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.

 

Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer.

 

“There’s extraordinary interest in building energy-efficient electronics these days,” says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS.

 

“Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume.”

 

The human mind, for all its phenomenal computing power, runs on roughly 20 watts of power (less than a household light bulb), so it offers a natural model for engineers.

 

“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”
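The rule Shi describes, connections strengthening with the rate of correlated spiking, is essentially Hebbian learning. A minimal software sketch of that idea (the update rule, learning rate, and spike trains are illustrative simplifications, not the physics of the SEAS device):

```python
# Hebbian-style plasticity sketch: the more often two connected neurons
# spike together, the stronger the synapse between them becomes.

def strengthen(weight, pre_spikes, post_spikes, rate=0.1):
    """Reinforce the synaptic weight once per coincident spike pair."""
    for pre, post in zip(pre_spikes, post_spikes):
        if pre and post:       # both neurons fired in this time step
            weight += rate     # ...so the connection is strengthened
    return weight

slow = strengthen(1.0, [1, 0, 0, 1], [1, 0, 0, 1])  # sparse coincidences
fast = strengthen(1.0, [1, 1, 1, 1], [1, 1, 1, 1])  # rapid coincidences
print(slow, fast)  # 1.2 vs 1.4: faster correlated spiking, stronger synapse
```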


Via Dr. Stefan Gruenwald
Rescooped by pa3geo from Amazing Science

Mapping the impossible: Matterhorn mapped by a fleet of drones in just under 6 hours

The Matterhorn, which juts out a full kilometre above the surrounding Swiss Alps, dominates the local skyline and has challenged countless mountaineers since it was first scaled in 1865.

 

Now this iconic peak has been mapped in unprecedented detail by a fleet of autonomous, fixed-wing drones, flung into the sky from the summit by their makers. What's more, the entire process took just 6 hours.

 

The mapping, unveiled at the Drones and Aerial Robotics Conference in New York City last weekend, was carried out by unmanned aerial vehicle (UAV) company senseFly and aerial photography company Pix4D.

 

Three eBee drones were launched from the top of the mountain, skimming their way down 100 metres from the face, capturing points just 20 centimetres apart. When they reached the bottom, a second team intercepted the drones and relaunched them for further mapping.

 

Speaking to Mapbox, the mapping company that built the 3D point cloud of the mountain once the drones had landed, senseFly's Adam Klaptocz said: "Such a combination of high altitudes, steep rocky terrain and sheer size of dataset has simply not been done before with drones; we wanted to show that it was possible."

 

A video crew follows senseFly's (http://www.sensefly.com/) team of engineers as they mark a historic milestone in drone-based surveying, using eBee mini-drones to map the epic Matterhorn and construct a 3D model of "the most beautiful mountain".

The mission involved the coordination of several teams with multiple eBee drones taking over 2200 images in 11 flights, all within a few hours of a sunny alpine morning. The results are stunning: a high-definition 3D point-cloud made of 300 million points covering an area of over 2800 hectares with an average resolution of 20 cm. A special thanks to our partners Pix4D (http://www.pix4d.com) for the creation of the 3D model, Drone Adventures (http://www.droneadventures.org) for mission coordination and MapBox (http://www.mapbox.com) for online visualisation.

senseFly is a Parrot company (http://parrot.com/)
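As a back-of-envelope check, the published area and point count imply an average point spacing of roughly 30 cm, the same order as the quoted 20 cm resolution:

```python
import math

# Sanity-check the published Matterhorn point-cloud figures.
hectares = 2800              # area covered, as reported
points = 300_000_000         # size of the Pix4D point cloud, as reported

area_m2 = hectares * 10_000                # 1 ha = 10,000 m^2
spacing = math.sqrt(area_m2 / points)      # average spacing on a uniform grid

print(f"{spacing:.2f} m between points")   # ~0.31 m
```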


Via Dr. Stefan Gruenwald
Rescooped by pa3geo from Amazing Science

What will NASA be doing with its new quantum computer?

Earlier this year, NASA, in partnership with Google, acquired the world's largest quantum computer. But just what does the space agency plan to do with a device with such revolutionary potential?

 

NASA is currently looking at three very basic applications, including one that would serve as a day-planner for busy astronauts who are up in orbit.

 

"If you're trying to schedule or plan a whole bunch of tasks on the International Space Station, you can do certain tasks only if certain preconditions are met," he explains. "And after you perform the task you end up in another state where you may or may not be able to perform another task. So that's considered a hard optimization problem that a quantum system could potentially solve."

 

They're also looking to schedule jobs on supercomputers; NASA Ames runs the agency's primary supercomputing facility. At any given moment it has hundreds of individual jobs running on a supercomputer while many others wait their turn. A very difficult scenario would involve a job that requires, say, 500 nodes waiting to run on a supercomputer with 1,000 nodes available.

 

"Which 500 of these 1,000 nodes should we pick to run the job?," he asks. "It's a very difficult scheduling problem."

 

Another important application is the Kepler search for exoplanets. NASA astronomers use their various telescopes to look at light curves to see whether any noticeable dimming represents a potential exoplanet passing across the face of its host star. This is a massive search problem, one that D-Wave could conceivably help with.

 

"These are the types of applications that we're trying to run," says Biswas. "We're doing it on our D-Wave system, which is the largest in the world, but it's still not large enough to solve the really hard real world problems. But by tackling the smaller problems, we can extrapolate to how a larger problem could be solved on a larger system." "But each of these images may be at a certain wavelength, and you may not get all the information from the image," he explains. "One of the challenges there is what's called data fusion, where you try to get multiple images and somehow fuse them in some smart way so that you can garner information from a fused image that you couldn't get from a single image.

 

And at NASA's Ames Research Center in Silicon Valley, Biswas's team runs the supercomputers that power a significant portion of NASA's endeavors, both public and commercial.

 

"We see quantum computing as a natural extension of our supercomputing efforts," he told me. "In fact, our current belief is that the D-WAVE system and other quantum computers that might come out in the next few years are all going to behave as attached processors to classical silicon computers."

 

Which is actually quite amazing: in the future, when a user wants to solve a large problem, they would interact with their usual computer while certain parts of the problem are handed over to the quantum computer. After performing the calculation (say, an optimization problem), the quantum machine would send the solution back to the traditional silicon-based one. It'll be like putting your desktop PC on steroids.

 

"Just so we're clear, the D-Wave system is just one of many ways to leverage the effects of quantum physics," he told me. "But in order to use any quantum system, the first thing you need to have is a problem mapped in QUBO form." A QUBO form, which stands for a Quadratic Unconstrained Binary Optimization form, is a mathematical representation of any optimization problem that needs to be solved. At this time — and as far as we know — every single quantum computer requires that the input be in QUBO form.

 

"And that's a serious problem," says Biswas, "because there's no known recipe to devise a problem and then map it into QUBO form. But once we get a QUBO form — which is a graph representation of the problem — we can embed this onto the architecture of the D-Wave machine."

 

The D-Wave processor runs 512 qubits arranged in 64 unit cells of 8 qubits each. Each unit cell forms a bipartite graph: four qubits on the left and four on the right, with every qubit on the left connected to every qubit on the right and vice versa. It is not, however, a fully connected graph.
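Based purely on that description, here is a minimal sketch of the wiring inside a single unit cell; it builds only the intra-cell couplers (the links between neighbouring cells are not described above and are omitted):

```python
# One D-Wave unit cell as described: 8 qubits in a bipartite arrangement,
# every "left" qubit coupled to every "right" qubit but never left-to-left
# or right-to-right (the complete bipartite graph K4,4).

left = range(4)       # qubits 0-3
right = range(4, 8)   # qubits 4-7

couplings = [(l, r) for l in left for r in right]

print(len(couplings))   # 16 couplers per cell
print(couplings[:4])    # [(0, 4), (0, 5), (0, 6), (0, 7)]
# 64 such cells * 8 qubits = the 512-qubit processor described above.
```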

 

"So what happens therefore, is after you take your problem in QUBO form and you try to embed it into the D-WAVE machine it's not a universal quantum computer. It's not like you have computer keyboard and you can just tell the machine what to do." Essentially, the machine becomes dedicated to the task outlined by the QUBO form — a limitation that could impact scalability.

 

 


Via Dr. Stefan Gruenwald
Scott Gipson's curator insight, December 2, 2013 1:04 AM

NASA partnered with Google earlier this year to acquire the world's largest quantum computer. Quantum computers differ from digital computers based on transistors: while digital computers require data to be encoded into binary digits (bits), quantum computation uses quantum properties to represent data and perform operations on those data. This article discusses the revolutionary potential of the device.

Quantum systems have the ability to irrevocably change the way we go about computation. Unlike traditional silicon-based computers, these systems tap into the eerie effects of quantum mechanics (namely superposition, entanglement, and parallelism), enabling them to mull over all possible solutions to a problem in a single instant. According to physicist David Deutsch, a quantum system can work on a million computations at once while a standard desktop PC works on just one. These computers will help us find the most convenient solution to a complex problem. As such, they're poised to revolutionize the way we go about data analysis and optimization, in realms such as air traffic control, courier routing, weather prediction, database querying, and hacking tough encryption schemes.

"Quantum computing has generated a lot of interest recently, particularly the ways in which the D-Wave quantum computer can be used to solve interesting problems. We've had the machine operational since September, and we felt the time is right to give the public a little bit of background on what we've been doing," said Dr. Rupak Biswas, deputy director of the Exploration Technology Directorate at NASA's Ames Research Center in Silicon Valley.

Biswas's team is currently looking at three very basic applications, including one that would serve as a day-planner for busy astronauts who are up in orbit. "If you're trying to schedule or plan a whole bunch of tasks on the International Space Station, you can do certain tasks only if certain preconditions are met," he explains. "And after you perform the task you end up in another state where you may or may not be able to perform another task. So that's considered a hard optimization problem that a quantum system could potentially solve."

NASA is also heavily involved in developing the next generation of air traffic control systems. These involve not only commercial flights, but also cargo and unmanned flights. Currently, much of this is done in a consolidated fashion by air traffic control. But at later stages, when more distributed control is required and highly complex variables like weather need to be taken into account, quantum computing could certainly help.

This article ties into Chapter 9: Business-to-Business Relations in our Case Studies textbook. "Tactics in business-to-business relations and partner relationship management help companies build productive relationships with other companies" (Guth & Marsh, p. 194). Considering what I've read in this article, the relationship between the two companies so far seems to be quite productive.

Rescooped by pa3geo from Amazing Science

Million Lines of Code - Information Is Beautiful


Is a million lines of code a lot? How many lines of code are there in Windows? Facebook? iPhone apps? How about a bacterium, a human being, or all the data in the genome database at NIH?


Via Dr. Stefan Gruenwald
odysseas spyroglou's curator insight, November 17, 2013 8:42 AM

More data for data. More statistics, better decisions.

Marc Kneepkens's curator insight, November 18, 2013 9:27 AM

Glad to see that the Human Genome is still way out there.



Rescooped by pa3geo from Amazing Science

The Futurist magazine’s top 10 forecasts for 2014 and beyond — and Why They Might Not Come True



Every year, the editors of the Futurist magazine identify the most provocative forecasts and statements about the future that we've published recently and put them into an annual report called "Outlook." It's a sprawling exploration of what the future looks like at a particular moment in time. To accompany the report, we draft a list of our top 10 favorite predictions from the magazine's previous 12 months. What are the criteria for admission into the top 10? The forecast should be interesting, relatively high-impact, and rising in likelihood. In other words, it's a bit subjective.

 

There are surely better methods for evaluating statements about the future, but not for our purposes. You see, we aren’t actually interested in attempting to tell our readers what will happen so much as provoking a better discussion about what can happen—and what futures can be avoided, if we discover we’re heading in an unsavory direction.

 

The future isn't a destination. But the problem with too many conversations about the future, especially those involving futurists, is that predictions tend to take on unmitigated certainty, sounding like GPS directions: when you reach the Singularity, turn left, that sort of thing. In reality, it's more like wandering around a city, deciding on the spur of the moment which road to take.


Via Szabolcs Kósa, Margarida Sá Costa, Dr. Stefan Gruenwald
Say Keng Lee's curator insight, October 7, 2013 5:06 AM

Fascinating forecasts!

Rescooped by pa3geo from Amazing Science

Physics: What We Do and Don’t Know. By Steven Weinberg


In the past fifty years two large branches of physical science have each made a historic transition. I recall both cosmology and elementary particle physics in the early 1960s as cacophonies of competing conjectures. By now in each case we have a widely accepted theory, known as a “standard model.”

 

Cosmology and elementary particle physics span a range from the largest to the smallest distances about which we have any reliable knowledge. The cosmologist looks out to a cosmic horizon, the farthest distance light could have traveled since the universe became transparent to light over ten billion years ago, while the elementary particle physicist explores distances much smaller than an atomic nucleus. Yet our standard models really work—they allow us to make numerical predictions of high precision, which turn out to agree with observation.

Up to a point the stories of cosmology and particle physics can be told separately. In the end, though, they will come together.

 


Via Dr. Stefan Gruenwald
BHinstitute Harendran B's curator insight, October 21, 2013 1:54 AM
Amazing Science is the best source of updated knowledge.