Michio Kaku: When making predictions, I have two criteria: the laws of physics must be obeyed and prototypes must exist that demonstrate “proof of principle.” I’ve interviewed more than 300 of the world’s top scientists, and many allowed me into laboratories where they are inventing the future. Their accomplishments and dreams are eye-opening. From my conversations with them, here’s a glimpse of what to expect in the coming decades:
1. Computers Will Disappear
2. Augmented Reality Will Be Everyday Reality
3. The Brain-Net Will Augment the Internet
4. Capitalism Will Be Perfected
5. Robots Will Be Commonplace
6. Aged Body Parts Will Be Replaced
7. Parents Will Design Their Offspring based on Genomics
8. Cybermedicine Will Extend Lives
9. Dictators Will Be Big Losers
10. Intellectual Capitalism Will Replace Commodity Capitalism
Computational knowledge. Symbolic programming. Algorithm automation. Dynamic interactivity. Natural language. Computable documents. The cloud. Connected devices. Symbolic ontology. Algorithm discovery. These are all things Stephen Wolfram and his team have been energetically working on—mostly for years—in the context of Wolfram|Alpha, Mathematica, CDF and so on.
In 2002 Stephen Wolfram released A New Kind of Science and immediately unleashed a firestorm of wonder, controversy, and criticism as the British-born scientist, programmer, and entrepreneur overturned conventional ideas on how to pursue knowledge. Earlier this month, he teased something with the capacity to create as much passion — and, likely, much more actual change — in the world of programming, computation, and applications.
Whether you think his 1,300-page tome on the future of scientific exploration is seminal or fanciful, you can’t question that the man is a genius. Born of Jewish parents who fled persecution in pre-WWII Germany (remind you of another scientist?), Wolfram wrote a dictionary on physics at age 12 and three books on particle physics by the time he was 14, publishing his first scientific papers at 15.
In 1988 he released the first version of Mathematica, a platform for technical computation, and in 2009, he released the Wolfram Alpha search engine, a computational knowledge engine. His new project, he says, is a perfect marriage. “The knowledge graph is a vastly less ambitious project than what we’ve been doing at Wolfram Alpha,” Wolfram says quickly when I bring it up. But don’t compare it to Google’s knowledge graph or semantic search. “It’s just Wikipedia and other data.”
Google wants to understand objects and things and their relationships so it can give answers, not just results. But Wolfram wants to make the world computable, so that our computers can answer questions like “where is the International Space Station right now.” That requires a level of machine intelligence that knows what the ISS is, that it’s in space, that it is orbiting the Earth, what its speed is, and where in its orbit it is right now.
That’s not static data; that’s a combination of computation with knowledge. WolframAlpha does that today, but that is just the beginning. Search engines aren’t good at that, Wolfram argues, because they’re too messy. Questions in a search engine have many answers, with varying degrees of applicability and “rightness.” That’s not computable, not clean enough to program or feed into a system. “We want to be right,” Wolfram told me. “Making the world computable is a much higher bar than being able to generate Wikipedia-style information … a very different thing. What we’ve tried to do is insanely more ambitious.”
New cardiac devices are small enough to be delivered through blood vessels into the heart.
Pacemaker surgery typically requires a doctor to make an incision above a patient’s heart, create a cavity into which to implant the heartbeat-regulating device, and then connect the pulse generator to wires delivered through a vein near the collarbone. Such surgery could soon be completely unnecessary. Instead, doctors could employ miniaturized wireless pacemakers delivered into the heart through a major vein in the thigh.
On Monday, doctors in Austria implanted one such device into a patient—the first participant in a human trial of what device-manufacturer Medtronic says is the smallest pacemaker in the world.
The device is 24 millimeters long and 0.75 cubic centimeters in volume—a tenth the size of a conventional pacemaker. Earlier this year, another device manufacturer, St. Jude Medical, bought a startup called Nanostim that makes another tiny pacemaker, and St. Jude is offering it to patients in Europe. This device is 41 millimeters long and one cubic centimeter in volume.
Doctors can implant such pacemakers into the heart through blood vessels, via an incision in the thigh. They use steerable, flexible tubes called catheters to push the pacemakers through a large vein.
The two new devices are the latest effort to make heart surgery less traumatic. Doctors began to widely use less invasive heart treatments in the late 1990s, when artery-unclogging balloons delivered by catheters started to replace bypass surgeries. Other cardiac technologies like stents, which prop open weak or narrow arteries, can also be delivered through blood vessels. More recently, researchers have developed artificial valves for patients whose natural valves have become damaged; these devices can also be delivered by catheters snaking through large blood vessels.
Brian Lindman, a cardiovascular specialist at Washington University School of Medicine, and colleagues have found that less invasive catheter-based procedures for valve repair can be safer for high-risk elderly patients and can enable doctors to treat patients who are too frail to undergo surgery.
More recently, Lindman published a study suggesting that the transcatheter method may improve the odds of survival for diabetic patients as well. However, for some cardiac treatments such as valve repair, a more invasive surgery enables longer-lasting repairs, and so may be the better option for patients strong enough for surgery. “Surgery or transcatheter is not always better,” says Lindman. “It depends on the cardiac problem and on the nuances of each procedure.”
Researchers at the University of Twente have developed a new superconducting cable system that is crucial to the success of nuclear fusion reactors.
The superconductivity research group of the University of Twente (UT) has made a technological breakthrough crucial to the success of nuclear fusion reactors, allowing for clean, inexhaustible energy generation based on the workings of the stars in our galaxy. The crux of the new development is a highly ingenious and robust superconducting cable system. This makes for a remarkably strong magnetic field that controls the very hot, energy-generating plasma in the reactor core, laying the foundation for nuclear fusion. The new cables are far less susceptible to heating due to a clever way of interweaving, which allows for a significant increase in the possibilities to control the plasma. Moreover, in combination with an earlier UT invention, the cables are able to withstand the immense forces inside the reactor for a very long time. The increased working life of the superconductors and the improved control of the plasma will soon make nuclear fusion energy more reliable: the magnet coils take up one third of the costs of a nuclear fusion power station. The longer their working life, the cheaper the energy will be. The research is a project within the context of the Green Energy Initiative of the University of Twente.
Cost-effective clean energy

Project leader Arend Nijhuis: ‘The worldwide development of nuclear fusion reactors is picking up steam, and this breakthrough gives it a new impulse. Our new cables have already been extensively tested at two institutes.’ Nijhuis has been invited to a new collaboration with China and expects that the UT system will become a global standard. The world’s largest nuclear fusion reactor, ITER, is under construction in Cadarache, France, and is expected to start operation by 2020 as a joint project of the US, EU, Russia, India, Japan, South Korea and China. However, China and South Korea have also initiated their own national large-scale nuclear fusion projects, in which the UT technology can be incorporated.
Scientists say they have been able to successfully print new eye cells that could be used to treat sight loss. The proof-of-principle work in the journal Biofabrication was carried out using animal cells.
The Cambridge University team says it paves the way for grow-your-own therapies for people with damage to the light-sensitive layer of tissue at the back of the eye - the retina. More tests are needed before human trials can begin.
Co-authors of the study Prof Keith Martin and Dr Barbara Lorber, from the John van Geest Centre for Brain Repair at the University of Cambridge, said: "The loss of nerve cells in the retina is a feature of many blinding eye diseases. The retina is an exquisitely organised structure where the precise arrangement of cells in relation to one another is critical for effective visual function.
"Our study has shown, for the first time, that cells derived from the mature central nervous system, the eye, can be printed using a piezoelectric inkjet printer. Although our results are preliminary and much more work is still required, the aim is to develop this technology for use in retinal repair in the future."
They now plan to attempt to print other types of retinal cells, including the light-sensitive photoreceptors - rods and cones.
Clara Eaglen, of the RNIB, said: "This is a step in the right direction as the retina is often affected in many of the common eye conditions, causing loss of central vision which stops people watching TV and seeing the faces of loved ones."
UC Berkeley scientists have developed a system to capture visual activity in human brains and reconstruct it as digital video clips. Eventually, this process will allow you to record and reconstruct your own dreams on a computer screen.
I just can't believe this is happening for real, but according to Professor Jack Gallant—UC Berkeley neuroscientist and coauthor of the research published today in the journal Current Biology—"this is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds."
Indeed, it's mindblowing. I'm simultaneously excited and terrified. This is how it works: they used three different subjects for the experiments—incidentally, all members of the research team, because the experiment requires staying inside a functional magnetic resonance imaging (fMRI) system for hours at a time. The subjects were shown two different groups of Hollywood movie trailers while the fMRI system recorded blood flow through the visual cortex of their brains.
The readings were fed into a computer program in which they were divided into three-dimensional pixel units called voxels (volumetric pixels). This process effectively decodes the brain signals generated by moving pictures, connecting the shape and motion information from the movies to specific brain actions. As the sessions progressed, the computer learned more and more about how the visual activity presented on the screen corresponded to the brain activity.
After recording this information, another group of clips was used to reconstruct the videos shown to the subjects. The computer analyzed 18 million seconds of random YouTube video, building a database of potential brain activity for each clip. From all these videos, the software picked the one hundred clips whose predicted brain activity was most similar to the activity evoked by the clips the subject actually watched, and combined them into one final movie. Although the resulting video is low resolution and blurry, it clearly matched the actual clips watched by the subjects.
Think about those 18 million seconds of random videos as a painter's color palette. A painter sees a red rose in real life and tries to reproduce the color using the different kinds of reds available in his palette, combining them to match what he's seeing. The software is the painter and the 18 million seconds of random video is its color palette. It analyzes how the brain reacts to certain stimuli, compares that to the brain reactions to the 18-million-second palette, and picks what most closely matches those reactions. Then it combines the clips into a new one that duplicates what the subject was seeing. Notice that the 18 million seconds of motion video are not what the subject is seeing. They are random bits used just to compose the brain image.
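The matching step behind the palette analogy can be sketched in a few lines. This is a toy stand-in with simulated numbers, not the researchers' actual pipeline: assume each candidate clip has a predicted voxel-response vector, and reconstruction picks the clips whose predictions best correlate with the measured response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in sizes: the real study used 18M seconds of video and far more voxels.
n_clips, n_voxels = 1000, 50
predicted = rng.normal(size=(n_clips, n_voxels))            # predicted response per clip
measured = predicted[42] + 0.1 * rng.normal(size=n_voxels)  # observed (noisy) fMRI response

# Correlate the measured response with every clip's prediction.
z = (predicted - predicted.mean(axis=1, keepdims=True)) / predicted.std(axis=1, keepdims=True)
m = (measured - measured.mean()) / measured.std()
corr = z @ m / n_voxels

# Keep the 100 best-matching clips; the reconstruction would then be
# a (weighted) average of these clips' frames.
top100 = np.argsort(corr)[::-1][:100]
```

With the low noise used here, the true clip (index 42) comes out as the best match; in the real experiment the match is far noisier, which is why the reconstructions are blurry averages rather than exact retrievals.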
Given a big enough database of video material and enough computing power, the system would be able to re-create any images in your brain. Right now, the resulting quality is not good, but the potential is enormous. Lead research author — and one of the lab test bunnies — Shinji Nishimoto thinks this is the first step to tap directly into what our brain sees and imagines: "Our natural visual experience is like watching a movie. In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences".
The brain recorders of the future

Imagine that: capturing your visual memories, your dreams, the wild ramblings of your imagination into a video that you and others can watch with your own eyes.
This is the first time in history that we have been able to decode brain activity and reconstruct motion pictures on a computer screen. The path that this research opens boggles the mind. It reminds me of Brainstorm, the cult movie in which a group of scientists led by Christopher Walken develops a machine capable of recording the five senses of a human being and then playing them back into the brain itself.
This new development brings us closer to that goal which, I have no doubt, will happen at one point. Given the exponential increase in computing power and our understanding of human biology, I think this will arrive sooner than most mortals expect. Perhaps one day you would be able to go to sleep wearing a flexible band labeled Sony Dreamcam around your skull.
Flying robots have proven themselves capable sheep herders, delivery boys, filmmakers and spies. Now, when can we have one?
Herding sheep, delivering pizza, guiding lost students around campus -- these are just a few things friendly drones can do. Company and DIY drones are on the rise, and not even Hollywood stars will be safe from them. Soon starlets might be acting in front of drone-mounted cameras or being chased by a UAV paparazzi.
Though drones have incredible commercial potential, most countries restrict their use. The U.S. is expected to open up drones for commercial use by 2015.
Proponents are eager to point out the many ways they're going to make our lives better. "Really, this technology is an extra tool to help an industry be more effective," says Gretchen West, the executive vice president for the Association for Unmanned Vehicle Systems International (AUVSI). AUVSI estimates the U.S. loses $10 billion yearly by delaying drone integration. Though drones bring up privacy concerns, some argue it could advance privacy law.
"With precision agriculture, for example, it can take pictures of fields so farmers can identify problems they wouldn't necessarily see walking through the fields. In law enforcement, you could find a child lost in the woods more easily than walking through a field, particularly if there's bad weather or treacherous ground."
While it may seem that drones are set to take over our lives, the reality is a bit more complicated. Drone usage around the world is definitely picking up in the public sector, but when it comes to commercial activity, many countries have strict limitations.
The United States doesn't allow commercial drone usage at all, though that's expected to change in 2015, when the Federal Aviation Administration (FAA) aims to put a plan in place to integrate drones into U.S. airspace. In the meantime, says West, the U.S. is losing $10 billion in potential economic impact for every year the FAA delays.
"I think the U.S. has been the leader in this technology, and I think there's a risk of losing that first-mover aspect the longer we wait on regulations," she says.
It’s a question that’s perplexed philosophers for centuries and scientists for decades: where does consciousness come from?
Neuroscientist Christof Koch, chief scientific officer at the Allen Institute for Brain Science, thinks he might know the answer. According to Koch, consciousness arises within any sufficiently complex, information-processing system. All animals, from humans on down to earthworms, are conscious; even the internet could be. That's just the way the universe works.
What Koch proposes is a scientifically refined version of an ancient philosophical doctrine called panpsychism -- and, coming from someone else, it might sound more like spirituality than science. But Koch has devoted the last three decades to studying the neurological basis of consciousness. His work at the Allen Institute now puts him at the forefront of the BRAIN Initiative, the massive new effort to understand how brains work, which will begin next year.
Koch's insights have been detailed in dozens of scientific articles and a series of books, including last year's Consciousness: Confessions of a Romantic Reductionist. Wired talked to Koch about his understanding of this age-old question.
Glass media that stores data in three spatial and two optical dimensions could outlast us all
An experimental computer memory format uses five dimensions to store data with a density that would allow more than 300 terabytes to be crammed onto a standard optical disc. But unlike an optical disc, which is made of plastic, the experimental media is quartz glass. Researchers have long been trying to use glass as a storage material because it is far more durable than existing plastics.
A team led by optoelectronics researcher Jingyu Zhang at the University of Southampton, in the U.K., has demonstrated that information can be stored in glass by changing its birefringence, a property related to how polarized light moves through the glass.
In conventional optical media, such as DVDs, you store data by burning tiny pits on one or more layers on the plastic disc, which means you're using three spatial dimensions to store information. But in Zhang's experiment, he and colleagues exploit two additional, optical dimensions.
When their data-recording laser marks the glass, it doesn’t just make a pit: it changes two parameters of the birefringence of the glass. The researchers set these parameters, called slow axis orientation and strength of retardance, by controlling the polarization and intensity of their laser beam. Add the two optical dimensions to three spatial coordinates and the result is "5D data storage," as Zhang calls it.
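A back-of-the-envelope calculation shows why the two optical dimensions matter for density. The level counts below are my own assumptions for illustration, not figures from the Southampton team.

```python
import math

# Assumed numbers of distinguishable levels per laser-written dot:
orientation_levels = 8   # hypothetical slow-axis orientations
retardance_levels = 4    # hypothetical retardance strengths

# A conventional optical pit is binary: pit or no pit.
bits_per_dot_conventional = 1

# A 5D dot encodes one of (orientation_levels * retardance_levels) symbols.
bits_per_dot_5d = math.log2(orientation_levels * retardance_levels)

print(bits_per_dot_5d)  # 5.0 bits per dot instead of 1
```

Under these assumptions each written spot carries five times the information of a conventional pit, before even counting denser spatial layering within the glass.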
Previous attempts at storing data in glass consisted of burning tiny holes into the material, but that approach means that an optical microscope is required to read out the data. Zhang's goal is to write data into glass in a format readable with lasers, like existing optical discs, to keep data-reading costs down.
The writing costs will be higher, though, since changing birefringence in glass requires fine control of a laser's polarization and intensity. Earlier attempts involved rotating the laser and using an attenuator, Zhang says, but that could take several seconds between writing operations, making it far too slow for practical applications.
Instead Zhang and colleagues bounced the beam of their ultrafast writing laser off a tiny, commercially available LCD-like screen called a spatial light modulator, or SLM (see illustration below). It changes its reflectivity quickly in response to electrical charges, giving the team fine control over the intensity of the reflected beam.
Increasingly small robots can carry out their functions even inside the human body. No, this isn’t a sci-fi dream. The technology is almost ready. However there is still one condition they must meet to be effective: these devices need to have the same "softness" and flexibility as biological tissues.
This is the opinion of scientists like Antonio De Simone, from SISSA (the International School for Advanced Studies of Trieste) and Marino Arroyo from the Polytechnic University of Catalonia, who have just published a paper in the Journal of the Mechanics and Physics of Solids. Taking inspiration from unicellular water micro-organisms, they studied the locomotion mechanisms of "soft robots."
Forget cogwheels, pistons and levers: miniaturized robots of the future will be 'soft.' "If I think of the robots of tomorrow, what comes to mind are the tentacles of an octopus or the trunk of an elephant rather than the mechanical arm of a crane or the inner workings of a watch. And if I think of micro-robots then I think of unicellular organisms moving in water. The robots of the future will be increasingly like biological organisms" explains Antonio De Simone.
De Simone and his team at SISSA have been studying the movement of euglenids, unicellular aquatic animals, for several years. One of the aims of De Simone's research -- which has recently been awarded a European Research Council Advanced Grant of 1,300,000 euro -- is to transfer the knowledge acquired in euglenids to micro-robotics, a field that represents a promising challenge for the future. Micro-robots may in fact carry out a number of important functions, for example for human health, by delivering drugs directly to where they are needed, re-opening occluded blood vessels, or helping to close wounds, to name just a few.
To do this, these tiny robots will have to be able to move around efficiently. "Imagine trying to miniaturize a device made up of levers and cogwheels: you can't go below a certain minimal size. Instead, by mimicking biological systems we can go all the way down to cell size, and this is exactly the direction research is taking. We, in particular, are working on movement and studying how certain unicellular organisms with highly efficient locomotion move."
In their study, De Simone and Arroyo simulated euglenid species with different shapes and locomotion methods, based chiefly on cell body deformation and swelling, to describe in detail the mechanics and characteristics of the movement obtained.
"Our work not only helps to understand the movement mechanism of these unicellular organisms, but it provides a knowledge base to plan the locomotion system of future micro-robots."
Neurons that encode spatial information form “geotags” for specific memories and these geotags are activated immediately before those memories are recalled, a team of neuroscientists from the University of Pennsylvania and Freiburg University has discovered. They used a video game in which people navigate through a virtual town delivering objects to specific locations.
“These findings provide the first direct neural evidence for the idea that the human memory system tags memories with information about where and when they were formed and that the act of recall involves the reinstatement of these tags,” said Michael Kahana, professor of psychology in Penn’s School of Arts and Sciences.
Kahana and his colleagues have long conducted research with epilepsy patients who have electrodes implanted in their brains as part of their treatment. The electrodes directly capture electrical activity from throughout the brain while the patients participate in experiments from their hospital beds.
As with earlier spatial memory experiments conducted by Kahana’s group, this study involved playing a simple video game on a bedside computer. The game in this experiment involved making deliveries to stores in a virtual city. The participants were first given a period where they were allowed to freely explore the city and learn the stores’ locations. When the game began, participants were only instructed where their next stop was, without being told what they were delivering.
After they reached their destination, the game would reveal the item that had been delivered, and then give the participant their next stop.
After 13 deliveries, the screen went blank and participants were asked to remember and name as many of the items they had delivered as they could, in the order they came to mind.
This allowed the researchers to correlate the neural activation associated with the formation of spatial memories (the locations of the stores) and the recall of episodic memories (the list of items that had been delivered).
“During navigation, neurons in the hippocampus and neighboring regions can often represent the patient’s virtual location within the town, kind of like a brain GPS device,” Kahana said. “These ‘place cells’ are perhaps the most striking example of a neuron that encodes an abstract cognitive representation.”
Prof Stephen Hawking, the 71-year-old cosmologist, said the brain operates in a similar way to a computer program, meaning it could in theory be kept running without a body to power it. He made the comments at the 33rd Cambridge Film Festival, which featured a special gala screening of the documentary Hawking, presented by its subject.
Asked about whether a person's consciousness can live on after they die, he said: "I think the brain is like a programme in the mind, which is like a computer, so it's theoretically possible to copy the brain onto a computer and so provide a form of life after death.
"However, this is way beyond our present capabilities. I think the conventional afterlife is a fairy tale for people afraid of the dark." The film tells the story of Prof Hawking's life, from his childhood in Oxford to his current home in Cambridge where he lives with the help of a group of carers.
A select set of videos from the 2013 Foresight Technical Conference: Illuminating Atomic Precision, held January 11-13, 2013 in Palo Alto, has been made available on Vimeo. Videos have been posted of those presentations for which the speakers consented. The conference brought together many of the world’s leading researchers on a wide range of work relating to atomically and molecularly precise processes, materials, and devices. The wide variety of topics stimulates interdisciplinary dialog, productive collaboration, and scientific and technical progress towards beneficial nanotechnologies.
Our capacity to partner with biology to make useful things is limited by the tools that we can use to specify, design, prototype, test, and analyze natural or engineered biological systems. However, biology has typically been engaged as a "technology of last resort" in attempts to solve problems that other more mature technologies cannot. This lecture will examine some recent progress on virus genome redesign and hidden DNA messages from outer space, building living data storage, logic, and communication systems, and how simple but old and nearly forgotten engineering ideas are helping make biology easier to engineer.
Computer scientists at the Harvard School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering at Harvard University have joined forces to put powerful probabilistic reasoning algorithms in the hands of bioengineers.
In a new paper presented at the Neural Information Processing Systems conference on December 7, Ryan P. Adams and Nils Napp have shown that an important class of artificial intelligence algorithms could be implemented using chemical reactions.
These algorithms, which use a technique called “message passing inference on factor graphs,” are a mathematical coupling of ideas from graph theory and probability. They represent the state of the art in machine learning and are already critical components of everyday tools ranging from search engines and fraud detection to error correction in mobile phones.
Adams’ and Napp’s work demonstrates that some aspects of artificial intelligence (AI) could be implemented at microscopic scales using molecules. In the long term, the researchers say, such theoretical developments could open the door for “smart drugs” that can automatically detect, diagnose, and treat a variety of diseases using a cocktail of chemicals that can perform AI-type reasoning.
“We understand a lot about building AI systems that can learn and adapt at macroscopic scales; these algorithms live behind the scenes in many of the devices we interact with every day,” says Adams, an assistant professor of computer science at SEAS whose Intelligent Probabilistic Systems group focuses on machine learning and computational statistics. “This work shows that it is possible to also build intelligent machines at tiny scales, without needing anything that looks like a regular computer. This kind of chemical-based AI will be necessary for constructing therapies that sense and adapt to their environment. The hope is to eventually have drugs that can specialize themselves to your personal chemistry and can diagnose or treat a range of pathologies.”
Adams and Napp designed a tool that can take probabilistic representations of unknowns in the world (probabilistic graphical models, in the language of machine learning) and compile them into a set of chemical reactions that estimate quantities that cannot be observed directly. The key insight is that the dynamics of chemical reactions map directly onto the two types of computational steps that computer scientists would normally perform in silico to achieve the same end.
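Message passing inference itself is easy to illustrate in a few lines (the compilation into chemical reactions is not shown here). Below is a minimal sum-product example on a two-variable factor graph, with all numbers invented for illustration.

```python
import numpy as np

# Two binary variables A and B, linked by one pairwise factor.
prior_a = np.array([0.6, 0.4])          # unary factor on A
prior_b = np.array([0.3, 0.7])          # unary factor on B
pairwise = np.array([[0.9, 0.1],        # factor f(A, B): compatibility table
                     [0.2, 0.8]])

# Sum-product message from A (through f) to B: sum out A's states.
msg_a_to_b = prior_a @ pairwise

# B's marginal: combine the incoming message with B's own factor, normalize.
marginal_b = prior_b * msg_a_to_b
marginal_b /= marginal_b.sum()

# Brute-force check against the full joint distribution.
joint = prior_a[:, None] * pairwise * prior_b[None, :]
brute_force = joint.sum(axis=0) / joint.sum()
```

The appeal for molecular implementation is that both steps, multiplying factors and summing out states, correspond to concentration dynamics that chemical reaction networks can realize, which is the mapping Adams and Napp exploit.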
This insight opens up interesting new questions for computer scientists working on statistical machine learning, such as how to develop novel algorithms and models that are specifically tailored to tackling the uncertainty molecular engineers typically face. In addition to the long-term possibilities for smart therapeutics, it could also open the door for analyzing natural biological reaction pathways and regulatory networks as mechanisms that are performing statistical inference. Just like robots, biological cells must estimate external environmental states and act on them; designing artificial systems that perform these tasks could give scientists a better understanding of how such problems might be solved on a molecular level inside living systems.
As 2013 draws to a close Juniper Research has drawn up a list of predictions for the coming year, all neatly wrapped up as the top trends for the technologies industries for 2014.
2014: When Cities Get Smarter
Australian researchers have grown a rudimentary kidney in the laboratory from human stem cells, an advance they say could lead to better ways of treating renal disease and testing drug safety.
Earlier this year, an American team announced that they had created a rat kidney from stem cells, though it functioned very inefficiently when transplanted into an organism. The organ from Melissa Little’s lab, by contrast, was created with human stem cells and can give researchers unprecedented insight into how new medications will affect human kidneys, which could dramatically improve success in clinical safety trials. It also holds potential for improved treatment of renal disease, an area that is lacking at present: for those suffering renal disease, dialysis and organ transplantation are the two main treatments available. Eventually, decades down the road, this technology could potentially be used to create full-sized replacement organs for those who have exhausted all other options.
Currently, the kidneys are very small, about the size of the kidney in a five-week-old human embryo. The technology needs to become much more advanced before it will be useful in a clinical setting, but that does not detract from the significance of this announcement. The cellular complexity of this newly manufactured kidney is unlike anything seen before in lab-grown organs. Using stem cells to create new organs for drug safety testing and transplantation has been a goal of regenerative medicine for years, and the results from Little’s lab represent a very large step forward.
The Ring Nebula is one of the most famous celestial objects because of its delicate beauty. That shimmering oval of rainbow colors has popped up everywhere from dorm-room posters to book jackets to album covers to just about every TV backdrop in the history of sci-fi. But it is more than mere eye candy. The Ring is also fascinating for what it tells us about our future.
Middleweight stars like the sun expand and cool in their old age, briefly turning into red giants. After the red giant stage, the outer layers puff off, leaving behind a white dwarf: a dense, super-hot stellar cinder. Those puffed-out layers glow brightly before they disperse. That is exactly what we are seeing in this brand-new Hubble image of the Ring Nebula, along with the video interpretation of that image–a snapshot of what will happen to the sun as it runs out of nuclear fuel in about 5 billion years. The Hubble data also add a completely new twist to what astronomers know about the Ring. For the first time, researchers can get an accurate, three-dimensional understanding of the structure of the nebula.
Put that information together with other images taken using different filters and imaging techniques, and scientists have an incredibly detailed picture of how a sunlike star dies.
For comparison, I’ve collected a greatest-hits gallery of recent images of the Ring Nebula taken by other telescopes and satellites, each created using different techniques. You will notice that there is quite a bit of variation here: most of these pictures bear little resemblance to the space-lollipop that has become an astronomy pop-culture staple. That is because of the complexity of the nebula itself. It contains many types of atoms in many different states of ionization, each emitting in its own characteristic way. By choosing to zero in on a particular wavelength (or range of wavelengths) of radiation, astronomers can highlight specific elements, temperatures, and densities of the Ring Nebula.
Look at the Ring Nebula with your own eye through a good-size telescope (you’ll need at least 8″ of aperture under dark skies) and you will walk away with yet another impression. Because of the selective sensitivity of the human retina, the Ring Nebula will appear as a faint, diaphanous greenish-gray oval. That is a useful reminder that the colors of space are highly subjective. Each type of image is truthful in its own way, but none of them have a unique claim on representing what the Ring “really” looks like.
Click on each thumbnail for an explanation of how it was created and what it shows; then watch the video for the full story about the dying gasps of a sunlike star. And be glad that all this action is unfolding far away. Before the sun produces a beautiful nebula of its own, it will have either baked the Earth to a crisp or swallowed and digested our planet entirely.
3D "bioprinting" takes a three-dimensional, biological structure and essentially clones it using a printer.
Louisville researcher Stuart Williams is not talking about a far-off, science-fiction effort when he describes how local scientists will create new, functioning human hearts — using cells and a 3-D printer.
“We think we can do it in 10 years — that we can build, from a patient’s own cells, a total ‘bioficial’ heart,” said Williams, executive and scientific director of the Cardiovascular Innovation Institute, a collaboration between the University of Louisville and the Jewish Heritage Fund for Excellence.
The project is among the most ambitious in the ever-growing field of three-dimensional printing that some experts say could revolutionize medicine.
Known for creating products as diverse as car parts and action figures, 3-D printing is also being used to create models of human bones and organs, medical devices, personalized prosthetics and now, human tissues. Williams describes the process as taking a three-dimensional structure “and essentially cloning it, using a printer.”
“Bioprinting is pretty much done everywhere,” said Dr. Anthony Atala, director of the Wake Forest Institute for Regenerative Medicine in North Carolina, where scientists recently won an award for innovations in bioprinting. “Our ultimate goal is increasing the number of patients who get organs.”
In February 2013, doctors at Weill Cornell Medical College and biomedical engineers at Cornell University in New York announced they had used 3-D printing and injectable gels made of cells to build a facsimile of a human ear that looks and acts like a real one.
And in the case of the baby in Michigan, university officials said the splint was created from a CT scan of the patient’s trachea and bronchus, integrating a computer model with 3-D printing. The baby, who used to stop breathing every day when his collapsed bronchus blocked the flow of air, was off a ventilator three weeks after the surgery, and officials say he hasn’t had breathing trouble since.
Wake Forest scientists, like their peers in Louisville, are working on organs. Officials at Wake Forest say their scientists were the first in the world to engineer a lab-grown organ, and they hope to scale up the process by printing organs with a custom printer. Institute scientists there have also designed a bioprinter to print skin cells onto burn wounds.
So far, Williams said, he knows of no instance where a tissue or organ created through 3-D printing has been implanted in a human. But he said the race is on.
“I think this will have an incredible effect on trauma patients … on the armed forces. You can imagine printing a jaw, printing muscle cells, printing the skin,” he said. “Ultimately I see it being used to print replacement kidneys, to print livers, and to print hearts — and all from your own cells.”
As early as 2015, your Amazon purchases could be dropped at your door within 30 minutes courtesy of unmanned aerial drones. Amazon CEO Jeff Bezos revealed plans for the delivery service Prime Air (an extension of Amazon Prime which guarantees two-day shipping) in a 60 Minutes prime time interview.
The service would ship orders under five pounds (2.3 kg) after they are packed into small plastic containers and then scooped up by Amazon's custom-built "octocopter." The drone then delivers the package to customers within a 10 mile (16 km) radius of Amazon's fulfillment centers.
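The eligibility rules described above reduce to a simple check. The function below is a hypothetical sketch of that logic (the names and thresholds are taken from the article, but the code itself is illustrative, not Amazon's): an order qualifies for Prime Air delivery if it weighs under five pounds and the destination lies within the 10-mile radius of a fulfillment center.

```python
# Hypothetical sketch of the Prime Air eligibility rules described in the
# article: under five pounds (~2.3 kg), within 10 miles (~16 km) of a
# fulfillment center. Names are illustrative, not Amazon's.

MAX_WEIGHT_LB = 5.0   # octocopter payload limit
MAX_RADIUS_MI = 10.0  # delivery radius from a fulfillment center

def prime_air_eligible(weight_lb: float, distance_mi: float) -> bool:
    """Return True if an order meets both the weight and radius limits."""
    return weight_lb < MAX_WEIGHT_LB and distance_mi <= MAX_RADIUS_MI

print(prime_air_eligible(3.2, 7.5))   # light order, nearby -> True
print(prime_air_eligible(6.0, 2.0))   # too heavy for the drone -> False
print(prime_air_eligible(4.0, 14.0))  # outside the radius -> False
```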
Clearly the company will need to jump through various hoops to get the service off the ground, with public safety being a primary concern. "Safety will be our top priority, and our vehicles will be built with multiple redundancies designed to commercial aviation standards," the company says.
The Federal Aviation Administration (FAA) is currently working on rules and regulations for unmanned aerial vehicles, a process which Amazon hopes will be completed sooner rather than later. "We hope the FAA's rules will be in place as early as sometime in 2015. We will be ready at that time."
We have seen a rise in proposals for the use of drones to deliver commercial products. One Australian startup plans to use drones to deliver school textbooks to customers in March 2014, while the Burrito Bomber hopes to be dropping Mexican cuisine on people as soon as 2015. With Amazon's product range, however, Prime Air would be the first to do so on such a large and diverse scale.
It may sound like science fiction, but given that Bezos claims that 300 items per second will be ordered from Amazon on Cyber Monday, it is possible that flocks of Prime Air drones will be zipping around above us in the very near future.