Recollect solves two problems at once by providing a simple tool to archive your online data and search through it later to re-discover your old posts.
The more information we share, the harder it can be to find any particular post later on and the more we have to lose if any of these networks ever disappear.
With Recollect, users can archive posts shared on Twitter, Instagram, Flickr and Foursquare (along with any comments on those posts from other users) and download a ZIP file of all that data at any time. Prices for the service range from $6/month for 5,000 archived photos, one monthly data download and one account per social network, to a premium $24/month account that covers 50,000 archived photos, weekly downloads and up to five accounts per network. There is also a 30-day trial that lets users archive and download all their online data once for free.
For the beta release, the team decided to narrow their focus to working with just the four social networks mentioned above and building a set of four key features into the service, including the ability to archive posts, download data, browse through the archive and search for specific keywords.
Recollect offers a novel solution to what we might call the re-discovery problem — helping users categorize and unearth their treasure trove of old posts.
The team hopes to continue improving on Recollect by building what Martin describes as a more “intelligent archive,” which will offer additional options for browsing and discovering older content.
The group also plans to incorporate more social networks into Recollect, including Facebook..."
DNA-based protection technologies are especially suitable for anti-counterfeiting measures. Genuine-ID, a Swiss start-up company, has developed state-of-the-art security products based on DNA code which they call the Genuine-ID product passport.
Here is how it works: the DNA molecules are added to a product's raw material during the production process. Only 1 ppm (one part per million) is required to uniquely mark the material, that is, one gram per ton of raw material. The material properties therefore remain unchanged, and no extra production steps are required.
The DNA molecules are encapsulated in silicon dioxide microparticles to protect them from the harsh conditions of production and the product life cycle. These microparticles are nothing more than very small glass spheres with a diameter of about 100 nm. The Genuine-ID product passport uses a code 64 bases long. Since each position holds one of four bases, the number of possible combinations is 4 × 4 × ... × 4, sixty-four times over: 4^64 ≈ 3.4 × 10^38, or 34 followed by 37 zeroes. A copy of every unique DNA code that is added to a product (the so-called DNA primer) is kept as a sample in a safe deposit at a Swiss bank. For security reasons, no electronic version of the DNA sequence is stored.
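To get a feel for the size of that keyspace, the arithmetic can be checked in a few lines (a sketch in Python; the 64-base code length comes from the text, the rest is plain arithmetic):

```python
# Each position in the 64-base DNA code holds one of the four bases A, C, G, T,
# so the keyspace is 4 multiplied by itself 64 times.
CODE_LENGTH = 64
combinations = 4 ** CODE_LENGTH

print(combinations)           # 340282366920938463463374607431768211456
print(f"{combinations:.1e}")  # 3.4e+38, i.e. 34 followed by 37 zeroes
```

A counterfeiter guessing codes at random would face odds of roughly 1 in 3.4 × 10^38 per attempt.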
In the case of a sample test inquiry, a very small sample (1 mg) is taken from the product and paired with the original DNA primer in a PCR (polymerase chain reaction) test. Only when the DNA codes from the two samples are a perfect match is the test result positive.
The analysis takes about two hours, and many samples can be analyzed simultaneously. PCR is a proven, established DNA analysis method, used for blood tests (HIV tests, etc.) and paternity tests. The result is unambiguous and, the company says, 100% certain.
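The pass/fail logic of the test itself is simple to sketch: the result is positive only when the code recovered from the product matches the stored primer base for base (illustrative Python; the sequences below are made up):

```python
def is_genuine(sample_code: str, primer_code: str) -> bool:
    """Positive only on a perfect, full-length match of the two 64-base codes."""
    return len(sample_code) == 64 and sample_code == primer_code

primer = "ACGT" * 16                             # a made-up 64-base reference code
print(is_genuine("ACGT" * 16, primer))           # True: perfect match
print(is_genuine("TCGT" + "ACGT" * 15, primer))  # False: a single base differs
```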
The last life on Earth will perish in 2.8 billion years, scorched by the dying sun as it swells to become a red giant. For about a billion years before that, the only living things will be single-celled organisms drifting in isolated pools of hot, salty water. A grim outlook, sure, but there's a silver lining for today's alien-hunters. The model that predicts these pockets of life on a future Earth also hints that the habitability of planets around other stars is more varied than previously believed, offering new hope for finding life in unlikely places.
Using what we know about Earth and the sun, researchers in the UK calculated a timeline for the phases of life on our planet as the sun expands to become a red giant. Previous studies modelled this scenario for Earth as a whole, but Jack O'Malley-James at the University of St Andrews, UK, and his colleagues wanted to consider the possibility that life might survive in a few extreme habitats. Sun-like stars of different sizes age at different rates, so the team also looked at how long simple and complex life might thrive around smaller and larger stars.
"Habitability is not so much a set attribute of a planet, but more something that has a lifetime of its own," says O'Malley-James. The team started by modelling rising temperatures on Earth's surface at different latitudes, along with long-term changes to the planet's orbital characteristics. Their model shows that as the sun ages and heats Earth more, complex life withers - plants, mammals, fish and finally invertebrates disappear as temperatures soar. The oceans vaporise, and plate tectonics grind to a halt without water as a lubricant. Eventually, pools of hot brine are all that's left in the less scorching higher altitudes, in sheltered caves or far underground. Microbes living in these pools could rule the Earth for about a billion years before they, too, dwindle to extinction.
Applying the model to stars of various sizes, the team found that life on an Earth-like planet would be only single-celled for about the first 3 billion years. Complex life could exist for a comparatively short period before the star begins to die and conditions once again become favourable to microbes alone. Statistically, then, if alien life is out there, it is more likely to be microbial simply due to timing, the team says ( http://arxiv.org/abs/1210.5721 ).
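The team's statistical argument can be illustrated with rough numbers (the boundary values below are assumptions loosely based on the article, not outputs of the actual model):

```python
# Rough timeline for an Earth-like planet around a sun-like star, in billions
# of years after the planet forms. All boundary values are illustrative.
habitable_start = 0.5   # first microbial life appears
complex_start = 3.5     # "single-celled for about the first 3 billion years"
complex_end = 6.3       # complex life withers as the star brightens
habitable_end = 7.3     # the last brine-pool microbes die out

microbial_only = (complex_start - habitable_start) + (habitable_end - complex_end)
total = habitable_end - habitable_start

print(f"{microbial_only / total:.0%} of the habitable lifetime is microbes-only")
```

On numbers like these, a randomly timed look at a living world is more likely to catch it in a microbes-only phase than hosting complex life, which is the team's point.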
The dream of regaining the ability to stand up and walk has come closer to reality for people paralyzed below the waist who thought they would never take another step. A team of engineers at Vanderbilt University’s Center for Intelligent Mechatronics has developed a powered exoskeleton that enables people with severe spinal cord injuries to stand, walk, sit and climb stairs. Its light weight, compact size and modular design promise to provide users with an unprecedented degree of independence.
The university has several patents pending on the design and Parker Hannifin Corporation – a global leader in motion and control technologies – has signed an exclusive licensing agreement to develop a commercial version of the device, which it plans on introducing in 2014.
How can high-level, class-specific feature detectors be built from unlabeled data alone? For example, is it possible to learn a face detector using only unlabeled images? This summer Google set a new landmark in the field of artificial intelligence with software that taught itself to recognize cats, people, and other things simply by watching YouTube videos (so-called “unsupervised, self-taught” software). That technology, modeled on how brain cells operate, is now being put to work making Google’s search smarter, with speech recognition being the first service to benefit.
Google’s learning software is based on simulating groups of connected brain cells that communicate and influence one another. When such a neural network is exposed to data, the relationships between different neurons can change. That causes the network to develop the ability to react in certain ways to incoming data of a particular kind — and the network is said to have learned something. Neural networks have been used for decades in areas where machine learning is applied, such as chess-playing software or face recognition. Google’s engineers have found ways to put more computing power behind this approach than was previously possible, creating neural networks that can learn without human assistance and are robust enough to be used commercially, not just as research demonstrations. The company’s neural networks decide for themselves which features of data to pay attention to, and which patterns matter, rather than having humans decide that, say, colors and particular shapes are of interest to software trying to identify objects.
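That learning process, in which exposure to data adjusts the connections, can be seen in miniature in the classic perceptron rule: a single artificial neuron nudges its weights whenever it misclassifies an example (a toy Python sketch of the general principle, not Google's architecture):

```python
# A single neuron: fire (output 1) if the weighted sum of inputs exceeds zero.
# Training strengthens or weakens connections whenever the prediction is wrong.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # the OR function
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                        # a few passes over the data
    for (x1, x2), target in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred                # +1 or -1 when wrong, 0 when right
        w[0] += lr * err * x1              # adjust each connection strength
        w[1] += lr * err * x2
        b += lr * err

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
# -> [0, 1, 1, 1]: the neuron has "learned" OR from examples alone
```

Google's networks differ in scale and in being unsupervised, but the core idea, connections that change in response to data, is the same.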
Google is now using these neural networks to recognize speech more accurately, a technology increasingly important to Google’s smartphone operating system, Android, as well as the search app it makes available for Apple devices. “We got between 20 and 25 percent improvement in terms of words that are wrong,” says Vincent Vanhoucke, a leader of Google’s speech-recognition efforts. “That means that many more people will have a perfect experience without errors.” The neural net is so far only working on U.S. English, and Vanhoucke says similar improvements should be possible when it is introduced for other dialects and languages.
Other Google products will likely improve over time with help from the new learning software. The company’s image search tools, for example, could become better able to understand what’s in a photo without relying on surrounding text. And Google’s self-driving cars and mobile computer built into a pair of glasses could benefit from software better able to make sense of more real-world data.
The new technology grabbed headlines back in June of this year, when Google engineers published results of an experiment that threw 10 million images grabbed from YouTube videos at their simulated brain cells, running 16,000 processors across a thousand computers for 10 days without pause. A next step could be to have the same model learn the sounds of words as well. Being able to relate different forms of data like that could lead to speech recognition that gathers extra clues from video, for example, and it could boost the capabilities of Google’s self-driving cars by helping them understand their surroundings by combining the many streams of data they collect, from laser scans of nearby obstacles to information from the car’s engine.
Google’s work on making neural networks brings us a small step closer to one of the ultimate goals of AI — creating software that can match animal or perhaps even human intelligence, says Yoshua Bengio, a professor at the University of Montreal who works on similar machine-learning techniques. “This is the route toward making more general artificial intelligence — there’s no way you will get an intelligent machine if it can’t take in a large volume of knowledge about the world,” he says. In fact, Google’s neural networks operate in ways similar to what neuroscientists know about the visual cortex in mammals, the part of the brain that processes visual information, says Bengio. “It turns out that the feature learning networks being used [by Google] are similar to the methods used by the brain that are able to discover objects that exist.”
However, he is quick to add that even Google’s neural networks are much smaller than the brain, and that they can’t perform many things necessary to intelligence, such as reasoning with information collected from the outside world.
Donated kidneys are in huge demand worldwide. In the UK alone, there are 7200 people on the waiting list – a state of affairs that the new study takes a small step towards ending.
Christodoulos Xinaris of the Mario Negri Institute for Pharmacological Research in Bergamo, Italy, and his colleagues extracted cells from the kidneys of mouse embryos as they grew in the mother. The cells formed clumps that could be grown for a week in the lab to become "organoids" containing the fine plumbing of nephrons – the basic functional unit of the kidney. A human kidney can contain over 1 million nephrons.
Next, Xinaris's team incubated the organoids in the presence of vascular endothelial growth factor (VEGF), which makes blood vessels grow. Then they transplanted the organoids onto the kidneys of adult rats. By injecting the rats with extra VEGF, the researchers encouraged the new tissue to grow its own blood vessels within days. The tissue also developed features called glomeruli, chambers where blood enters the nephrons to be cleansed and filtered.
The researchers then injected the animals with albumin proteins labelled with markers that give out light. They found that the kidney grafts successfully filtered the proteins from the bloodstream, proving that they could crudely perform the main function of real kidneys.
"This is the first kidney tissue in the world totally made from single cells," says Xinaris. "We have functional, viable, vascularised tissue, able to filter blood and absorb large molecules from it. The final aim is to construct human tissues." "This technique could not be used clinically, but it shows a possible way forward for developing a functional kidney in the future," says Anthony Hollander, a tissue engineer at the University of Bristol, UK. Although it will be several years before lab-grown tissues can benefit patients, the team says that the latest findings are a key milestone on the way.
Kidneys are the latest of several lab-grown organs and replacement parts to be developed, including livers, windpipes, parts of voiceboxes and hearts.
It has become routine for engineers to draw inspiration from the animal kingdom when designing mobile robots. There are now machines that run like cheetahs, fly like hummingbirds, and swim like zebrafish. So it’s not surprising that when a team of British and American scientists joined forces to build a robot that wriggles through water, they decided to use the sea lamprey, a primitive eel-like fish, as a model.
But this lamprey-inspired bot won’t merely be another animal-mimicking machine. Instead, it will be a “biohybrid”, a simulated sea lamprey that integrates electronic components with living animal cells. The project team hopes to create a tiny swimming machine, just a millimetre in length, that can respond to environmental cues – navigating using ambient light and following the trail of a chemical compound through the water, for instance. The micro-robot, dubbed “Cyberplasm,” could then perform hazardous underwater tasks, such as looking for submerged mines, and explore worlds inaccessible to humans.
“The idea is to build a part biological, part machine robot,” says Daniel Frankel, a chemical engineer at Newcastle University and one of the lead scientists for the project. “We’re going to do that using genetic engineering – we’re changing the way the cells work so they can be read by electronics.” This ambitious project, which began in 2009, aims to build a swimming robot with cells that have been genetically engineered to act like eyes, cells that detect chemicals, and muscles that contract, says Frankel. “All of these components will eventually work together like an artificial organism.”
Frankel’s job is to design the light- and chemical-sensitive cells that will act as Cyberplasm’s “eyes” and “nose”. To engineer the eye sensors, Frankel started with a supply of Chinese hamster ovary cells, which are commonly used in biological and medical research. He and his colleagues then modified these cells by inserting a gene that makes plants responsive to light. They linked this plant DNA with another gene, common in mammalian cells, which produces nitric oxide, a gas that acts as an important signaling molecule in the body. These genetic manipulations produced hamster cells that are light-responsive: whenever light hits the cells, they respond by producing a burst of nitric oxide.
Frankel is now using the same approach to build the robot’s chemical sensors, working with Christopher Voigt, a biological engineer at MIT, to engineer hamster cells that give off nitric oxide in the presence of certain chemical compounds.
The release of nitric oxide will allow the modified mammalian cells to communicate with Cyberplasm’s electronic “brain”. When the researchers assemble the final robot, they’ll implant a nitric-oxide-sensitive electrode near the genetically engineered cells. And whenever the electrode detects a nitric oxide plume, it will send a signal to a microprocessor, which will then coordinate the robot’s movement.
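The signalling chain described here, an engineered cell emitting nitric oxide, an electrode detecting it, and a microprocessor reacting, amounts to a simple threshold loop (a hypothetical sketch; the threshold and readings are invented for illustration):

```python
NO_THRESHOLD = 0.5  # assumed electrode detection threshold (arbitrary units)

def control_step(no_level: float) -> str:
    """One microprocessor decision for one electrode reading."""
    return "swim toward stimulus" if no_level > NO_THRESHOLD else "hold course"

# Invented electrode readings: the sensor cells fire on a light or chemical cue.
readings = [0.1, 0.2, 0.7, 0.9, 0.3]
actions = [control_step(r) for r in readings]
print(actions)
```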
Life after death, for most people, is a faithful belief in a spiritual hereafter, a transfer to a higher, non-bodily consciousness. For cryonics enthusiasts, however, a “second life” – or more accurately, a resuscitated life with a little help from freezer storage – here on Earth is the goal.
The Prospect of Immortality is a six-year study by UK photographer Murray Ballard, who has traveled the world pulling back the curtain on the amateurs, optimists, businesses and apparatuses of cryonics. “It’s not a large industry,” says Ballard, who visited the Alcor Life Extension Foundation in Phoenix, Arizona; the Cryonics Institute in Detroit, Michigan; KrioRus in Moscow, Russia; and Suspended Animation Inc in Boynton Beach, Florida; among others.
Cryonics is the preservation of deceased humans in liquid nitrogen at temperatures just shy of its boiling point of −196°C (77 K). Cryopreservation of humans is not reversible with current science, but cryonicists hypothesize that people who are considered dead by current medical definitions may someday be revived using advanced future technologies.
Stats are hard to come by, but it is estimated there are about 2,000 people signed up for cryonics and approximately 250 people currently cryopreserved. Over 100 pets have also been placed in vats of liquid nitrogen with the hopes of a future recovery.
Ballard’s project began in 2006 after he read a news article, “Freezer Failure Ends Couple’s Hopes of Life After Death,” about a French couple who had been kept in industrial freezers beneath their chateau in the Loire valley. He phoned up a small group of UK cryonicists and attended their meetings and training sessions. Later, funding from an arts organization paid for two trips to the U.S.
A chance meeting with one of the founders of KrioRus, a Russian cryonics organization, at a UK conference set up a memorable week-long trip to Moscow, St. Petersburg, and Voronezh. There he photographed the two resting places of the first Russian cryonics neuro-patient.
“I photographed her grave in a cemetery just outside St. Petersburg and the cryostat containing her head at the facility in Moscow.”
Heads take up less storage space than whole bodies, which makes them cheaper to store.
It's not quite warp drive, but researchers are hot on the trail of building nuclear fusion impulse engines, complete with real-life dilithium crystals.
There's a hierarchy of "Star Trek" inventions we would like to see become reality. We already have voice-controlled computers and communicators in the form of smartphones. A working Holodeck is under development. Now, how about we get some impulse engines for our starships?
The University of Alabama in Huntsville's Aerophysics Research Center, NASA, Boeing, and Oak Ridge National Laboratory are collaborating on a project to produce nuclear fusion impulse rocket engines. It's no warp drive, but it would get us around the galaxy a lot quicker than current technologies. The scientists are hoping to make impulse drive a reality by 2030. It would be capable of taking a spacecraft from Earth to Mars in as little as six weeks.
In a sense, AI has become almost mundanely ubiquitous, from the intelligent sensors that set the aperture and shutter speed in digital cameras, to the heat and humidity probes in dryers, to the automatic parking feature in cars. And more applications are tumbling out of labs and laptops by the hour.
“It’s an exciting world,” says Colin Angle, chairman and cofounder of iRobot, which has brought a number of smart products, including the Roomba vacuum cleaner, to consumers in the past decade.
What may be most surprising about AI today, in fact, is how little amazement it creates. Perhaps science-fiction stories with humanlike androids, from the charming Data (“Star Trek”) to the obsequious C-3PO (“Star Wars”) to the sinister Terminator, have raised unrealistic expectations. Or maybe human nature just doesn’t stay amazed for long.
“Today’s mind-popping, eye-popping technology in 18 months will be as blasé and old as a 1980 pair of double-knit trousers,” says Paul Saffo, a futurist and managing director of foresight at Discern Analytics in San Francisco. “Our expectations are a moving target.”
The ability to create machine intelligence that mimics human thinking would be a tremendous scientific accomplishment, enabling humans to understand their own thought processes better. But even experts in the field won’t promise when, or even if, this will happen.
Entrepreneurs like iRobot’s Mr. Angle aren’t fussing over whether today’s clever gadgets represent “true” AI, or worrying about when, or if, their robots will ever be self-aware. Starting with Roomba, which marks its 10th birthday this month, his company has produced a stream of practical robots that do “dull, dirty, or dangerous” jobs in the home or on the battlefield. These range from smart machines that clean floors and gutters to the thousands of PackBots and other robot models used by the US military for reconnaissance and bomb disposal.
While robots in particular seem to fascinate humans, especially if they are designed to look like us, they represent only one visible form of AI. Two other developments are poised to fundamentally change the way we use the technology: voice recognition and self-driving cars.
EPFL scientists are developing a prototype of a pair of “augmented” glasses. You’ll be able to read messages, look at your agenda, and receive a variety of information directly on the lenses.
No need to turn to your smartphone to check the time, look at your agenda or the weather forecast, read a text message or map a route in an unfamiliar city. All this information, and much more, will soon be displayed on the lenses of “augmented” glasses via a mini-projector placed on the frames, provided you’re also wearing a specially designed pair of contact lenses.
EPFL scientists in the Laboratory of Photonic Devices are currently working on a prototype that’s similar to the project announced this spring by Google. The applications envisioned for this eagerly awaited invention run the gamut – games, GPS, teaching enhancement, support for the deaf and hard of hearing, and myriad other kinds of augmented reality.
The Laboratory is working closely with EPFL start-up company Lemoptix, which specializes in miniaturized projection systems, to develop a high definition micro-projector that will blend discreetly into the right arm of the glasses. From this projector, images and information will be sent to the specially treated glasses lens via holography. This is a process in which the light scattered off of an object is recorded and then later reconstructed in 3D in the absence of the object. In the case of augmented glasses, the hologram will be projected on the lenses in such a way that the image is reflected in the direction of the eye, while the lenses still appear transparent. The user thus can still see through the glasses.
Before this invention can be commercialized, however, all these technologies must be refined, tested, and put together. It will likely be between two and five years before we’ll be able to put on a pair of these glasses.
h+ Magazine is a new publication that covers technological, scientific, and cultural trends that are changing human beings in fundamental ways.

Around the end of the year, media outlets regularly try to out-predict each other. Particularly in tech journalism, lists like The Next Top Ten Trends To Watch or The Top Apps For 2012 are everywhere. They’re easy to write and get clicked and linked like crazy, so editors love them. Who can blame them? Even though most people grin smugly while doing so, they read these lists.

h+ wanted to go beyond just a top-10 link list, both in breadth and depth, so they asked a group of peers and friends to share some thoughts. What are the main drivers of change in their respective fields, what does that mean, and what type of change do they hope for? h+ tried to capture specific insights into different fields and industries (deep knowledge), expectations (what will happen) and desires (what should happen). Among those asked were designers, scientists, strategists, and a few people who squarely “sit between the chairs”, as the Germans say. A big thank you to all contributors who took up the challenge: Alexandra Deschamps-Sonsino, Dannie Jost, Georgina Voss, Mike Arauz, Sami Niemelä, Stefan Erschwendner and Tamao Funahashi.
Researchers want to use cloning to save endangered species, but they are having only limited success. A number of times each week, Martha Gómez creates new life. Today, she has set out to produce a South African black-footed cat. Using a razor-thin hollow needle under a microscope, the veterinarian injects a body cell from the endangered species into an enucleated egg cell taken from a house cat. Then she applies an electric current. Scientists like Gómez are hoping for a new era of wildlife conservation. In a bid to save endangered species, they tear down biological barriers and create embryos that contain cell material from two different species of mammals. Iberian lynxes, tigers, Ethiopian wolves and panda bears could all soon be carried to term by related surrogate mothers, and thus saved for future generations.
The world's first surrogate mother of a cloned animal from another species had udders and was named Bessie. In early 2001, the cow delivered a gaur via cesarean section in the United States. The endangered wild ox calf, native to Southeast Asia, had been cloned by the US company Advanced Cell Technology. But the gaur lived only briefly, dying of common dysentery within 48 hours of birth. Since then, researchers have made dozens of attempts at interspecies cloning -- but with limited success. Whenever animals were brought into the world alive, they usually died shortly thereafter.
In 2009, for instance, biotechnicians managed to clone a Pyrenean ibex. The egg was donated by a common domesticated goat. After the birth, the kid desperately gasped for air. Seven minutes later, it was dead.
Many cloning experiments end this way. Geneticists have so far only been able to speculate on the reasons, but the string of failures actually tends to spur researchers to continue. Gómez, for instance, has specialized in cloning wildcats -- and has been quite successful. Cloned African wildcats Ditteaux, Miles and Otis are living in enclosures at the Audubon Center animal facility, and snarl at anyone who approaches them. "They are doing perfectly fine," says Gómez.
Gómez admits that there are problems. Fusing cells from two different species often leads to huge mix-ups. Genes are activated or deactivated at the wrong time, and developmental stages become delayed. In the case of the black-footed cat, for instance, Gómez has so far had no success. "We were able to insert embryos into the uterus of a house cat," she says. "But unfortunately, they didn't develop."
But the researcher remains optimistic. She hopes that she will soon be able to transform body cells from her wildcats into pluripotent stem cells. Cells of this type could considerably simplify the cloning process because they can be used to create any type of body cell and can be easily multiplied. Other researchers have already succeeded in producing such stem cells from snow leopards and northern white rhinoceroses, which are both endangered species.
There are in fact virtually no limits to the creative experimentation of today's biotechnicians. Chinese researchers have fused body cells from panda bears with egg cells taken from rabbits. But the resulting embryos died shortly thereafter -- in the uteruses of house cats. Meanwhile, Japanese researchers have implanted skin cells from an unborn baby sei whale in enucleated egg cells taken from cattle and pigs.
Other Japanese scientists are even trying to clone the woolly mammoth. Three years ago, cell nuclei from these hairy, tusked ice-age beasts were discovered in mammoth legs that have been frozen in the permafrost of Northeast Siberia for the past 15,000 years.
In the laboratory, a team led by geneticist Akira Iritani injected cell nuclei from the prehistoric animal into enucleated egg cells from mice. The cell constructs only survived for a few hours, but Iritani remains optimistic that an elephant surrogate mother will soon bring to term the first mammoth clone. "From a scientific point of view it is possible," says geneticist Gómez. But is there any point in doing it? The 51-year-old professor hesitates briefly. "I wouldn't do it," she admits. "I would prefer spending all the money on those species that haven't completely vanished from the earth."
Control Data Corp’s (CDC) first supercomputer, the CDC 6600, operated at a speed of three megaflops (10^6 floating-point operations per second). A half century on, our most powerful supercomputers are a billion times faster. But even that impressive mark will inevitably fall. Engineers are eyeing an exaflop (10^18 flops), and some think they’ll get there by 2018.
Supercomputers enable scientists to model nature — protein folding, the Big Bang, Earth’s climate — as never before. China’s Tianhe-1A (2.57 petaflops) recently ran a 110-billion-atom model through 500,000 steps. The simulation covered a mere 0.116 nanoseconds of simulated time, yet it took the machine three hours to complete.
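Those figures imply a strikingly small timestep (plain arithmetic on the numbers quoted above):

```python
simulated_time = 0.116e-9  # 0.116 nanoseconds, in seconds
steps = 500_000
dt = simulated_time / steps
print(f"{dt:.3e} s per step")  # 2.320e-16 s, about a quarter of a femtosecond

wall_clock = 3 * 3600          # three hours of machine time, in seconds
slowdown = wall_clock / simulated_time
print(f"{slowdown:.1e}x slower than real time")  # ~9.3e+13
```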
International competition for the top spot is as tight as it’s ever been. China knocked IBM’s Jaguar off the top of the pile with their Tianhe-1A in 2010 (2.57 petaflops). Then it was Japan’s turn to lead the pack with their K computer in 2011 (10.5 petaflops). And the US retook the lead with IBM’s Sequoia in 2012 (16.3 petaflops).
The pace is blistering. Today’s top speed (16.3 petaflops) is roughly 16 times faster than its counterpart four years ago (1.04 petaflops). And Oak Ridge National Laboratory is converting its ex-champion, the Cray Jaguar, into the 20-petaflop Titan (operational later this year); Titan’s capacity is believed to reach upwards of 35 petaflops.
But even at 35 petaflops, an exaflop (1,000 petaflops) seems distant. Is 2018 a realistic expectation? Sure, it’s plausible. It took 21 years to go from megaflops in 1964 (CDC 6600) to gigaflops in 1985 (Cray 2). But only 11 years to break the teraflop barrier in 1996 (ASCI Red). And just 12 years to enter petaflop territory in 2008 (Roadrunner).
Clocking an exaflop by 2018 would mean going from petaflops to exaflops in a decade, a record pace, but not too far outside the realm of reason. Mapping supercomputer speeds for as long as they’ve been officially ranked by Top500, today’s pace puts processing power within range of an exaflop by 2018.
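Using the milestones above, a naive trend line can be fitted and extrapolated (dates and speeds from the text; this is a back-of-the-envelope projection, not a forecast):

```python
import math

# (year, peak speed in flops) for the milestones named in the text
milestones = [
    (1964, 3e6),      # CDC 6600: megaflops
    (1985, 1e9),      # Cray 2: gigaflops
    (1996, 1e12),     # ASCI Red: teraflops
    (2008, 1.04e15),  # Roadrunner: petaflops
    (2012, 16.3e15),  # Sequoia
]

# Average exponential growth factor per year between first and last milestone.
(y0, f0), (y1, f1) = milestones[0], milestones[-1]
rate = (f1 / f0) ** (1 / (y1 - y0))

years_to_exa = math.log(1e18 / f1) / math.log(rate)
print(f"~{rate:.2f}x per year; exaflop around {y1 + years_to_exa:.0f}")
```

At the average historical pace this lands in the early 2020s; hitting 2018 would require a faster-than-average sprint, as the text suggests.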
Long after human civilization is gone without a trace, our satellites will still orbit Earth intact. Last Pictures is a project to store 100 images on one of them to be discovered later.
Of all the images that have ever been made, could you select just 100 to represent our species and human achievement? Trevor Paglen’s Last Pictures is a project to do not only that, but also to launch those images into geosynchronous orbit around Earth, so that long after humans are gone, any space-wanderer will be able to fathom what humanity was all about.

The project is based on the idea that after billions of years, all signs of human civilization will have eroded away on Earth, but its satellites will still spin around the planet, making them the best bet for an indefinite time capsule. “Any group of people would come up with 100 totally different images, but that is part of the fun. It’s an impossible project. Part of it was to engage peoples’ imaginations,” says Paglen, who conceived the project and collaborated with scientists, anthropologists, curators and corporations to get the images into space.
A personal view by John Smart, who co-founded the Brain Preservation Foundation in 2010 with the neuroscientist Ken Hayworth. He describes his perspective in the following way:
"Let me propose to you four interesting statements about the future:
1. As I argue in this video, chemical brain preservation is a technology that may soon be validated to inexpensively preserve the key features of our memories and identity at our biological death.
2. If either chemical or cryogenic brain preservation can be validated to reliably store retrievable and useful individual mental information, these medical procedures should be made available in all societies as an option at biological death.
3. If computational neuroscience, microscopy, scanning, and robotics technologies continue to improve at their historical rates, preserved memories and identity may be affordably reanimated by being “uploaded” into computer simulations, beginning well before the end of this century.
4. In all societies where a significant minority (let’s say 100,000 people) have done brain preservation at biological death, significant positive social change will result in those societies today, regardless of how much information is eventually recovered from preserved brains.
These are all extraordinary claims, each requiring strong evidence. Many questions must be answered before we can believe any of them. Yet I provisionally believe all four of these statements, and that is why I co-founded the Brain Preservation Foundation in 2010 with the neuroscientist Ken Hayworth. BPF is a 501(c)(3) nonprofit, chartered to put the emerging science of brain preservation under the microscope. Check us out, and join our newsletter if you’d like to stay updated on our efforts."
Softkill Design‘s ProtoHouse project investigates the architectural potential of the latest Selective laser sintering technologies, testing the boundaries of large scale 3D printing by designing with computer algorithms that micro-organize the printed material itself.
With the support of Materialise, Softkill Design produced a high-resolution prototype of a 3D printed house at 1:33 scale. The model consists of 30 detailed fibrous pieces that can be assembled into one continuous cantilevering structure, without need for any adhesive material.
The human visual system consists of a hierarchically organized, highly interconnected network of several dozen distinct areas. Each area can be viewed as a computational module that represents different aspects of the visual scene. Some areas process simple structural features of a scene, such as edge orientation, local motion and texture. Others process complex semantic features, such as faces, animals and places. Recently, researchers have been focusing on discovering how each of these areas represents the visual world, and on how these multiple representations are modulated by attention, learning and memory. Because the human visual system is exquisitely adapted to process natural images and movies, we focus most of our effort on natural stimuli.
One way to think about visual processing is in terms of neural coding. Each visual area encodes certain information about a visual scene, and that information must be decoded by downstream areas. Both encoding and decoding processes can, in theory, be described by an appropriate computational model of the stimulus-response mapping function of each area. Therefore, our descriptions of visual function are posed in terms of quantitative computational encoding models. However, once an accurate encoding model has been developed, it is fairly straightforward to convert it into a decoding model that can be used to read out brain activity, in order to classify, identify or reconstruct mental events. In the popular press this is often called “brain reading”.
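The encoding-to-decoding relationship described above can be made concrete with a toy sketch: fit a stimulus-to-response mapping, then decode by asking which candidate stimulus best predicts the observed activity. Everything here is an illustrative assumption (a linear model on synthetic data, not any lab's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 stimuli described by 10 features, responses of
# 50 simulated "voxels". The true mapping is linear plus noise.
features = rng.normal(size=(200, 10))
true_weights = rng.normal(size=(10, 50))
responses = features @ true_weights + 0.1 * rng.normal(size=(200, 50))

# Encoding: fit a linear stimulus-to-response model by least squares.
weights, *_ = np.linalg.lstsq(features, responses, rcond=None)

def identify(observed, candidates, weights):
    """Decode by identification: predict responses for each candidate
    stimulus and return the index whose prediction best correlates
    with the observed activity pattern."""
    predictions = candidates @ weights
    scores = [np.corrcoef(observed, p)[0, 1] for p in predictions]
    return int(np.argmax(scores))

# Observed activity evoked by stimulus 7 should identify stimulus 7.
print(identify(responses[7], features, weights))
```

This is the sense in which an accurate encoding model converts straightforwardly into a decoder: once the forward mapping is known, identification reduces to comparing predicted and observed activity.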
To control the three-dimensional shape of engineered tissue, researchers grow cells on tiny, sponge-like scaffolds. These devices can be implanted into patients or used in the lab to study tissue responses to potential drugs.
A team of researchers from MIT, Harvard University and Boston Children’s Hospital has now added a new element to tissue scaffolds: electronic sensors. These sensors, made of silicon nanowires, could be used to monitor electrical activity in the tissue surrounding the scaffold, control drug release or screen drug candidates for their effects on the beating of heart tissue.
Until now, the only cellular platforms that incorporated electronic sensors consisted of flat layers of cells grown on planar metal electrodes or transistors. Those two-dimensional systems do not accurately replicate natural tissue, so the research team set out to design a 3-D scaffold that could monitor electrical activity, allowing them to see how cells inside the structure would respond to specific drugs. The researchers built their new scaffold out of epoxy, a nontoxic material that can take on a porous, 3-D structure. Silicon nanowires embedded in the scaffold carry electrical signals to and from cells grown within the structure.
The team chose silicon nanowires for electronic sensors because they are small, stable, can be safely implanted into living tissue and are more electrically sensitive than metal electrodes. The nanowires, which range in diameter from 30 to 80 nanometers (about 1,000 times smaller than a human hair), can detect less than one-thousandth of a watt, which is the level of electricity that might be seen in a cell.
The team also grew blood vessels with embedded electronic sensors and showed that they could be used to measure pH changes within and outside the vessels. Such implantable devices could allow doctors to monitor inflammation or other biochemical events in patients who receive the implants. Ultimately, the researchers would like to engineer tissues that can not only sense an electrical or chemical event, but also respond to it appropriately — for example, by releasing a drug.
A technique that uses acoustic waves to sort cells on a chip may lead to miniature medical analytic devices that could make Star Trek's tricorder seem a bit bulky in comparison, according to a team of researchers. The device uses two beams of acoustic (sound) waves to act as acoustic tweezers and sort a continuous flow of cells on a dime-sized chip, said Tony Jun Huang, associate professor of engineering science and mechanics, Penn State. By changing the frequency of the acoustic waves, researchers can easily alter the paths of the cells. Most current cell-sorting devices allow the cells to be sorted into only two channels in one step, according to Huang. He said that another drawback of current cell-sorting devices is that cells must be encapsulated into droplets, which complicates further analysis.
The researchers first tested the device by sorting a stream of fluorescent polystyrene beads into three channels. Prior to turning on the transducer, the particles flowed across the chip unimpeded. Once the transducer produced the acoustic waves, the particles were separated into the channels. Following this experiment, the researchers sorted human white blood cells that were affected by leukemia. The leukemia cells were first focused into the main channel and then separated into five channels. The device is not limited to five channels, according to Huang. "We can do more," Huang said. "We could do 10 channels if we want, we just used five because we thought it was impressive enough to show that the concept worked."
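The frequency-to-channel idea can be sketched with a toy one-dimensional model: cells in a standing acoustic wave migrate toward a pressure node, and the node's position scales with the wavelength (sound speed divided by drive frequency), so retuning the frequency steers cells to a different outlet. All numbers here (sound speed, channel pitch, node geometry) are illustrative assumptions, not the actual device's parameters:

```python
SOUND_SPEED = 3990.0  # m/s, rough surface-acoustic-wave speed (assumed)

def node_position(frequency_hz):
    """Lateral position (metres) of the first pressure node in a toy
    1-D standing-wave model: a quarter wavelength from the origin."""
    wavelength = SOUND_SPEED / frequency_hz
    return wavelength / 4

def channel_for(frequency_hz, channel_pitch_m=100e-6, n_channels=5):
    """Map the node position onto one of n_channels outlet channels,
    each channel_pitch_m wide."""
    index = int(node_position(frequency_hz) / channel_pitch_m)
    return min(index, n_channels - 1)
```

In this toy model a higher drive frequency means a shorter wavelength, a node closer to the origin, and a lower-numbered outlet, which captures the article's point that tuning frequency alone selects the destination channel.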
Rapid, accurate genetic sequencing soon may be within reach of every doctor's office if recent research from the National Institute of Standards and Technology (NIST) and Columbia University's School of Engineering and Applied Science can be commercialized effectively. The team has demonstrated a potentially low-cost, reliable way to obtain the complete DNA sequences of any individual using a sort of molecular ticker-tape reader, potentially enabling easy detection of disease markers in a patient's DNA ("PEG-labeled nucleotides and nanopore detection for single molecule DNA sequencing by synthesis").
Genia Technologies is collaborating with scientists at Columbia University and Harvard University to develop a commercial single-molecule sequencer. The company has licensed a nanopore sequencing-by-synthesis technology developed by researchers at Columbia and the National Institute of Standards and Technology, which it plans to integrate with its nanopore chip platform, and is using polymerase fusion proteins developed at Harvard.
Genia plans to ship its first nanopore sequencing device to beta customers by the end of next year, and to bring a commercial product to market in 2014.
While sequencing the genome of an animal species for the first time is so common that it hardly makes news anymore, it is less well known that sequencing any single individual's DNA is an expensive affair, costing many thousands of dollars using today's technology. An individual's genome carries markers that can provide advance warning of the risk of disease, but you need a fast, reliable and economical way of sequencing each patient's genes to take full advantage of them. Equally important is the need to continually sequence an individual's DNA over his or her lifetime, because the genetic code can be modified by many factors.
Nanopores and their interaction with polymer molecules have been a longtime research focus of NIST scientist John Kasianowicz. His group collaborated with a team led by Jingyue Ju, director of Columbia's Center for Genome Technology and Biomolecular Engineering, which came up with the idea for tagging DNA building blocks for single molecule sequencing by nanopore detection. The ability to discriminate between the polymer tags was demonstrated by Kasianowicz, his NIST colleague Joseph Robertson, and others. Columbia University has applied for patents for the commercialization of the technology.
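The tagging scheme lends itself to a simple illustration: each of the four nucleotides carries a distinct polymer tag, and the characteristic current blockade each tag produces in the nanopore identifies the base just incorporated, like a molecular ticker tape. A minimal base-calling sketch, with entirely made-up current levels (the real PEG tags and blockade depths differ):

```python
# Hypothetical reference blockade currents, one per tagged nucleotide.
# These picoampere values are illustrative, not measured.
TAG_CURRENT_PA = {"A": 12.0, "C": 18.0, "G": 24.0, "T": 30.0}

def call_base(measured_pa):
    """Call the base whose reference blockade current is nearest to
    the measured value."""
    return min(TAG_CURRENT_PA, key=lambda b: abs(TAG_CURRENT_PA[b] - measured_pa))

def call_sequence(blockade_trace):
    """Turn a series of blockade-current events into a base sequence."""
    return "".join(call_base(i) for i in blockade_trace)

print(call_sequence([11.5, 18.4, 29.0, 23.1]))  # -> "ACTG"
```

Because each tag is engineered to produce a well-separated current level, a nearest-level classifier of this kind is what makes the very low per-base error rates quoted below plausible.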
Kasianowicz estimates that the technique could identify a DNA building block with extremely high accuracy at an error rate of less than one in 500 million, and the necessary equipment would be within the reach of any medical provider. "The heart of the sequencer would be an operational amplifier that would cost much less than $1,000 for a one-time purchase," he says, "and the cost of materials and software should be trivial."
This planet can't protect us forever. Sooner or later, there'll be a catastrophe that renders this world uninhabitable for humans. And when that day comes, we'll need to know already how to live in space.
Physicist Stephen Hawking suggests that our ongoing efforts to colonize space could ultimately save humanity from extinction. As it stands, Earth is our only biosphere — all our eggs are currently in one basket. If something were to happen to either our planet or our civilization, it would be vital to know that we could sustain a colony somewhere else.
And the threats are real. The possibility of an asteroid impact, nuclear war, a nanotechnological disaster, or severe environmental degradation make the need for off-planet habitation extremely urgent. And given our ambitious future prospects, including the potential for ongoing population growth, we may very well have no choice but to leave the cradle.
Back in 2000, NASA completed a $200 million study called the "Roadmap to Settlement" in which they described the potential for a moon-based colony in which habitats could be constructed several feet beneath the lunar surface (or covered within an existing crater) to protect colonists from high-energy cosmic radiation. They also outlined the construction of an onsite nuclear power plant, solar panel arrays, and a number of methods for extracting carbon, silicon, aluminium and other materials from the surface. As NASA's roadmap suggests, a colony on the Moon could help us prepare for a mission to Mars. It would probably be wise to set up, test, and train a self-sustaining colony a little closer to home before we take that massive leap to Mars.
And indeed, Mars holds considerably more potential than the Moon. It features a solar day of 24 hours and 39 minutes, and a surface area about 28.4% of Earth's (roughly equal to Earth's total land area). The Red Planet also has an axial tilt of 25 degrees (compared to Earth's 23.4 degrees), resulting in similar seasonal shifts (though they're twice as long, given that Mars's year is 1.88 Earth years). And most importantly, Mars has an existing atmosphere, significant mineral diversity (such as ore and nickel-iron), and water. In fact, it has a lot of water: recent analysis suggests that Mars could have as much water underground as Earth.