The afternoon of May 6, 2010 was among the strangest in economic history. Starting at 2:42 p.m. EDT, the Dow Jones stock index fell 600 points in just 6 minutes. Its nadir represented the deepest single-day decline in that ...
Handheld instrument does real-time nucleic acid testing to check if you're getting the fish you paid for.
Appreciate a well-cooked tuna steak or salmon wrapped in a sushi roll? There’s a good chance the fish sitting on your plate or in your grocery store’s seafood case is not what its label says it is, according to the ocean conservancy group Oceana. So you could be paying a premium for red snapper that’s really just plain old tilapia.
University of South Florida scientists have now made a handheld device that could help fight such seafood fraud. The instrument genetically verifies whether fish being called grouper is really grouper or less expensive, potentially harmful substitutes like catfish or mackerel. A quarter of grouper in the United States is mislabeled, according to Oceana, making it the fourth most commonly mislabeled fish in the country. Snapper was the most commonly mislabeled.
The Oceana study found that 33 percent of the 1200-plus seafood samples taken nationwide were mislabeled. This seafood fraud costs fishermen, the U.S. seafood industry, and consumers $20–25 billion annually, it calculates. In addition, fraud allows illegally caught fish to slip into the legal seafood trade and prevents consumers from making ecologically-friendly choices.
Today’s DNA barcoding methods for seafood identification rely on sequencing a sample’s DNA. While the price of gene sequencing has dropped in recent years, accurate genetic identification still takes days and expensive lab equipment. The new device, on the other hand, purifies and amplifies a seafood sample’s RNA, or ribonucleic acid. The assay is simpler and works within 90 minutes. USF marine science professor John Paul and his colleagues have developed such assays to identify several microorganisms, and have now applied the technology to seafood identification.
The researchers described the technology and its application in the journal Food Control. They are now developing assays for other commercially relevant species, and they’re also commercializing it through Tampa-based spinoff PureMolecular LLC. That company plans to start selling the machines for US $2000 by this summer, Reuters reports.
Astronomers have discovered an outburst from a star thought to be in the earliest phase of its development.
Using data from orbiting observatories, including NASA's Spitzer Space Telescope, and ground-based facilities, an international team of astronomers has discovered an outburst from a star thought to be in the earliest phase of its development. The eruption, scientists say, reveals a sudden accumulation of gas and dust by an exceptionally young protostar known as HOPS 383.
Stars form within collapsing fragments of cold gas clouds. As the cloud contracts under its own gravity, its central region becomes denser and hotter. By the end of this process, the collapsing fragment has transformed into a hot central protostar surrounded by a dusty disk roughly equal in mass, embedded in a dense envelope of gas and dust. Astronomers call this a "Class 0" protostar.
"HOPS 383 is the first outburst we've ever seen from a Class 0 object, and it appears to be the youngest protostellar eruption ever recorded," said William Fischer, a NASA Postdoctoral Program Fellow at NASA's Goddard Space Flight Center in Greenbelt, Maryland.
The Class 0 phase is short-lived, lasting roughly 150,000 years, and is considered the earliest developmental stage for stars like the sun. A protostar has not yet developed the energy-generating capabilities of a sun-like star, which fuses hydrogen into helium in its core. Instead, a protostar shines from the heat energy released by its contraction and by the accumulation of material from the disk of gas and dust surrounding it. The disk may one day develop asteroids, comets and planets.
Because these infant suns are thickly swaddled in gas and dust, their visible light cannot escape. But the light warms dust around the protostar, which reradiates the energy in the form of heat detectable by infrared-sensitive instruments on ground-based telescopes and orbiting satellites.
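The shift from visible light to detectable infrared follows from blackbody physics: warm dust re-emits at a peak wavelength set by its temperature. A minimal sketch using Wien's displacement law (the dust temperatures below are illustrative assumptions, not values from the article):

```python
def wien_peak_wavelength_um(temp_k):
    """Wien's displacement law: the blackbody emission peak sits at
    lambda_max = b / T, with b = 2898 micrometer-kelvins."""
    return 2898.0 / temp_k

# Envelope dust at tens to hundreds of kelvins peaks deep in the infrared,
# far from the ~0.5 um visible peak of a ~5800 K sun-like surface.
print(wien_peak_wavelength_um(100))   # ~29 um (far infrared)
print(wien_peak_wavelength_um(5800))  # ~0.5 um (visible)
```

This is why infrared-sensitive instruments can see deeply embedded protostars that optical telescopes cannot.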
HOPS 383 is located near NGC 1977, a nebula in the constellation Orion and a part of its sprawling star-formation complex. Located about 1,400 light-years away, the region constitutes the most active nearby "star factory" and hosts a treasure trove of young stellar objects still embedded in their natal clouds.
A method of using light to activate or suppress neurons without requiring genetic modification (as in optogenetics) has been developed by scientists from the University of Chicago and the University of Illinois at Chicago. The new technique, described in the journal Neuron, uses targeted, heated gold nanoparticles. The researchers say it’s a significant technological advance with potential advantages over current optogenetic methods, including possible use in the development of therapeutics for diseases such as macular degeneration.
“This is effectively optogenetics without genetics,” said study senior author Francisco Bezanilla, PhD, Lillian Eichelberger Cannon Professor of biochemistry and molecular biology at the University of Chicago. “Many optogenetic experimental designs can now be applied to completely normal tissues or animals, greatly extending the scope of these research tools and possibly allowing for new therapies involving neuronal photostimulation.”
Optogenetics, the use of light to control neural activity, is a powerful technique with widespread use in neuroscience research. It involves genetically engineered neurons that express a light-responsive protein originally discovered in algae. This process allows scientists to stimulate individual neurons as well as neural networks with precise flashes of light. However, since optogenetics is reliant on genetic modification, its use is primarily limited to relatively few model organisms.
Bezanilla and his colleagues have previously shown that normal, non-genetically modified neurons can be activated by heat generated from pulses of near-infrared light. But this method lacked specificity and could damage cells. To improve the technique, they used gold nanoparticles — spheres only 20 nanometers in diameter. When stimulated with visible light, spherical gold nanoparticles absorb and convert light energy into heat. This heating effect can activate unmodified neurons. However, nanoparticles must be extremely close to a cell to produce any effect. Since the nanoparticles diffuse quickly, or get washed away in a neuron’s immediate environment, their efficacy is short-lived.
To get nanoparticles to stick, Bezanilla and his team coupled them to a synthetic molecule based on Ts1, a scorpion neurotoxin, which binds to sodium channels without blocking them. Neurons treated with Ts1-coupled nanoparticles in culture were readily activated by light. Untreated neurons were non-responsive.
Importantly, treated neurons could still be stimulated even after being continuously washed for 30 minutes, indicating that the nanoparticles were tightly bound to the cell surface. This also minimized potentially harmful elevated temperatures, as excess nanoparticles were washed away.
Neurons treated with Ts1-coupled nanoparticles could be stimulated repeatedly with no evidence of cell damage. Some individual neurons, targeted with millisecond pulses of light, produced more than 3,000 action potentials (spikes) over the span of 30 minutes, with no reduction in efficacy. In addition to cultured cells, Ts1-coupled nanoparticles were tested on complex brain tissue using thin slices of mouse hippocampus. In these experiments, the researchers were able to activate groups of neurons and then observe the resulting patterns of neural activity.
“The technique is easy to implement and elicits neuronal activity using light pulses. Therefore, stimulating electrodes are not required,” Bezanilla said. “Furthermore, with differently-shaped nanoparticles it can work in near-infrared as well as in visible wavelengths, which has many practical advantages in living animals. Thus far, most optogenetic tools have been limited to visible wavelengths.”
Many scientists believe tiny Martian microbes could flourish beneath the red planet's ice-cemented polar caps, or even in briny puddles of ultra-salty water—which they believe may be locked under Mars' soil in seasonal flows. But here's one important question about the possibility of life on Mars: What the heck does it eat?
According to Gary King, a biologist at Louisiana State University, the surprising answer could be a scentless atmospheric gas: carbon monoxide. In a new study in the journal PNAS, King has concluded that enough of the gas seeps into Mars's soil from the planet's atmosphere to feed hardy lifeforms. Such organisms could resemble Alkalilimnicola ehrlichii, a carbon monoxide-munching microbe found in California in 2007.
"This is a very important piece of work for Mars astrobiology," says Chris McKay, an astrobiologist at NASA who was not involved in King's research. "What this research means is that we now know of an energy source for microbial systems that could exist anywhere near the surface of Mars."
As McKay explains it, if you're going to have Earth-like life on Mars today, then you need to account for three things: nutrients, water, and energy. "Nutrients aren't really an issue," he says. "Mars has an abundance of carbon dioxide, nitrates, atmospheric nitrogen, and small amounts of many other nutrients. As for water, the theory that there could be these brines of saltwater under the soil has been around for years… What has always needed serious explanation is a potential source of energy. Now we have one."
For the most part, scientists had discounted the idea that hypothetical Martian microbes could get their energy from the planet's atmosphere. The reason is simple: Mars's atmosphere is incredibly thin and dominated by carbon dioxide, which is not a viable energy source. Only a tiny sliver of Mars's total atmosphere is carbon monoxide. It's created as sunlight breaks an oxygen atom off from atmospheric carbon dioxide.
With the emergence of wearable electronics that monitor fitness and health, there is a growing need for more flexible light-emitting devices. One option that researchers have been interested in is developing fabrics with integrated light-emitting devices. Unfortunately, fabrics themselves are not a suitable surface for current light-emitting materials. However, a team of scientists have found a way around this issue by integrating the light-emitting devices directly into fabrics using a new technology: light-emitting device fibers.
The research team, based in China, worked with polymer light-emitting electrochemical cells (PLECs). Like many other light-emitting devices, PLECs have a structure that is composed of two metal electrodes connected to a thin organic layer that acts as a semiconductor. Because PLECs have mobile ions incorporated into the semiconductor, they have many benefits compared to other light-emitting diodes (LEDs): low operating voltage, high efficiency in converting electrons to photons, and high power efficiency. PLECs are also a good option because they do not require the use of metals that are sensitive to air and they can be used on rougher surfaces; these characteristics make them suitable for large-scale manufacturing.
These fiber-shaped PLECs have a coaxial structure with four layers. Using solution-based processing, a steel wire, which acts as the base of the fiber, is dip-coated with a thin layer of ZnO nanoparticles. This layer has two key functions: protecting the light-emitting layer that's applied next, and decreasing current leakage, thus enhancing current efficiency.
Next, the electroluminescent polymer layer is deposited onto the wire using dip-coating. Finally, a sheet of aligned carbon nanotubes is wrapped around the bundle using a dry-drawn form of spinnable carbon nanotubes. Because the carbon nanotubes were highly aligned, they provided the fiber with high electrical conductivities. Imaging revealed that the fibers had a uniform diameter and a smooth outer surface.
The scientists who created these fibers determined the lifetime of the devices. They found that the fibers gradually light up over a 21-minute period and gradually dim over a four-hour period; in these studies, the light emitted by the fibers was blue. The fiber lit up when a voltage of 5.6V was applied and reached a peak intensity at 13V. When the fiber is pre-charged, it displays a rapid turn-on response that is similar to conventional LEDs.
The brightness of the light emitted by the fibers was almost entirely independent of viewing angle. When the fibers were bent, they maintained their brightness above 90 percent and no obvious damage was observed. Though only blue light was explored in these studies, the team believes other colors could be displayed as well.
UIC researchers created an electromechanical device--a humidity sensor--on a bacterial spore. They call it NERD, for Nano-Electro-Robotic Device. The report is online at Scientific Reports, a Nature open access journal.
"We've taken a spore from a bacteria, and put graphene quantum dots on its surface--and then attached two electrodes on either side of the spore," said Vikas Berry, UIC associate professor of chemical engineering and principal investigator on the study.
"Then we change the humidity around the spore," he said. When the humidity drops, the spore shrinks as water is pushed out. As it shrinks, the quantum dots come closer together, increasing their conductivity, as measured by the electrodes. "We get a very clean response--a very sharp change the moment we change humidity," Berry said. The response was 10 times faster, he said, than a sensor made with the most advanced man-made water-absorbing polymers.
A sensitive and reproducible electron-tunneling width modulation of 1.63 nm within a network of GQDs chemically secured on a spore was achieved via sporal hydraulics, with a driving force of 299.75 Torr (21.7% water at GQD junctions). The electron-transport activation energy and the Coulomb blockade threshold for the GQD network were 35 meV and 31 meV, respectively, while the inter-GQD capacitance increased 1.12-fold at maximum hydraulic force.
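The sharp humidity response can be pictured with a toy tunneling model: conductance between neighboring quantum dots falls off exponentially with the gap between them, so a nanometer-scale shrinkage of the spore produces a large signal. A minimal sketch (the gap sizes and decay constant are illustrative assumptions, not values from the paper):

```python
import math

def tunneling_conductance(gap_nm, g0=1.0, beta_per_nm=10.0):
    """Toy 1D tunneling model: G = G0 * exp(-beta * d).
    beta ~ 10 per nm is a typical order-of-magnitude decay constant
    for tunneling junctions (assumed here, not measured in the paper)."""
    return g0 * math.exp(-beta_per_nm * gap_nm)

# Humid: swollen spore, wider inter-dot gaps.
# Dry: spore shrinks and the gaps narrow by ~1.63 nm (the reported modulation).
g_humid = tunneling_conductance(2.0)
g_dry = tunneling_conductance(2.0 - 1.63)
print(g_dry > g_humid)  # drying sharply increases conductance
```

The exponential gap dependence is why the sensor responds so much faster and more cleanly than bulk water-absorbing polymers, which change resistance only gradually.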
This is the first example of nano/bio interfacing with spores and will lead to the evolution of next-generation bio-derived microarchitectures, probes for cellular/biochemical processes, biomicrorobotic-mechanisms, and membranes for micromechanical actuation.
A team of researchers working at Harvard University has taken yet another step toward bringing to life a reasonable facsimile of a woolly mammoth—a large, hairy elephant-like beast that went extinct approximately 3,300 years ago. The team's work has not yet been published because, as team lead George Church recently told The Sunday Times, they believe they have more work to do before they write up their results.
Church is quick to point out that his team is not cloning the mammoth; instead they are rebuilding the genome of the ancient animal by studying its DNA, replicating it, and then inserting the copy into the genome of an Asian elephant—the closest modern-day equivalent. They are not bringing forth a new mammoth yet either—all of their work is confined to simple cells in their lab. What they have done, however, is build healthy living elephant cells with mammoth DNA in them. Their work is yet another step toward that ultimate goal: realizing the birth of a woolly mammoth that is as faithful to the original as is humanly possible.
Talk of cloning a mammoth began not long after scientists learned how to clone at all—mammoth carcasses have been found in very cold places that preserved remains, including DNA. But not everyone has been on board with the idea: some claim it is stepping into God's territory, while others say it seems ridiculous considering all of the species nearing extinction, including elephants themselves. Why not redirect the financial resources now going toward bringing back something extinct to saving those that are still here?
The technique the team is using is called CRISPR, which allows exact copies of genes to be reproduced—in this case 14 mammoth genes, which are then inserted into elephant genes. As Church explains, the team prioritizes which genes are replicated and inserted based on factors such as hairiness, ear size, and subcutaneous fat, which the animal needed to survive in its harsh, cold environment.
Not clear as yet is when or if the team at Harvard has plans to produce an actual living mammoth, or if they will leave that to other teams working on similar projects.
Australian scientists have uncovered what is believed to be the largest asteroid impact zone ever found on Earth, in central Australia. A team led by Dr Andrew Glikson from the Australian National University (ANU) said two ancient craters found in central Australia were believed to have been caused by one meteorite that broke in two. "They appear to be two large structures, with each of them approximately 200 kilometers across," Dr Glikson said. "So together they would form a 400-kilometer structure, which is the biggest we know of anywhere in the world. The consequences are that it could have caused a large mass extinction event at the time, but we still don't know the age of this asteroid impact and we are still working on it."
The material at both impact sites appears to be identical which has led researchers to believe they are from the same meteorite. Over millions of years the obvious craters have disappeared, but geothermal research drilling revealed the secret history hidden under an area including South Australia, Queensland and the Northern Territory.
"The next step will be more research, hopefully deep crust seismic traverses," Dr Glikson said. "Under the Cooper Basin and Warburton Basin we don't have that information; our seismic information covers up to five kilometers, along with some other data such as seismic tomography and magnetic data. The mantle underneath has been up-domed, which is a very promising indication of a major event." The research has been published in the geology journal Tectonophysics.
Multicolored fluorescent proteins have allowed the color-coding of cancer cells growing in vivo, enabling host to be distinguished from tumor with single-cell resolution. Non-invasive imaging with fluorescent proteins has let the dynamics of metastatic cancer be followed in real time in individual animals, and has allowed real-time determination of the efficacy of candidate antitumor and antimetastatic agents in mouse models. Using fluorescent proteins to differentially label the nucleus and cytoplasm of cancer cells makes it possible to visualize their nuclear–cytoplasmic dynamics in vivo, including mitosis, apoptosis, cell-cycle position, and the differential behavior of nucleus and cytoplasm during cancer-cell deformation and extravasation. Recent applications link fluorescent proteins with cell-cycle-specific proteins so that cells change color from red to green as they transit from G1 to S phase. With the macro- and micro-imaging technologies described here, essentially any in vivo process can be imaged, giving rise to the new field of in vivo cell biology using fluorescent proteins.
UK and Singapore researchers have simulated neural networks and synapses in the brain using optical pulses as information carriers over fibers made from light-sensitive chalcogenide glass. The research, published in Advanced Optical Materials, has the potential to allow faster and smarter optical neuromorphic (brain-like) computers capable of learning, the researchers say.
Compared to biological systems, today’s computers are “up to a billion times less efficient — simulating 5 seconds of brain activity takes 500 seconds and needs 1.4 MW of power,” they note. The researchers, from the Optoelectronics Research Centre (ORC) at the University of Southampton, UK, and Centre for Disruptive Photonic Technologies (CDPT) at Nanyang Technological University (NTU), Singapore, developed a proof-of-concept system that demonstrated optical equivalents of brain functions. These include holding a neural resting state and simulating the changes in electrical activity in a nerve cell as it is stimulated.
The changing properties of the glass act as the varying electrical activity in a nerve cell, and light provides the stimulus to change these properties. This enables switching a light signal, the equivalent to a nerve cell firing.
The research paves the way for scalable brain-like computing systems that enable “photonic neurons” with ultrafast signal transmission speeds, higher bandwidth, and lower power consumption than their biological and electronic counterparts, including “non-Boolean computing and decision-making paradigms that mimic brain functionalities,” the researchers say.
A software update will give Tesla Model S cars the ability to start driving themselves in “autopilot” mode on “major roads” like highways this summer, Tesla Motors chief executive Elon Musk announced on March 19, 2015. He also said Tesla had been testing its autopilot mode on a route from San Francisco to Seattle, largely unassisted, and that the cars will be able to park themselves in a private garage and be summoned by smartphone.
Taking it a step further, Musk predicted at NVidia’s annual developers conference on Tuesday, March 17, 2015, that humans driving cars will eventually be outlawed. “It’s too dangerous,” he said. “You can’t have a person driving a two-ton death machine.” But he admitted that “the hardest part of helping cars drive themselves is what happens when vehicles are traveling between 15 and 50 miles per hour.”
Other updates announced today include Automatic Emergency Braking (engages in the event of an unavoidable collision to reduce the severity of impact), Blind Spot Warning (alerts you when nearby vehicles are dangerously close), Side Collision Warning, and Valet Mode (limits the car's speed, locks the glove box and trunk, and hides personal information). In addition, the software will improve the audio system's sound quality, radio tuning, and active cruise control. Other luxury cars have most of these features, but cannot gain them through over-the-air updates.
“We really designed the Model S to be a very sophisticated computer on wheels,” Musk said. “With Tesla’s regular over-the-air software updates, Model S actually improves while you sleep,” the Tesla blog explains. “When you wake up, added functionality, enhanced performance, and improved user experience make you feel like you are driving a new car.”
It is one of nature’s most spectacular displays and now scientists have shown how the chameleon changes color. A new study has found that the lizards possess a layer of skin cells that contain floating nanocrystals. These tiny crystals are roughly evenly spaced throughout the cell and this spacing determines the wavelength of light that the cells reflect. The latest research shows that chameleons switch color from green to red by actively changing the spacing between these tiny cellular crystals.
Prof Michel Milinkovitch and his team at the University of Geneva cracked the problem after years of studying the panther chameleon (Furcifer pardalis), native to Madagascar, which has one of the most impressive color displays in the chameleon kingdom. When a male encounters a male competitor or a potentially receptive female, it shifts the background color of its skin from green to yellow, its blue patterning turns white, and its red becomes brighter.
“This happens within minutes of it seeing another male,” said Milinkovitch. In the study, published in the journal Nature Communications, the scientists studied the skin of the lizards using spectroscopy. They found that beneath several layers of pigmented skin cells, the chameleons have a layer of cells called iridophores, containing nanocrystals made of guanine, one of the four key components of DNA.
The guanine nanocrystals are arranged in a lattice throughout the cell, the spacing of which determines the cell’s color. When the chameleon is calm, the crystals were found to be organised into a dense network, reflecting blue wavelengths most strongly. When excited, the chameleon was found to loosen its lattice of nanocrystals by about 30%, allowing the reflection of yellows or reds. “They’re basically pulling apart or squashing together the lattice,” said Milinkovitch.
The scientists are yet to work out how chameleons cause this change, but it could be due to cells shrinking or expanding, giving the crystals more or less space to fill. The chameleon’s visible color is also determined by the upper layers of skin cells, which the light is filtered through, containing yellow and red pigment. Previously scientists had shown that chameleons are also able to change their hue through the migration of melanin in and out of cells, turning them from pale to dark green, for instance. However, until now it was a mystery how they managed to completely switch color from green to red in a matter of minutes.
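The link between lattice spacing and color can be sketched with a first-order Bragg reflection estimate, lambda ≈ 2·n·d at normal incidence. The lattice spacing and effective refractive index below are illustrative assumptions, not measurements from the paper:

```python
def reflected_wavelength_nm(spacing_nm, n_eff=1.55):
    """First-order Bragg reflection at normal incidence:
    lambda = 2 * n_eff * d. n_eff is an assumed effective index
    for guanine nanocrystals embedded in cytoplasm."""
    return 2 * n_eff * spacing_nm

d_calm = 150.0                                       # assumed spacing, nm
lam_calm = reflected_wavelength_nm(d_calm)           # 465 nm -> blue
lam_excited = reflected_wavelength_nm(d_calm * 1.3)  # ~605 nm -> orange-red
```

In this simple model, loosening the lattice by the reported ~30% is enough to push the reflection peak from blue into the orange-red part of the spectrum, matching the observed color switch.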
Imagine being able to download a full-length 8GB HD movie to your phone in six seconds (versus seven minutes over 4G or more than an hour on 3G) and video chats so immersive that it will feel like you can reach out and touch the other person right through the screen.
That’s the vision for the 5G concept — the next generation of wireless networks — presented at the Mobile World Congress show last week, according to re/code.
Here’s what it will offer:
“Ulrich Dropmann, head of industry environment networks at Nokia, gave a scenario where you might be cruising in your driverless car when, unbeknownst to you, a crash has just occurred up the road,” says re/code. “With 5G, sensors placed along the road would be able to instantly relay that information back to your car (this is where having low latency is important), so it could brake earlier and avoid another accident.”
So when might it be here? “The most optimistic targets would see the first commercial network up and running by 2020, but even that may be too optimistic. As with LTE, it will take years for the network to become widespread.”
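The headline download times imply the average throughput each network would need. A quick back-of-the-envelope check, using the file size and times quoted above (protocol overhead ignored):

```python
def avg_throughput_gbps(size_gb, seconds):
    """Average throughput needed to move size_gb gigabytes in the
    given time: 8 bits per byte, protocol overhead ignored."""
    return size_gb * 8.0 / seconds

print(round(avg_throughput_gbps(8, 6), 1))       # 5G claim: 10.7 Gbps
print(round(avg_throughput_gbps(8, 7 * 60), 3))  # 4G: 0.152 Gbps (~152 Mbps)
```

So the six-second figure implies roughly a 70-fold jump in sustained throughput over the quoted 4G time.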
Seth Robertson and Viet Tran, engineering students at George Mason University, have come up with a new way to put out a fire, and they have built their very own practical piece of fire-fighting technology.
Their new fire-fighting solution works with sound: by directing low-frequency sound waves in the 30-to-60-hertz range at the flames, you can separate the oxygen from the fuel. Fire has a triangle of needs: heat, fuel, and oxygen. Simply taking any one of these away puts the fire out. What the sound waves do to this triangle is push the air, and the oxygen it carries, back and forth at the molecular level, keeping it away from the flame; the fire flickers like a cat going after a laser-pointer light, and that is all it takes to cut off its oxygen.
But the inventors have even bigger dreams for their new flagship, The Washington Post reports: “Robertson and Tran envision their technology being used to put out fires in homes — and in the wild. If properly scaled, sound-wave extinguishers would eliminate the need to douse forests in chemicals or waste untold gallons of water.” But that’s still a long way away.
Employing an ingenious microfluidic design that combines chemical and mechanical properties, a team of Harvard scientists has demonstrated a new way of detecting and extracting biomolecules from fluid mixtures. The approach requires fewer steps, uses less energy, and achieves better performance than several techniques currently in use and could lead to better technologies for medical diagnostics and chemical purification.
The biomolecule sorting technique was developed in the laboratory of Joanna Aizenberg, Amy Smith Berylson Professor of Materials Science at Harvard School of Engineering and Applied Sciences (SEAS) and Professor in the Department of Chemistry and Chemical Biology. Aizenberg is also co-director of the Kavli Institute for Bionano Science and Technology and a core faculty member at Harvard’s Wyss Institute for Biologically Inspired Engineering, leading the Adaptive Materials Technologies platform there.
The new microfluidic device, described in a paper appearing today in the journal Nature Chemistry, is composed of microscopic “fins” embedded in a hydrogel that is able to respond to different stimuli, such as temperature, pH, and light. Special DNA strands called aptamers, which under the right conditions bind to a specific target molecule, are attached to the fins, which move the cargo between two chemically distinct environments. Modulating the pH levels of the solutions in those environments triggers the aptamers to “catch” or “release” the target biomolecule.
After using computer simulations to test their novel approach, in collaboration with Prof. Anna C. Balazs from the University of Pittsburgh, Aizenberg’s team conducted proof-of-concept experiments in which they successfully separated thrombin, an enzyme in blood plasma that causes the clotting of blood, from several mixtures of proteins. Their research suggests that the technique could be applicable to other biomolecules, or used to determine chemical purity and other characteristics in inorganic and synthetic chemistry.
“Our adaptive hybrid sorting system presents an efficient chemo-mechanical transductor, capable of highly selective separation of a target species from a complex mixture—all without destructive chemical modifications and high-energy inputs,” Aizenberg said. “This new approach holds promise for the next-generation, energy-efficient separation and purification technologies and medical diagnostics.”
With 3D printers everywhere, making everything from Yoda statues to bionic body parts, this company is using 3D printing to make new body tissue. BioBots, a team from the University of Pennsylvania, does just that. They’ve developed a $5,000 3D printer that actually prints functional living tissue. The company just snagged the Most Innovative Company at SXSW’s Accelerator Awards.
And while most of the living tissue BioBots is creating these days is for drug research — to make it less expensive and take animals out of the mix — one day, it could print new organs for transplants. “If we could somehow reveal the failures before testing drugs on people, we would be able to identify false positives much earlier in the drug development process,” CEO and co-founder Danny Cabrera told Forbes. “The problem is in animal testing – mice are not humans, and tests on animals often fail to mimic human diseases or predict how the human body responds to new drugs.
“The Holy Grail is to develop fully functioning replacement organs out of a patient’s own cells, eliminating the organ waiting list, but in the meantime we’ll settle for getting more drugs approved by the FDA at a significantly lower cost on an accelerated time scale, improving the quality of life for millions of people around the world.”
Before dinosaurs came along, one of Earth’s top predators was a salamander-like amphibian that lived in tropical areas of the supercontinent Pangaea. Fossils unearthed from a 30- to 40-centimeter-thick bone bed in southern Portugal suggest the creature was more than 2 meters long, weighed as much as 100 kilograms, and had a broad flat head the size and shape of a toilet seat. The newly described species (artist's representation shown), which lived between 220 million and 230 million years ago, was one of the largest in a group of amphibians known as metoposaurs and is the first known in this region from well-preserved fossils, the researchers report online today in the Journal of Vertebrate Paleontology.
The species has been dubbed Metoposaurus algarvensis to honor the Algarve region of Portugal, where the fossils were unearthed. (Even though the genus name contains the Greek word saur, which translates as “lizard,” these creatures and their kin were amphibians.) The 4-square-meter area of the bone bed already excavated has yielded 10 skulls and hundreds of remains, suggesting that the creatures became concentrated in one area and then died when the lake they inhabited dried up, the researchers say. Because the beasts had spindly limbs probably insufficient to support their weight, they likely remained in the water most of the time, feeding on fish but possibly snacking on small ancestors of dinosaurs or mammals that wandered too near the waterside. Similar bone beds that include other species of metoposaurs have been found in what are now Africa, Europe, and North America—a hint that climate at the time was highly unpredictable and prone to lengthy droughts.
A team of scientists has for the first time successfully demonstrated the non-local collapse of a particle’s wave function in an experiment using a single particle.
The scientists demonstrated the effect by splitting a single photon between their labs in Japan and Australia; Albert Einstein had dismissed such a phenomenon as implausible.
Almost 90 years ago, he used single-particle entanglement as evidence that quantum mechanics was incorrect, deriding non-local wave function collapse as “spooky action at a distance”.
“Einstein never accepted orthodox quantum mechanics and the original basis of his contention was this single-particle argument,” explained Professor Howard Wiseman, director of Griffith University’s Centre for Quantum Dynamics.
The research was published recently in Nature Communications.
Reference: Fuwa M, Takeda S, Zwierz M, Wiseman HM, Furusawa A. Experimental proof of nonlocal wavefunction collapse for a single particle using homodyne measurements. Nature Communications 06 March 2015. doi:10.1038/ncomms7665.
Researchers at The Ohio State University have discovered how to control heat with a magnetic field.
In the March 23 issue of the journal Nature Materials, they describe how a magnetic field roughly as strong as that of a medical MRI scanner reduced the amount of heat flowing through a semiconductor by 12 percent. The study is the first ever to prove that acoustic phonons—the elemental particles that transmit both heat and sound—have magnetic properties.
"This adds a new dimension to our understanding of acoustic waves," said Joseph Heremans, Ohio Eminent Scholar in Nanotechnology and professor of mechanical engineering at Ohio State. "We've shown that we can steer heat magnetically. With a strong enough magnetic field, we should be able to steer sound waves, too."
People might be surprised enough to learn that heat and sound have anything to do with each other, much less that either can be controlled by magnets, Heremans acknowledged. But both are expressions of the same form of energy, quantum mechanically speaking. So any force that controls one should control the other.
"Essentially, heat is the vibration of atoms," he explained. "Heat is conducted through materials by vibrations. The hotter a material is, the faster the atoms vibrate. Sound is the vibration of atoms, too," he continued. "It's through vibrations that I talk to you, because my vocal cords compress the air and create vibrations that travel to you, and you pick them up in your ears as sound."
The name "phonon" sounds a lot like "photon." That's because researchers consider them to be cousins: Photons are particles of light, and phonons are particles of heat and sound. But researchers have studied photons intensely for a hundred years—ever since Einstein discovered the photoelectric effect. Phonons haven't received as much attention, and so not as much is known about them beyond their properties of heat and sound.
The implication: In materials such as glass, stone, plastic—materials that are not conventionally magnetic—heat can be controlled magnetically, if you have a powerful enough magnet. The effect would go unnoticed in metals, which transmit so much heat via electrons that any heat carried by phonons is negligible by comparison. There won't be any practical applications of this discovery any time soon, however: 7-tesla magnets like the one used in the study don't exist outside of hospitals and laboratories, and the semiconductor had to be chilled to -450 degrees Fahrenheit (-268 degrees Celsius)—very close to absolute zero—to make the atoms in the material slow down enough for the phonons' movements to be detectable.
Scientists have compared for the first time the genomes of the two bacteria species that cause leprosy. The study shows how the two species evolved from a common ancestor around 13.9 million years ago, and offers new insights into their biology that could lead to new treatments.
Leprosy is a chronic infection of the skin, peripheral nerves, eyes and mucosa of the upper respiratory tract, affecting over a quarter million people worldwide. Its symptoms can be gruesome and devastating, as the bacteria reduce sensitivity in the body, resulting in skin lesions, nerve damage and disabilities. Until recently, leprosy was attributed to a single bacterium, Mycobacterium leprae; we now suspect that its close relative, Mycobacterium lepromatosis, might cause a rare but severe form of leprosy. EPFL scientists have analyzed for the first time the complete genome of M. lepromatosis, and compared it to that of the major leprosy-causing bacterium.
Published in PNAS, the study reveals the origin and evolutionary history of both bacteria, and offers new insights into their biology, global distribution, and possibly treatment. Along with its mutilating symptoms, leprosy also carries a stigma, turning patients into social outcasts. Although we have been able to push back the disease with antibiotics, leprosy remains endemic in many developing countries today.
Leprosy can manifest itself in various forms, all thought to be caused by the bacterium M. leprae. But in 2008, a study showed considerable evidence that another species of bacterium, M. lepromatosis, causes a distinct and aggressive form of the disease called “diffuse lepromatous leprosy”, found in Mexico and the Caribbean.
The lab of Stewart Cole at EPFL’s Global Health Institute carried out a genome-wide investigation on M. lepromatosis. This complex and computer-heavy technique looks at the bacterium’s entire DNA, locating its genes along the sequence. Because M. lepromatosis cannot be grown in the lab and animal models for this version of leprosy do not exist yet, the scientists used an infected skin sample from a patient in Mexico to obtain the bacterium’s genetic material.
After extracting the DNA from the entire sample, the researchers had to separate the bacterial DNA from the patient’s. To do this, they used two genetic techniques: one that enriched the bacterium’s DNA and another that depleted the human DNA. With the bacterium’s DNA isolated, the researchers were able to sequence and read it. Once they had the complete sequence of the bacterium’s genome, they could compare it with the known genome of M. leprae, the bacterium responsible for the majority of leprosy cases.
The study found that the two species of bacteria are very closely related. The comparative genomics analysis could “backtrack” the history of their genes, and showed that the two bacteria diverged 13.9 million years ago from a common ancestor with a similar genome structure, and possibly a similar lifestyle. That ancestor underwent a process known as “gene decay”, in which, over a long period of time and multiple generations, a large number of genes mutated, became non-functional, and eventually disappeared. The study showed that the two species continued to lose genes, but from different regions of their genomes, indicating that during their evolution they came to occupy different biological niches and evolved different mechanisms to ensure survival.
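Divergence dates like the 13.9-million-year figure above typically come from molecular-clock reasoning: count the genetic differences between two genomes and divide by an assumed substitution rate. A minimal sketch of that arithmetic, with purely illustrative numbers (these are not the values from the PNAS study):

```python
# Hypothetical molecular-clock sketch. Differences accumulate along BOTH
# lineages after a split, hence the factor of 2 in the denominator.

def divergence_time_years(substitutions_per_site, rate_per_site_per_year):
    """Estimated time since two lineages split, assuming a constant clock."""
    return substitutions_per_site / (2.0 * rate_per_site_per_year)

# Illustrative inputs: 0.28 substitutions per site observed between two
# genomes, and an assumed rate of 1e-8 substitutions per site per year.
t = divergence_time_years(0.28, 1e-8)
print(f"Estimated divergence: {t / 1e6:.1f} million years ago")
```

Real analyses calibrate the rate against fossils or dated samples and propagate uncertainty; this only shows the core proportionality.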
Positive-strand RNA virus genome replication is invariably associated with vesicles or other rearranged cellular membranes. Brome mosaic virus (BMV) RNA replication occurs on perinuclear endoplasmic reticulum (ER) membranes in ~70 nm vesicular invaginations (spherules). BMV RNA replication vesicles show multiple parallels with membrane-enveloped, budding retrovirus virions, whose envelopment and release depend on the host ESCRT (endosomal sorting complexes required for transport) membrane-remodeling machinery. We now find that deleting components of the ESCRT pathway results in at least two distinct BMV phenotypes. One group of genes regulates RNA replication and the frequency of viral replication complex formation but has no effect on spherule size, while a second group regulates RNA replication in a way or ways independent of spherule formation. In particular, deleting SNF7 inhibits BMV RNA replication >25-fold and abolishes detectable BMV spherule formation, even though the BMV RNA replication proteins accumulate and localize normally on perinuclear ER membranes. Moreover, BMV ESCRT recruitment and spherule assembly depend on different sets of protein-protein interactions from those used by multivesicular body vesicles, HIV-1 virion budding, or tomato bushy stunt virus (TBSV) spherule formation. These and other data demonstrate that BMV requires cellular ESCRT components for proper formation and function of its vesicular RNA replication compartments. The results highlight the growing but diverse interactions of ESCRT factors with many viruses and viral processes, and the potential value of the ESCRT pathway as a target for broad-spectrum antiviral resistance.
Technology developers from the UK have designed a new wearable technology in which the garment itself becomes an active motion sensor. Xelflex uses bend-sensitive optical fibers stitched inside the clothing to provide intelligent feedback for athletes without encumbering them with bulky electronics.
The makers say that until now smart fabrics have relied on multiple electronic sensors, making them bulky and sensitive to moisture. Xelflex's fiber-optic thread is robust enough for use in sportswear, with a small, credit-card-sized electronics pack as the only other component.
Xelflex inventor Martin Brock said making a wearable technology that was comfortable was a key factor: "Xelflex is a breakthrough sensing technology based on optical fibers; where the optical fiber is actually integrated into the garment. And really it behaves like any other thread in that garment, there's no compromise between having a sensor that gives you feedback on your motion or your performance; and having some clothing that is comfortable and wearable and elegant as part of the everyday activities."
The technology built on the developers' extensive experience in industrial fiber-optic sensors and low-cost impulse radar. Brock explained that Xelflex measures the scattering of light in the optic fibers where bending the fiber results in increased scattering and reflection, which can then be measured.
"As I flex my joint there, it changes how much that optical fiber is bent. And as that bending changes the properties of the light in the optical fiber change so that more light is scattered back towards the source. And we pick up on that extra scattering and that allows us to measure how much that joint is bent," said Brock.
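The readout principle Brock describes, more bend meaning more light scattered back to the source, implies a calibration step mapping backscatter intensity to joint angle. Xelflex's actual signal processing is not described, so the following is only a sketch under the assumption of a roughly linear backscatter-versus-angle response over the working range:

```python
# Hypothetical bend-sensor calibration: fit backscatter = gain*angle + offset
# from known reference angles, then invert the fit to read out joint angle.

def calibrate(angles_deg, backscatter):
    """Least-squares line fit; returns (gain, offset)."""
    n = len(angles_deg)
    mean_a = sum(angles_deg) / n
    mean_b = sum(backscatter) / n
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(angles_deg, backscatter))
    var = sum((a - mean_a) ** 2 for a in angles_deg)
    gain = cov / var
    offset = mean_b - gain * mean_a
    return gain, offset

def angle_from_backscatter(signal, gain, offset):
    """Invert the calibration to recover the bend angle in degrees."""
    return (signal - offset) / gain

# Illustrative calibration data: reference angles vs. backscatter (arbitrary units).
gain, offset = calibrate([0, 30, 60, 90], [1.0, 1.6, 2.2, 2.8])
print(angle_from_backscatter(2.0, gain, offset))  # 50.0 degrees
```

A real system would likely need per-garment calibration and temperature compensation; the point here is only the invertible bend-to-light mapping.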
Algorithms turn the results from the sensors into feedback that is useful for the wearer; for example, correcting posture and movement, and coaching them on how to improve. Cambridge Consultants' Duncan Smith said Xelflex improves on current 'smart garments', which he says are little more than clothing acting as a support for a conventional electronic sensor, with no synergy between the two and with the electronic component often detracting from the garment. He wants to bridge the gap between technologists and fashion designers.
"Xelflex represents a major step forward in wearable technology because it's truly wearable - the sensor is actually built in to the fabric so you're able to design clothes that have the sensor built into them. This means that fashion designers can design the clothes rather than technologists designing wearable technology that's just a wristband or something like that. And that's a big step forward."
DARPA announced today, March 19, a Request for Information (RFI) on methods for using analog approaches to speed up computation of the complex mathematics that characterize scientific computing. “The standard digital computer cluster equipped with multiple central processing units (CPUs), each programmed to tackle a particular piece of a problem, is just not designed to solve the kinds of equations at the core of large-scale simulations, such as those describing complex fluid dynamics and plasmas,” said Vincent Tang, program manager in DARPA’s Defense Sciences Office.
These critical equations, known as partial differential equations, describe fundamental physical principles like motion, diffusion, and equilibrium, he notes. But they involve continuous rates of change over a large range of physical parameters relating to the problems of interest, so they don’t lend themselves to being broken up and solved in discrete pieces by individual CPUs. Examples of such problems include predicting the spread of an epidemic, understanding the potential impacts of climate change, or modeling the acoustical signature of a newly designed ship hull.
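To make the contrast concrete, here is a minimal explicit finite-difference solver for the 1-D diffusion (heat) equation, du/dt = D d²u/dx², the kind of PDE a digital machine must chop into discrete space and time steps, and which DARPA hopes analog hardware could evolve natively. Parameters and data are illustrative only:

```python
# Explicit finite-difference scheme for 1-D diffusion with fixed endpoints.
# The stability constraint r = D*dt/dx^2 <= 0.5 is exactly the kind of
# discretization bookkeeping an analog solver would sidestep.

def diffuse(u, D, dx, dt, steps):
    """Advance the profile u(x) forward in time; endpoints held fixed."""
    r = D * dt / dx ** 2
    assert r <= 0.5, "time step too large for explicit scheme"
    u = list(u)
    for _ in range(steps):
        new = u[:]  # copy keeps the Dirichlet boundary values
        for i in range(1, len(u) - 1):
            new[i] = u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
        u = new
    return u

# A hot spot in the middle of a cold rod smooths out and decays.
u0 = [0.0] * 5 + [1.0] + [0.0] * 5
u = diffuse(u0, D=1.0, dx=1.0, dt=0.25, steps=200)
print(max(u))  # peak has flattened far below its initial value of 1.0
```

Each interior point depends on its neighbors at every step, which is why such problems resist being cleanly split across independent CPUs.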
What if there were a processor specially designed for such equations? What might it look like? Analog computers solve equations by manipulating continuously changing values instead of discrete digital measurements, and have been around for more than a century. In the 1930s, for example, Vannevar Bush—who a decade later would help initiate and administer the Manhattan Project—created an analog “differential analyzer” that computed complex integrations through the use of a novel wheel-and-disc mechanism.
The potential of such analog machines to excel at dynamical problems too challenging for today's digital processors may now be bolstered by other recent breakthroughs, including advances in microelectromechanical systems, optical engineering, microfluidics, metamaterials and even approaches to using DNA as a computational platform. So it's conceivable, Tang said, that novel computational substrates could exceed the performance of modern CPUs for certain specialized problems, if they can be scaled and integrated into modern computer architectures.
DARPA’s RFI is called Analog and Continuous-variable Co-processors for Efficient Scientific Simulation (ACCESS), available here: http://go.usa.gov/3CV43. The RFI seeks new processing paradigms that have the potential to overcome current barriers in computing performance. “In general, we’re interested in information on all approaches, analog, digital, or hybrid ones, that have the potential to revolutionize how we perform scientific simulations,” Tang said.
Plastics, such as the low-density polyethylene (LDPE) used to make bags and the polyethylene terephthalate (PET) found in water bottles, can remain intact for years in a landfill. So some plastics manufacturers include additives designed to help the long polymers in the plastics disintegrate faster. Transition-metal salts called oxo-degradable additives catalyze the oxidation of the polymer chains in the presence of oxygen and ultraviolet light or heat. Other types of additives claim to increase biodegradation through different mechanisms.
Some manufacturers assert that once polymer chains are fragmented in this way, microbes can then eat them. But previous studies have cast doubt on this claim: For example, many oxo-degradable plastics do not pass a common composting certification test known as ASTM-D6400, which requires 60% of the material to be converted into carbon dioxide in 180 days. Researchers Susan Selke and Rafael Auras of the School of Packaging at Michigan State University wanted to design a rigorous field study to determine whether the materials perform as promised.
So they and their colleagues prepared films of an LDPE blend used to make bread bags, supermarket bags, and trash bags, as well as PET sheets like those used to make plastic water bottles, with three different additives supplied by their manufacturers. These were an oxo-degradable additive made by Symphony; a non-oxo-degradable one made by Ecologic; and Wells Plastics’ Reverte, which was originally described by the manufacturer as a combination of the two types of additives. The researchers exposed the oxo-degradable plastics to UV light at 0.80 W/m2 for about six days, the equivalent of about two months of outdoor exposure in Miami. They then treated all of the samples to mimic disposal of such plastics in a compost pile, a landfill, and soil.
By measuring the carbon dioxide and methane that evolved from the plastics in closed containers simulating composting and landfilling, the researchers could determine whether microbes had digested the materials. After about six months of composting and a year and a half of landfill-like conditions, samples that included plastics with additives did not produce significantly more methane and CO2 than the samples of plastics without them. At the end of the experiment, the team checked that microbes in the landfill simulation were still alive: When the researchers fed the microbes starch, the microbes produced gases as expected. After three years of soil burial, the plastic samples with additives did not show any greater physical degradation than samples without them.
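The bookkeeping behind such respirometry measurements is straightforward: the carbon in the evolved CO2 is compared with the carbon originally in the sample, and standards like ASTM D6400 set a threshold (60% in 180 days) on that fraction. A sketch with illustrative numbers, not the study's data:

```python
# Percent mineralization: fraction of a sample's carbon recovered as CO2.

C_MASS = 12.011    # molar mass of carbon, g/mol
CO2_MASS = 44.009  # molar mass of CO2, g/mol

def percent_mineralized(co2_evolved_g, sample_carbon_g):
    """Percent of the sample's carbon that microbes converted to CO2."""
    carbon_in_co2 = co2_evolved_g * (C_MASS / CO2_MASS)
    return 100.0 * carbon_in_co2 / sample_carbon_g

# E.g., 1.5 g of evolved CO2 from a film containing 2.0 g of carbon:
print(f"{percent_mineralized(1.5, 2.0):.1f}% mineralized")  # 20.5% mineralized
```

Landfill simulations additionally track methane, since anaerobic microbes route part of the carbon there instead of to CO2.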
Auras says, “We saw no evidence that these additives promote significant biodegradation in these tested environments.”
Before dinosaurs ruled what is now North America, the Carolina Butcher topped the predator charts. A reconstruction of this newly identified species suggests that Carnufex carolinensis was 3 meters long and looked a lot like living crocodiles — except it walked on two legs, not four. Researchers led by North Carolina Museum of Natural Sciences paleontologist Lindsay Zanno found its fossilized skull, spine and arm bones in 231-million-year-old rock deposits in central North Carolina.
C. carolinensis is one of the oldest and largest crocodile ancestors identified to date. Its size and stature also suggest that for a time, it was one of the top predators in the part of the supercontinent Pangaea that became North America, Zanno and colleagues write March 19 in Scientific Reports.
Past fossil finds show that cousins of ancient crocodiles were vying with the earliest bipedal dinosaurs, called theropods, for the title of top predator in the southern regions of Pangaea.
C. carolinensis and others like it may have dominated the northern regions of Pangaea without competition from early dinosaurs, the researchers write.
The researchers note that its reign probably ended 201 million years ago. That’s when a mass extinction event wiped out most large, land-based predators, clearing the way for dinosaurs to fully dominate during the Jurassic period.