In recent years, new strains of bacteria have emerged that resist even the most powerful antibiotics. Each year, these superbugs, including drug-resistant forms of tuberculosis and staphylococcus, infect more than 2 million people nationwide, and kill at least 23,000. Despite the urgent need for new treatments, scientists have discovered very few new classes of antibiotics in the past decade.
MIT engineers have now turned a powerful new weapon on these superbugs. Using a gene-editing system that can disable any target gene, they have shown that they can selectively kill bacteria carrying harmful genes that confer antibiotic resistance or cause disease.
Led by Timothy Lu, an associate professor of biological engineering and electrical engineering and computer science, the researchers described their findings in the Sept. 21 issue of Nature Biotechnology.
Last month, Lu’s lab reported a different approach to combating resistant bacteria by identifying combinations of genes that work together to make bacteria more susceptible to antibiotics.
Lu hopes that both technologies will lead to new drugs to help fight the growing crisis posed by drug-resistant bacteria.
“This is a pretty crucial moment when there are fewer and fewer new antibiotics available, but more and more antibiotic resistance evolving,” he says. “We’ve been interested in finding new ways to combat antibiotic resistance, and these papers offer two different strategies for doing that.”
Most antibiotics work by interfering with crucial functions such as cell division or protein synthesis. However, some bacteria, including the formidable MRSA (methicillin-resistant Staphylococcus aureus) and CRE (carbapenem-resistant Enterobacteriaceae) organisms, have evolved to become virtually untreatable with existing drugs.
In the new Nature Biotechnology study, graduate students Robert Citorik and Mark Mimee worked with Lu to target specific genes that allow bacteria to survive antibiotic treatment. The CRISPR genome-editing system presented the perfect strategy to go after those genes.
CRISPR, originally discovered by biologists studying the bacterial immune system, involves a set of proteins that bacteria use to defend themselves against bacteriophages (viruses that infect bacteria). One of these proteins, a DNA-cutting enzyme called Cas9, binds to short RNA guide strands that target specific sequences, telling Cas9 where to make its cuts.
Lu and colleagues decided to turn bacteria’s own weapons against them. They designed their RNA guide strands to target genes for antibiotic resistance, including the enzyme NDM-1, which allows bacteria to resist a broad range of beta-lactam antibiotics, including carbapenems. The genes encoding NDM-1 and other antibiotic resistance factors are usually carried on plasmids — circular strands of DNA separate from the bacterial genome — making it easier for them to spread through populations.
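The guide-design step lends itself to a simple illustration. The sketch below is our own toy example, not the authors' pipeline: it scans a (made-up) gene fragment for 20-nucleotide spacers sitting immediately upstream of an NGG PAM, the sequence pattern that S. pyogenes Cas9 requires next to its cut site.

```python
# Toy Cas9 guide finder: list 20-nt spacers followed by an NGG PAM.
# Illustrative only; real guide design also checks off-targets, GC
# content, and secondary structure. The sequence below is made up.
import re

def find_spacers(seq, spacer_len=20):
    """Return (position, spacer, PAM) for every NGG site on one strand."""
    hits = []
    for m in re.finditer(r"(?=([ACGT]GG))", seq):      # every NGG PAM
        pam_start = m.start()
        if pam_start >= spacer_len:                    # room for a spacer
            spacer = seq[pam_start - spacer_len:pam_start]
            hits.append((pam_start - spacer_len, spacer, m.group(1)))
    return hits

gene_fragment = "ATGGCATTGAGCGGTGGTTACGCTGGGTTAACCGGAGGATCC"  # hypothetical
for pos, spacer, pam in find_spacers(gene_fragment):
    print(f"pos {pos:3d}  spacer 5'-{spacer}-3'  PAM {pam}")
```

In the real system, a spacer matching a resistance gene such as blaNDM-1 directs Cas9 to cut only cells carrying that gene, which is what makes the killing sequence-specific.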
When the researchers turned the CRISPR system against NDM-1, they were able to specifically kill more than 99 percent of NDM-1-carrying bacteria, while antibiotics to which the bacteria were resistant did not induce any significant killing. They also successfully targeted another antibiotic-resistance gene, encoding the beta-lactamase SHV-18; a mutation in the bacterial chromosome providing resistance to quinolone antibiotics; and a virulence factor in enterohemorrhagic E. coli.
In addition, the researchers showed that the CRISPR system could be used to selectively remove specific bacteria from diverse bacterial communities based on their genetic signatures, thus opening up the potential for “microbiome editing” beyond antimicrobial applications.
Ecole polytechnique fédérale de Lausanne (EPFL; Lausanne, Switzerland) researchers have developed a new light-emitting diode (LED)-based handheld device that is able to test a large number of proteins in our body all at once. Professor Hatice Altug and postdoctoral fellow Arif Cetin from EPFL, in collaboration with professor Aydogan Ozcan from UCLA (Los Angeles, CA), developed the compact and inexpensive "optical lab on a chip" to quickly analyze up to 170,000 different molecules in a blood sample--simultaneously identifying insulin levels, cancer and Alzheimer markers, or even certain viruses.
Instead of analyzing the biosample by looking at the spectral properties of the sensing platforms as has traditionally been the case, this new technique uses changes in the intensity of the light to do on-chip imaging, eliminating sometimes clunky spectrometers in the process.
Only 7.5 cm high and weighing 60 g, the device is able to detect viruses and single-layer proteins down to 3 nanometers thick. Detailed in a publication in the journal Light: Science & Applications, the recipe is simple and contains few ingredients: an off-the-shelf CMOS chip, an LED, and a 10 square millimeter gold plate pierced with arrays of extremely small holes less than 200 nm wide.
Nanoholes on the gold substrates are compartmented into arrays of different sections, where each section functions as an independent sensor. The sensors are coated with special biofilms that specifically attract targeted proteins. Consequently, multiple different proteins in the biosample can be captured at different places on the platform and monitored simultaneously. The LED light shines on the platform, passes through the nanoscale openings, and its properties are recorded on the CMOS chip. Since light going through the nanoscale holes changes its properties depending on the presence of biomolecules, the number of particles trapped on the sensors can easily be deduced.
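In spirit, the readout reduces to comparing per-spot brightness on the CMOS image before and after the sample flows over the chip. Here is a minimal sketch of that idea; it is our own illustration with made-up regions, intensities, and threshold, not the device's actual calibration.

```python
# Toy intensity readout for a nanohole-array sensor: each sensor spot
# is a region of the CMOS image; a drop in transmitted light signals
# captured molecules. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def spot_means(frame, spots):
    """Mean pixel intensity inside each sensor region of interest."""
    return np.array([frame[r0:r1, c0:c1].mean() for r0, r1, c0, c1 in spots])

spots = [(0, 10, 0, 10), (0, 10, 10, 20), (10, 20, 0, 10)]  # 3 sensors
reference = rng.normal(1000, 5, (20, 20))          # frame before sample
sample = reference.copy()
sample[0:10, 10:20] -= 80                          # spot 2 captured analyte

drop = 1 - spot_means(sample, spots) / spot_means(reference, spots)
for i, d in enumerate(drop):
    print(f"sensor {i}: {d:+.1%} intensity change",
          "-> binding detected" if d > 0.02 else "")
```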
The great desert was born some 7 million years ago, as remnants of a vast sea called Tethys closed up. The movement of tectonic plates that created the Mediterranean Sea and the Alps also sparked the drying of the Sahara some 7 million years ago, according to the latest computer simulations of Earth’s ancient climate.
Though North Africa is currently covered by the world’s largest non-polar desert, climate conditions in the region have not been constant there for the last several million years. Subtle changes in Earth’s tilt toward the sun periodically increase the amount of solar energy received by the Northern Hemisphere in summer, altering atmospheric currents and driving monsoon rains. North Africa also sees more precipitation when less of the planet’s water is locked up in ice. Such increases in moisture limit how far the Sahara can spread and can even spark times of a “green Sahara”, when the sparse desert is replaced by abundant lakes, plants and animals.
Before the great desert was born, North Africa had a moister, semiarid climate. A few lines of evidence, including ancient dune deposits found in Chad, had hinted that the arid Sahara may have existed at least 7 million years ago. But without a mechanism to explain how it emerged, few scientists thought that the desert we see today could really be that old. Instead, most scientists argue that the Sahara took shape just 2 to 3 million years ago. Terrestrial and marine evidence suggest that North Africa underwent a period of drying at that time, when the Northern Hemisphere started its most recent cycle of glaciation.
Now Zhongshi Zhang of the Bjerknes Centre for Climate Research in Bergen, Norway, and colleagues have run simulations of climate change in North Africa over the last 30 million years. Their simulations take into account changes in Earth's orbital position, atmospheric chemistry and the ratio of land to ocean as driven by tectonic forces. The simulations show that precipitation in North Africa declined by more than half about 7 million years ago, causing the region to dry out. But this effect could not be explained by changes in vegetation, Earth's tilt or greenhouse gas concentrations, leaving tectonic action as the cause.
About 250 million years ago, a huge body of water called the Tethys Sea separated the supercontinents of Laurasia to the north and Gondwana to the south. As those supercontinents broke apart and shuffled around, the African plate collided with the Eurasian plate, birthing the Alps and the Himalayas but closing off the bulk of the Tethys Sea. As the plates kept moving, the sea continued to shrink, eventually diminishing into the Mediterranean.
What set off the aridification in Africa was the replacement of the western arm of the Tethys Sea with the Arabian Peninsula around 7 to 11 million years ago. Replacing water with land, which reflects less sunlight, altered the region’s precipitation patterns. This created the desert and heightened its sensitivity to changes in Earth’s tilt, the researchers conclude in a study published today in Nature.
An ultrasensitive biosensor made from the wonder material graphene has been used to detect molecules that indicate an increased risk of developing cancer. The biosensor has been shown to be more than five times more sensitive than bioassay tests currently in use, and was able to provide results in a matter of minutes, opening up the possibility of a rapid, point-of-care diagnostic tool for patients.
The biosensor was presented on 19 September in IOP Publishing's journal 2D Materials.
To develop a viable biosensor, the researchers, from Swansea University, had to create patterned graphene devices using a large substrate area, which was not possible using the traditional exfoliation technique, in which layers of graphene are stripped from graphite.
Instead, they grew graphene onto a silicon carbide substrate under extremely high temperatures and low pressure to form the basis of the biosensor. The researchers then patterned graphene devices, using semiconductor processing techniques, before attaching a number of bioreceptor molecules to the graphene devices. These receptors were able to bind to, or target, a specific molecule present in blood, saliva or urine.
The molecule, 8-hydroxydeoxyguanosine (8-OHdG), is produced when DNA is damaged and, in elevated levels, has been linked to an increased risk of developing several cancers. However, 8-OHdG is typically present at very low concentrations in urine, so it is very difficult to detect using conventional detection assays, known as enzyme-linked immunosorbent assays (ELISAs).
In their study, the researchers used x-ray photoelectron spectroscopy and Raman spectroscopy to confirm that the bioreceptor molecules had attached to the graphene biosensor once fabricated, and then exposed the biosensor to a range of concentrations of 8-OHdG.
Wouldn't it be great if you could just call up a supercomputer and ask it to do your data-wrangling for you? Actually, scratch that, no one uses the phone anymore. What'd be really cool is if machines could respond to your queries straight from Twitter. It's a belief shared by Wolfram Research, which has just launched the Tweet-a-Program system built on its Wolfram Language. In a blog post, founder Stephen Wolfram explains that even complex queries, including data visualizations, can be executed within the space of 140 characters.
In the Wolfram Language a little code can go a long way, and we wanted to use that fact to let everyone have some fun with the introduction of Tweet-a-Program. Compose a tweet-length Wolfram Language program, and tweet it to @WolframTaP. The Twitter bot will run your program in the Wolfram Cloud and tweet the result back to you. One can do a lot with Wolfram Language programs that fit in a tweet: it's easy to make interesting patterns or even complicated fractals, and putting in some math makes it easy to get all sorts of elaborate structures and patterns.
The Wolfram Language not only knows how to compute π, as well as a zillion other algorithms; it also has a huge amount of built-in knowledge about the real world. So right in the language, you can talk about movies or countries or chemicals or whatever. And here’s a 78-character program that makes a collage of the flags of Europe, sized according to country population. There are many, many kinds of real-world knowledge built into the Wolfram Language, including some pretty obscure ones. The Wolfram Language does really well with words and text and deals with images too.
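Wolfram's own examples are Wolfram Language one-liners, which we won't reproduce here. But to give a feel for how little code a fractal needs, here is a roughly tweet-scale analogue in plain Python (our illustration, not from the article):

```python
# Tiny ASCII Mandelbrot: a rough analogue of the kind of tweet-sized
# graphics program the article describes. The escape-time iteration
# z -> z*z + c decides whether each grid point is in the set.
for row in range(24):
    line = ""
    for col in range(64):
        c = complex(-2.0 + 2.8 * col / 63, -1.2 + 2.4 * row / 23)
        z = 0j
        for _ in range(30):          # iterate z = z^2 + c
            z = z * z + c
            if abs(z) > 2:
                line += " "          # escaped: outside the set
                break
        else:
            line += "#"              # stayed bounded: inside the set
    print(line)
```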
There are roughly as many galaxies in the observable universe as there are stars in our galaxy (100 to 400 billion), so for every star in the colossal Milky Way, there's a whole galaxy out there. All together, that comes out to the typically quoted range of 10^22 to 10^24 total stars, which means that for every grain of sand on Earth, there are 10,000 stars out there.
The science world isn't in total agreement about what percentage of those stars are "sun-like" (similar in size, temperature, and luminosity); opinions typically range from 5 percent to 20 percent. Going with the most conservative side of that (5 percent), and the lower end for the number of total stars (10^22), gives us 500 quintillion, or 500 billion billion, sun-like stars.
There's also a debate over what percentage of those sun-like stars might be orbited by an Earth-like planet (one with similar temperature conditions that could have liquid water and potentially support life similar to that on Earth). Some say it's as high as 50 percent, but let's go with the more conservative 22 percent that came out of a recent PNAS study. That suggests that there's a potentially-habitable Earth-like planet orbiting at least 1 percent of the total stars in the universe -- a total of 100 billion billion Earth-like planets.
So there are 100 Earth-like planets for every grain of sand in the world. Think about that next time you're on the beach. Moving forward, we have no choice but to get completely speculative. Let's imagine that after billions of years in existence, 1 percent of Earth-like planets develop life (if that's true, every grain of sand would represent one planet with life on it). And imagine that on 1 percent of those planets, the life advances to an intelligent level like it did here on Earth. That would mean there were 10 quadrillion, or 10 million billion, intelligent civilizations in the observable universe.
Moving back to just our galaxy, and doing the same math on the lowest estimate for stars in the Milky Way (100 billion), we'd estimate that there are 1 billion Earth-like planets and 100,000 intelligent civilizations in our galaxy.
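The chain of estimates is easy to reproduce; this short script (ours) simply multiplies through the article's stated assumptions:

```python
# Reproducing the article's back-of-envelope numbers.
def civilizations(stars, sun_like=0.05, earth_like=0.22,
                  life=0.01, intelligent=0.01):
    sun = stars * sun_like                 # sun-like stars
    earth = sun * earth_like               # Earth-like planets
    return sun, earth, earth * life * intelligent

# Observable universe: lower bound of 10^22 stars
sun, earth, civ = civilizations(1e22)
print(f"{sun:.0e} sun-like stars, {earth:.0e} Earth-like planets, "
      f"{civ:.0e} intelligent civilizations")
# -> 5e+20 sun-like, ~1e+20 Earth-like (~1% of all stars), ~1e+16 civs

# Milky Way alone: lower bound of 10^11 stars
sun, earth, civ = civilizations(1e11)
print(f"{earth:.1e} Earth-like planets, {civ:.0f} civilizations here")
# -> ~1.1e+09 Earth-like planets, ~110,000 civilizations
```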
So where is everybody?
Welcome to the Fermi Paradox. One proposed answer is called "The Great Filter". The Great Filter theory says that at some point on the path from pre-life to Type III intelligence, there's a wall that all or nearly all attempts at life hit: some stage in that long evolutionary process that is extremely unlikely or impossible for life to get beyond. If this theory is true, the big question is: where in the timeline does the Great Filter occur? This article gives different possibilities and scenarios.
Einstein is most famous for general relativity, which is really a theory of gravity. But his theory of special relativity has been just as important. Special relativity is all about how to interpret measurements: if you measure the speed of an object from a moving vehicle, how do I reconcile that number with a measurement I make from the side of the road? At low speeds this is a fairly simple task, but at very high speeds things start to get strange. This strangeness arises as a consequence of the speed of light being constant.
Tests of the validity of special relativity abound, but they've been limited to a few classes of objects. The ones done in the lab are usually very sensitive experiments performed on relatively slow-moving objects, while natural tests use the motion of the Earth or other astronomical objects.
Now, a German facility has measured time dilation very accurately. But in a twist, these measurements were performed on things moving at just under 40 percent of the speed of light in the laboratory. The researchers tested how clocks slow down when they are in motion. For example, if you are in motion relative to me, and I can see the watch on your hand, I should observe that it runs slightly slow compared to the one I'm wearing. Indeed, if you put an atomic clock in an airplane and fly it around the world, it will end up with a slightly different time than an identical clock that remained at the airport.
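Quantitatively, the slowdown is given by the Lorentz factor, gamma = 1 / sqrt(1 - v^2/c^2). A quick check (our own arithmetic, not from the article) shows why a 40-percent-of-light-speed test is so much more forgiving than an airplane-based one:

```python
import math

def gamma(v_frac_c):
    """Lorentz factor for speed v = v_frac_c * c."""
    return 1.0 / math.sqrt(1.0 - v_frac_c ** 2)

# At 40% of light speed, a moving clock runs slow by about 9%:
print(gamma(0.40))       # ~1.091: one moving second ~1.091 lab seconds

# An airliner at ~250 m/s (an assumed figure) barely registers,
# which is why lab tests at low speed need heroic precision:
print(gamma(250 / 3e8))  # 1.0000000000003...
```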
This time dilation is a consequence of a feature of physics called Lorentz invariance. Lorentz invariance is a way of saying that no matter where we are in the Universe, or how fast we are traveling, the Universe and its rules are basically the same.
In a very elegant experiment, the scientists verified that special relativity and Lorentz invariance hold to one part in a billion. The results were also used to test some extensions of the Standard Model of physics, though they were not precise enough to provide much insight there. But there are competing models that predict much stronger deviations from Lorentz invariance; in those cases, the fact that these experiments saw no deviations will certainly be able to tell us something.
More importantly, though, the whole experiment is Earth-based, so we are not relying on any assumptions about astronomical objects. And even cooler, the experiment is in a regime where the objects actually have a speed that is quite high compared to normal lab experiments, which offers a whole new window on special relativity and Lorentz invariance.
The population of Earth is unlikely to stabilize this century, according to a new analysis published in the 19 September issue of the journal Science. The findings are contrary to past studies, which have predicted that the world population will peak around 2050 and then level off or decline.
The first flexible display device based on graphene has been unveiled by scientists in the UK, who say it is the first step on the road towards next generation gadgets that can be folded, rolled or crumpled up without cracking the screen.
Researchers at UT Arlington have created the first electronic device that can cool electrons to -228 degrees Celsius (-375F) without any kind of external cooling. The chip itself remains at room temperature, while a quantum well within the device cools the electrons down to cryogenic temperatures. Why is this exciting? Because thermal excitation (heat) is by far the biggest problem when it comes to creating both high-performance and ultra-low-power computers. These cryogenic, quantum-well-cooled electrons could allow for the creation of electronic devices that consume 10 times less energy than current devices, according to the researchers.
What, you may ask, is a quantum well? In essence, a quantum well is a very narrow gap between two semiconducting materials. Electrons are happily bouncing along the piece of semiconductor when they hit the gap (the well). Only electrons that have very specific characteristics can cross the boundary. In this case, only electrons with very low energy (i.e. cold electrons) are allowed to pass, while hot electrons are sent back from whence they came. The well is created by sandwiching a narrow-bandgap semiconductor between two semiconductors with a wider bandgap – it’s basically the quantum equivalent of the neck between the two bulbs of an hourglass.
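As a toy model of why energy filtering reads as "cooling", one can sample a room-temperature thermal distribution and keep only the electrons below a cutoff; the survivors have the energy spread of a much colder population. This sketch is our own, and the cutoff is chosen purely for illustration, not taken from the paper:

```python
# Toy model of quantum-well cooling: pass only low-energy electrons
# from a 300 K thermal population and compute the effective
# temperature of the ones that get through. Illustrative only.
import numpy as np

kB = 8.617e-5                        # Boltzmann constant, eV/K
T_source = 300.0                     # room-temperature source, K
rng = np.random.default_rng(0)

# 3D Maxwell-Boltzmann kinetic energies follow Gamma(3/2, kB*T)
E = rng.gamma(shape=1.5, scale=kB * T_source, size=1_000_000)  # eV

E_cut = 0.01                         # eV, hypothetical filter edge
passed = E[E < E_cut]                # only "cold" electrons cross

# For an ideal gas <E> = (3/2) kB T, so define T_eff from the mean
T_eff = passed.mean() / (1.5 * kB)
print(f"{len(passed) / len(E):.0%} of electrons pass the well")
print(f"T_eff = {T_eff:.0f} K ({T_eff - 273.15:.0f} C)")
```

With this arbitrary 10 meV cutoff, the filtered electrons come out around 45 K, the same ballpark as the -228 degrees Celsius reported, though the real device's physics is of course far richer than this one-line filter.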
With a new therapeutic product, researchers have managed to cure arthritis in mice for the first time. The scientists are now planning to test the efficacy of the drug in humans. Rheumatoid arthritis is a condition that causes painful inflammation of several joints in the body. The joint capsule becomes swollen, and the disease can also destroy cartilage and bone as it progresses. Rheumatoid arthritis affects 0.5% to 1% of the world's population.
Antibody–cytokine fusion proteins (immunocytokines) are innovative biopharmaceutical agents being considered for the therapy of cancer and chronic inflammatory conditions. Immunomodulatory fusion proteins capable of selective localization at the sites of rheumatoid arthritis (RA) are of particular interest, as they may increase the therapeutic index of the cytokine payload. The F8 antibody recognizes the alternatively spliced extra domain A of fibronectin, a marker of angiogenesis that is strongly overexpressed at sites of arthritis. In this study, scientists investigated the targeting and therapeutic activity of the immunocytokine F8-IL4 in the mouse model of collagen-induced arthritis. Different combination regimes were tested and evaluated by analysis of serum and tissue cytokine levels. The researchers were able to show that F8-IL4 selectively localizes to neovascular structures at sites of rheumatoid arthritis in the mouse, leading to high local concentrations of IL4. When used in combination with dexamethasone, F8-IL4 was able to cure mice with established collagen-induced arthritis. Response to treatment was associated with elevated IL13 levels and decreased IL6 plasma concentrations. A fully human version of F8-IL4 is currently being developed for clinical investigation, and trials in humans will hopefully start soon.
"As a result of combination with the antibody, IL-4 reaches the site of the disease when the fusion molecule is injected into the body," says pharmacist Teresa Hemmerle, who has just completed her dissertation in the group of Dario Neri, a professor at the Institute of Pharmaceutical Sciences. Together with Fabia Doll, also a PhD pharmacist at ETH, she is the lead author of the study. "It allows us to concentrate the active substance at the site of the disease. The concentration in the rest of the body is minimal, which reduces side-effects," she says.
Two prominent U.S. hospitals are preparing to launch trials with diabetics and chronic disease patients using Apple Inc's (AAPL.O) HealthKit, offering a glimpse of how the iPhone maker's ambitious take on healthcare will work in practice.
HealthKit, which is still under development, is the center of a new healthcare system by Apple. Regulated medical devices, such as glucose monitors with accompanying iPhone apps, can send information to HealthKit. With a patient's consent, Apple's service gathers data from various health apps so that it can be viewed by doctors in one place.
Stanford University Hospital doctors said they are working with Apple to let physicians track blood sugar levels for children with diabetes. Duke University is developing a pilot to track blood pressure, weight and other measurements for patients with cancer or heart disease.
The goal is to improve the accuracy and speed of reporting data, which often is done by phone and fax now. Potentially doctors would be able to warn patients of an impending problem. The pilot programs will be rolled out in the coming weeks.
Apple last week mentioned the trials in a news release announcing the latest version of its operating system for phones and tablets, iOS 8, but this is the first time any details have been made public. Apple declined to comment for this article.
Apple aims eventually to work with health care providers across the United States, including hospitals which are experimenting with using technology to improve preventative care to lower healthcare cost and make patients healthier.
Reuters previously reported that Apple is in talks with other U.S. hospitals. Stanford Children's Chief Medical Information Officer Christopher Longhurst told Reuters that Stanford and Duke were among the furthest along.
Longhurst said that in the first Stanford trial, young patients with Type 1 diabetes will be sent home with an iPod touch to monitor blood sugar levels between doctor's visits.
HealthKit makes a critical link between measuring devices, including those used at home by patients, and medical information services relied on by doctors, such as Epic Systems Corp, a partner already announced by Apple.
Medical device makers are taking part in the Stanford and Duke trials.
DexCom Inc (DXCM.O), which makes blood sugar monitoring equipment, is in talks with Apple, Stanford, and the U.S. Food and Drug Administration about integrating with HealthKit, said company Chief Technical Officer Jorge Valdes.
DexCom's device measures glucose levels through a tiny sensor inserted under the skin of the abdomen. That data is transmitted every five minutes to a hand-held receiver, which works with a blood glucose meter. The glucose measuring system then sends the information to DexCom's mobile app, on an iPhone, for instance.
Under the new system, HealthKit can scoop up the data from DexCom, as well as other app and device makers.
Data can be uploaded from HealthKit into Epic's "MyChart" application, where it can be viewed by clinicians in Epic's electronic health record.
At current rates, children born today will see the world committed to dangerous and irreversible levels of climate change by their young adulthood, as the world poured a record amount of greenhouse gases into the atmosphere this year.
Annual carbon dioxide emissions showed a strong rise of 2.5% on 2013 levels, putting the total emitted this year on track for 40bn tonnes. That means the global "carbon budget", calculated as the total governments can afford to emit without pushing temperatures higher than 2C above pre-industrial levels, is likely to be used up within just one generation, about thirty years from now.
Scientists think climate change is likely to have catastrophic and irreversible effects, including rising sea levels, polar melting, droughts, floods and increasingly extreme weather, if temperatures rise more than 2C. They have calculated that this threshold is likely to be breached if future global emissions top 1,200 billion tonnes, giving a "carbon budget" to stick to in order to avoid dangerous warming; at the current rate of roughly 40bn tonnes a year, that budget lasts about 30 years.
Dave Reay, professor of carbon management at the University of Edinburgh, said: “If this were a bank statement it would say our credit is running out. We’ve already burned through two-thirds of our global carbon allowance and avoiding dangerous climate change now requires some very difficult choices. Not least of these is how a shrinking global carbon allowance can be shared equitably between more than 7bn people and where the differences between rich and poor are so immense.”
The study, by the Global Carbon Project, also found that China's per capita emissions had surpassed those of Europe for the first time between 2013 and 2014.
It comes ahead of a climate summit on Tuesday in New York, at which the UN secretary-general Ban Ki-moon will bring together heads of state and government from more than 120 countries to discuss climate change, and encourage them to make commitments on emissions reductions in the run-up to a crunch meeting in Paris late next year, at which a new global agreement on emissions is expected to be signed.
Emissions for 2014, according to the research, are set to rise to 40bn tonnes. That compares with emissions of 32bn tonnes in 2010, showing how fast the output is rising.
The rising trend has continued despite increasingly alarming warnings from scientists over the future of the climate, and commitments by developed countries to cut their carbon and from major developing economies to curb their emissions growth. There was a brief blip in global emissions growth at the time of the banking crisis, but this “breathing space” was quickly overtaken by an expansion in fossil fuel demand.
A team of Stanford researchers has developed a protein therapy that disrupts the process that causes cancer cells to break away from original tumor sites, travel through the blood stream and start aggressive new growths elsewhere in the body. This process, known as metastasis, can cause cancer to spread with deadly effect.
The Stanford team seeks to stop metastasis, without side effects, by preventing two proteins – Axl and Gas6 – from interacting to initiate the spread of cancer. Axl proteins are expressed on the surface of cancer cells, poised to receive biochemical signals from Gas6 proteins. When two Gas6 proteins link with two Axls, the signals that are generated enable cancer cells to leave the original tumor site, migrate to other parts of the body and form new cancer nodules.
Physicists at the University of Geneva have succeeded in teleporting the quantum state of a photon to a crystal over 25 kilometres of optical fibre. The experiment, carried out in the laboratory of Professor Nicolas Gisin, constitutes a first, and simply pulverises the previous record of 6 kilometres achieved ten years ago by the same UNIGE team. Passing from light into matter, using teleportation of a photon to a crystal, shows that, in quantum physics, it is not the composition of a particle which is important, but rather its state, since this can exist and persist outside such extreme differences as those which distinguish light from matter. The results obtained by Félix Bussières and his colleagues are reported in the latest edition of Nature Photonics.
Quantum physics, and with it the UNIGE, is again being talked about around the world, with the Marcel Benoist Prize for 2014 awarded to Professor Nicolas Gisin and the publication of these experiments in Nature Photonics. The latest experiments verified that the quantum state of a photon can be maintained while transporting it into a crystal, without the two coming directly into contact. One should imagine the crystal as a memory bank for storing the photon's information; that information is transferred over these distances using the teleportation effect.
The experiment not only represents a significant technological achievement but also a spectacular advance in the continually surprising possibilities afforded by the quantum dimension. By taking the distance to 25 kilometres of optical fibre, the UNIGE physicists have significantly surpassed their own record of 6 kilometres, the distance achieved during the first long-distance teleportation achieved by Professor Gisin and his team in 2003.
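For readers who want to see the protocol itself, below is a minimal state-vector simulation of textbook one-qubit teleportation. This is our own sketch of the standard protocol; the UNIGE photon-to-crystal experiment is vastly more sophisticated, but the core idea, that the state is recreated elsewhere once two classical bits are communicated, is the same.

```python
# Minimal simulation of textbook quantum teleportation: the state of
# qubit 0 is recreated on qubit 2 after two classical bits are sent.
import numpy as np

rng = np.random.default_rng(42)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def apply1(gate, state, q):
    """Apply a single-qubit gate to qubit q of an n-qubit state tensor."""
    return np.moveaxis(np.tensordot(gate, state, axes=([1], [q])), 0, q)

def cnot(state, c, t):
    """Flip qubit t wherever qubit c is 1."""
    s = state.copy()
    sl = [slice(None)] * s.ndim
    sl[c] = 1
    pos = t - (1 if t > c else 0)     # t's axis index once c is fixed
    s[tuple(sl)] = np.flip(s[tuple(sl)], axis=pos)
    return s

def measure(state, q):
    """Projectively measure qubit q; return (outcome, collapsed state)."""
    others = tuple(a for a in range(state.ndim) if a != q)
    probs = (np.abs(state) ** 2).sum(axis=others)
    m = int(rng.random() < probs[1])
    sl = [slice(None)] * state.ndim
    sl[q] = 1 - m
    state = state.copy()
    state[tuple(sl)] = 0              # discard the unobserved branch
    return m, state / np.linalg.norm(state)

# Random state to teleport on qubit 0; qubits 1 and 2 start in |0>
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
state = np.einsum("i,j,k->ijk", psi, [1, 0], [1, 0])

state = cnot(apply1(H, state, 1), 1, 2)   # entangle qubits 1 and 2
state = apply1(H, cnot(state, 0, 1), 0)   # Bell-measure qubits 0 and 1
m0, state = measure(state, 0)
m1, state = measure(state, 1)
if m1: state = apply1(X, state, 2)        # classical corrections
if m0: state = apply1(Z, state, 2)

print("teleported OK:", np.allclose(state[m0, m1, :], psi))
```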
While bubonic plague would seem a blight of the past, there have been recent outbreaks in India, Madagascar and the Congo. And its mode of infection now appears similar to that used by other well-adapted human pathogens, such as HIV.
In self-defense, the hagfish produces from its glands a slime composed of threads nanometers in width, together with what are likely sugars or glyco-modifications. The slime is thought to impede capture by making the hagfish slippery, and possibly by clogging the gills of a predator. The nanothreads are remarkable: comparable to spider silk in tensile strength (800 megapascals, near 1 gigapascal) and lightness, and 5 times stronger than steel on a weight basis. Moreover, each thread is only 12 nanometers wide but 15 centimeters long. Amazingly, a full thread is wrapped up so that it fits within a single, highly specialized cell called a gland thread cell (GTC).
Scientists have uncovered, using electron microscopy, the organization of a single hagfish nano-sized thread, helping resolve the mystery of why extrusion of such a long (compared to its width) thread from the cell does not cause tangling. The thread is coiled up in a conical “skein” in 15-20 layers. As a GTC matures, its nucleus migrates to an extreme pole, leaving most of the cell volume packed with a single coil of thread.
The conical shape of the coiling seems to be controlled by the shape of the nucleus of the GTC which deforms over time from being round to being elongated. The first layer of coils observed by the researchers is round with subsequent layers becoming more elongated. Therefore the nucleus provides an evolving “obstruction” which restricts the freedom of the thread and organizes it over the maturation time.
The authors used a method known as Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) to scan a matured GTC. The great advantage of FIB-SEM is the ability to acquire image slices through a succession of scanned planes. Software then stacks these image slices to reconstruct a 3D representation.
Attempts to industrialize strong, natural nanofibers with extraordinary properties have met with limited success. Harvesting silk from the silk worm in bulk is routine and has a history of thousands of years. But silk does not have the tensile properties of spider or hagfish thread. Harvesting spider silk is not doable on an industrial scale, partly due to our inability to amass sufficient production volume, although there are some ongoing efforts to engineer the proteins into another organism, such as a silkworm or bacterium, that affords better control over production capacity.
Intel has been working on a 3D scanner small enough to fit in the bezel of even the thinnest tablets. The company aims to have the technology in tablets from 2015, with CEO Brian Krzanich telling the crowd at MakerCon in New York on Thursday that he hopes to put the technology in phones as well.
"Our goal is to just have a tablet that you can go out and buy that has this capability," Krzanich said. "Eventually within two or three years I want to be able to put it on a phone."
Krzanich and a few of his colleagues demonstrated the technology, which goes by the name "RealSense," on stage using a human model and an assistant who simply circled the model a few times while pointing a tablet at the subject. A full 3D rendering of the model slowly appeared on the screen behind the stage in just a few minutes. The resulting 3D models can be manipulated with software or sent to a 3D printer.
"The idea is you go out, you see something you like and you just capture it," Krzanich explained. He said consumer tablets with built in 3D scanners will hit the market in the third or fourth quarter of 2015, with Intel also working on putting the 3D scanning cameras on drones.
The predecessor to the 3D-scanning tablets demonstrated on stage was announced earlier this month in the form of the Dell Venue 8 7000 series Android tablet, which sports Intel's RealSense snapshot depth camera, bringing light-field-camera-like capabilities to a tablet. It will be available later this year.
A growing body of evidence suggests that environmental stresses can cause changes in gene expression that are transmitted from parents to their offspring, making "epigenetics" a hot topic. Epigenetic modifications do not affect the DNA sequence of genes, but change how the DNA is packaged and how genes are expressed. Now, a study by scientists at UC Santa Cruz shows how epigenetic memory can be passed across generations and from cell to cell during development.
"There has been ongoing debate about whether the methylation mark can be passed on through cell divisions and across generations, and we've now shown that it is," said corresponding author Susan Strome, a professor of molecular, cell and developmental biology at UC Santa Cruz.
Strome's lab created worms with a mutation that knocks out the enzyme responsible for making the methylation mark, then bred them with normal worms. Using fluorescent labels, they were able to track the fates of marked and unmarked chromosomes under the microscope, from egg cells and sperm to the dividing cells of embryos after fertilization. Embryos from mutant egg cells fertilized by normal sperm had six methylated chromosomes (from the sperm) and six unmarked or "naked" chromosomes (from the egg).
As embryos develop, the cells replicate their chromosomes and divide. The researchers found that when a marked chromosome replicates, the two daughter chromosomes are both marked. But without the enzyme needed for histone methylation, the marks become progressively diluted with each cell division.
"The mark stays on the chromosomes derived from the initial chromosome that had the mark, but there's not enough mark for both daughter chromosomes to be fully loaded," Strome said. "So the mark is bright in a one-cell embryo, less bright after the cell divides, dimmer still in a four-cell embryo, and by about 24 to 48 cells we can't see it anymore."
The researchers then did the converse experiment, fertilizing normal egg cells with mutant sperm. The methylation enzyme (called PRC2) is normally present in egg cells but not in sperm, which don't contribute much more than their chromosomes to the embryo. So the embryos in the new experiment still had six naked chromosomes (this time from the sperm) and six marked chromosomes, but now they also had the enzyme.
"Remarkably, when we watch the chromosomes through cell divisions, the marked chromosomes remain marked and stay bright, because the enzyme keeps restoring the mark, but the naked chromosomes stay naked, division after division," Strome said. "That shows that the pattern of marks that was inherited is being transmitted through multiple cell divisions."
A first-ever standard “operating system” for drones, developed by a startup with MIT roots, could soon help manufacturers easily design and customize unmanned aerial vehicles (UAVs) for multiple applications.
Today, hundreds of companies worldwide are making drones for infrastructure inspection, crop- and livestock-monitoring, and search-and-rescue missions, among other things. But these are built for a single mission, so modifying them for other uses means going back to the drawing board, which can be very expensive.
Now Airware, founded by MIT alumnus Jonathan Downey ’06, has developed a platform — hardware, software, and cloud services — that lets manufacturers pick and choose various components and application-specific software to add to commercial drones for multiple purposes.
The key component is the startup's Linux-based autopilot device, a small red box that is installed into all of a client's drones. "This is responsible for flying the vehicle in a safe, reliable manner, and acts as a hub for the components, so it can collect all that data and display that info to a user," says Downey, Airware's CEO, who researched and built drones throughout his time at MIT.
To customize the drones, customers use software to select third-party drone vehicles and components — such as sensors, cameras, actuators, and communication devices — configure settings, and apply their configuration to a fleet. Other software helps them plan and monitor missions in real time (and make midflight adjustments), and collects and displays data. Airware then pushes all data to the cloud, where it’s aggregated and analyzed, and available to designated users.
If a company decides to use a surveillance drone for crop management, for instance, it can easily add software that stitches together different images to determine which areas of a field are overwatered or underwatered. “They don’t have to know the flight algorithms, or underlying hardware, they just need to connect their software or piece of hardware to the platform,” Downey says. “The entire industry can leverage that.”
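Airware has not published its configuration format, but the pick-and-choose model described above might look conceptually like the following sketch. It is entirely our invention, with hypothetical names throughout, meant only to make the "declare components, apply to a fleet" idea concrete.

```python
# Hypothetical sketch of declarative drone configuration in the spirit
# of the platform described above. None of these names are Airware's.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    kind: str                      # "sensor", "camera", "actuator", ...
    settings: dict = field(default_factory=dict)

@dataclass
class DroneConfig:
    vehicle: str
    components: list
    mission_software: list

crop_survey = DroneConfig(
    vehicle="third-party-quadcopter-x",
    components=[
        Component("multispectral-cam", "camera", {"fps": 2}),
        Component("gps-rtk", "sensor", {"rate_hz": 10}),
    ],
    mission_software=["image-stitching", "irrigation-analysis"],
)

def apply_to_fleet(config, fleet):
    """Push one configuration to every autopilot in a fleet."""
    for drone_id in fleet:
        print(f"{drone_id}: configured {config.vehicle} with "
              f"{[c.name for c in config.components]}")

apply_to_fleet(crop_survey, ["uav-001", "uav-002"])
```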
Clients have trialed Airware’s platform over the past year — including researchers at MIT, who are demonstrating delivery of vaccines in Africa. Delta Drone in France is using the platform for open-air mining operations, search-and-rescue missions, and agricultural applications. Another UAV maker, Cyber Technology in Australia, is using the platform for drones responding to car crashes and other disasters, and inspecting offshore oilrigs.
Now, with its most recent $25 million funding round, Airware plans to launch the platform for general adoption later this year, viewing companies that monitor crops and infrastructure — with drones that require specific cameras and sensors — as potential early customers.
There is something primal in a mother's response to a crying infant. So primal, in fact, that mother deer will rush protectively to the distress calls of other infant mammals, such as fur seals, marmots and even humans. This suggests such calls might share common elements – and perhaps that these animals experience similar emotions.
Researchers – and, indeed, all pet owners – know that humans respond emotionally to the distress cries of their domestic animals, and there is some evidence that dogs also respond to human cries. However, most people have assumed this is a by-product of domestication.
However, Susan Lingle, a biologist at the University of Winnipeg, Canada, noticed that the infants of many mammal species have similar distress calls: simple sounds with few changes in pitch. She decided to test whether cross-species responses occur more widely across the evolutionary tree.
So, Lingle and her colleague Tobias Riede, now at Midwestern University in Glendale, Arizona, recorded the calls made by infants from a variety of mammal species when separated from their mother or otherwise threatened. They then played the recordings through hidden speakers to wild mule deer (Odocoileus hemionus) out on the Canadian prairies. They found that deer mothers quickly moved towards the recordings of infant deer, but also towards those of infant fur seals, dogs, cats and humans, all of which call at roughly the same pitch. Even the ultrasonic calls of infant bats attracted the deer mothers if Lingle used software to lower their pitch to match that of deer calls. In contrast, they found the deer did not respond to non-infant calls such as birdsong or the bark of a coyote (American Naturalist, DOI: 10.1086/677677).
The DNA of every organism on Earth is a right-handed double helix, but why that would be has puzzled scientists since not long after Francis Crick and James Watson announced the discovery of DNA's double-helical structure in 1953. It's a puzzle because no one has been able to think of a fundamental reason why DNA couldn't also be left-handed.
New research by University of Nebraska-Lincoln physicists and published in the Sept. 12 online edition of Physical Review Letters now gives support to a long-posited but never-proven hypothesis that electrons in cosmic rays -- which are mostly left-handed -- preferentially destroyed left-handed precursors of DNA on the primordial Earth.
The hypothesis, called the Vester-Ulbricht model, was proposed by Frederic Vester of the University of Saarbrucken in Germany and Tilo L.V. Ulbricht of the University of Cambridge in England in 1961 in response to the 1957 discovery that most of the electrons spewing from radioactive beta decay were left-handed.
Joan M. Dreiling and Timothy J. Gay of UNL focused circularly polarized laser light on a specially prepared crystal of gallium-arsenide to produce electrons whose spins were either parallel or anti-parallel to their direction of motion upon emission from the crystal -- essentially artificial beta rays. They then directed these electrons to strike target molecules of a substance called bromocamphor, which comes in both right- and left-handed varieties.
They found that at the lowest electron energies they studied, left-handed electrons preferentially destroyed left-handed molecules and vice versa. This sensitivity to molecular handedness has a mechanical analog: the inability of a left-handed bolt to screw into a right-handed nut. The molecular experiment proves the principle underlying the Vester-Ulbricht hypothesis.
"The circular polarization of the laser light effectively transferred to the spin (handedness) of the electrons emitted by the gallium-arsenide crystal," said Dreiling, a postdoctoral research assistant who received her doctorate from UNL in May. "We are able to reverse the spin-polarization of the electrons just by reversing the circular polarization of the light."
The effect they saw was quite small, they said -- like "looking for an electronic needle in a haystack," Gay said -- but they said they're highly confident in their result. "We have done several different checks with our experiment and I am totally confident that the asymmetry exists," Dreiling said. "The checks all came out showing that this asymmetry is real."
While some people have successfully 3D printed buildings, others have taken the same approach to the car manufacturing business: a company has just come out with the Strati, the first 3D-printed car in the world. Scientific American reveals that it took Local Motors only 45 hours to build the Strati, a two-seater "neighborhood" electric car that has a range of up to 120 miles and a maximum speed of 40 mph.
Interestingly, the company plans to start selling Stratis for anywhere between $18,000 and $30,000 later this year, as it further refines its 3D-printing procedure.
“We expect in the next couple of months [printing a complete car] to be below 24 hours and then eventually get it below 10 hours, [down from 45 hours currently]” Local Motors CEO John Rogers said. “This is in a matter of months. Today, the best Detroit or Germany can do is 10 hours on a [production] line, after hundreds of years of progress.”
The car’s design was chosen from over 200 proposals submitted by Local Motors’ online community and Rogers says that the main advantage of 3D printed cars is that local communities may adopt such procedures to build cars best fitted to the resources available to them.
“In the future, you’ll still have … your Detroits that make one product the same over a million units,” the exec said. “And then I think you’ll have examples of microfactories that do things profitably at lower volumes—10,000 units, 15,000 units per year—and show the mass factories what they ought to build next.”
Local Motors chose an electric engine for the Strati because an electric powertrain was simpler to construct. Another advantage the Strati has is that it's made from thermoplastic, a fully recyclable material, using a "Big Area Additive Manufacturing" (BAAM) machine, meaning the material can be easily "chopped up and reprocessed back into another car."
Even so, while using 3D printing technology to build a car might lead to less wasted material, a lot of energy might actually be required to print such vehicles.
Although cardiac pacemakers have saved countless lives, they do have at least one shortcoming – like other electronic devices, their batteries wear out. When this happens, of course, surgery is required in order to replace the pacemaker. While some researchers are looking into ideas such as drawing power from blood sugar, Swiss scientists from the University of Bern have taken another approach. They’ve developed a wristwatch-inspired device that can power a pacemaker via the beating of the patient’s own heart.
Bern cardiologist Prof. Rolf Vogel first came up with the idea four years ago, and it has been in development ever since. The resulting prototype device wasn’t just inspired by an auto-winding wristwatch, but actually incorporates the mechanism of a commercially-available model. Such watches rely on the user’s arm movements to wind a mechanical spring. Once that spring is fully wound, it then unwinds to power a micro-generator inside the watch.
In the case of the Bern device, it’s sutured onto the heart’s myocardial muscle instead of being worn on the wrist, and its spring is wound by heart contractions instead of arm movements. When that spring unwinds, the resulting energy is buffered in a capacitor. That capacitor then powers a pacemaker, to which it is electrically wired.
According to the research team, the system has demonstrated a mean output power of 52 microwatts when implanted in a live 60-kg (132-lb) pig – that’s more than enough for most modern pacemakers, which consume about 10 microwatts.
They now hope to further miniaturize the technology, make it more sensitive to the motion of the heart, and build both its energy-harvesting and capacitor functions into a pacemaker. This all-in-one setup would do away with the need for electrical leads, which can fail in conventional pacemakers.
The research was presented this Sunday at the ESC (European Society of Cardiology) Congress, by PhD candidate and team member Adrian Zurbuchen. A similar device is being developed at the University of Michigan.
Euler is easily the most prolific mathematician of all time. The range and volume of his output is simply staggering. He published over 850 papers, almost all of substantial length, and more than 25 books and treatises. In 1907 the Swiss Academy of Sciences established the Euler Commission with the charge of publishing the complete body of work consisting of all of his papers, manuscripts, and correspondence. This project, known as Opera Omnia, began in 1911 and is still ongoing. His scientific publications, not counting his correspondence, run to over 70 volumes, each between approximately 300 and 600 pages. Thousands of pages of handwritten manuscripts are still not in print. Euler was in constant communication with all the great scientists of his day, and his correspondence covers several thousand pages.
Euler's powers of memory and concentration were legendary. He could recite the entire Aeneid word-for-word. He was not troubled by interruptions or distractions; in fact, he did much of his work with his young children playing at his feet. He was able to do prodigious calculations in his head, a necessity after he went blind. The contemporary French mathematician Condorcet tells the story of two of Euler's students who had independently summed seventeen terms of a complicated infinite series, only to disagree in the fiftieth decimal place; Euler settled the dispute by recomputing the sum in his head.
Further reading: http://www.ams.org/bookstore/pspdf/euler-prev.pdf