Amazing Science
 
Scooped by Dr. Stefan Gruenwald

Researchers collaborate on inexpensive DNA sequencing method


Rapid, accurate genetic sequencing soon may be within reach of every doctor's office if recent research from the National Institute of Standards and Technology (NIST) and Columbia University's School of Engineering and Applied Science can be commercialized effectively. The team has demonstrated a potentially low-cost, reliable way to obtain the complete DNA sequence of any individual using a sort of molecular ticker-tape reader, which could enable easy detection of disease markers in a patient's DNA ("PEG-labeled nucleotides and nanopore detection for single molecule DNA sequencing by synthesis").

 

Genia Technologies is collaborating with scientists at Columbia University and Harvard University to develop a commercial single-molecule sequencer. The company has licensed a nanopore sequencing-by-synthesis technology developed by researchers at Columbia and the National Institute of Standards and Technology, which it plans to integrate with its nanopore chip platform, and is using polymerase fusion proteins developed at Harvard.

Genia plans to ship its first nanopore sequencing device to beta customers by the end of next year, and to bring a commercial product to market in 2014.


While sequencing the genome of an animal species for the first time is so common that it hardly makes news anymore, it is less well known that sequencing any single individual's DNA is an expensive affair, costing many thousands of dollars using today's technology. An individual's genome carries markers that can provide advance warning of the risk of disease, but you need a fast, reliable and economical way of sequencing each patient's genes to take full advantage of them. Equally important is the need to continually sequence an individual's DNA over his or her lifetime, because the genetic code can be modified by many factors.

 

Nanopores and their interaction with polymer molecules have been a longtime research focus of NIST scientist John Kasianowicz. His group collaborated with a team led by Jingyue Ju, director of Columbia's Center for Genome Technology and Biomolecular Engineering, which came up with the idea for tagging DNA building blocks for single molecule sequencing by nanopore detection. The ability to discriminate between the polymer tags was demonstrated by Kasianowicz, his NIST colleague Joseph Robertson, and others. Columbia University has applied for patents for the commercialization of the technology.


Kasianowicz estimates that the technique could identify a DNA building block with extremely high accuracy at an error rate of less than one in 500 million, and the necessary equipment would be within the reach of any medical provider. "The heart of the sequencer would be an operational amplifier that would cost much less than $1,000 for a one-time purchase," he says, "and the cost of materials and software should be trivial."

Scooped by Dr. Stefan Gruenwald

20,000+ FREE Online Science and Technology Lectures from Top Universities


NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".

 

This newsletter is aggregated from over 1450 news sources:

http://www.genautica.com/links/1450_news_sources.html

 

All my Tweets and Scoop.It! posts sorted and searchable:

http://www.genautica.com/tweets/index.html

 

••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••

You can search through all the articles semantically on my archived twitter feed.

••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••

NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the funnel at the top right of the screen) to display all the relevant postings sorted by topic.

 

You can also type your own query. For example, if you are looking for articles involving "dna" as a keyword:

http://www.scoop.it/t/amazing-science/?q=dna

Or click on the little funnel symbol at the top right of the screen.

••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
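For readers who prefer to script these lookups, here is a minimal Python sketch that fetches the RSS feed and builds the same keyword-query URL described above. It assumes the two URLs are still live and that the feed is ordinary RSS 2.0; it is an illustration, not an official Scoop.it API.

```python
# Minimal sketch: fetch the Amazing Science RSS feed and build a keyword query
# URL. Assumes the URLs above are still live and the feed is plain RSS 2.0;
# this is an illustration, not an official Scoop.it API.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://www.scoop.it/t/amazing-science/rss.xml"
QUERY_BASE = "http://www.scoop.it/t/amazing-science/?q="

def latest_posts(limit=10):
    """Return (title, link) pairs for the most recent posts in the feed."""
    with urllib.request.urlopen(FEED_URL) as response:
        root = ET.fromstring(response.read())
    items = root.findall("./channel/item")[:limit]
    return [(item.findtext("title"), item.findtext("link")) for item in items]

def search_url(keyword):
    """Build the same query URL you would type by hand, e.g. for 'dna'."""
    return QUERY_BASE + urllib.parse.quote(keyword)

if __name__ == "__main__":
    for title, link in latest_posts(5):
        print(title, "->", link)
    print(search_url("dna"))
```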


MOST_READ • 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video 

Siegfried Holle's curator insight, July 4, 2014 8:45 AM

Your knowledge is your strength and power 

Saberes Sin Fronteras OVS's curator insight, November 30, 2014 5:33 PM

Free access to documents from the best universities in the world

♥ princess leia ♥'s curator insight, December 28, 2014 11:58 AM

WoW  .. Expand  your mind!! It has room to grow!!! 

Scooped by Dr. Stefan Gruenwald

Ocean Cleanup project completes Great Pacific Garbage Patch research expedition


In May, the Ocean Cleanup project announced that its first system would be deployed in the Korea Strait next year. That will pave the way for its ultimate goal of cleaning up the Great Pacific Garbage Patch. With that in mind, a research expedition to the Garbage Patch has just been completed. The concept for the Ocean Cleanup project was conceived by Dutch entrepreneur and inventor Boyan Slat and announced in 2013. Slat realized that the movement of the oceans could be harnessed in order to direct floating plastic waste into the arms of a static collection system.


After a positive feasibility study, a successful crowdfunding campaign and being named a category winner in the 2015 Designs of the Year awards, the Ocean Cleanup project recently set out to gather data in the Pacific. A fleet of 30 vessels, including a 171 ft (52 m) mothership, took part in the month-long voyage, or Mega Expedition, the primary goal of which was to determine just how much plastic is actually floating in the Great Pacific Garbage Patch.


According to the Ocean Cleanup project, this was the largest ocean research expedition in history. A series of measurement techniques were employed to sample the concentration of plastic in the area, including trawls and aerial surveys. It is also said to have been the first time that large pieces of plastic, such as ghost nets and Japanese tsunami debris, have been quantified.


Slat explains that it is not just floating bits of plastic that are a problem, but what happens to those pieces over the long term. "The vast majority of the plastic in the garbage patch is currently locked up in large pieces of debris, but UV light is breaking it down into much more dangerous microplastics, vastly increasing the amount of microplastics over the next few decades if we don’t clean it up," he says. "It really is a ticking time bomb."


The research samples collected during the expedition still have to be analyzed, but preliminary findings indicate a "higher-than-expected volume" of plastic objects at the Pacific site.


The cleanup proper of the Great Pacific Garbage Patch is expected to begin in 2020.

Scooped by Dr. Stefan Gruenwald

Temples' hidden dangers: Incense could be more harmful than cigarette smoke, researchers find


In the future, incense might need to carry a health warning, just like tobacco. That’s the conclusion of researchers who for the first time have compared the effects of burning incense indoors to inhaling tobacco smoke. Previous research has already shown how incense smoke can be harmful to a person’s health, but these new findings suggest that it’s worse than cigarettes by several measurements – a result that may alarm some in Asian countries, where incense burning is a common practice in the home and a traditional ritual in many temples.


“Clearly, there needs to be greater awareness and management of the health risks associated with burning incense in indoor environments,” said Rong Zhou of the South China University of Technology, in a statement to the press.


The researchers tested two types of incense against cigarette smoke to see their effects on bacteria and the ovary cells of Chinese hamsters. Both the incense products contained the common ingredients agarwood and sandalwood, which are used in incense for their fragrances.


The findings, published in Environmental Chemistry Letters, showed that incense smoke is mutagenic, which means it can cause mutations to genetic material, primarily DNA. Compared to the cigarette smoke, the incense products were found to be more cytotoxic (toxic to cells) and genotoxic (toxic to DNA). Of the 64 compounds identified in the incense smoke, two were singled out as highly toxic.


Obviously none of this sounds very good, and for people frequently exposed to incense smoke in indoor environments, hopefully it serves as a wake-up call: mutagens, genotoxins, and cytotoxins are all linked to the development of cancers.

Scooped by Dr. Stefan Gruenwald

NASA: Seas around the world have risen an average of 3 inches since 1992


Seas around the world have risen an average of nearly 3 inches since 1992, with some locations rising more than 9 inches due to natural variation, according to the latest satellite measurements from NASA and its partners. An intensive research effort now underway, aided by NASA observations and analysis, points to an unavoidable rise of several feet in the future.


Members of NASA’s new interdisciplinary Sea Level Change Team will discuss recent findings and new agency research efforts during a media teleconference today at 12:30 p.m. EDT. NASA will stream the teleconference live online.


The question scientists are grappling with is how quickly seas will rise.


“Given what we know now about how the ocean expands as it warms and how ice sheets and glaciers are adding water to the seas, it’s pretty certain we are locked into at least 3 feet of sea level rise, and probably more,” said Steve Nerem of the University of Colorado, Boulder, and lead of the Sea Level Change Team. “But we don't know whether it will happen within a century or somewhat longer.”


Team scientists will discuss a new visualization based on 23 years of sea level data – the entire record of available satellite data -- which reveals changes are anything but uniform around the globe. The record is based on data from three consecutive satellite missions, the first a collaboration between NASA and the French space agency, Centre National d'Études Spatiales, launched in 1992. The next in the series is Jason-3, led by the National Oceanic and Atmospheric Administration (NOAA) with participation by NASA, CNES and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT).

Scooped by Dr. Stefan Gruenwald

“Spookiness” Indeed Confirmed by the First Loophole-free Quantum Test


Spookiness, it seems, is here to stay. Quantum theory has been put to its most stringent “loophole free” test yet, and it has come out victorious, ruling out more common sense views of reality (well, mostly). Many thanks to Matt Leifer for bringing this experiment -- by a collaboration of researchers in the Netherlands, Spain, and the UK -- to my attention (arXiv:1508.05949).

A few years ago, I wrote a feature for Science about the quest to close loopholes in quantum entanglement experiments, with a number of groups around the world vying to perform the perfect test. ("Quantum Mechanics Braces for the Ultimate Test.") In that article, I quote quantum physicist and FQXi member Nicolas Gisin saying: “This race is on because the group that performs the first loophole-free test will have an experiment that stands in history.”


All prior tests have loopholes, and to get a truly definitive result, these need to be closed. One such loophole is the “detection loophole”. In many Bell tests, experimenters entangle photons and then measure their properties. The trouble is photons zip about quickly, and often simply escape from the experiment before being detected and measured. Physicists can lose as many as 80 per cent of the photons in their test. That means that experimenters have to make a ‘fair sampling’ assumption that the ones that they *do* detect are representative of the ones that have gone missing. For the conclusions to be watertight, however, you really want to keep track of all the subjects in your test.

It is easier to keep hold of entangled ions, which have been used in other experiments. The catch there, however, is that these are not often kept far enough apart to rule out the less spooky explanation that the two entangled partners simply influence each other, communicating at a speed that is less than the speed of light, during the experiment. This is known as the “communication loophole” or the “locality loophole.”

In the new paper by Henson et al, the authors describe measuring electrons with entangled spins. The entangled pairs have been separated by 1.3 km, to ensure that they do not have time to communicate (at a speed slower than the speed of light) over the course of the experiment.

They cleverly use a technique known as "entanglement swapping" to tie up both loopholes, combining the benefits of photons (which can travel long distances) with electrons (which are easier to monitor). Their electrons are placed in two different labs, 1.3 km apart. The spin of each electron is then entangled with a photon and those two photons are fired off to a third location, where they are entangled with each other. As soon as the photons are entangled, BINGO, so too are the two original electron spins, seated in vastly distant labs. The team carried out 245 trials of the experiment, comparing entangled electrons, and report that Bell’s bound is violated.
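For readers unfamiliar with what violating "Bell's bound" means in practice, the sketch below evaluates the CHSH combination of correlations that such tests use. The correlation values are the textbook quantum predictions for ideal measurement settings, not the data from the Henson et al. experiment, which reported a smaller (but still above-bound) value.

```python
# Illustrative CHSH check. Any local realist model obeys |S| <= 2; quantum
# mechanics allows up to 2*sqrt(2). The correlators below are the ideal
# textbook values for the optimal measurement settings, NOT the noisier,
# but still bound-violating, numbers reported by Henson et al.
import math

def chsh(e_ab, e_ab2, e_a2b, e_a2b2):
    """CHSH combination of the four measured correlation coefficients."""
    return e_ab - e_ab2 + e_a2b + e_a2b2

ideal = 1 / math.sqrt(2)        # ideal quantum correlator at optimal settings
S = chsh(ideal, -ideal, ideal, ideal)
print(f"S = {S:.3f}  (local realist bound: 2, quantum maximum: {2 * math.sqrt(2):.3f})")
```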


The authors of the recent test state: ”Our experiment realizes the first Bell test that simultaneously addresses both the detection loophole and the locality loophole. Being free of the experimental loopholes, the setup can test local realist theories of nature without introducing extra assumptions such as fair-sampling, a limit on (sub-)luminal communication or the absence of memory in the setup. Our observation of a loophole-free Bell inequality violation thus rules out all local realist theories that accept that the number generators timely produce a free random bit and that the outputs are final once recorded in the electronics. This result places the strongest restrictions on local realistic theories of nature to date.”

As a test of the foundations of reality, for most physicists, these experiments dot the i’s and cross the t’s. It seemed unlikely, given the other Bell tests performed so far — even with their loopholes — that quantum theory would be found wanting in a loophole-free test. That’s because each of the earlier experiments was so different from the others, and had different weaknesses, that nature would have to have been cunning, in quite different and particular kinds of ways in each previous experiment, to keep fooling us into thinking quantum theory was correct, if it is not. But it is important, nonetheless, to test quantum theory to its limits. After all, you never know.

Scooped by Dr. Stefan Gruenwald

Rethinking RNA: Thousands of long noncoding RNAs are physically attached to DNA


It’s a new day for RNA. In a study published in Cell Reports on Aug 18, Michael Werner, sixth-year graduate student in Cell and Molecular Biology, and Alex Ruthenburg, PhD, Neubauer Family Foundation Assistant Professor of Molecular Genetics and Cell Biology, detail their discovery of a new class of RNA molecule that could perhaps be considered the “dark matter” of the genome. They identified thousands of long noncoding RNAs that are physically attached to DNA (quite literally coating the genome), which may play important but yet unidentified roles in gene regulation.


At some point in high school and college introductory biology classes you probably learned the “Central Dogma.” It posits that in all organisms, genetic information is coded within DNA, which is converted to a ‘messenger’ molecule called RNA, which is then converted into proteins – and it is proteins that perform the various functions of the cell as molecular machines. Advances in next-generation sequencing technologies during the last decade have revealed that this is only part of the story, however.


It turns out that only ~1.5 percent of our genome contains the information to make proteins. Most of the DNA in our genome is processed into RNA ‘transcripts’ that don’t code for proteins – referred to as noncoding RNA. Some have even been shown to perform functions in the cell as RNA molecules, without the need to be turned into a protein.


Now, together with Alex Ruthenburg, Werner discovered a class of noncoding RNA that establishes a new paradigm for how RNA acts inside cells. In a recent Cell Reports paper, the two scientists show that the majority of long noncoding RNA molecules are actually associated with DNA, as opposed to messenger RNAs that are loosely dispersed throughout the nucleus.


Remarkably, they identified several thousand RNAs that are actually physically tethered to DNA and coat the human genome, which they called chromatin-enriched RNAs (cheRNAs). The discovery of these RNAs was possible through biochemical enrichment of the genome, to the exclusion of other parts of the cell that predominantly contained messenger RNA. Although they didn’t intend to find these cheRNA molecules, they decided to see if there was anything else they could learn about them. To their excitement and considerable surprise, they found tantalizing hints that cheRNAs are involved in regulating the expression of nearby genes. The sheer number of these RNAs suggests that they could be a relatively common way to control genes throughout the human genome, possibly contributing to the complexity of tissues seen across our bodies.

Scooped by Dr. Stefan Gruenwald

Wormhole Created in Lab Makes Invisible Magnetic Field

Physicists have created a so-called magnetic wormhole that transports a magnetic field from one point to the other without being detected.


In a feat seemingly ripped from the pages of a sci-fi novel, physicists have crafted a wormhole that tunnels a magnetic field through space. "This device can transmit the magnetic field from one point in space to another point, through a path that is magnetically invisible," said study co-author Jordi Prat-Camps, a doctoral candidate in physics at the Autonomous University of Barcelona in Spain. "From a magnetic point of view, this device acts like a wormhole, as if the magnetic field was transferred through an extra special dimension."


The idea of a wormhole comes from Albert Einstein's theories. In 1935, Einstein and colleague Nathan Rosen realized that the general theory of relativity allowed for the existence of bridges that could link two different points in space-time. Theoretically these Einstein-Rosen bridges, or wormholes, could allow something to tunnel instantly between great distances (though the tunnels in this theory are extremely tiny, so ordinarily wouldn't fit a space traveler). So far, no one has found evidence that space-time wormholes actually exist.


The new wormhole isn't a space-time wormhole per se, but is instead a realization of a futuristic "invisibility cloak" first proposed in 2007 in the journal Physical Review Letters. This type of wormhole would hide electromagnetic waves from view from the outside. The trouble was, to make the method work for light required materials that are extremely impractical and difficult to work with, Prat said.


But it turned out the materials to make a magnetic wormhole already exist and are much simpler to come by. In particular, superconductors, which can carry high levels of current (that is, flows of charged particles), expel magnetic field lines from their interiors, essentially bending or distorting these lines. This allows the magnetic field to do something different from its surrounding 3D environment, which is the first step in concealing the disturbance in a magnetic field.


So the team designed a three-layer object, consisting of two concentric spheres with an interior spiral-cylinder. The interior layer essentially transmitted a magnetic field from one end to the other, while the other two layers acted to conceal the field's existence.

The inner cylinder was made of a ferromagnetic mu-metal. Ferromagnetic materials exhibit the strongest form of magnetism, while mu-metals are highly permeable and are often used for shielding electronic devices.


A thin shell made up of a high-temperature superconducting material called yttrium barium copper oxide lined the inner cylinder, bending the magnetic field that traveled through the interior. The final shell was made of another mu-metal, but composed of 150 pieces cut and placed to perfectly cancel out the bending of the magnetic field by the superconducting shell. The whole device was placed in a liquid nitrogen bath in order to work. Normally, magnetic field lines radiate out from a certain location and decay with distance, but the presence of the magnetic field should be detectable from points all around it. However, the new magnetic wormhole funnels the magnetic field from one side of the cylinder to another so that it is "invisible" while in transit, seeming to pop out of nowhere on the exit side of the tube, the researchers report today (Aug. 20, 2015) in the journal Scientific Reports.

Rescooped by Dr. Stefan Gruenwald from Nostri Orbis

IoT mapped: The emerging landscape of smart things


No one really knows how many “things” there are deployed today that have IoT characteristics. IDC’s 2013 estimate was about 9.1 billion, growing to about 28 billion by 2020 and over 50 billion by 2025. You can get pretty much any other number you want, but all the estimates are very large. So what are all these IoT things doing and why are they there? Here’s our attempt to map out the IoT landscape.


There are a whole lot of possible organizational approaches to the constituent parts of IoT. One can use a “halo” approach, looking at how IoT principles will be applied to individual people, their surroundings (vehicles and homes), the organization of those surroundings (towns and cities and the highways and other transit systems that connect them), the range of social activities (essentially commerce, but also travel, hospitality, entertainment and leisure) that go on in those surroundings and finally the underpinnings of those activities (“industrial” including agriculture, energy and transport and logistics).


This is not an exhaustive taxonomy (excluded are all military and some law enforcement specific uses) or even the best way to organize things, but it’s a useful start and has been helpful in explaining the opportunity to the businesses we advise.


Via Fernando Gil
Scooped by Dr. Stefan Gruenwald

EMBL: The genome in the cloud


Since the completion of the Human Genome Project in 2001, technological advances have made sequencing genomes much easier, quicker and cheaper, fueling an explosion in sequencing projects. Today, genomics is well into the era of ‘big data’, with genomics datasets often containing hundreds of terabytes (10^14 bytes) of information.


The rise of big genomic data offers many scientific opportunities, but also creates new problems, as Jan Korbel, Group Leader in the Genome Biology Unit at EMBL Heidelberg, describes in a new commentary paper authored with an international team of scientists and published today in Nature.


Korbel’s research focuses on genetic variation, especially genetic changes leading to cancer, and relies on computational and experimental techniques. While the majority of current cancer genetic studies assess the 1% of the genome comprising genes, a main research interest of the Korbel group is in studying genetic alterations within ‘intergenic’ regions that drive cancer. As this approach looks at much more of the genome than gene-focused studies, it requires analysis of larger amounts of data. This challenge is exemplified via the Pan-Cancer Analysis of Whole Genomes (PCAWG) project, co-led by Korbel, which brings together nearly 1 petabyte (10^15 bytes) of genome sequencing data from more than 2000 cancer patients.


The problem is not a shortage of data but accessing and analysing it. Genome datasets from cancer patients are typically stored in so-called ‘controlled access’ data archives, such as the European Genome-phenome Archive (EGA). These repositories, however, are ‘static’, says Korbel, meaning that the datasets need to be downloaded to a researcher’s institution before they can be further analysed or integrated with other types of data to address biomedically relevant research questions. “With massive datasets, this can take many months and may be unfeasible altogether depending on the institution’s network bandwidth and computational processing capacities,” says Korbel. “It’s a severe limitation for cancer research, blocking scientists from replicating and building on prior work.”
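A rough back-of-the-envelope calculation shows why downloads of this scale take months. The bandwidth figures below are illustrative assumptions, not measurements of any particular institution's network.

```python
# Why "many months": time to move ~1 PB over typical institutional links,
# assuming (optimistically) the full line rate is sustained the whole time.
# The bandwidth figures are illustrative, not measurements of any real site.
DATASET_BYTES = 1e15                      # ~1 petabyte (10^15 bytes)

for label, gbit_per_s in [("100 Mb/s", 0.1), ("1 Gb/s", 1.0), ("10 Gb/s", 10.0)]:
    seconds = DATASET_BYTES * 8 / (gbit_per_s * 1e9)
    print(f"{label:>8}: {seconds / 86400:7.1f} days")
```

At 1 Gb/s the transfer already takes about three months even under ideal conditions, and real throughput is usually much lower.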


With data stored in one of the various commercial cloud services on offer from companies such as Amazon Web Services, or on academic community clouds, researchers can analyse vast datasets without first downloading them to their institutions, saving time and money that would otherwise need to be spent on maintaining them locally. Cloud computing also allows researchers to draw on the processing power of distributed computers to significantly speed up analysis without purchasing new equipment for computationally laborious tasks. A large portion of the data from PCAWG, for example, will be analysed through cloud computing using both academic community and commercial cloud providers, thanks to new computational frameworks currently being built.


One concern about using cloud computing revolves around the privacy of people who have supplied genetic samples for studies. However, cloud services are now typically as secure as regular institutional data centres, which has diminished this worry: earlier this year, the US National Institutes of Health lifted a 2007 ban on uploading their genomic data into cloud storage. Korbel predicts that the coming months and years will see a big upswing in the use of cloud computing for genomics research, with academic cloud services, such as the EMBL-EBI Embassy Cloud, and commercial cloud providers including Amazon becoming a crucial component of the infrastructure for pursuing research in human genetics.


Yet there remain issues to resolve. One is who should pay for cloud services. Korbel and colleagues urge funding agencies to take on this responsibility given the central role cloud services are predicted to play in future research. Another issue relates to the differing privacy, ethical and normative policies and regulations in Europe, the US, and elsewhere. Some European countries may prefer that patient data remain within their jurisdiction so that they fall under European privacy laws, and not US laws, which apply once a US-based cloud provider is used. Normative and bioethical aspects of patient genome analysis, including in the context of cloud computing, are another specific focus of Korbel’s research, which is being pursued via an inter-disciplinary collaboration with Fruzsina Molnár-Gábor from Heidelberg University faculty of law in a project funded by the Heidelberg Academy of Sciences and Humanities.


Rescooped by Dr. Stefan Gruenwald from Natural Products Chemistry Breaking News

Plant Seed Species Identification from Real Time Mass Spectrometry Chemical Fingerprints


Plant species identification based on the morphological features of plant parts is a well-established science in botany. However, species identification from seeds has largely been unexplored, despite the fact that the seeds contain all of the genetic information that distinguishes one plant from another. Using seeds of genus Datura plants, a group of scientists now shows that the mass spectrum-derived chemical fingerprints for seeds of the same species are similar. On the other hand, seeds from different species within the same genus display distinct chemical signatures, even though they may contain similar characteristic biomarkers.


The intraspecies chemical signature similarities on the one hand, and interspecies fingerprint differences on the other, can be processed by multivariate statistical analysis methods to enable rapid species-level identification and differentiation. The chemical fingerprints can be acquired rapidly and in a high-throughput manner by direct analysis in real time mass spectrometry (DART-MS) analysis of the seeds in their native form, without use of a solvent extract. Importantly, knowledge of the identity of the detected molecules is not required for species level identification.
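To make the multivariate workflow concrete, here is a minimal sketch of the kind of chemometric pipeline described above: dimensionality reduction on binned m/z intensities followed by a simple classifier. It uses randomly generated stand-in data, it is not the authors' actual pipeline, and the scikit-learn components (PCA plus linear discriminant analysis) are just one common choice.

```python
# Sketch of a generic chemometric pipeline of the kind described above:
# dimensionality reduction on binned m/z intensities, then a simple classifier.
# The data are random stand-ins, not DART-MS spectra, and this is not the
# authors' actual workflow; PCA + LDA is just one common choice.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_species, seeds_per_species, n_mz_bins = 3, 20, 500

# Hypothetical fingerprints: each species has its own mean spectrum plus noise.
species_means = rng.random((n_species, n_mz_bins))
X = np.vstack([m + 0.05 * rng.standard_normal((seeds_per_species, n_mz_bins))
               for m in species_means])
y = np.repeat(np.arange(n_species), seeds_per_species)

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
model.fit(X, y)
print("training accuracy on toy data:", model.score(X, y))
```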


However, confirmation of the presence within the seeds of various characteristic tropane and other alkaloids, including atropine, scopolamine, scopoline, tropine, tropinone, and tyramine, was accomplished by comparison of the in-source collision-induced dissociation (CID) fragmentation patterns of authentic standards, to the fragmentation patterns observed in the seeds when analyzed under similar in-source CID conditions. The advantages, applications, and implications of the chemometric processing of DART-MS derived seed chemical signatures for species level identification and differentiation are discussed in the paper.


Via NatProdChem
Suggested by Cosmic Orgasm

Mars Exploration: Searching for Life

We all have been wondering for a long time if there is life beyond Earth. Mars was one of the first places that we hoped to find it.


Once we had great expectations that intelligent Martians might live in great cities. But after various missions to the Red Planet, these ideas have changed to the thought that microbial organisms may still lie deep beneath Mars' surface. Let's look at humanity's attempts to find life on Mars through the years and see how likely life on the Red Planet actually is.

Rescooped by Dr. Stefan Gruenwald from Limitless learning Universe

Bouncing single photons off satellites for flexible high-end quantum key encryption


Quantum key distribution is regularly touted as the encryption of the future. While the keys are exchanged on an insecure channel, the laws of physics provide a guarantee that two parties can exchange a secret key and know if they're being overheard. This unencrypted-but-secure form of key exchange circumvents one of the potential shortcomings of some forms of public key systems.


However, quantum key distribution (QKD) has one big downside: the two parties need to have a direct link to each other. So, for instance, banks in and around Geneva use dedicated fiber links to perform QKD, but they can only do this because the link distance is less than 100km. These fixed and short links are an expensive solution. A more flexible solution is required if QKD is going to be used for more general encryption purposes.


A group of Italian researchers have demonstrated the possibility of QKD via a satellite, which in principle (but not in practice) means that any two parties with a view of a satellite can exchange keys.


QKD is based on, essentially, the fact that once you measure the state of a photon, the photon is gone—you need to absorb the photon with a detector to measure its state. To take a particular example, we have Alice and Bob who want to communicate without letting the nefarious Eve into the picture. They begin by generating a secret key, through the laws of quantum physics, with which to encode their future communications.


Alice generates two lists of random ones and zeros. The first list contains bit values, and the second list is used to set the basis (think of this as the orientation of the measurement system) of a string of single photons. An important point is that these two basis sets are not orthogonal. So, for instance, a common example is to choose vertical and horizontal polarization for one basis and two diagonal polarizations for the second. Between the two lists, the polarization of each photon is set to one of four possible states.


These single photons are sent to Bob, who will measure them. But, the quantum measurements don't allow you to ask a photon "What polarization are you?" Instead you end up asking questions like "Are you vertical or horizontally polarized?" So, Bob randomly chooses between the two basis sets. Sometimes he asks the photons which diagonal polarization they have and other times he asks them if they are vertical or horizontally polarized.


Now, if Alice sends a vertically polarized photon to Bob who asks which diagonal polarization it has, the photon will end up randomly choosing 45 degrees or 135 degrees. However, if Alice chooses to send a horizontally polarized photon and Bob asks the photon if it is horizontally or vertically polarized, he will always get horizontally polarized. The key point is that the measurement basis choice determines how the photon must be described. If Bob and Alice make the same choice, the photon is either in one or the other state. If their choices are different, the photon, according to Bob, is in a superposition of two states. The upshot is that, in the first case, the measurement process is deterministic. Alice and Bob can know from their instrument settings exactly which of Bob's detectors must click. In the second case, however, the measurement process forces the photon to randomly choose from two states: neither Bob nor Alice can predict the outcome of the measurement. It is this uncertainty, and how intervening measurements by Eve modify that uncertainty, that gives QKD its security.


After all the photons are sent, Bob has a string of random numbers, but he has no way of knowing which ones to choose to make up a key. To create a common secret key, Bob and Alice publicly announce their choice of basis set for each bit. But the actual polarization (the bit value) is kept secret. Alice and Bob can look for the positions in the string where they made the same choices and choose those bits to generate the common key.


The next step is to reveal Eve. To do this, Alice announces a section of the secret key. How does this reveal Eve? Let's suppose that Eve is intercepting the photons. She randomly chooses a basis set and measures the photons, but Eve doesn't know which basis set Alice chose. When Eve tries to recreate the photon state that Alice sent, she gets it wrong half the time, and each wrongly prepared photon gives Bob a random result when he measures it. So, instead of Alice and Bob agreeing on every bit in the revealed section, roughly a quarter of those bits now disagree. Eve can, of course, be subtler and only intercept every second photon, bringing the statistic closer to full agreement. But, the fewer photons she intercepts, the less information she has.


When Alice and Bob compare statistics for the partial key, they not only know that Eve is there, but how much information Eve is getting. If Eve was not present, they can throw away the revealed section of key and continue to generate more key digits. However, even if Eve is listening in, they can determine if they wish to go on, based on knowing how much of the key Eve is intercepting.
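The exchange just described is essentially the BB84 protocol, and its logic (random bits and bases, sifting, and the error rate an intercept-resend eavesdropper introduces) fits in a short toy simulation. The sketch below ignores photon loss, noise and privacy amplification, and is not the Italian group's satellite implementation.

```python
# Toy simulation of the exchange described above (BB84-style): random bits and
# bases for Alice, random bases for Bob, sifting on matching bases, and the
# error rate an intercept-resend eavesdropper introduces. No photon loss,
# noise, or privacy amplification; purely illustrative.
import random

def sifted_error_rate(n_photons=20000, eve_present=False, seed=1):
    rng = random.Random(seed)
    kept = errors = 0
    for _ in range(n_photons):
        bit, alice_basis = rng.getrandbits(1), rng.getrandbits(1)
        basis_in_flight, bit_in_flight = alice_basis, bit

        if eve_present:                      # intercept-resend attack
            eve_basis = rng.getrandbits(1)
            if eve_basis != alice_basis:     # wrong basis: Eve's outcome is random
                bit_in_flight = rng.getrandbits(1)
            basis_in_flight = eve_basis      # she resends in the basis she used

        bob_basis = rng.getrandbits(1)
        bob_bit = bit_in_flight if bob_basis == basis_in_flight else rng.getrandbits(1)

        if bob_basis == alice_basis:         # kept after public basis comparison
            kept += 1
            errors += (bob_bit != bit)
    return errors / kept

print("sifted error rate, no Eve  :", sifted_error_rate())                  # ~0.00
print("sifted error rate, with Eve:", sifted_error_rate(eve_present=True))  # ~0.25
```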


Via CineversityTV
Scooped by Dr. Stefan Gruenwald

Making hydrogen fuel from water and visible light at 100 times higher efficiency


A big step closer to hydrogen as a practical fuel to power vehicles and electrical devices.


Researchers at Michigan Technological University have found a way to convert light to hydrogen fuel more efficiently, a big step closer to mimicking photosynthesis. Current methods for creating hydrogen fuel are based on using electrodes made from titanium dioxide (TiO2), which acts as a catalyst to stimulate the light → water → hydrogen chemical reaction. This works great with ultraviolet (UV) light, but UV comprises only about 4% of the total solar energy, making the overall process highly inefficient.


The ideal would be to use visible light, since it constitutes about 45 percent of solar energy. Now two Michigan Tech scientists — Yun Hang Hu, the Charles and Carroll McArthur professor of Materials Science and Engineering, and his PhD student, Bing Han — have developed a way to do exactly that.


They report in the Journal of Physical Chemistry that, by absorbing the entire visible light spectrum, they have increased the yield and energy efficiency of hydrogen fuel production by up to two orders of magnitude (100 times) compared to previously reported results.


Scooped by Dr. Stefan Gruenwald

Super-low loss quantum energy transport could revolutionize sunlight to energy conversion


The use of sunlight as an energy source is achieved in a number of ways, from conversion to electricity via photovoltaic (PV) panels and concentrated heat to drive steam turbines, to hydrogen generation via artificial photosynthesis. Unfortunately, much of the light energy in PV and photosynthesis systems is lost as heat due to the thermodynamic inefficiencies inherent in the process of converting the incoming energy from one form to another. Now scientists working at the University of Bayreuth claim to have created a super-efficient light-energy transport conduit that exhibits almost zero loss, and shows promise as the missing link in the sunlight-to-energy conversion process.


The system, which uses specifically generated nanofibers at its core, is reported to be the very first directed energy transport system demonstrated to effectively move intact light energy over a distance of several micrometers, and at room temperature. And, according to the researchers, the transference of energy from block to block in the nanofibers is only adequately explained at the quantum level, with coherence effects driving the energy along the individual fibers.


Quantum coherence is the phenomenon where subatomic waves are closely interlinked via shared electromagnetic fields. As they travel in phase together, these quantum coherent waves start to act as one very large synchronous wave propagating across a medium. In the case of the University of Bayreuth device, these coherent waves of energy travel across the molecular building blocks from which the nanofibers are made, passing from block to block and moving as one continuous energy wave would in unbound free space.


It is this effect that the scientists say is driving the super-low energy loss capabilities of their device, and have confirmed this observation using a variety of microscopy techniques to visualize the conveyance of excitation energy along the nanofibers. The nanofibers themselves are specifically-prepared supramolecular strands, manufactured from a chemically bespoke combination of carbonyl-bridged (molecularly connected) triarylamine (an organic compound) combined with three naphthalimide bithiophene chromophores (copolymer molecules that absorb and reflect specific wavelengths of light). When brought together under particular conditions, these elements spontaneously self-assemble into 4 micrometer long, 0.005 micrometer diameter nanofibers made up of more than 10,000 identical chemical building blocks.


"These highly promising nanostructures demonstrate that carefully tailoring materials for the efficient transport of light energy is an emerging research area," said Dr. Richard Hildner, an experimental physicist at the University of Bayreuth. The results of this research were recently published in the journal Nature.

Scooped by Dr. Stefan Gruenwald

Revealed: How did our planet ever escape 'snowball Earth'?

Glaciers once covered most of Earth's surface and reflected the sun's heat back into space.


New details of a nightmare period on Earth with surface conditions as frigid as present-day central Antarctica at the equator have been revealed thanks to the publication of a study of ancient glacier water. The research, by an international team led by Daniel Herwartz, is published in the journal Proceedings of the National Academy of Sciences and shows that even tropical regions were once covered in snow and ice.


The idea of a deep-frozen world, “snowball Earth”, has captured the imagination since first proposed in the 1990s. On several occasions in history, long before animals evolved, apparently synchronous ice sheets existed on all the continents. However, much like falling into a crevasse on a glacier, it’s easy enough to enter such an ice age, but very difficult to escape.


The snowball Earth theory came from climate modelers who found that low carbon dioxide levels could trigger the growth of ice sheets. The whole planet would become glaciated and its mean temperature drop to as low as -45°C. As ice is much more reflective than the sea, or bare land, the Earth at that point would have been bouncing nearly all of the sun’s radiation back into space. So how could the planet ever emerge from such an ice age?


Volcanoes had to be the answer. Only they could emit enough carbon dioxide into the atmosphere to overcome the effects of Earth’s cool reflective surface. But climate models still found it difficult to plausibly describe how the Earth could have shed its glaciers.


We now have the first full explanation for how the best-known snowball event, the Marinoan, finished 635 million years ago with a several hundred meter rise in sea level. The study is the result of work by an international team of scientists. The results are published in the journal Nature Geoscience.


The team of researchers found that slight wobbles of the Earth’s spin axis caused differences in the heat received at different places on the planet’s surface. These changes were small, but enough over thousands of years to cause a change in the places where snow accumulated or melted, leading the glaciers to advance and retreat.


The Earth was left looking just like the McMurdo Dry Valleys in Antarctica – arid, with lots of bare ground, but also containing glaciers up to 3 km thick. Such an Earth would have been darker than previously envisaged, absorbing more of the sun’s radiation, which makes it easier to see how the escape from the snowball happened.


Today, to find exposed rocks that can tell us about the carbon dioxide content of the atmosphere in the Marinoan, you have to go to the Norwegian Arctic island of Svalbard. In 2009 snowball theory was vindicated after we found the telltale signal of high carbon dioxide levels in Svalbard limestone that formed during the ice age.


Immediately underneath the Marinoan deposits are some beds of rocks deposited at very regular intervals – so regular that they must have formed over thousands of years, influenced by wobbles in the Earth’s orbit. Since Svalbard was near the Equator at the time, the most likely type of wobble is caused by the Earth slowly shifting (“precessing”) its axis on cycles of approximately 20,000 years.


Researchers also found evidence of the same process in the Snowball deposits themselves. Fluctuations in ice in relation to the Earth’s orbit are a feature of our modern ice ages over the past million years, but had not been found in such an old glaciation.


For a long time the Earth was too cold for glaciers to erode and deposit sediment – the main snowball period. The sediments then show several advances and retreats of the ice. When the glaciers retreated, they left behind a patchwork of environments: shallow and deep lakes, river channels, and floodplains that appeared as arid as anything known in Earth’s history.


Carbon dioxide appears to have remained at the same high level throughout the deposition of these sediments. Since it takes millions of years for CO2 to build up in the atmosphere, this implies the sediment layers must have formed quickly – on the order of 100,000 years. All this fits with the idea of 20,000 year precession cycles.


A group of climate modellers from Paris tested the theory. The rocks and the models agreed: wobbles in the Earth’s axis had caused the planet to escape its snowball phase.


So after several million years of being frozen, this icy Earth with a hot atmosphere rich in carbon dioxide had reached a Goldilocks zone – too warm to stay completely frozen, too cold to lose its ice. This transitional period lasted around 100,000 years before the glaciers fully melted and present-day Svalbard was flooded by the sea.

Scooped by Dr. Stefan Gruenwald

First case of autoimmune encephalitis in a non-human species


Knut the polar bear may have met an early end but he wasn’t forgotten. The cute polar bear cub born at Berlin Zoo in 2006 and controversially reared by zookeepers, drowned as an adult after experiencing epileptic seizures. Now, the condition responsible for his death has been identified.


The cause of the seizures was unknown since no bacteria, virus or parasite could be found to explain the underlying brain inflammation. The mystery was finally solved by Harald Pruess from the German Centre for Neurodegenerative Diseases in Berlin and his team, who normally study dementia in people.


They analysed samples of Knut’s cerebrospinal fluid, which bathes the brain and spinal cord, and found high levels of an antibody known to attack a glutamate receptor in the brain. In humans, this is a sign of a disease called autoimmune encephalitis. Knut’s case is the first ever reported in a non-human.

Scooped by Dr. Stefan Gruenwald

97% of expert papers support human-caused global warming, 3% contrarian papers have flaws, study finds


Those who reject the 97% expert consensus on human-caused global warming often invoke Galileo as an example of when the scientific minority overturned the majority view. In reality, climate contrarians have almost nothing in common with Galileo, whose conclusions were based on empirical scientific evidence and supported by many scientific contemporaries, and who was persecuted by the religious-political establishment. Nevertheless, there’s a slim chance that the 2–3% minority is correct and the 97% climate consensus is wrong.


To evaluate that possibility, a new paper published in the journal Theoretical and Applied Climatology examines a selection of contrarian climate science research and attempts to replicate their results. The idea is that accurate scientific research should be replicable, and through replication we can also identify any methodological flaws in that research. The study also seeks to answer the question: why do these contrarian papers come to a different conclusion than 97% of the climate science literature?


This new study was authored by Rasmus Benestad, myself (Dana Nuccitelli), Stephan Lewandowsky, Katharine Hayhoe, Hans Olav Hygen, Rob van Dorland, and John Cook. Benestad (who did the lion’s share of the work for this paper) created a tool using the R programming language to replicate the results and methods used in a number of frequently-referenced research papers that reject the expert consensus on human-caused global warming. In using this tool, we discovered some common themes among the contrarian research papers.


Cherry picking was the most common characteristic they shared. We found that many contrarian research papers omitted important contextual information or ignored key data that did not fit the research conclusions.

Scooped by Dr. Stefan Gruenwald

MouthLab: New hand-held device that quickly picks up vital signs from patient's lips and fingertips


Vital sign monitors in hospitals are bulky, restrictive and capture limited information. A professor-engineer at Johns Hopkins has designed a battery-powered, hand-held, 3-D printed device that acts as a “check-engine light” for people. The device uses mouthpiece and thumb pad sensors to quickly test a patient’s blood pressure, breathing, blood oxygen, heart rate and heartbeat pattern.


In a study published in the September issue of the Annals of Biomedical Engineering, the MouthLab prototype’s measurements of heart rate, blood pressure, temperature, breathing rate and blood oxygen from 52 volunteers compared well with vital signs measured by standard hospital monitors. The device also takes a basic electrocardiogram.


“We see it as a ‘check-engine’ light for humans,” says the device’s lead engineer, Gene Fridman, Ph.D., an assistant professor of biomedical engineering and of otolaryngology–head and neck surgery at Johns Hopkins. “It can be used by people without special training at home or in the field.” He expects the device may be able to detect early signs of medical emergencies, such as heart attacks, or avoid unnecessary ambulance trips and emergency room visits when a patient’s vital signs are good.


Because it monitors vital signs by mouth, future versions of the device will be able to detect chemical cues in blood, saliva and breath that act as markers for serious health conditions. “We envision the detection of a wide range of disorders,” Fridman says, “from blood glucose levels for diabetics, to kidney failure, to oral, lung and breast cancers.”


The MouthLab prototype consists of a small, flexible mouthpiece like those that scuba divers use, connected to a hand-held unit about the size of a telephone receiver. The mouthpiece holds a temperature sensor and a blood volume sensor. The thumb pad on the hand-held unit has a miniaturized pulse oximeter — a smaller version of the finger-gripping device used in hospitals, which uses beams of light to measure blood oxygen levels. Other sensors measure breathing from the nose and mouth.


MouthLab also has three electrodes for ECGs — one on the thumb pad, one on the upper lip of the mouthpiece and one on the lower lip — that work about as well as the chest and ankle electrodes used on basic ECG equipment in many ambulances or clinics. That ECG signal is the basis for MouthLab’s novel way of recording blood pressure. When the signal shows the heart is contracting, the device optically measures changes in the volume of blood reaching the thumb and upper lip. Unique software converts the blood flow data into systolic and diastolic pressure readings. The study found that MouthLab blood pressure readings effectively match those taken with standard, arm-squeezing cuffs.
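The article does not say how MouthLab's "unique software" converts blood-flow timing into pressure readings. A common approach in the published literature is pulse-arrival-time estimation (the delay between the ECG R-peak and the optical pulse) mapped to pressure through a per-person calibration; the sketch below illustrates that general idea only, with hypothetical constants, and is not the device's actual algorithm.

```python
# The article does not disclose MouthLab's conversion algorithm; a common
# approach in the literature is pulse-arrival-time (PAT): the delay from the
# ECG R-peak to the optical pulse at the thumb or lip, mapped to pressure via
# a person-specific calibration. Everything below is hypothetical.
def pulse_arrival_time(r_peak_time_s, pulse_foot_time_s):
    """Delay (s) between the ECG R-peak and arrival of the optical pulse."""
    return pulse_foot_time_s - r_peak_time_s

def estimate_bp(pat_s, slope=-200.0, intercept=180.0):
    """Toy calibrated model: shorter arrival times mean higher pressure (mmHg)."""
    systolic = slope * pat_s + intercept
    diastolic = 0.6 * systolic + 10.0        # crude illustrative relation
    return systolic, diastolic

pat = pulse_arrival_time(r_peak_time_s=0.00, pulse_foot_time_s=0.25)
systolic, diastolic = estimate_bp(pat)
print(f"estimated blood pressure: {systolic:.0f}/{diastolic:.0f} mmHg")
```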


The hand unit relays data by Wi-Fi to a nearby laptop or smart device, where graphs display real-time results. The next generation of the device will display its own data readouts with no need for a laptop, says Fridman. Ultimately, he explains, patients will be able to send results to their doctors via cellphone, and an app will let physicians add them to patients’ electronic medical records.


A 3-D printer made the parts for the prototype, “which looks a lot like a hand-held taser,” Fridman says. “Our final version will be smaller, more ergonomic, more user-friendly and faster. Our goal is to obtain all vital signs in under 10 seconds.”

Scooped by Dr. Stefan Gruenwald

Milk production is older than mammals and traces back to the age of the dinosaurs


Many components of milk have an ancient origin. We know this because many milk-related genes are older than the mammals. Take the caseins, which are usually the most abundant protein in mammalian milk. They help in transporting nutrients like calcium and phosphorus to babies, which helps the babies grow their skeletons and tissues.


Researchers have found that all mammals have highly organized clusters of genes, which code for three main types of caseins.

From this we can deduce that milk caseins are ancient. Researchers believe caseins diverged into the three main types that we see today long before the early mammals separated into monotremes, marsupials and placental mammals. Slowly, milk caseins went from being a nutrient supplement to egg yolk, to a major source of nutrients for babies.


Researchers have also traced how mammals became less dependent on the nutrients in egg yolk. About 170 million years ago, important egg yolk proteins called vitellogenins began disappearing one by one, according to a 2008 study. Again, this was before true mammals walked on earth.


All modern birds and reptiles have three genes associated with the production of vitellogenins. Egg-laying mammalian ancestors also had three genes. But among living mammals, only the egg-laying monotremes have one functional vitellogenin gene, alongside two inactive ones. In marsupials and placental mammals, all three vitellogenin genes are turned off.


The mammals would only have turned off these genes if they had substitutes to hand. So there must have been an alternative source of nutrients available, such as casein, before the vitellogenins were deactivated.


If egg yolk proteins began disappearing long before mammals appeared, it suggests that milk was already the chief source of nutrients for mammals' egg-laying ancestors, which lived in the age of the dinosaurs.

Scooped by Dr. Stefan Gruenwald

Self-assembly of molecular Archimedean polyhedra

Chemists truly went back to the drawing board to develop new X-shaped organic building blocks that can be linked together by metal ions to form an Archimedean cuboctahedron. In the journal Angewandte Chemie, the scientists report that by changing the concentration or using different counterions, the cuboctahedron can be reversibly split into two octahedra—an interesting new type of fusion–fission switching process.


Archimedean polyhedra are a group of symmetrical solids with regular polygons for faces and identical vertices, like a classic soccer ball with its 12 pentagons and 20 hexagons. These forms are also found in nature: the rigid shells (capsids) of many viruses, as well as certain cellular transport vesicles, are Archimedean polyhedra. These biological forms are made by the self-assembly of individual protein building blocks. Chemists have frequently turned to this concept for inspiration to synthesize large molecular cages held together by coordination bonds.


A team headed by Chrys Wesdemiotis and George R. Newkome has now successfully produced an approximately 6 nm cuboctahedron out of organic molecules and metal ions. A cuboctahedron has a surface made of 8 triangles and 6 squares. The conceptual starting point was an X-shaped, organic building block, which, laid over the surface of a cuboctahedron, would give the correct angles between the edges, 60° and 90°. It should also be able to bind metal ions to hold everything together.
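As a quick sanity check on the geometry (not the chemistry), the short Python sketch below builds an idealized cuboctahedron from its 12 vertices, confirms the edge and face counts via Euler’s formula, and recovers the 60° and 90° angles between edges meeting at a vertex — exactly the angles the article says the X-shaped ligand was designed to present. The coordinates and tolerances are illustrative assumptions.

```python
import itertools, math
import numpy as np

# Idealized cuboctahedron: the 12 vertices are all permutations of (+-1, +-1, 0).
verts = np.array(sorted({p for s1 in (1, -1) for s2 in (1, -1)
                         for p in itertools.permutations((s1, s2, 0))}), dtype=float)

# Edges connect vertices separated by the minimal distance sqrt(2).
edges = [(i, j) for i, j in itertools.combinations(range(len(verts)), 2)
         if abs(np.linalg.norm(verts[i] - verts[j]) - math.sqrt(2)) < 1e-9]

V, E = len(verts), len(edges)
F = 2 - V + E                  # Euler's formula: V - E + F = 2
print(V, E, F)                 # -> 12 24 14  (8 triangles + 6 squares)

# Angles between edges sharing one vertex: 60 deg within a triangle face,
# 90 deg within a square face, 120 deg for pairs that do not share a face.
incident = [verts[j] - verts[0] for i, j in edges if i == 0]
angles = sorted({round(math.degrees(math.acos(
    float(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b)))), 1)
    for a, b in itertools.combinations(incident, 2)})
print(angles)                  # -> [60.0, 90.0, 120.0]
```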


Using 12 of these tailored X-shaped terpyridine ligands and 24 metal ions (zinc or cadmium), the researchers were able to make cuboctahedra that self-assembled from the individual building blocks. The team from the University of Akron, the University of Chicago (Argonne), the University of South Florida (Tampa), Florida Atlantic University (Boca Raton), the University of Tokyo (Japan), and the Tianjin University of Technology (China) used a variety of spectroscopic techniques, model calculations, and single-crystal analyses with synchrotron X-ray diffraction to verify the structure. They were even able to see the shapes of the individual molecules with an electron microscope.


One new feature they observed was that the cuboctahedra split apart into two octahedra when the concentration is reduced. If the solution concentration is then increased, the octahedra fuse back together into cuboctahedra. This process could also be initiated by switching between different counterions. This new process could allow for the production of a new series of nanoscale building blocks for the materials sciences. In addition, the zinc cuboctahedra may be suitable for use as transport systems for drugs.

Scooped by Dr. Stefan Gruenwald

Living In The Past: Why People With Dementia Switch Back To The Old Times


There is a link between the perception of time and memory function in those with dementia. Family members often report that their loved ones with dementia sometimes live in the past, even reverting to their first language. This is because memory is not just one process in the brain, but a collection of different systems. Those with Alzheimer’s disease may have impairments in short-term memory, while remote memory can be left relatively intact. So they’re able to remember public and personal events from many decades ago, but unable to recall what happened earlier that day.


A fascinating case study illustrates this dissociation between remote and short-term memory in Alzheimer’s disease. A retired taxi driver diagnosed with Alzheimer’s disease showed remarkable spatial memory of downtown Toronto, Canada, where he had driven taxis and worked as a courier for 45 years. This was despite showing impairments in short-term memory and general cognitive functioning.


But while those with Alzheimer’s disease can typically remember events in the distant past better than those in the immediate past, they still perform worse than older adults without Alzheimer’s disease in memory retrieval. Interestingly, it appears that events and facts most frequently retrieved and used over a lifetime are those better recalled by those with Alzheimer’s disease in late life, rather than those encountered at any particular age.


This frequency of use memory pattern is mirrored in bilingual people with dementia. A friend commented that her Yia-Yia (Grandmother), who immigrated to Australia from Greece over 50 years ago, is increasingly conversing in Greek despite predominantly speaking English for decades (causing problems for my monolingual English-speaking friend).


Those with dementia often revert to their first language. This commonly begins with utterances from the first language appearing in conversation in the second language. It occurs more often in those who are less proficient in their second language, rather than being related to the age at which they acquired it.


So, how does this happen? Probably because familiar memories rely more on the brain’s cortex, its outer layer, while short-term memories rely more on a structure called the hippocampus. The hippocampus is typically affected at the start of late-life dementias such as Alzheimer’s disease, with regions of the cortex affected subsequently.

Rescooped by Dr. Stefan Gruenwald from Next Generation Sequencing (NGS)

Personalized care for aortic aneurysms, based on gene testing, has arrived


Researchers at the Aortic Institute at Yale have tested the genomes of more than 100 patients with thoracic aortic aneurysms, a potentially lethal condition, and provided genetically personalized care. Their work will also lead to the development of a “dictionary” of genes specific to the disease, according to researchers.


The study was published early online in The Annals of Thoracic Surgery. Experts have known for more than a decade that thoracic aortic aneurysms — abnormal enlargements of the aorta in the chest area — run in families and are caused by specific genetic mutations. Until recently, comprehensive testing for these mutations has been both expensive and impractical. To streamline testing, the Aortic Institute collaborated with Dr. Allen Bale of Yale’s Department of Genetics to launch a program to test whole genomes of patients with the condition.


Over a period of three years, the researchers applied a technology known as Whole Exome Sequencing (WES) to more than 100 individuals with these aneurysms. “To our knowledge, it’s the first widespread application of this technology to this disease,” said lead author and cardiac surgeon Dr. John A. Elefteriades, director of the institute.


The researchers detected four mutations known to cause thoracic aortic aneurysms. “The key findings are that this technology can be applied to this disease and it identifies a lot of patients with genetic mutations,” said Elefteriades. Additionally, the testing program uncovered 22 previously unknown gene variants that likely also contribute to the condition.
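Conceptually, this screening step amounts to intersecting each patient’s exome variant calls with a panel of genes already linked to thoracic aortic aneurysm. The Python sketch below illustrates that idea against a plain VCF file; the gene list is only a sample of genes commonly cited for the condition in the literature, and the GENE= INFO tag and the flag_variants helper are assumed conventions, not the Yale pipeline, which would normally rely on a dedicated annotator and a curated panel.

```python
# Hypothetical sketch: flag exome variants that fall in genes linked to
# thoracic aortic aneurysm. The gene set is a literature-based example;
# the GENE=<symbol> INFO tag is an assumed annotation format.
ANEURYSM_GENES = {"FBN1", "TGFBR1", "TGFBR2", "ACTA2", "MYH11", "SMAD3"}

def flag_variants(vcf_path):
    hits = []
    with open(vcf_path) as vcf:
        for line in vcf:
            if line.startswith("#"):
                continue                                   # skip header lines
            fields = line.rstrip("\n").split("\t")
            chrom, pos, _id, ref, alt, _qual, _filt, info = fields[:8]
            # read a GENE=<symbol> tag assumed to be present in the INFO column
            tags = dict(f.split("=", 1) for f in info.split(";") if "=" in f)
            if tags.get("GENE") in ANEURYSM_GENES:
                hits.append((tags["GENE"], chrom, pos, ref, alt))
    return hits
```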


Using the test results, the clinicians were able to provide treatment tailored to each patient’s genetic profile. “Personalized aortic aneurysm care is now a reality,” Elefteriades noted. The personalized care ranged from more frequent imaging tests to preventive surgery for those most at risk. “Patients who have very dangerous mutations are getting immediate surgery,” he said.


Given that aneurysm disease is a highly heritable condition that can affect each generation, the researchers offered testing to family members of patients and found mutations in relatives with no clinical signs of disease.


The researchers anticipate identifying more gene variants over time, accumulating a whole dictionary of mutations. “In a few years, we’re going to have discovered many new genes and be able to offer personalized care to an even greater percentage of aneurysm patients,” Elefteriades said.


Via Integrated DNA Technologies
Scooped by Dr. Stefan Gruenwald

Mayo Clinic researchers find new code that makes differentiation of cancer cells possible


Cancer researchers dream of the day they can force tumor cells to morph back to the normal cells they once were. Now, researchers on Mayo Clinic’s Florida campus have discovered a way to potentially reprogram cancer cells back to normalcy.


The finding, published in Nature Cell Biology, represents “an unexpected new biology that provides the code, the software for turning off cancer,” says the study’s senior investigator, Panos Anastasiadis, Ph.D., chair of the Department of Cancer Biology on Mayo Clinic’s Florida campus.


That code was unraveled by the discovery that adhesion proteins — the glue that keeps cells together — interact with the microprocessor, a key player in the production of molecules called microRNAs (miRNAs). The miRNAs orchestrate whole cellular programs by simultaneously regulating expression of a group of genes. The investigators found that when normal cells come in contact with each other, a specific subset of miRNAs suppresses genes that promote cell growth. However, when adhesion is disrupted in cancer cells, these miRNAs are misregulated and cells grow out of control. The investigators showed, in laboratory experiments, that restoring the normal miRNA levels in cancer cells can reverse that aberrant cell growth.


“The study brings together two so-far unrelated research fields — cell-to-cell adhesion and miRNA biology — to resolve a long-standing problem about the role of adhesion proteins in cell behavior that was baffling scientists,” says the study’s lead author Antonis Kourtidis, Ph.D., a research associate in Dr. Anastasiadis’ lab. “Most significantly, it uncovers a new strategy for cancer therapy,” he adds.


That problem arose from conflicting reports about E-cadherin and p120 catenin — adhesion proteins that are essential for normal epithelial tissues to form, and which have long been considered to be tumor suppressors. “However, we and other researchers had found that this hypothesis didn’t seem to be true, since both E-cadherin and p120 are still present in tumor cells and required for their progression,” Dr. Anastasiadis says. “That led us to believe that these molecules have two faces — a good one, maintaining the normal behavior of the cells, and a bad one that drives tumorigenesis.”


Their theory turned out to be true, but what was regulating this behavior was still unknown. To answer this, the researchers studied a new protein called PLEKHA7, which associates with E-cadherin and p120 only at the top, or the “apical” part of normal polarized epithelial cells. The investigators discovered that PLEKHA7 maintains the normal state of the cells, via a set of miRNAs, by tethering the microprocessor to E-cadherin and p120. In this state, E-cadherin and p120 exert their good tumor suppressor sides.


However, “when this apical adhesion complex was disrupted after loss of PLEKHA7, this set of miRNAs was misregulated, and the E-cadherin and p120 switched sides to become oncogenic,” Dr. Anastasiadis says. “We believe that loss of the apical PLEKHA7-microprocessor complex is an early and somewhat universal event in cancer,” he adds. “In the vast majority of human tumor samples we examined, this apical structure is absent, although E-cadherin and p120 are still present. This produces the equivalent of a speeding car that has a lot of gas (the bad p120) and no brakes (the PLEKHA7-microprocessor complex).”

https://www.youtube.com/watch?v=yGYTLOGZ40U 

Rescooped by Dr. Stefan Gruenwald from DNA & RNA Research

New DNA code makes synthetic proteins


The world's first functioning organism with an expanded DNA alphabet has now met another milestone in artificial life: making proteins that don't exist in nature. The organism, a bacterium created by scientists at The Scripps Research Institute, incorporates two synthetic DNA letters, called X and Y, along with the four natural ones, A, T, C and G. A team led by Floyd Romesberg published a study last year demonstrating that the organism, an engineered strain of E. coli, can function and replicate with the synthetic DNA.


Synthorx, a biotech startup that licensed the technology from Scripps, has now used the bacterium to produce proteins incorporating artificial amino acids, the building blocks of proteins. These are placed at precisely specified intervals along the protein sequence, obeying the code of the expanded DNA alphabet.
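To make the idea concrete: with six letters instead of four there are 216 possible codons instead of 64, and a codon containing X or Y can be reserved for an unnatural amino acid at a chosen position. The toy Python translation below illustrates this; the specific expanded codon assignments (AXC, GYT) and the amino-acid names Uaa1/Uaa2 are hypothetical placeholders, not the published code.

```python
# Toy sketch of translation with an expanded genetic alphabet (A, T, C, G + X, Y).
# Which expanded codons map to which unnatural amino acids is an assumption here.
STANDARD = {"ATG": "Met", "AAA": "Lys", "GGC": "Gly", "TTC": "Phe", "TAA": "STOP"}
EXPANDED = {"AXC": "Uaa1", "GYT": "Uaa2"}        # hypothetical unnatural assignments
CODONS = {**STANDARD, **EXPANDED}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):          # read the sequence codon by codon
        aa = CODONS.get(dna[i:i + 3], "???")
        if aa == "STOP":
            break
        protein.append(aa)
    return "-".join(protein)

print(translate("ATGAAAAXCGGCTTCTAA"))           # -> Met-Lys-Uaa1-Gly-Phe
```

The key point the sketch captures is positional control: because the unnatural codon appears at a defined place in the gene, the artificial amino acid lands at a precisely specified position in the protein.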


The La Jolla startup plans to make drugs out of these artificial proteins with tunable properties, such as how long they act inside the body and how tightly they bind to their target. By using the bacterium as a living factory, Synthorx plans to make these drugs far more efficiently and cheaply than by traditional chemistry.


Via Integrated DNA Technologies
Rescooped by Dr. Stefan Gruenwald from DNA & RNA Research

The human genome takes shape and shifts over time


If you could unravel all the DNA in a single human cell and stretch it out, you’d have a molecular ribbon about 2 meters long and 2 nanometers across. Now imagine packing it all back into the cell’s nucleus, a container only 5 to 10 micrometers wide. That would be like taking a telephone cord that runs from Manhattan to San Francisco and cramming it into a two-story suburban house.
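The analogy holds up to a back-of-the-envelope check: the length-to-container ratio for DNA in a nucleus and for a coast-to-coast cord in a house are both on the order of 10^5. The numbers in the Python snippet below are rough assumptions, chosen only to show the orders of magnitude.

```python
# Back-of-the-envelope check of the packing analogy (all numbers approximate).
dna_length_m  = 2.0         # total DNA per human cell, stretched out
nucleus_m     = 10e-6       # nucleus diameter, ~5-10 micrometers
cord_length_m = 4_700e3     # rough Manhattan-to-San Francisco distance
house_m       = 20.0        # rough footprint of a two-story house

print(dna_length_m / nucleus_m)    # ~2e5-fold length-to-container ratio
print(cord_length_m / house_m)     # ~2.4e5 -- the same order of magnitude
```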


Fitting all that genetic material into a cramped space is step one. Just as important is how the material is organized. The cell’s complete catalog of DNA — its genome — must be configured in a specific three-dimensional shape to work properly. That 3-D organization of nuclear material — a configuration called the nucleome — helps control how and when genes are activated, defining the cell’s identity and its job in the body.


Researchers have long realized the importance of DNA’s precisely arranged structure. But only recently have new technologies made it possible to explore this architecture deeply. With simulations, indirect measurements and better imaging, scientists hope to reveal more about how the nucleome’s intricate folds regulate healthy cells. Better views will also help scientists understand the role that disrupted nucleomes play in aging and diseases, such as progeria and cancer.


Via Integrated DNA Technologies
Scooped by Dr. Stefan Gruenwald

A new approach towards solving mysteries of the interstellar 'empty' space


It is one of the most intriguing questions in astrochemistry: the mystery of the diffuse interstellar bands (DIBs), a collection of about 400 absorption bands that show up in spectra of light that reaches the earth after having traversed the interstellar medium. Despite intense research efforts over the last few decades, an assignment of the DIBs has remained elusive, although indications exist that they may arise from the presence of large hydrocarbon molecules in interstellar space. Recent experiments at the Max Born Institute lend novel credibility to this hypothesis.


Among the hydrocarbons that are possible carriers of the DIBs, polycyclic aromatic hydrocarbons (PAHs) are considered particularly promising. The presence of PAH molecules has previously been inferred in many astronomical objects, as well as in the interstellar medium of the Milky Way. However, within the astronomical community, the linewidths of the DIBs, which are indicative of the lifetimes of the excited states involved in the absorption process, are often taken as an argument against PAHs. The new experiment was performed in collaboration with scientists from the University of Lyon and aided by theoretical input from scientists at the universities of Heidelberg, Hyderabad and Leiden. It shows that the lifetimes of excited states of small to medium-size PAHs are consistent with the linewidths observed for the DIBs.


In the experiments, a series of small to medium-size PAH molecules (naphthalene, anthracene, pyrene and tetracene, containing 2–4 benzene-like aromatic rings) were ionized by an ultrashort extreme-ultraviolet (XUV) laser pulse. As a result of electron correlation, the absorption of an XUV photon not only removed one of the electrons, but also left the molecular ion electronically excited. The lifetimes of these excited cationic states were monitored by probing the ions with a moderately strong, time-delayed infrared (IR) laser pulse. When the ions are formed, the electronic excitation is at its highest, and only one or a few IR photons are needed to remove a second electron. A little later, however, when the ion relaxes and energy is transferred from the electronic to the vibrational degrees of freedom, more IR photons are needed to remove the second electron. In other words, monitoring the formation of doubly charged ions as a function of the time delay between the XUV and IR laser pulses allowed extraction of the lifetimes of the states formed by the XUV ionization process. As it turned out, and as was further supported by high-level calculations, these lifetimes of a few tens of femtoseconds are well within the range required for potential carriers of the DIBs.
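In data-analysis terms, the delay scan is a decay curve: the dication yield falls off as the cation relaxes, so fitting an exponential to it returns the excited-state lifetime. The Python sketch below runs such a fit on synthetic data with an assumed 35 fs lifetime; the model function, noise level and lifetime are illustrative assumptions, not the MBI group’s actual analysis, which also has to account for the finite pulse durations.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch (synthetic data): the dication yield decays as the cation
# relaxes, so an exponential fit to the XUV-IR delay scan recovers the lifetime.
def dication_yield(delay_fs, amplitude, tau_fs, offset):
    return amplitude * np.exp(-delay_fs / tau_fs) + offset

rng = np.random.default_rng(0)
delays = np.linspace(0, 200, 60)                        # pump-probe delay in fs
truth = dication_yield(delays, 1.0, 35.0, 0.05)         # assume a ~35 fs lifetime
data = truth + 0.02 * rng.standard_normal(delays.size)  # add detection noise

popt, _ = curve_fit(dication_yield, delays, data, p0=(1.0, 20.0, 0.0))
print(f"fitted lifetime: {popt[1]:.1f} fs")             # ~35 fs, i.e. a few tens of fs
```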


Beyond their relevance for the DIBs, the new experiments also bear on the further development of attosecond science. One of the most sought-after goals in attosecond science at the moment is the observation of charge migration, i.e. the ultrafast (attosecond to few-femtosecond) motion of an electron or hole through a molecular structure. It has been proposed that charge migration may provide new opportunities for controlling chemical reactivity, a goal that is as old as chemical research itself. First indications that attosecond to few-femtosecond timescale dynamics can be observed in polyatomic molecules were obtained by researchers at the University of Milan last year. The PAH molecules investigated in the experiments at MBI represent the largest molecular species yet to which ultrafast XUV-IR pump-probe spectroscopy has been applied. Besides the insights into ultrafast electronic relaxation obtained from the current work, the theoretical work performed to interpret the experiments suggests that PAH molecules are also ideal candidates for observing attosecond to few-femtosecond timescale charge migration. Such experiments will therefore be attempted next.


Fig.: Schematic of the experiment. (a) Schematic of the XUV-induced dynamics in PAH molecules studied in this work. Excited states are created in the valence shell of the cation through one of two possibilities, namely the formation of a single-hole configuration or the formation of a 2-hole–1-particle configuration (involving a shake-up process) (left); IP stands for ionization potential. The cation can be further ionized by the IR probe laser, provided that non-adiabatic relaxation has not yet taken place (middle). After relaxation, the IR probe can no longer ionize the cation (right). (b) Two-colour XUV-IR ion signals measured for anthracene, as a function of the detected mass-to-charge ratio and the XUV-IR delay. XUV-only and IR-only signals have been subtracted. The XUV pump and IR probe pulses overlap at zero delay (black dashed line). A red colour corresponds to a signal increase, while a blue colour signifies depletion. For positive XUV-IR delays, a very fast dynamics is observed for the doubly charged anthracene ion (A2+, m/q = 89). As explained in the text, the measurement reflects non-adiabatic relaxation in the anthracene cation (A+). The dynamics observed in the first fragment (A−C2H2+) is not discussed in this article.
