The first report from a big public-private project to improve genetic testing reveals it is not as rock-solid as many people believe, with flaws that result in some people being wrongly advised to worry about a disease risk and others wrongly told they can relax.
Researchers say the study shows consumers need to be careful both about choosing where to have a gene test done and about acting on the results, such as having or forgoing preventive surgery.
"We have very clear documentation that there are differences in what patients are getting" in terms of how tests on the same gene variations are interpreted, said the study leader, Heidi Rehm, genetics lab chief at Brigham and Women's Hospital in Boston.
When deciding to get tested, either through a doctor's office or by sending in a swab to a private company, "patients need to choose labs that are sharing their data" with the broader research community so scientists can compare and learn from the results and make testing more accurate for everyone, she said.
Dozens of companies now offer gene tests to gauge a person's risk of developing various disorders. One of the newest tests on the market costs $250 and checks about 20 genes that can affect breast cancer risk.
But not all gene mutations, or variants, are equal. Some raise risk a lot, others just a little, and some not at all. Most are of unknown significance — a quandary for doctors and patients alike. And most variants are uncommon, making it even tougher to figure out which ones matter and how much.
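The categories described above (risk-raising, no effect, unknown significance) are commonly reported on a five-level clinical scale. A minimal Python sketch of how such classifications might be compared across labs; the tier names follow common clinical usage, and the helper functions are illustrative, not taken from the study:

```python
# Five-tier scale commonly used when reporting variant classifications;
# the helper functions below are illustrative, not from the study.
TIERS = [
    "pathogenic",
    "likely pathogenic",
    "uncertain significance",  # the "unknown significance" quandary in the text
    "likely benign",
    "benign",
]

def is_actionable(classification: str) -> bool:
    """Only (likely) pathogenic calls typically drive medical decisions."""
    return classification in ("pathogenic", "likely pathogenic")

def is_discordant(calls: list[str]) -> bool:
    """Labs disagree when the same variant receives different classifications."""
    return len(set(calls)) > 1

# A variant one lab calls pathogenic and another calls uncertain is discordant:
print(is_discordant(["pathogenic", "uncertain significance"]))  # → True
```

A discordance like the one in the last line is exactly the kind that can sway a medical decision, since one call is actionable and the other is not.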
To solve these mysteries and give patients better information, the U.S. government several years ago helped form and fund ClinVar, a database for researchers around the world to pool gene findings, coded to keep patients' identities confidential. More than 300 labs contribute to it, including universities such as Harvard and Emory and some private companies such as Ambry Genetics and GeneDX.
On Wednesday, the group made its first report at a conference in Washington. The study also was published online by the New England Journal of Medicine.
So far, the project has tracked more than 172,000 variants in nearly 23,000 genes, a small portion of the millions known to exist but some of the more common ones that have been identified.
More than 118,000 of these variants have been interpreted for their effect on disease risk, and 11 percent of those have been analyzed by more than one lab, allowing results to be compared. In 17 percent of those cases, labs interpreted the same finding differently, as either raising the risk of a disease, having no effect on it, or having an unknown effect.
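The scale of the disagreement implied by those percentages can be checked with simple arithmetic; the figures below come from the text, and the study's exact counts may differ slightly:

```python
# Figures from the text above; the study's exact counts may differ slightly.
interpreted_variants = 118_000   # variants interpreted for disease risk
multi_lab_fraction   = 0.11      # share analyzed by more than one lab
discordant_fraction  = 0.17      # share of those with conflicting calls

multi_lab = interpreted_variants * multi_lab_fraction
discordant = multi_lab * discordant_fraction

print(round(multi_lab))   # → 12980 variants compared across labs
print(round(discordant))  # → 2207 variants with conflicting interpretations
```

Even at these rough numbers, thousands of variants carry interpretations that depend on which lab did the analysis.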
At least 415 gene variants now have different interpretations that could sway a medical decision, such as whether to have healthy breasts or ovaries removed to lower the risk of cancer, or to get a medical device such as an implanted defibrillator to cut the risk of sudden cardiac death.
"The magnitude of this problem is bigger than most people thought," said Michael Watson, executive director of the American College of Medical Genetics and Genomics, one of the study's authors and a partner in the data pooling project.
And it can harm patients. Rehm described a woman who had genetic testing and wrongly was told she did not have elevated risks for breast cancer. She later developed the disease but could have had preventive surgery had the right gene analyses been done.
An independent expert, Dr. Eric Topol, director of the Scripps Translational Science Institute in La Jolla, California, commended the study leaders and the database project for "cleaning up the mess" from labs that have not shared data in the past.
"We need millions of people sequenced, sharing all the data," to make things better, he said. With more sharing, the mystery gene variant problem "will largely go away, but that's going to take a few years at least."
Funding from NIST AMTech Program Supplements SRC Semiconductor Synthetic Biology Effort, Helps Industry Develop Technology Roadmap
Biotechnology is a broadly interdisciplinary field focused on the use of living cells or organisms to solve established problems in medicine, food production and agriculture. Synthetic biology, the science of engineering complex biological systems that do not exist in nature, continues to provide the biotechnology industry with tools, technologies and intellectual property leading to improved cellular performance. One key aspect of synthetic biology is the engineering of deliberately reprogrammed designer cells whose behavior can be controlled over time and space.

This review discusses the most commonly used techniques for engineering mammalian designer cells: control elements acting at the transcriptional and translational levels of target gene expression determine the kinetic and dynamic profiles, while coupling them to a variety of extracellular stimuli permits remote control with user-defined trigger signals. Designer mammalian cells with novel or improved biological functions not only directly improve production efficiency in biopharmaceutical manufacturing but also open the door to cell-based treatment strategies in molecular and translational medicine. In the future, the rational combination of multiple sets of designer cells could permit the construction and regulation of higher-order systems of increased complexity, enabling the molecular reprogramming of tissues, organisms or even populations with the highest precision.
by Kuntal Mukherjee, Souryadeep Bhattacharyya and Pamela Peralta-Yahya
"A key limitation to engineering microbes for chemical production is a reliance on low-throughput chromatography-based screens for chemical detection. While colorimetric chemicals are amenable to high-throughput screens, many value-added chemicals are not colorimetric and require sensors for high-throughput screening. Here, we use G-protein coupled receptors (GPCRs) known to bind medium-chain fatty acids in mammalian cells to rapidly construct chemical sensors in yeast. Medium-chain fatty acids are immediate precursors to the advanced biofuel fatty acid methyl esters, which can serve as a “drop-in” replacement for D2 diesel. One of the sensors detects even-chain C8–C12 fatty acids with a 13- to 17-fold increase in signal after activation, with linear ranges up to 250 μM. Introduction of a synthetic response unit alters both dynamic and linear range, improving the sensor response to decanoic acid to a 30-fold increase in signal after activation, with a linear range up to 500 μM. To our knowledge, this is the first report of a whole-cell medium-chain fatty acid biosensor, which we envision could be applied to the evolutionary engineering of fatty acid-producing microbes. Given the affinity of GPCRs for a wide range of chemicals, it should be possible to rapidly assemble new biosensors by simply swapping the GPCR sensing unit. These sensors should be amenable to a variety of applications that require different dynamic and linear ranges, by introducing different response units."
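The fold-changes and linear ranges reported in the abstract can be illustrated with a standard Hill dose-response model. A sketch under assumed parameters; the basal signal, EC50 and Hill coefficient below are illustrative, not fitted values from the paper:

```python
# Standard Hill dose-response model for a whole-cell biosensor.
# All parameters are illustrative assumptions, not fitted values
# from the study.
def sensor_signal(conc_uM, basal=1.0, fold_max=13.0, ec50_uM=125.0, n=1.0):
    """Reporter signal as a function of fatty acid concentration (in μM)."""
    occupancy = conc_uM**n / (ec50_uM**n + conc_uM**n)  # receptor activation
    return basal * (1.0 + (fold_max - 1.0) * occupancy)

# Fold activation at saturating ligand relative to zero ligand
# approaches the 13-fold figure quoted for the even-chain sensor:
print(sensor_signal(1e6) / sensor_signal(0.0))
```

Swapping in a different response unit, as the authors describe for decanoic acid, would correspond here to changing `fold_max` and `ec50_uM`, which reshapes both the dynamic and linear range.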
RNA-based temperature sensing is common in bacteria that live in fluctuating environments. Most naturally-occurring RNA thermosensors are heat-inducible, have long sequences, and function by sequestering the ribosome binding site in a hairpin structure at lower temperatures. Here, we demonstrate the de novo design of short, heat-repressible RNA thermosensors. These thermosensors contain a cleavage site for RNase E, an enzyme native to Escherichia coli and many other organisms, in the 5' untranslated region of the target gene. At low temperatures, the cleavage site is sequestered in a stem-loop, and gene expression is unobstructed. At high temperatures, the stem-loop unfolds, allowing for mRNA degradation and turning off expression. We demonstrated that these thermosensors respond specifically to temperature and provided experimental support for the central role of RNase E in the mechanism. We also demonstrated the modularity of these RNA thermosensors by constructing a three-input composite circuit that utilizes transcriptional, post-transcriptional, and post-translational regulation. A thorough analysis of the 24 thermosensors allowed for the development of design guidelines for systematic construction of similar thermosensors in future applications. These short, modular RNA thermosensors can be applied to the construction of complex genetic circuits, facilitating rational reprogramming of cellular processes for synthetic biology applications.
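The stem-loop behavior described above can be sketched with a simple two-state van 't Hoff melting model, in which the fraction of transcripts with the RNase E cleavage site exposed rises sigmoidally with temperature. The melting temperature and folding enthalpy below are illustrative assumptions, not measured values from the paper:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def fraction_unfolded(temp_C, tm_C=37.0, dH_fold=-40.0):
    """Two-state van 't Hoff model: share of thermosensor transcripts with
    the RNase E site exposed. tm_C and dH_fold are illustrative assumptions,
    not measured values."""
    T, Tm = temp_C + 273.15, tm_C + 273.15
    dG_fold = dH_fold * (1.0 - T / Tm)   # ΔG = ΔH - TΔS, with ΔS = ΔH/Tm
    K_fold = math.exp(-dG_fold / (R * T))  # folding equilibrium constant
    return 1.0 / (1.0 + K_fold)

# Mostly folded (site protected, gene ON) at low temperature;
# mostly unfolded (site cleavable, gene OFF) at high temperature:
print(fraction_unfolded(30.0) < 0.5 < fraction_unfolded(42.0))  # → True
```

In this picture, a heat-repressible sensor is tuned by shifting the stem-loop's stability (here `tm_C` and `dH_fold`), which is consistent with the design-guideline framing in the abstract, though the actual guidelines come from the authors' 24-sensor analysis.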
The biosynthesis of benzylisoquinoline alkaloids such as morphine requires tyrosine oxidases, which are prone to overoxidation. A colorimetric readout that co-opts betaxanthin enzymes now enables discovery of an improved oxidase that, with other enzymes, makes reticuline in yeast.