Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video

Unlimited, at-home coronavirus testing for your organization


For more info and video, go to Genomatch.me

Toll free: 1-800-605-8422
Regular line: 1-858-345-4817

  

This newsletter is aggregated from over 1450 news sources:

http://www.genautica.com/links/1450_news_sources.html

 

All my Tweets and Scoop.It! posts sorted and searchable:

archived twitter feed

•••••••••••••••••••••••••••••••••••••••••••••••••••••••

NOTE: All articles in the Amazing Science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all the relevant postings sorted by topic.

 

You can also type your own query: e.g., if you are looking for articles involving "DNA" as a keyword, use http://www.scoop.it/t/amazing-science/?q=dna

 

Or CLICK on the little FUNNEL symbol at the top right of the screen.

 

===========================================

 



Global profiling of the proteome: Immature sperm cells harbor 6,045 proteins


Immature sperm cells isolated from the caput epididymis harbor the most complex proteome of the three samples analyzed, comprising 6,045 proteins, including 1,284 (93.9%) of all proteins previously identified in independent studies of mouse caput epididymal sperm cells, thereby extending the proteomic annotation of this sperm population by an additional 4,761 proteins. However, the scientists also identified a dramatic reduction in the number of proteins comprising the mature sperm proteome, with as many as 3,385 proteins apparently being lost from spermatozoa during their epididymal transit.

Notably, while this proportional loss of 56% of the sperm proteome may appear at odds with accepted paradigms of sperm maturation, it nevertheless draws striking parallels to the 42% loss of the sperm proteome reported in the only other large-scale proteomic investigation of mouse epididymal spermatozoa (Skerget et al., 2015). Notwithstanding this substantial loss of proteins coincident with epididymal transit, cauda epididymal sperm cells retained a core proteome of 2,660 proteins in common with their caput epididymal counterparts. This core proteome was supplemented by 264 proteins uniquely detected in NC cauda epididymal spermatozoa, a proportional gain of 9% of the total proteome.

The resulting proteome of mature cauda epididymal sperm (2,924 proteins) included 87.9% of the proteins (i.e., 1,062 proteins) identified by Skerget et al., 2015 in their proteomic study of mouse spermatozoa, as well as an additional 1,999 proteins that were not previously characterized.
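
To keep the protein bookkeeping above straight, the short sketch below simply re-derives the reported totals from the quoted figures (plain Python; the variable names are mine, the numbers come from the paragraph above):

# Re-deriving the sperm proteome accounting quoted above (illustrative only).
caput_total = 6045             # proteins detected in immature (caput) sperm
lost_in_transit = 3385         # proteins apparently shed during epididymal transit

core_retained = caput_total - lost_in_transit        # shared caput/cauda core
print(core_retained)                                  # 2660, as reported

cauda_unique = 264                                    # proteins detected only in cauda sperm
cauda_total = core_retained + cauda_unique
print(cauda_total)                                    # 2924, the mature sperm proteome

print(round(lost_in_transit / caput_total * 100))     # ~56% proportional loss
print(round(cauda_unique / cauda_total * 100))        # ~9% proportional gain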


Cell division drives DNA methylation loss in late-replicating domains in primary human cells


DNA methylation undergoes dramatic age-related changes, first described more than four decades ago. Loss of DNA methylation within partially methylated domains (PMDs), late-replicating regions of the genome attached to the nuclear lamina, advances with age in normal tissues, and is further exacerbated in cancer.

 

Researchers now show experimental evidence that this DNA hypomethylation is directly driven by proliferation-associated DNA replication. Within PMDs, loss of DNA methylation at low-density CpGs in A:T-rich immediate context (PMD solo-WCGWs) tracks cumulative population doublings in primary cell culture. Cell cycle deceleration results in a proportional decrease in the rate of DNA hypomethylation, and blocking DNA replication via Mitomycin C treatment halts methylation loss. Loss of methylation continues unabated after TERT immortalization until finally reaching a severely hypomethylated equilibrium. Ambient oxygen culture conditions increase the rate of methylation loss compared to low-oxygen conditions, suggesting that some methylation loss may occur during unscheduled, oxidative damage repair-associated DNA synthesis.

 

Also, they were able to validate a model which estimates the relative cumulative replicative histories of human cells, which they call “RepliTali” (Replication Times Accumulated in Lifetime). DNA methylation loss has been observed in aging tissues and cancers for decades. Researchers from Van Andel Institute have now provided experimental evidence that this process is directly driven by cell division, something that hasn't been directly shown before.
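
As a rough illustration of how a replication clock in the spirit of RepliTali could be built from such data, the sketch below fits a penalized linear model that predicts cumulative population doublings from methylation values at PMD solo-WCGW CpGs. The elastic-net choice, variable names and synthetic data are assumptions for illustration only, not the authors' actual pipeline.

import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

# X: methylation beta values (samples x PMD solo-WCGW CpGs); y: cumulative population doublings.
# Synthetic stand-in data; in practice these would come from methylation profiling of cultured cells.
rng = np.random.default_rng(0)
n_samples, n_cpgs = 200, 500
true_w = rng.normal(0, 0.05, n_cpgs)
X = rng.uniform(0.2, 0.9, size=(n_samples, n_cpgs))
y = X @ true_w + rng.normal(0, 0.1, n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The penalty selects a sparse set of informative CpGs, mirroring the idea of a compact clock.
clock = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X_train, y_train)
print("held-out R^2:", round(clock.score(X_test, y_test), 3))
print("CpGs retained:", int(np.sum(clock.coef_ != 0)))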


Physicists create a wormhole using a quantum computer and send a signal through the open wormhole, though it’s not clear in what sense the wormhole can be said to exist


Physicists have purportedly created the first-ever wormhole, a kind of tunnel theorized in 1935 by Albert Einstein and Nathan Rosen that leads from one place to another by passing into an extra dimension of space. The wormhole emerged like a hologram out of quantum bits of information, or “qubits,” stored in tiny superconducting circuits. By manipulating the qubits, the physicists then sent information through the wormhole, and they reported it in the journal Nature.

 

The team, led by Maria Spiropulu of the California Institute of Technology, implemented the novel “wormhole teleportation protocol” using Google’s quantum computer, a device called Sycamore housed at Google Quantum AI in Santa Barbara, California. With this first-of-its-kind “quantum gravity experiment on a chip,” as Spiropulu described it, she and her team beat a competing group of physicists who aim to do wormhole teleportation with IBM and Quantinuum’s quantum computers.

 

When Spiropulu saw the key signature indicating that qubits were passing through the wormhole, she said, “I was shaken.” The experiment can be seen as evidence for the holographic principle, a sweeping hypothesis about how the two pillars of fundamental physics, quantum mechanics and general relativity, fit together. Physicists have strived since the 1930s to reconcile these disjointed theories — one, a rulebook for atoms and subatomic particles, the other, Einstein’s description of how matter and energy warp the space-time fabric, generating gravity. The holographic principle, ascendant since the 1990s, posits a mathematical equivalence or “duality” between the two frameworks. It says the bendy space-time continuum described by general relativity is really a quantum system of particles in disguise. Space-time and gravity emerge from quantum effects much as a 3D hologram projects out of a 2D pattern.

 

Indeed, the new experiment confirms that quantum effects, of the type that we can control in a quantum computer, can give rise to a phenomenon that we expect to see in relativity — a wormhole. The evolving system of qubits in the Sycamore chip “has this really cool alternative description,” said John Preskill, a theoretical physicist at Caltech who was not involved in the experiment. “You can think of the system in a very different language as being gravitational.”

 

To be clear, unlike an ordinary hologram, the wormhole isn’t something we can see. While it can be considered “a filament of real space-time,” according to co-author Daniel Jafferis of Harvard University, lead developer of the wormhole teleportation protocol, it’s not part of the same reality that we and the Sycamore computer inhabit. The holographic principle says that the two realities — the one with the wormhole and the one with the qubits — are alternate versions of the same physics, but how to conceptualize this kind of duality remains mysterious.

 

Opinions will differ about the fundamental implications of the result. Crucially, the holographic wormhole in the experiment consists of a different kind of space-time than the space-time of our own universe. It’s debatable whether the experiment furthers the hypothesis that the space-time we inhabit is also holographic, patterned by quantum bits.


Storms are getting worse — Rapid rain bursts in Australia have become at least 40% more intense in last 2 decades


A series of major floods in Australia has made global headlines in recent years. People around the world were shocked to see Sydney, the city known for the 2000 Olympics, the Harbour Bridge, the Opera House, sunshine and Bondi beach culture inundated with flash floods this year. But were these floods a freak occurrence or a sign of things to come?

Recent research has found an alarming increase of at least 40% in the rate at which rain falls in the most intense rapid rain bursts in Sydney over the past two decades. This rapid increase in peak rainfall intensity has never been reported elsewhere, but may be happening in other parts of the world.

The findings of a recent study, published in Science, have major implications for the city’s preparedness for flash flooding. More intense downpours are likely to overwhelm stormwater systems that were designed for past conditions.

The study demonstrated the increases in the rate of rainfall of rapid rain bursts for each of three weather radars (at Newcastle, Terrey Hills and Wollongong). All radars showed a rate of change of at least 20% per decade. To increase the climate scientists' confidence, they also calculated the change for rapid rain bursts observed by two radars (Wollongong and Terrey Hills) at the same time. The rates of change in these storms are much higher (80-90% per decade) than for storms detected only by a single radar station.
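
A back-of-the-envelope version of that trend calculation is sketched below: fit a linear trend to annual peak burst intensities from one radar and express the slope as a percentage change per decade. The data here are synthetic; only the method is meant to be illustrative.

import numpy as np

# Hypothetical annual peak rapid-rain-burst intensities (mm/h) from one radar, 2003-2022.
rng = np.random.default_rng(1)
years = np.arange(2003, 2023)
intensity = 60 * 1.02 ** (years - years[0]) + rng.normal(0, 3, years.size)

# Least-squares linear trend, with the slope expressed as % change per decade
# relative to the mean intensity over the record.
slope, intercept = np.polyfit(years, intensity, 1)
pct_per_decade = 10 * slope / intensity.mean() * 100
print(f"trend: {pct_per_decade:.0f}% per decade")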

 

One possible explanation could be that these are more extreme storms that are well developed and can be seen by two radars simultaneously. So it is possible that the change in rainfall rates for well-developed rapid rain bursts is even greater, which is a dangerous situation.


1000x More Efficient Neural Networks: Building An Artificial Brain With 86 Billion Physical (But Not Biological) Neurons


What if in our attempt to build artificial intelligence we don’t simulate neurons in code and mimic neural networks in Python, but instead build actual physical neurons connected by physical synapses in ways very similar to our own biological brains? And in so doing create neural networks that are 1000X more energy efficient than existing AI frameworks? That’s precisely what Rain Neuromorphics is trying to do: build a non-biological yet very human-style artificial brain.

 

Which at one and the same time uses much less energy and is much faster at learning than existing AI projects. And that learns, in short, kind of like we meatspace humans do. Plus, that is built with analog chips, not digital. “We have kind of two missions that are very complementary: One of them is to build a brain and the other one is to actually understand it,” Gordon Wilson, the soft-spoken but deep-thinking CEO of Rain Neuromorphics told me in a recent TechFirst podcast. “Ultimately, we see these as kind of like Lego pieces that due to their low-power footprint, we’ll be able to concatenate together using things like chiplet integration, advanced packaging, and ultimately scale out these systems to be brain scale — 86 billion neurons, 500 trillion synapses — and low-power enough that they can exist in autonomous devices.”

 

Wilson seems to be in the habit of very quietly and unassumingly saying things that are essentially completely mind-blowing and world-altering. So quietly you almost miss the gargantuan scale of the scheme. Which, in this case, is nothing less than the Frankenstein project.

 

Wilson and co-founders Jack Kendall and Juan Nino started four years ago with a small seed round. Late last year the team taped out a demonstration chip that proves out at least some of their theories about building brain-analog hardware for artificial intelligence workloads via a completely analog chip. And just a month ago the team was rewarded with a $25 million funding round to finish that design, engineer it to be manufacturable, and bring it to market. Who is one of the investors? Artificial intelligence heavyweight and OpenAI CEO Sam Altman.

 

Key to the project is the fact that Rain Neuromorphics is building an analog chip. This is very different from 99.9% of the computer chips on the market, which reduce reality as they see it to binary: on or off, zeroes or ones. Those chips have to model the facts and relationships and verbs of computer programs with very precise digital math.

 

Analog chips, on the other hand, represent reality in a very natural way. “Digital chips are ... built on the very bottom on zeros and ones, on this Boolean logic of on or off, and all of the other logic is then constructed on top of that,” explains Wilson. “When you zoom down to the bottom of an analog chip, you don’t have zeros or ones, you have gradients of information. You have voltages and currents and resistances. You have physical quantities you are measuring, that represent the mathematical operations you’re performing, and you’re exploiting the relationship between those physical quantities to then perform these very complex neural operations.”

 

How does that work? By making physics do the work of computation for us, rather than brute-forcing it through a reality-screen of ones and zeroes. So when you’re building out a neural network and modeling it on how the human brain so incredibly efficiently learns, stores data, and executes decisions, you are more measuring conclusions than arriving at them, using the artificial neurons and synapses that you’ve built.
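
One concrete way to picture "letting physics do the math" is a resistive crossbar, a common building block in analog AI hardware (the crossbar framing is an illustration on my part, not a confirmed detail of Rain's chip): weights are stored as conductances G, inputs are applied as voltages V, and Ohm's and Kirchhoff's laws make the output currents equal to the matrix-vector product of G and V. The NumPy sketch below just emulates that physical shortcut.

import numpy as np

# Weights stored as conductances (siemens) at each crosspoint of a 4x3 crossbar.
G = np.array([[1.0, 0.2, 0.5],
              [0.3, 0.9, 0.1],
              [0.7, 0.4, 0.6],
              [0.2, 0.8, 0.3]]) * 1e-6

V = np.array([0.4, 1.0, 0.7])   # input activations encoded as column voltages

# Each row wire sums the currents of its crosspoints (Kirchhoff's current law),
# so the vector of row currents *is* the matrix-vector product G @ V.
I = G @ V
print(I)   # amperes; a digital chip would need many explicit multiply-accumulates for the same result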


Release of Stable Diffusion 2.0!


The new Stable Diffusion 2.0 release features many improvements, including new base models, a 4x upscaling model, and a depth-guided stable diffusion model. These new models offer many creative possibilities for image transformation and synthesis.

New features in v2 include:

  • Base 512x512 and 768x768 models trained from scratch with a new OpenCLIP text encoder
  • 4x upscaling text-guided diffusion model
  • New “Depth2Image” functionality

Blog: https://t.co/o3udlBN8uz

This release is led by @robrombach @StabilityAI

The new SD2 base model is trained from scratch using OpenCLIP-ViT/H text encoder (https://t.co/KITMz1bkYX), with quality improvements over V1. It is fine-tuned using v-prediction (https://t.co/MNYGSNbht1) to produce 768x768 images.

 

A new 4x up-scaling text-guided diffusion model, enabling resolutions of 2048x2048 (or even higher!), when combined with the new text-to-image models in this release.

Made possible using Efficient Attention in (https://t.co/32q10tgJTS).

This current model is conditioned on monocular depth estimates inferred via MiDaS (https://t.co/lnqy1UMVbb) and can be used for structure-preserving img2img and shape-conditional synthesis. There’ll be so many creative uses for this. For example, the depth2image model can be used to transform Emad (@EMostaque) into various things.

Depth2Image can offer all sorts of new creative applications, delivering transformations that look radically different from the original but which still preserve the coherence and depth of that image: https://t.co/N22OkVXkFJ

Also included is a new text-guided inpainting model, fine-tuned on the new Stable Diffusion 2.0 base text-to-image, which makes it super easy to switch out parts of an image intelligently and quickly: https://t.co/WjcjvQk5JV

Just like version 1, the model can run on one GPU. That feature makes it accessible to as many people as possible from the very start. When millions of people get their hands on these models, they collectively create some truly amazing things.
https://t.co/rN4m2eVdUm
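
For readers who want to try the depth-conditioned model, a minimal sketch using the Hugging Face diffusers library is shown below. The library choice, model id, prompts and file names are my assumptions for illustration; the post itself only links the official announcement.

import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from PIL import Image

# Load the depth-conditioned Stable Diffusion 2.0 checkpoint (needs a GPU with roughly 10 GB of memory).
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("portrait.jpg").convert("RGB")

# The pipeline estimates a depth map from the input image (via MiDaS) and keeps that structure
# while re-rendering the content according to the prompt.
result = pipe(prompt="a marble statue in a sunlit museum",
              negative_prompt="blurry, low quality",
              image=init_image,
              strength=0.7).images[0]
result.save("statue.png")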



Quantify 7,000 proteins from a single drop of blood: SomaLogic’s 7K Platform receives high marks in recent NIH study


BOULDER, Colo., Oct. 27, 2022 (GLOBE NEWSWIRE) -- In a recent paper published earlier this month in Nature Scientific Reports, researchers at the National Institutes of Health (NIH) conducted what may be the largest technical assessment to date of the 7K SomaScan® Platform to measure its performance and to support the platform's growing popularity among users. The NIH researchers found the platform's extensive human proteome coverage, sensitivity and consistently low variability to be a particular strength and describe the key advantages over other proteomic approaches.

 

SomaScan is a high-throughput, aptamer-based proteomics assay designed for the simultaneous measurement of thousands of proteins with a broad range of endogenous concentrations. In its most current version, the 7k SomaScan assay v4.1 is capable of measuring 7,288 human proteins. The authors of the paper present an extensive technical assessment of this platform based on a study of 2,050 samples across 22 plates. Included in the study design were inter-plate technical duplicates from 102 human subjects, which allowed the researchers to characterize different normalization procedures, evaluate assay variability by multiple analytical approaches, present signal-over-background metrics, and discuss potential specificity issues. By providing detailed performance assessments on this wide range of technical aspects, they aim for this work to serve as a valuable resource for the growing community of SomaScan users.
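
As a concrete example of what inter-plate technical duplicates make possible, the short sketch below computes a per-protein technical coefficient of variation (CV) from paired replicate measurements. The array shapes and noise level are invented for illustration and are not values from the paper.

import numpy as np

# rfu[r, s, p]: signal (relative fluorescence units) for replicate plate r, subject s, protein p.
rng = np.random.default_rng(0)
truth = rng.lognormal(mean=6.0, sigma=1.0, size=(102, 7288))      # 102 subjects, 7,288 SOMAmers
rfu = truth * rng.normal(1.0, 0.05, size=(2, 102, 7288))          # duplicate plates, ~5% technical noise

# Per-protein technical CV: SD across the duplicate pair divided by the pair mean, summarized over subjects.
pair_mean = rfu.mean(axis=0)
pair_sd = rfu.std(axis=0, ddof=1)
cv_per_protein = np.median(pair_sd / pair_mean, axis=0)

print(f"median technical CV across proteins: {np.median(cv_per_protein):.1%}")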

 
Concurrently with its wider adoption, the proteome coverage of SomaScan has increased over time, from roughly 800 SOMAmers in 2009, to 1,100 in 2012, 1,300 in 2015, 5,000 in 2018, and the most recent 7,000-protein assay available since 2020. Using 2,624 samples analyzed between 2015 and 2017, teams at the U.S. National Institutes of Health (NIH) performed in-depth analyses to assess the technical features of the 1.1k and 1.3k SomaScan assays, including normalization procedures and their variability, later followed by technical assessments from other laboratories. However, no comprehensive technical assessment of the newest 7k SomaScan assay had been performed until the present study.
Dr. Stefan Gruenwald's insight:

https://www.nature.com/articles/s41598-022-22116-0


MinD-Vis: Decoding visual stimuli from brain recordings with high accuracy


For the first time, the authors show that non-invasive brain recordings can be used to decode images with performance similar to that of invasive measures.

 

Decoding visual stimuli from brain recordings aims to deepen our understanding of the human visual system and build a solid foundation for bridging human vision and computer vision through the Brain-Computer Interface. However, due to the scarcity of data annotations and the complexity of underlying brain information, it is challenging to decode images with faithful details and meaningful semantics.

In this work, AI scientists present MinD-Vis: Sparse Masked Brain Modeling with Double-Conditioned Diffusion Model for Vision Decoding. Specifically, by boosting the information capacity of representations learned from a large-scale resting-state fMRI dataset, they show that the MinD-Vis framework reconstructs highly plausible images with semantically matching details from brain recordings using very few training pairs. The benchmarked model and its associated method outperform the state of the art in both semantic mapping (100-way semantic classification) and generation quality (FID) by 66% and 41%, respectively. Exhaustive ablation studies are conducted to analyze this framework.

The result is a human visual decoding system that relies on only limited annotations: state-of-the-art 100-way top-1 classification accuracy on the GOD dataset of 23.9%, outperforming the previous best by 66%, and state-of-the-art generation quality (FID) on the GOD dataset of 1.67, outperforming the previous best by 41%.


New technology creates carbon neutral chemicals out of captured carbon dioxide


The technology could allow scientists to both capture CO2 and transform it into useful chemicals such as carbon monoxide and synthetic natural gas in one circular process. 

 

Dr Melis Duyar, Senior Lecturer of Chemical Engineering at the University of Surrey commented:  “Capturing CO2 from the surrounding air and directly converting it into useful products is exactly what we need to approach carbon neutrality in the chemicals sector. This could very well be a milestone in the steps needed for the UK to reach its 2050 net-zero goals. We need to get away from our current thinking on how we produce chemicals, as current practices rely on fossil fuels which are not sustainable. With this technology we can supply chemicals with a much lower carbon footprint and look at replacing fossil fuels with carbon dioxide and renewable hydrogen as the building blocks of other important chemicals.” 

 

The technology uses patent-pending switchable Dual Function Materials (DFMs), that capture carbon dioxide on their surface and catalyse the conversion of captured CO2 directly into chemicals. The “switchable” nature of the DFMs comes from their ability to produce multiple chemicals depending on the operating conditions or the composition of the added reactant. This makes the technology responsive to variations in demand for chemicals as well as availability of renewable hydrogen as a reactant.  
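
The post does not spell out the chemistry, but the two products named above map onto well-known CO2 hydrogenation routes, presumably the reverse water-gas shift (to carbon monoxide) and the Sabatier methanation reaction (to synthetic natural gas):

CO2 + H2 → CO + H2O          (reverse water-gas shift: carbon monoxide)
CO2 + 4 H2 → CH4 + 2 H2O     (Sabatier reaction: synthetic natural gas)

Which route dominates would then be what the “switchable” DFM surface selects between, depending on operating conditions and the hydrogen supplied.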

 

Dr Duyar continued: “These outcomes are a testament to the research excellence at Surrey, with continuously improving facilities, internal funding schemes and a collaborative culture.” 

Loukia-Pantzechroula Merkouri, Postgraduate student leading this research at the University of Surrey added:  “Not only does this research demonstrate a viable solution to the production of carbon neutral fuels and chemicals, but it also offers an innovative approach to combat the ever-increasing CO2 emissions contributing to global warming.”  


Leprosy has the potential to regenerate human livers

Leprosy is one of the world’s oldest and most persistent diseases but the bacteria that cause it may also have the surprising ability to grow and regenerate a vital organ.

 

Scientists have discovered that parasites associated with leprosy can reprogram cells to increase the size of a liver in adult animals without causing damage, scarring or tumors. The findings suggest the possibility of adapting this natural process to renew aging livers and increase healthy lifespan -- the length of time living disease-free -- in humans.

 

Experts say it could also help regrow damaged livers, thereby reducing the need for transplantation, which is currently the only curative option for people with end-stage scarred livers. Previous studies promoted the regrowth of mouse livers by generating stem cells and progenitor cells -- the step after a stem cell that can become any type of cell for a specific organ -- via an invasive technique that often resulted in scarring and tumor growth.

 

To overcome these harmful side-effects, Edinburgh researchers built on their previous discovery of the partial cellular reprogramming ability of the leprosy-causing bacteria, Mycobacterium leprae. Working with the US Department of Health and Human Services in Baton Rouge, Louisiana, the team infected 57 armadillos -- a natural host of leprosy bacteria -- with the parasite and compared their livers with those of uninfected armadillos and those that were found to be resistant to infection.

 

They found that the infected animals developed enlarged -- yet healthy and unharmed -- livers with the same vital components, such as blood vessels, bile ducts and functional units known as lobules, as the uninfected and resistant armadillos. The team believe the bacteria 'hijacked' the inherent regenerative ability of the liver to increase the organ's size and, therefore, to provide themselves with more cells in which to multiply.

 

They also discovered several indicators that the main kinds of liver cells -- known as hepatocytes -- had reached a "rejuvenated" state in the infected armadillos. Livers of the infected armadillos also contained gene expression patterns -- the blueprint for building a cell -- similar to those in younger animals and human fetal livers.

Genes related to metabolism, growth and cell proliferation were activated and those linked with aging were down-regulated, or suppressed.

 

Scientists think this is because the bacteria reprogrammed the liver cells, returning them to the earlier stage of progenitor cells, which in turn became new hepatocytes and grew new liver tissue.

The team are hopeful that the discovery has the potential to help develop interventions for aging and damaged livers in humans. Liver diseases currently result in two million deaths a year worldwide.


Exo-Cave research: space exploration finally goes underground


Is there life in Martian caves?

It's a good question, but it's not the right question -- yet. An international collaboration of scientists led by NAU researcher Jut Wynne has dozens of questions that need to be asked and answered first. Once we figure out how to study caves on the Moon, Mars and other planetary bodies, then we can return to that question.

 

Wynne, an assistant research professor of cave ecology, is the lead author of two related studies, both published in a special collection of papers on planetary caves by the Journal of Geophysical Research Planets. The first, "Fundamental Science and Engineering Questions in Planetary Cave Research," was done by an interdisciplinary team of 31 scientists, engineers and astronauts who produced a list of 198 questions that they, working with another 82 space and cave scientists and engineers, narrowed down to the 53 most important.

 

Harnessing the knowledge of a considerable swath of the space science community, this work is the first study designed to identify the research and engineering priorities to advance the study of planetary caves. The team hopes their work will inform what will ultimately be needed to support robotic and human missions to a planetary cave -- namely on the Moon and/or Mars.

 

The second, "Planetary Caves: A Solar System View of Products and Processes," was born from the first study. Wynne realized there had been no effort to catalog planetary caves across the solar system, which is another important piece of the big-picture puzzle. He assembled another team of planetary scientists to tackle that question.

 

"With the necessary financial investment and institutional support, the research and technological development required to achieve these necessary advancements over the next decade are attainable," Wynne said. "We now have what I hope will become two foundational papers that will help propel planetary cave research from an armchair contemplative exercise to robots probing planetary subsurfaces."

 

What do we know about extraterrestrial caves?

There are a lot of them. Scientists have identified at least 3,545 potential caves on 11 different moons and planets throughout the solar system, including the Moon, Mars and moons of Jupiter and Saturn. Cave formation processes have even been identified on comets and asteroids. If the surrounding environment allows for access into the subsurface, that presents an opportunity for scientific discovery that's never been available before.

 

The discoveries in these caves could be massive. Caves may one day allow scientists to "peer into the depths" of these rocky and icy bodies, which will provide insights into how they were formed (but also can provide further insights into how Earth was formed). They could also, of course, hold secrets of life.

 

"Caves on many planetary surfaces represent one of the best environments to search for evidence of extinct or perhaps extant lifeforms," Wynne said. "For example, as Martian caves are sheltered from deadly surface radiation and violent windstorms, they are more likely to exhibit a more constant temperature regime compared to the surface, and some may even contain water ice. This makes caves on Mars one of the most important exploration targets in the search for life." And it's not just finding life -- these same factors make caves good locations for astronaut shelters on Mars and the Moon when crewed missions are able to explore.

 

"Radiation shielding will be essential for human exploration of the Moon and Mars," said Leroy Chiao, a retired astronaut, former commander of the International Space Station and co-author of the first paper. "One possible solution is to utilize caves for this purpose. The requirements for astronaut habitats, EVA suits and equipment should take cave exploration and development into consideration, for protection from both solar and galactic cosmic radiation."


Solving the differential equation for brain dynamics gives rise to flexible machine learning models


Last year, MIT researchers announced that they had built “liquid” neural networks, inspired by the brains of small species: a class of flexible, robust machine learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks, like driving and flying. The flexibility of these “liquid” neural nets meant boosting the bloodline to our connected world, yielding better decision-making for many tasks involving time-series data, such as brain and heart monitoring, weather forecasting, and stock pricing.

 

But these models become computationally expensive as their number of neurons and synapses increase and require clunky computer programs to solve their underlying, complicated math. And all of this math, similar to many physical phenomena, becomes harder to solve with size, meaning computing lots of small steps to arrive at a solution. 

 

Now, the same team of scientists has discovered a way to alleviate this bottleneck by solving the differential equation behind the interaction of two neurons through synapses, unlocking a new type of fast and efficient artificial intelligence algorithm. These models have the same characteristics as liquid neural nets — flexible, causal, robust, and explainable — but are orders of magnitude faster, and scalable. This type of neural net could therefore be used for any task that involves getting insight into data over time, as they’re compact and adaptable even after training — while many traditional models are fixed. 
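
In spirit, the closed-form trick replaces a numerical ODE solver with an explicit expression for the hidden state: a time-dependent sigmoid gate blends two learned branches. The toy cell below (plain NumPy, random weights, simplified from the published CfC formulation) is only meant to show that each time point costs a single closed-form update rather than many small solver steps.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 8

# Randomly initialized toy parameters for the three learned branches f, g and h.
Wf, Wg, Wh = (rng.normal(0, 0.3, (n_hidden, n_in + n_hidden)) for _ in range(3))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def cfc_step(x, inp, dt):
    """One closed-form continuous-time update (simplified): no ODE solver in the loop."""
    z = np.concatenate([inp, x])
    f, g, h = np.tanh(Wf @ z), np.tanh(Wg @ z), np.tanh(Wh @ z)
    gate = sigmoid(-f * dt)            # how strongly the state relaxes over the elapsed time dt
    return gate * g + (1.0 - gate) * h

x = np.zeros(n_hidden)
for inp in (rng.normal(size=n_in) for _ in range(5)):   # irregular time gaps are fine for CfC-style models
    x = cfc_step(x, inp, dt=0.1)
print(x.round(3))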

 

The models, dubbed “closed-form continuous-time” (CfC) neural networks, outperformed state-of-the-art counterparts on a slew of tasks, with considerably higher speedups and performance in recognizing human activities from motion sensors, modeling the physical dynamics of a simulated walker robot, and event-based sequential image processing. On a medical prediction task, for example, the new models were 220 times faster on a sampling of 8,000 patients. 

 

The new paper on the work is published in Nature Machine Intelligence.


SOFIA (Stratospheric Observatory for Infrared Astronomy) study says there is no phosphine on Venus


Venus is considered Earth’s twin in many ways, but, thanks to the Stratospheric Observatory for Infrared Astronomy (SOFIA), one difference now seems clearer: Unlike Earth, Venus does not have any obvious phosphine. In the accompanying figure from the original release, spectral data from SOFIA are overlaid on an image of Venus from NASA’s Mariner 10 spacecraft, showing the intensity of light from Venus at different wavelengths. If a significant amount of phosphine were present in Venus’s atmosphere, there would be dips in the graph at the four locations labeled “PH3,” similar to but less pronounced than those seen at the two ends. 


Phosphine is a gas found in Earth’s atmosphere, but the announcement of phosphine discovered above Venus’s clouds made headlines in 2020. The reason was its potential as a biomarker. In other words, phosphine could be an indicator of life. Though common in the atmospheres of gas planets like Jupiter and Saturn, phosphine on Earth is associated with biology. Here, it’s formed by decaying organic matter in bogs, swamps, and marshes.

“Phosphine is a relatively simple chemical compound — it’s just a phosphorus atom with three hydrogens — so you would think that would be fairly easy to produce. But on Venus, it’s not obvious how it could be made,” said Martin Cordiner, a researcher in astrochemistry and planetary science at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.

There may be other potential ways to form phosphine on a rocky planet, like through lightning or volcanic activity, but none of these apply if there simply isn’t any phosphine on Venus. And according to SOFIA, there isn’t.

Following the 2020 study, a number of different telescopes conducted follow-up observations to confirm or refute the finding. Cordiner and his team followed suit, using SOFIA in their search.

The recently retired SOFIA was a telescope on an airplane and, over the course of three flights in November 2021, it looked for hints of phosphine in Venus’s sky. Thanks to its operation from Earth’s sky, SOFIA could perform observations not accessible from ground-based observatories. Its high spectral resolution also enabled it to be sensitive to phosphine at high altitudes in Venus’s atmosphere, about 45 to 70 miles (about 75 to 110 kilometers) above the ground — the same region as the original finding — with spatial coverage across Venus’s entire disk.

The researchers didn’t see any sign of phosphine. According to their results, if there is any phosphine present in Venus’s atmosphere at all, it’s a maximum of about 0.8 parts phosphine per billion parts everything else, much smaller than the initial estimate.

Pointing SOFIA’s telescope at Venus was a challenge in and of itself. The window during which Venus could be observed was short, about half an hour after sunset, and the aircraft needed to be in the right place at the right time. Venus also goes through phases similar to the Moon, making it difficult to center the telescope on the planet. Add in its proximity to the Sun in the sky — which the telescope must avoid — and the situation quickly became tense.

“You don’t want sunlight accidentally coming in and shining on your sensitive telescope instruments,” Cordiner said. “The Sun is the last thing you want in the sky when you’re doing these kinds of sensitive observations.”

Despite the fact the group did not find phosphine after the stressful observations, the study was a success. Along with complementary data from other observatories that vary in the depths they probe within Venus’s atmosphere, the SOFIA results help build the body of evidence against phosphine anywhere in Venus’s atmosphere, from its equator to its poles.


Chroma – a generative model that creates new protein molecules based on geometric and functional programming instructions


Three billion years of evolution have produced a tremendous diversity of proteins, and yet the full potential of this molecular class is likely far greater than we can even imagine. Accessing this potential has been challenging for computation and experiment because the space of possible protein molecules is much larger than the space of those likely to host function.

 

Now, a team of scientists introduces Chroma, a generative model for proteins and protein complexes that can directly sample novel protein structures and sequences and that can be conditioned to steer the generative process towards desired properties and functions. To enable this, they introduce a diffusion process that respects the conformational statistics of polymer ensembles, an efficient neural architecture for molecular systems based on random graph neural networks that enables long-range reasoning with sub-quadratic scaling, equivariant layers for efficiently synthesizing 3D structures of proteins from predicted inter-residue geometries, and a general low-temperature sampling algorithm for diffusion models. Importantly, Chroma can effectively realize protein design as Bayesian inference under external constraints, which can involve symmetries, substructure, shape, semantics, and even natural language prompts. With this unified approach, the authors hope to accelerate the prospect of programming protein matter for human health, materials science, and synthetic biology.

 

Chroma learns patterns in the three-dimensional structures and amino acid sequences of proteins and protein complexes from the Protein Data Bank. By learning these patterns in a way that generalizes across natural proteins, Chroma can synthesize new protein molecules that adhere to these principles while combining them in novel ways. Importantly, Chroma can be conditioned on a set of desired structural or functional properties, such as the presence of functional structural motifs, symmetry constraints, adhering to a pre-specified shape, belonging to a domain or functional class, or even satisfying text-based descriptions. Thus, Chroma will enable a new, programmable mode of protein engineering where it is routine and feasible to generate specific and tailored protein solutions to complex challenges for bioengineering and human health.

 

The Chroma system is built on several new machine learning components, including a new neural network architecture for processing and manipulating 3D molecular information, a new diffusion process for adding noise to structures while adhering to the biophysical constraints of protein chains, and a new generalized method for generating high-quality samples from diffusion models. As a result of these innovations, Chroma is able to generate extremely large proteins and protein complexes (e.g. 30,000+ heavy atoms across 4,000+ residues) in a few minutes on a single commodity GPU.


Shhh! Viruses Could Be Listening – And Watching!


New Research Finds That Viruses May Have “Eyes and Ears” on Us

The newly-found, widespread ability of some viruses to monitor their environment could have implications for antiviral drug development. New research indicates that viruses are using information from their environment to “decide” when to sit tight inside their hosts and when to multiply and burst out, killing the host cell. The work has important implications for antiviral drug development. Led by the University of Maryland Baltimore County (UMBC), the study was recently published in Frontiers in Microbiology. A virus’s ability to sense its environment, including elements produced by its host, adds “another layer of complexity to the viral-host interaction,” says Ivan Erill. He is senior author on the new paper and professor of biological sciences at UMBC. Right now, viruses are taking advantage of that ability to their benefit. But he says that in the future, “we could exploit it to their detriment.”

 

Not a coincidence

 

The new study focused on bacteriophages, which are often referred to simply as “phages.” They are viruses that infect bacteria. In the study, the phages analyzed can only infect their hosts when the bacterial cells have special appendages, called pili and flagella, that help the bacteria move and mate. The bacteria produce a protein called CtrA that controls when they generate these appendages. The research revealed that many appendage-dependent phages have patterns in their DNA where the CtrA protein can attach, called binding sites. Erill says that a phage having a binding site for a protein produced by its host is unusual. Even more surprising, Erill and the paper’s first author Elia Mascolo, a Ph.D. student in Erill’s lab, discovered through detailed genomic analysis that these binding sites were not unique to a single phage, or even a single group of phages. Many different types of phages had CtrA binding sites—but they all required their hosts to have pili and/or flagella to infect them. They decided that it couldn’t be a coincidence. The ability to monitor CtrA levels “has been invented multiple times throughout evolution by different phages that infect different bacteria,” Erill says. When distantly related species exhibit a similar trait, it’s called convergent evolution—and it indicates that the trait is definitely useful.
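
A stripped-down version of that kind of genomic screen is sketched below: slide a position weight matrix (PWM) for the host regulator's binding site along a phage sequence and keep high-scoring windows. The 8-bp motif and the sequence are made-up placeholders, not the real CtrA binding-site model from the study.

import math

# Toy position frequency matrix for a hypothetical 8-bp binding motif (NOT the real CtrA motif).
pfm = {
    "A": [0.7, 0.1, 0.1, 0.4, 0.4, 0.1, 0.1, 0.7],
    "C": [0.1, 0.1, 0.1, 0.2, 0.2, 0.1, 0.1, 0.1],
    "G": [0.1, 0.1, 0.7, 0.2, 0.2, 0.1, 0.7, 0.1],
    "T": [0.1, 0.7, 0.1, 0.2, 0.2, 0.7, 0.1, 0.1],
}
width, background = 8, 0.25   # uniform background base frequency

def log_odds(window):
    """Log-odds score of an 8-bp window against the motif model."""
    return sum(math.log2(pfm[base][i] / background) for i, base in enumerate(window))

phage_genome = "ATGCATACGGATTAGTCAATGAAGTCCATAGGAATTAAGTCCAT"   # placeholder sequence
threshold = 6.0
hits = [(i, phage_genome[i:i + width], round(log_odds(phage_genome[i:i + width]), 2))
        for i in range(len(phage_genome) - width + 1)
        if log_odds(phage_genome[i:i + width]) >= threshold]
print(hits)   # candidate binding sites; a real screen would also scan the reverse strand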

 

Timing is everything

 

Another wrinkle in the story: The first phage in which the scientists identified CtrA binding sites infects a particular group of bacteria called Caulobacterales. Caulobacterales are an especially well-studied group of bacteria, because they exist in two forms: a “swarmer” form that swims around freely, and a “stalked” form that attaches to a surface. The swarmers have pili/flagella, and the stalks do not. In these bacteria, CtrA also regulates the cell cycle, determining whether a cell will divide evenly into two more of the same cell type, or divide asymmetrically to produce one swarmer and one stalk cell.  Since the phages can only infect swarmer cells, it’s in their best interest only to burst out of their host when there are many swarmer cells available to infect. Generally, Caulobacterales live in nutrient-poor environments, and they are very spread out. “But when they find a good pocket of microhabitat, they become stalked cells and proliferate,” Erill says, eventually producing large quantities of swarmer cells. So, “We hypothesize the phages are monitoring CtrA levels, which go up and down during the life cycle of the cells, to figure out when the swarmer cell is becoming a stalk cell and becoming a factory of swarmers,” Erill says, “and at that point, they burst the cell, because there are going to be many swarmers nearby to infect.”

 

Listening in

 

Unfortunately, the method to prove this hypothesis is extremely difficult and labor-intensive, so that wasn’t part of this latest paper—although Erill and colleagues hope to tackle that question in the future. However, the research team sees no other plausible explanation for the proliferation of CtrA binding sites on so many different phages, all of which require pili/flagella to infect their hosts. Even more interesting, they note, are the implications for viruses that infect other organisms—even humans. “Everything that we know about phages, every single evolutionary strategy they have developed, has been shown to translate to viruses that infect plants and animals,” he says. “It’s almost a given. So if phages are listening in on their hosts, the viruses that affect humans are bound to be doing the same.”  There are a few other documented examples of phages monitoring their environment in interesting ways, but none include so many different phages employing the same strategy against so many bacterial hosts. This new research is the “first broad scope demonstration that phages are listening in on what’s going on in the cell, in this case, in terms of cell development,” Erill says. But more examples are on the way, he predicts. Already, members of his lab have started looking for receptors for other bacterial regulatory molecules in phages, he says—and they’re finding them.

 

New therapeutic avenues

 

The key takeaway from this research is that “the virus is using cellular intel to make decisions,” Erill says, “and if it’s happening in bacteria, it’s almost certainly happening in plants and animals, because if it’s an evolutionary strategy that makes sense, evolution will discover it and exploit it.” For example, an animal virus might want to know what kind of tissue it is in, or how robust the host’s immune response is to its infection in order to optimize its strategy for survival and replication. While it might be disturbing to think about all the information viruses could gather and possibly use to make us sicker, these discoveries also open up opportunities for new therapies.  “If you are developing an antiviral drug, and you know the virus is listening in on a particular signal, then maybe you can fool the virus,” Erill says. That’s several steps away, however. For now, “We are just starting to realize how actively viruses have eyes on us—how they are monitoring what’s going on around them and making decisions based on that,” Erill says. “It’s fascinating.”

 

Research Cited Published in Frontiers in Microbiology (August 17, 2022):

https://doi.org/10.3389/fmicb.2022.918015 

 



CircularNet: Reducing waste with Machine Learning


Humans do a poor job of recycling, with less than 10% of our global resources recycled, and tossing 1 of every 5 items (~17%) in a recycling bin that shouldn’t be there. That’s bad news for everyone -- recycling facilities catch fire, we lose billions of dollars in recyclable material every year -- and at an existential level, we miss an opportunity to leverage recycling as an impactful tool to combat climate change. With this in mind, we may ask ourselves - how might we use the power of technology to ensure that we recycle more and recycle right?

 

As the world population grows and urbanizes, waste production is estimated to reach 2.6 billion tons a year in 2030, an increase from its current level of around 2.1 billion tons. Efficient recycling strategies are critical to foster a sustainable future. The facilities where our waste and recyclables are processed are called “Material Recovery Facilities” (MRFs). Each MRF processes tens of thousands of pounds of our societal “waste” every day, separating valuable recyclable materials like metals and plastics from non-recyclable materials. A key inefficiency within the current waste capture and sorting process is the inability to identify and segregate waste into high quality material streams. The accuracy of the sorting directly determines the quality of the recycled material; for high-quality, commercially viable recycling, the contamination levels need to be low. Even though the MRFs use various technologies alongside manual labor to separate materials into distinct and clean streams, the exceptionally cluttered and contaminated nature of the waste stream makes automated waste detection challenging to achieve, and the recycling rates and the profit margins stay at undesirably low levels.

 

Enter what we call “CircularNet”, a set of models that lowers barriers to AI/ML tech for waste identification and all the benefits this new level of transparency can offer. The main goal with CircularNet is to develop a robust and data-efficient model for waste/recyclables detection, which can support the way we identify, sort, manage, and recycle materials across the waste management ecosystem. Models such as this could potentially help with:

  • Better understanding and capturing more value from recycling value chains
  • Increasing landfill diversion of materials
  • Identifying and reducing contamination in inbound and outbound material streams

 

Challenges

Processing tens of thousands of pounds of material every day, Material Recovery Facility waste streams present a unique and ever-changing challenge: a complex, cluttered, and diverse flow of materials at any given moment. Additionally, there is a lack of comprehensive and readily accessible waste imagery datasets to train and evaluate ML models. The models should be able to accurately identify different types of waste in “real world” conditions of a MRF - meaning identifying items despite severe clutter and occlusions, high variability of foreground object shapes and textures, and severe object deformation. In addition to these challenges, others that need to be addressed are visual diversity of foreground and background objects that are often severely deformed, and fine-grained differences between the object classes (e.g. brown paper vs. cardboard; or soft vs. rigid plastic). There also needs to be consistency while tracking recyclables through the recycling value chain e.g. at point of disposal, within recycling bins and hauling trucks, and within material recovery facilities.

 

Solution

The CircularNet model is built to perform instance segmentation by training on thousands of images with the Mask R-CNN algorithm. Mask R-CNN was implemented from the TensorFlow Model Garden, a repository consisting of multiple models and modeling solutions for TensorFlow users. By collaborating with experts in the recycling industry, we developed a customized and globally applicable taxonomy of material types (e.g., “paper”, “metal”, “plastic”, etc.) and material forms (e.g., “bag”, “bottle”, “can”, etc.), which is used to annotate training data for the model. Models were developed to identify material types, material forms and plastic types (HDPE, PETE, etc.). Unique models were trained for different purposes, thus helping achieve better accuracy (when harmonized) and flexibility to cater to different applications. The models are trained with various backbones such as ResNet, MobileNet and SpineNet.
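
The CircularNet weights themselves are not linked from this post, but the general inference pattern for a TF2 Mask R-CNN SavedModel looks roughly like the sketch below; the TF-Hub checkpoint URL, image path and score threshold are my assumptions for illustration, not the CircularNet release.

import tensorflow as tf
import tensorflow_hub as hub

# A generic public Mask R-CNN checkpoint that follows the TF2 object-detection calling convention.
detector = hub.load("https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1")

image = tf.io.decode_jpeg(tf.io.read_file("conveyor_belt_frame.jpg"), channels=3)
outputs = detector(tf.expand_dims(image, axis=0))   # batch of one uint8 image

boxes = outputs["detection_boxes"][0]
scores = outputs["detection_scores"][0]
classes = outputs["detection_classes"][0]
masks = outputs["detection_masks"][0]               # per-instance segmentation masks

keep = scores > 0.5
print("instances kept:", int(tf.reduce_sum(tf.cast(keep, tf.int32))))
# Mapping class ids onto a material taxonomy ("paper", "metal", "plastic", ...) is then a lookup step.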


Sketch Guided Text-to-Image Diffusion Model


Text-to-Image models have introduced a remarkable leap in the evolution of machine learning, demonstrating high-quality synthesis of images from a given text-prompt. However, these powerful pretrained models still lack control handles that can guide spatial properties of the synthesized images.

 

In this work, the authors introduce a universal approach to guide a pretrained text-to-image diffusion model with a spatial map from another domain (e.g., a sketch) during inference time. Unlike previous works, this new method does not require training a dedicated model or a specialized encoder for the task. The key idea here is to train a Latent Guidance Predictor (LGP) - a small, per-pixel, Multi-Layer Perceptron (MLP) that maps latent features of noisy images to spatial maps, where the deep features are extracted from the core Denoising Diffusion Probabilistic Model (DDPM) network. The LGP is trained on only a few thousand images and constitutes a differentiable guiding-map predictor, over which the loss is computed and propagated back to push the intermediate images to agree with the spatial map. The per-pixel training offers flexibility and locality, which allows the technique to perform well on out-of-domain sketches, including free-hand style drawings. A particular focus is placed on the sketch-to-image translation task, revealing a robust and expressive way to generate images that follow the guidance of a sketch of arbitrary style or domain.
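
Conceptually, the guidance loop is simple even though the full system is not: a small per-pixel MLP predicts a sketch/edge map from intermediate diffusion features, and the gradient of its disagreement with the target sketch nudges the noisy latent at each denoising step. The PyTorch sketch below shows only that guidance step, with invented tensor shapes and a stand-in for the denoiser's feature extractor; it is not the authors' released code.

import torch
import torch.nn as nn

class LatentGuidancePredictor(nn.Module):
    """Tiny per-pixel MLP: concatenated deep features at a pixel -> predicted edge value."""
    def __init__(self, feat_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, feats):                      # feats: (B, C, H, W)
        b, c, h, w = feats.shape
        per_pixel = feats.permute(0, 2, 3, 1).reshape(b * h * w, c)
        return self.mlp(per_pixel).reshape(b, 1, h, w)

def guided_update(latent, target_sketch, extract_features, lgp, guidance_scale=1.0):
    """One guidance nudge: push the noisy latent so the LGP's prediction matches the sketch."""
    latent = latent.detach().requires_grad_(True)
    pred_map = lgp(extract_features(latent))                 # per-pixel edge prediction
    loss = nn.functional.mse_loss(pred_map, target_sketch)
    grad = torch.autograd.grad(loss, latent)[0]
    return latent - guidance_scale * grad                    # the usual denoiser step would follow

# Toy usage with random tensors; the Conv2d stands in for real DDPM intermediate features.
feat_dim = 16
lgp = LatentGuidancePredictor(feat_dim)
feature_stub = nn.Conv2d(4, feat_dim, kernel_size=3, padding=1)
latent = torch.randn(1, 4, 64, 64)                 # Stable-Diffusion-style latent
sketch = torch.rand(1, 1, 64, 64)                  # target spatial map in [0, 1]
print(guided_update(latent, sketch, feature_stub, lgp, guidance_scale=0.5).shape)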


Scientists Make Breakthrough in Developing a New Vaccine That Could Finally Beat COVID


A new discovery in the fight against COVID-19 could lead to a long-lasting vaccine that works on all currently known variants of the ever-mutating virus. With new COVID variants and subvariants evolving faster and faster, each chipping away at the effectiveness of the leading vaccines, the hunt is on for a new kind of vaccine — one that works equally well on current and future forms of a novel coronavirus.

 

Recently, researchers at the National Institutes of Health (NIH) in Maryland think they have found a new approach to vaccine design that could lead them to a long-lasting jab. As an added bonus, it might also work on other coronaviruses, not just the SARS-CoV-2 virus that causes COVID-19.

 

The NIH team reported its findings in a peer-reviewed study that appeared in the journal Cell Host & Microbe earlier this month. The key to the NIH’s potential vaccine design is a part of the virus called the “spine helix” (more commonly termed the “stem helix”). It’s a coil-shaped structure inside the spike protein, the part of the virus that helps it grab onto and infect host cells. Lots of current vaccines target the spike protein. But none of them specifically target the spine helix. And yet, there are good reasons to focus on that part of the pathogen.

 

Whereas many regions of the spike protein tend to change a lot as the virus mutates, the spine helix doesn’t. That gives scientists “hope that an antibody targeting this region will be more durable and broadly effective,” Joshua Tan, the lead scientist on the NIH team, told The Daily Beast. Vaccines that target and “bind,” say, the receptor-binding domain region of the spike protein might lose effectiveness if the virus evolves within that region. The great thing about the spine helix, from an immunological standpoint, is that it doesn’t mutate. At least, it hasn’t mutated yet, three years into the COVID pandemic. So a vaccine that binds the spine helix in SARS-CoV-2 should hold up for a long time. And it should also work on all the other coronaviruses that also include the spine helix—and there are dozens of them, including several such as SARS-CoV-1 and MERS that have already made the leap from animal populations and caused outbreaks in people.

 

To test their hypothesis, the NIH researchers extracted antibodies from 19 recovering COVID patients and tested them on samples of five different coronaviruses, including SARS-CoV-2, SARS-CoV-1 and MERS. Of the 55 different antibodies, most zeroed in on parts of the virus that tend to mutate a lot. Just 11 targeted the spine helix. But those 11 that went after the spine helix worked better, on average, on four of the coronaviruses. A fifth virus, HCoV-NL63, shrugged off all the antibodies.

 

The NIH team isolated the best stem-helix antibody, COV89-22, and also tested it on hamsters infected with the latest subvariants of the Omicron variant of COVID. “Hamsters treated with COV89-22 showed a reduced pathology score,” the team found. The results are promising. “These findings identify a class of antibodies that broadly neutralize coronaviruses by targeting the stem helix,” the researchers wrote. Don’t break out the champagne quite yet.

 

“Although these data are useful for vaccine design, we have not performed vaccination experiments in this study and thus cannot draw any definitive conclusions with regard to the efficacy of stem helix-based vaccines,” the NIH team warned. It’s one thing to test a few antibodies on hamsters. It’s another to develop, run trials with and get approval for a whole new class of vaccine. “It is really hard and most things that start out as good ideas fail for one reason or another,” James Lawler, an infectious disease expert at the University of Nebraska Medical Center, told The Daily Beast. And while the stem-helix antibodies appear to be broadly effective, it’s unclear how they stack up against antibodies that are more specific. In other words, a stem-helix jab might work against a bunch of different but related viruses, but work less well against any one virus than a jab that’s tailored specifically for that virus. “Further experiments need to be done to evaluate if they will be sufficiently protective in humans,” Tan said of the stem-helix antibodies.

 

There’s a lot of work to do before a stem-helix vaccine might be available at the corner pharmacy. And there are a lot of things that could derail that work. Additional studies could contradict the NIH team’s results. The new vaccine design might not work as well on people as it does on hamsters. The new jab could also turn out to be unsafe, impractical to produce or too expensive for widespread distribution. Barton Haynes, a Duke University immunologist, told The Daily Beast he looked at stem-helix vaccine designs last year and concluded they’d be too costly to warrant major investment. The main problem, he said, is that the stem-helix antibodies are less potent and “tough to induce” from their parent B-cells. The harder the pharmaceutical industry has to work to produce a vaccine, and the more vaccine it has to pack into a single dose to compensate for lower potency, the less cost-effective a vaccine becomes for mass production. Maybe a stem-helix jab is in our future. Or maybe not.

 

Either way, it’s encouraging that scientists are making incremental progress toward a more universal coronavirus vaccine, one that could work for many years on a wide array of related viruses. COVID, for one, isn’t going anywhere, and with each mutation it risks becoming unrecognizable to the current vaccines. What we need is a vaccine that’s mutation-proof.

 

The cited publication appeared in Cell Host & Microbe (Nov. 7, 2022):

https://doi.org/10.1016/j.chom.2022.10.010 

 

Via Juan Lama
No comment yet.
Rescooped by Dr. Stefan Gruenwald from Virus World
Scoop.it!

CRISPR Tools Found in Thousands of Viruses Could Boost Gene Editing

CRISPR Tools Found in Thousands of Viruses Could Boost Gene Editing | Amazing Science | Scoop.it

Phages probably picked up DNA-cutting systems from their microbial hosts, and may use them to fight other viruses. A systematic sweep of viral genomes has revealed a trove of potential CRISPR-based genome-editing tools.

 

CRISPR–Cas systems are common in the microbial world of bacteria and archaea, where they often help cells to fend off viruses. But an analysis published on 23 November 2022 in Cell finds CRISPR–Cas systems in 0.4% of publicly available genome sequences from viruses that can infect these microbes.

 

Researchers think that the viruses use CRISPR–Cas to compete with one another — and potentially also to manipulate gene activity in their host to their advantage. Some of these viral systems were capable of editing plant and mammalian genomes, and possess features — such as a compact structure and efficient editing — that could make them useful in the laboratory.

 

“This is a significant step forward in the discovery of the enormous diversity of CRISPR–Cas systems,” says computational biologist Kira Makarova at the US National Center for Biotechnology Information in Bethesda, Maryland. “There is a lot of novelty discovered here.”

 

DNA-cutting defenses

Although best known as a tool used to alter genomes in the laboratory, CRISPR–Cas can function in nature as a rudimentary immune system. About 40% of sampled bacteria and 85% of sampled archaea have CRISPR–Cas systems. Often, these microbes can capture pieces of an invading virus’s genome and store the sequences in a region of their own genome, called a CRISPR array. CRISPR arrays then serve as templates to generate RNAs that direct CRISPR-associated (Cas) enzymes to cut the corresponding DNA. This can allow microbes carrying the array to slice up the viral genome and potentially stop viral infections. Viruses sometimes pick up snippets of their hosts’ genomes, and researchers had previously found isolated examples of CRISPR–Cas in viral genomes. If those stolen bits of DNA give the virus a competitive advantage, they could be retained and gradually modified to better serve the viral lifestyle. For example, a virus that infects the bacterium Vibrio cholerae uses CRISPR–Cas to slice up and disable DNA in the bacterium that encodes antiviral defences.
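
To make the repeat–spacer architecture described above concrete, here is a toy Python sketch, not the survey's actual pipeline (which relies on established CRISPR-detection tools): it flags a candidate array wherever a short direct repeat recurs with spacer-sized gaps between copies. The parameter values are illustrative defaults only.

```python
# Toy illustration of CRISPR-array detection: look for a direct repeat that
# recurs several times, separated by spacer-sized gaps. Not a production tool.
from collections import defaultdict

def candidate_crispr_arrays(genome: str, repeat_len: int = 25,
                            min_copies: int = 3,
                            spacer_range: tuple = (20, 50)):
    positions = defaultdict(list)
    for i in range(len(genome) - repeat_len + 1):
        positions[genome[i:i + repeat_len]].append(i)   # index every k-mer

    arrays = []
    for repeat, pos in positions.items():
        if len(pos) < min_copies:
            continue
        gaps = [b - a - repeat_len for a, b in zip(pos, pos[1:])]
        # keep repeats whose consecutive copies are separated by spacer-sized gaps
        if all(spacer_range[0] <= g <= spacer_range[1] for g in gaps):
            arrays.append((repeat, pos))
    return arrays
```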

 

Molecular biologist Jennifer Doudna and microbiologist Jillian Banfield at the University of California, Berkeley, and their colleagues decided to do a more comprehensive search for CRISPR–Cas systems in viruses that infect bacteria and archaea, known as phages. To their surprise, they found about 6,000 of them, including representatives of every known type of CRISPR–Cas system. “Evidence would suggest that these are systems that are useful to phages,” says Doudna. The team found a wide range of variations on the usual CRISPR–Cas structure, with some systems missing components and others unusually compact. “Even if phage-encoded CRISPR–Cas systems are rare, they are highly diverse and widely distributed,” says Anne Chevallereau, who studies phage ecology and evolution at the French National Centre for Scientific Research in Paris. “Nature is full of surprises.”

 

Small, but efficient

Viral genomes tend to be compact, and some of the viral Cas enzymes were remarkably small. This could offer a particular advantage for genome-editing applications, because smaller enzymes are easier to shuttle into cells. Doudna and her colleagues focused on a particular cluster of small Cas enzymes called Casλ, and found that some of them could be used to edit the genomes of lab-grown cells from thale cress (Arabidopsis thaliana) and wheat, as well as human kidney cells. The results suggest that viral Cas enzymes could join a growing collection of gene-editing tools discovered in microbes.

 

Although researchers have uncovered other small Cas enzymes in nature, many of those have so far been relatively inefficient for genome-editing applications, says Doudna. By contrast, some of the viral Casλ enzymes combine both small size and high efficiency. In the meantime, researchers will continue to search microbes for potential improvements to known CRISPR–Cas systems.

 

Makarova anticipates that scientists will also be looking for CRISPR–Cas systems that have been picked up by plasmids — bits of DNA that can be transferred from microbe to microbe. “Each year we have thousands of new genomes becoming available, and some of them are from very distinct environments,” she says. “So it’s really going to be interesting.”

 

Published in Nature (Nov. 23, 2022): https://doi.org/10.1038/d41586-022-03837-8


Via Juan Lama
No comment yet.
Rescooped by Dr. Stefan Gruenwald from Virus World
Scoop.it!

Monkeypox Mutations Cause Virus to Spread Rapidly, Evade Drugs and Vaccines, Study Finds

Monkeypox Mutations Cause Virus to Spread Rapidly, Evade Drugs and Vaccines, Study Finds | Amazing Science | Scoop.it

Monkeypox has infected more than 77,000 people in more than 100 countries worldwide, and—similar to COVID-19—mutations have enabled the virus to grow stronger and smarter, evading antiviral drugs and vaccines in its mission to infect more people. Now, a team of researchers at the University of Missouri has identified the specific mutations in the monkeypox virus that contribute to its continued infectiousness. The findings could lead to several outcomes: modified versions of existing drugs used to treat people suffering from monkeypox, or the development of new drugs that account for the current mutations to increase their effectiveness at reducing symptoms and the spread of the virus. Kamlendra Singh, a professor in the MU College of Veterinary Medicine and Christopher S. Bond Life Sciences Center principal investigator, collaborated with Shrikesh Sachdev, Shree Lekha Kandasamy and Hickman High School student Saathvik Kannan to analyze the DNA sequences of more than 200 strains of monkeypox virus spanning multiple decades, from 1965, when the virus first started spreading, to outbreaks in the early 2000s and again in 2022. "By doing a temporal analysis, we were able to see how the virus has evolved over time, and a key finding was the virus is now accumulating mutations specifically where drugs and antibodies from vaccines are supposed to bind," Sachdev said. "So, the virus is getting smarter, it is able to avoid being targeted by drugs or antibodies from our body's immune response and continue to spread to more people."
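
As a loose illustration of what such a temporal analysis can look like, the Python sketch below (a hypothetical example, not the study's actual method) groups aligned genome sequences by collection year and flags positions where the non-reference base becomes more common over time.

```python
# Hedged sketch of a temporal mutation scan over aligned, dated genome sequences.
from collections import defaultdict

def temporal_mutation_scan(aligned_seqs, years, reference):
    """aligned_seqs: equal-length strings aligned to `reference`; years: collection years."""
    by_year = defaultdict(list)
    for seq, year in zip(aligned_seqs, years):
        by_year[year].append(seq)

    trends = {}
    for pos, ref_base in enumerate(reference):
        freqs = []
        for year in sorted(by_year):
            seqs = by_year[year]
            mutated = sum(1 for s in seqs if s[pos] != ref_base)
            freqs.append((year, mutated / len(seqs)))
        # flag positions whose mutant frequency rises from the earliest to the latest year
        if freqs and freqs[-1][1] > freqs[0][1]:
            trends[pos] = freqs
    return trends
```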

 

Needles in a haystack

 

Singh has been studying virology and DNA genome replication for nearly 30 years. He said the homology, or structure, of the monkeypox virus is very similar to that of the vaccinia virus, which has been used as a vaccine against smallpox. This enabled Singh and his collaborators to create an accurate 3D computer model of the monkeypox virus proteins and identify both where the specific mutations are located and what their functions are in contributing to the virus becoming so infectious recently. "Our focus is on looking at the specific genes involved in copying the virus genome, and monkeypox is a huge virus with approximately 200,000 DNA bases in the genome," Singh said. "The DNA genome for monkeypox is converted into nearly 200 proteins, so it comes with all the 'armor' it needs to replicate, divide and continue to infect others. Viruses will make billions of copies of themselves and only the fittest will survive, as the mutations help them adapt and continue to spread."  Kannan and Kandasamy examined five specific proteins while analyzing the monkeypox virus strains: DNA polymerase, DNA helicase, bridging protein A22R, DNA glycosylase and G9R. "When they sent me the data, I saw that the mutations were occurring at critical points impacting DNA genome binding, as well as where drugs and vaccine-induced antibodies are supposed to bind," Singh said. "These factors are surely contributing to the virus' increased infectivity. This work is important because the first step toward solving a problem is identifying where the problem is specifically occurring in the first place, and it is a team effort."

 

The evolution of viruses

 

Researchers continue to question how the monkeypox virus has evolved over time. The efficacy of current CDC-approved drugs to treat monkeypox has been suboptimal, likely because they were originally developed to treat HIV and herpes but have since received emergency use authorization in an attempt to control the recent monkeypox outbreak. "One hypothesis is when patients were being treated for HIV and herpes with these drugs, they may have also been infected with monkeypox without knowing, and the monkeypox virus got smarter and mutated to evade the drugs," Singh said. "Another hypothesis is the monkeypox virus may be hijacking proteins we have in our bodies and using them to become more infectious and pathogenic." Singh and Kannan have been collaborating since the COVID-19 pandemic began in 2020, identifying the specific mutations causing COVID-19 variants, including Delta and Omicron. Kannan was recently recognized by the United Nations for supporting its 'Sustainable Development Goals,' which help tackle the world's greatest challenges. "I could not have done this research without my team members, and our efforts have helped scientists and drug developers assist with these virus outbreaks, so it is rewarding to be a part of it," Singh said. "Mutations in the monkeypox virus replication complex: Potential contributing factors to the 2022 outbreak" was recently published in the Journal of Autoimmunity. Co-authors on the study include Shrikesh Sachdev, Athreya Reddy, Shree Lekha Kandasamy, Siddappa Byrareddy, Saathvik Kannan and Christian Lorson.

 

The cited research was published in the Journal of Autoimmunity (Dec. 22, 2022):

https://doi.org/10.1016/j.jaut.2022.102928 


Via Juan Lama
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Materials Made of Mechanical Neural Networks Can Learn to Adapt Their Physical Properties

Materials Made of Mechanical Neural Networks Can Learn to Adapt Their Physical Properties | Amazing Science | Scoop.it

A new type of material can learn and improve its ability to deal with unexpected forces thanks to a unique lattice structure with connections of variable stiffness, as described in a new paper. The new material is a type of architected material, which gets its properties mainly from the geometry and specific traits of its design rather than what it is made out of. Take hook-and-loop fabric closures like Velcro, for example. It doesn’t matter whether it is made from cotton, plastic or any other substance. As long as one side is a fabric with stiff hooks and the other side has fluffy loops, the material will have the sticky properties of Velcro.

 

The new material’s architecture is based on that of an artificial neural network—layers of interconnected nodes that can learn to do tasks by changing how much importance, or weight, they place on each connection. Theoretically, such a mechanical lattice with physical nodes could be trained to take on certain mechanical properties by adjusting each connection’s rigidity.

 

To find out whether a mechanical lattice would indeed be able to adopt and maintain new properties—like taking on a new shape or changing directional strength—scientists started off by building a computer model. They then selected a desired shape for the material as well as input forces, and had a computer algorithm tune the tensions of the connections so that the input forces would produce the desired shape. They did this training on 200 different lattice structures and found that a triangular lattice was best at achieving all of the shapes tested.
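
A minimal sketch of that tuning loop is shown below; the lattice simulator is left as a caller-supplied function and the optimizer is plain random search, so this only illustrates the idea rather than reproducing the authors' algorithm.

```python
# Hedged sketch: tune per-connection stiffness so that, under a fixed input load,
# the simulated node positions approach a target shape. `simulate` maps a
# stiffness vector to flattened node coordinates and is supplied by the caller.
import random

def tune_stiffness(simulate, target_positions, n_connections,
                   iters=2000, step=0.05, k_min=0.1, k_max=10.0):
    k = [1.0] * n_connections                      # initial stiffness per connection

    def shape_error(kvec):
        pos = simulate(kvec)                       # node positions under the input forces
        return sum((p - t) ** 2 for p, t in zip(pos, target_positions))

    best = shape_error(k)
    for _ in range(iters):
        i = random.randrange(n_connections)        # perturb one connection at a time
        trial = list(k)
        trial[i] = min(k_max, max(k_min, trial[i] + random.uniform(-step, step)))
        err = shape_error(trial)
        if err < best:                             # keep perturbations that reduce the error
            k, best = trial, err
    return k, best
```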

 

Once the many connections are tuned to achieve a set of tasks, the material will continue to react in the desired way. The training is—in a sense—remembered in the structure of the material itself.

 

The researchers then built a physical prototype lattice with adjustable electromechanical springs arranged in a triangular lattice. The prototype is made of 6-inch connections and is about 2 feet long by 1½ feet wide. And it worked beautifully. When the lattice and algorithm worked together, the material was able to learn and change shape in particular ways when subjected to different forces. The scientists call this new material a mechanical neural network.

No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Putting the brakes on lithium-ion batteries to prevent fires

Putting the brakes on lithium-ion batteries to prevent fires | Amazing Science | Scoop.it

Lithium-ion (Li-ion) batteries are used to power everything from smart watches to electric vehicles, thanks to the large amounts of energy they can store in small spaces. When overheated, however, they’re prone to catching fire or even exploding. But recent research published in ACS’ Nano Letters offers a possible solution with a new technology that can swiftly put the brakes on a Li-ion battery, shutting it down when it gets too hot.

 

The chemistry found in many batteries is essentially the same: Electrons are shuttled through an electronic device in a circuit from one electrode in the battery to another. But in a Li-ion cell, the electrolyte liquid that separates these electrodes can evaporate when it overheats, causing a short circuit. In certain cases, short circuiting can lead to thermal runaway, a process in which a cell heats itself uncontrollably. When multiple Li-ion cells are chained together — such as in electric vehicles — thermal runaway can spread from one unit to the next, resulting in a very large, hard-to-fight fire. To prevent this, some batteries now have fail-safe features, such as external vents, temperature sensors or flame-retardant electrolytes. But these measures often either kick in too late or harm performance. So, Yapei Wang, Kai Liu and colleagues wanted to create a Li-ion battery that could shut itself down quickly, but also work just as well as existing technologies.

 

The researchers used a thermally responsive shape-memory polymer covered with a conductive copper spray to create a material that would transmit electrons most of the time, but switch to being an insulator when heated excessively. At around 197 °F, a microscopic 3D pattern programmed into the polymer appeared, breaking apart the copper layer and stopping the flow of electrons. This permanently shut down the cell but prevented a potential fire. At this temperature, however, traditional cells kept running, putting them at risk of thermal runaway if they became hot again. Under regular operating temperatures, the battery with the new polymer maintained a high conductivity, low resistivity and a cycling lifetime similar to that of a traditional battery cell. The researchers say that this technology could make Li-ion batteries safer without having to sacrifice their performance.

No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Prehistoric vicious predator? Artificial intelligence says no!

Prehistoric vicious predator? Artificial intelligence says no! | Amazing Science | Scoop.it

Artificial intelligence has revealed that prehistoric footprints thought to be made by a vicious dinosaur predator were in fact from a timid herbivore. In an international collaboration, University of Queensland palaeontologist Dr Anthony Romilio used AI pattern recognition to re-analyse footprints from the Dinosaur Stampede National Monument, south-west of Winton in Central Queensland.

"Large dinosaur footprints were first discovered back in the 1970s at a track site called the Dinosaur Stampede National Monument, and for many years they were believed to be left by a predatory dinosaur, like Australovenator, with legs nearly two metres long," said Dr Romilio.

 

"The mysterious tracks were thought to be left during the mid-Cretaceous Period, around 93 million years ago. "But working out what dino species made the footprints exactly -- especially from tens of millions of years ago -- can be a pretty difficult and confusing business. Particularly since these big tracks are surrounded by thousands of tiny dinosaur footprints, leading many to think that this predatory beast could have sparked a stampede of smaller dinosaurs. So, to crack the case, we decided to employ an AI program called Deep Convolutional Neural Networks."

 

It was trained with 1,500 dinosaur footprints, all of which were theropod or ornithopod in origin -- the groups of dinosaurs relevant to the Dinosaur Stampede National Monument prints. The results were clear: the tracks had been made by a herbivorous ornithopod dinosaur.
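
The study's exact network is not described in this summary, but a binary footprint classifier of this kind might look roughly like the PyTorch sketch below; the input size, layer sizes and training setup are illustrative assumptions, not the published architecture.

```python
# Hedged sketch of a small CNN for classifying footprint outlines as
# theropod vs. ornithopod; architecture and hyperparameters are illustrative.
import torch
import torch.nn as nn

class FootprintCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)          # two classes: theropod, ornithopod

    def forward(self, x):                           # x: (B, 1, 128, 128) footprint images
        return self.classifier(self.features(x).flatten(1))

model = FootprintCNN()
loss_fn = nn.CrossEntropyLoss()                     # standard cross-entropy training
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```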

 

Dr Jens Lallensack, lead author from Liverpool John Moores University in the UK, said that the computer assistance was vital, as the team was originally at an impasse. "We were pretty stuck, so thank god for modern technology," Dr Lallensack said. "In our research team of three, one person was pro-meat-eater, one person was undecided, and one was pro-plant-eater. So -- to really check our science -- we decided to go to five experts for clarification, plus use AI. The AI was the clear winner, outperforming all of the experts by a wide margin, with a margin of error of around 11 per cent."

No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Waymo Robotaxis Open to Public Transportation in Phoenix, AZ, Granted San Francisco License

Waymo Robotaxis Open to Public Transportation in Phoenix, AZ, Granted San Francisco License | Amazing Science | Scoop.it

In San Francisco, California, Waymo – the self-driving tech company owned by Google parent Alphabet – moved a step closer to launching a fully autonomous commercialized ride-hailing service, like the one currently operated by its chief rival, the General Motors-owned Cruise.

 

And in Phoenix, Arizona, Waymo’s driverless service has been made available to members of the general public in the central downtown area. The breakthrough in San Francisco came via the California Department of Motor Vehicles’ approval of an amendment to the company’s current permit to operate. Now Waymo will be able to charge fees for driverless services in its autonomous vehicles (AVs), such as deliveries.

 

Once it has operated a driverless service on public roads in the city for a total of 30 days, it will then be eligible to submit an application to the California Public Utilities Commission (CPUC) for a permit that would enable it to charge fares for passenger-only autonomous rides in its vehicles.

 

This is the same permit that provided the green light for Cruise’s commercial driverless ride-hail service at the start of June. The CPUC awarded a drivered deployment permit to Waymo in February, which allowed the company to charge its ‘trusted testers’ for autonomous rides with a safety operator on board. The trusted tester program comprises vetted members of the public who have applied to use the service and have signed an NDA, which means they will not talk about their experiences publicly. In downtown Phoenix, the extension of the driverless ride-hail service is the latest evidence of the incremental progress Waymo has made in the city.

 

Over the past couple of years, the company has operated a paid rider-only service in some of Phoenix’s eastern suburbs, such as Gilbert, Mesa, Chandler and Tempe. Earlier this year it moved into the busier, more central downtown area, where driverless rides were made available for trusted testers. It also trialled an autonomous service for employees at Phoenix Sky Harbor International Airport, albeit with a safety operator on board. In early November, it was confirmed that airport rides would be offered to trusted testers, although again Waymo made clear that there would be a specialist in the driver’s seat, initially at least.

No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

BLOOM: A 176 Billion Parameter Open-Access Multilingual Language Model

BLOOM: A 176 Billion Parameter Open-Access Multilingual Language Model | Amazing Science | Scoop.it

Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations (Google, OpenAI, Microsoft, etc.) and are frequently kept private and away from the public. As a step towards democratizing this powerful technology, the authors present BLOOM, a 176-billion-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). The developers find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, the BLOOM team will publicly release all models and code under the Responsible AI License.
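
Because the models and code are openly released, BLOOM can be loaded through the Hugging Face Transformers library. The snippet below is a minimal usage sketch that uses the small publicly hosted 560M-parameter variant, since the full 176B checkpoint requires substantial multi-GPU hardware.

```python
# Minimal usage sketch with Hugging Face Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"   # swap for "bigscience/bloom" given sufficient hardware
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```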

No comment yet.