Graphene has properties almost as if out of a picture book. It is simultaneously strong and flexible, it lets water pass through yet filters out gases, it is as transparent as glass yet conducts electricity better than a metal, and on top of all that it is as thin as a material can be: graphene consists of just a single layer of atoms. No surprise, then, that interest in this wonder material (Nobel Prize 2010) has exploded. Unfortunately, graphene displays all of these properties only when it is essentially defect-free, and this is exactly what limits commercial production today.
Recently, graphene films of the highest quality have been produced on copper catalysts. The trick is to melt the copper by heating it above 1100 °C, so that the film grows above the liquid metal surface. Why these extreme conditions favor the formation of defect-free graphene, however, is largely unknown; not least, the high temperatures and gas flows during growth make it hard to gain the necessary atomic-level insights. Much hope therefore rests on AI and machine-learning methods that can accelerate highest-accuracy quantum-mechanical predictions to the point where so-called molecular dynamics simulations can correctly account for the high mobility of atoms at the liquid copper surface. Up to now, however, there has been very little experience of how reliable these AI approaches are.
As an important benchmark, scientists at the Fritz Haber Institute (FHI) predicted the height of a growing graphene layer above the liquid copper catalyst using these novel approaches and compared it against measurements from high-level synchrotron experiments performed within a European consortium. The result is striking: the machines were right to within 10^-11 m, or less than one millionth the width of a hair. This high precision, called “sub-Ångström” in the jargon of the field, demonstrates the impressive capability of the novel AI approaches to deliver detailed insights into the microscopic growth process. “Our results nicely show the unprecedented possibilities”, summarizes the leader of the FHI team, Dr. Heenen, proudly. “Intriguingly, the data on adsorption height suggest an essentially identical chemical interaction of graphene with solid and liquid copper. This renders the superior synthesis above the liquid metal surface even more mysterious.”
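As a quick sanity check of that comparison, taking a typical human hair to be roughly 70 µm across (an assumed value, not one given in the text):

$$\frac{10^{-11}\ \mathrm{m}}{7\times10^{-5}\ \mathrm{m}} \approx 1.4\times10^{-7},$$

i.e. about one seven-millionth of a hair's width, comfortably below the quoted one millionth.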
The Hubble Space Telescope identified its most distant single star to date, whose light has taken 12.9 billion years to reach Earth.
The most distant single star seen yet dates back to less than 1 billion years after the universe's birth in the Big Bang, and may shed light on the earliest stars in the cosmos, a new study finds.
The scientists nicknamed the star "Earendel," from an Old English word meaning "morning star" or "rising light." Earendel, whose technical designation is WHL0137-LS, is at least 50 times the mass of the sun and millions of times as bright.
This newfound star, detected by NASA's Hubble Space Telescope, is so far away that its light has taken 12.9 billion years to reach Earth, appearing to us as it was when the universe was about 900 million years old, just 7% of its current age. Until now, the most distant single star detected, discovered by Hubble in 2018, existed when the universe was about 4 billion years old, or 30% of its current age.
Normally, even a star as brilliant as Earendel would be impossible to see from Earth given the vast divide between the two. Previously, the smallest objects seen at such a great distance were clusters of stars embedded inside early galaxies.
Scientists detected Earendel with the help of a huge galaxy cluster, WHL0137-08, sitting between Earth and the newfound star. The gravitational pull of this enormous galaxy cluster warped the fabric of space and time, resulting in a powerful natural magnifying glass that greatly amplified the light from distant objects behind the galaxy, such as Earendel. This gravitational lensing has distorted the light from the galaxy hosting Earendel into a long crescent the researchers named the Sunrise Arc.
The rare way in which Earendel aligned with WHL0137-08 meant that the star appeared directly on, or extremely close to, a curve in spacetime that provided maximum brightening, causing Earendel to stand out from the general glow of its home galaxy. This effect is analogous to the rippled surface of a swimming pool creating patterns of bright light on the bottom of the pool on a sunny day: the ripples on the surface act as lenses and focus sunlight to maximum brightness on the pool floor. Study lead author Brian Welch, an astronomer at Johns Hopkins University, emphasized that this is not the most distant object scientists have ever discovered. "Hubble has observed galaxies at greater distances," he explained. "However, we see the light from their millions of stars all blended together. This is the most distant object where we can identify light from an individual star."
He also noted this star was distant, but not old. "We see the star as it was 12.8 billion years ago, but that does not mean the star is 12.8 billion years old," Welch said. Instead, it's probably just a few million years old and never reached old age. "Given its mass, it almost certainly has not survived to today, as more massive stars tend to burn through their fuel faster and thus explode, or collapse into black holes, sooner," he added of Earendel. "The oldest stars known would have formed at a similar time, but they are much less massive, so they have continued to shine until today."
Many details about Earendel remain uncertain, such as its mass, brightness, temperature and type. Scientists are not even sure yet if Earendel is one star or two — most stars of Earendel's mass usually do have a smaller, dimmer companion, and it's possible that Earendel is outshining its partner. Scientists intend to conduct followup observations with NASA's recently launched James Webb Space Telescope to analyze Earendel's infrared light and pin down many of its features. Such information in turn could help shed light on the first stars in the universe, which formed before the universe was filled with the heavy elements produced by successive generations of massive stars.
The exponential accumulation of digital data is expected to outstrip the capacity of magnetic and optical storage media.
DNA is an incredibly dense (up to 455 exabytes per gram, 6 orders of magnitude denser than magnetic or optical media) and stable (readable over millennia) digital storage medium (1–3). Storage and retrieval of up to gigabytes of digital information in the form of text, images, and movies have been successfully demonstrated. DNA’s essential biological role ensures that the technology for manipulating DNA will never succumb to obsolescence. However, performing computation on the stored data typically involves sequencing the DNA, electronically computing the desired transformation, and synthesizing new DNA. This expensive and slow loop limits the applicability of DNA storage to rarely accessed data (cold storage).
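That headline density can be roughly reproduced with a back-of-the-envelope calculation, assuming an average nucleotide mass of about 330 g/mol and 2 bits encoded per base (both assumptions, not figures quoted above):

$$\frac{6.022\times10^{23}\ \mathrm{nt/mol}}{330\ \mathrm{g/mol}}\times 2\ \mathrm{bits/nt} \;\approx\; 3.6\times10^{21}\ \mathrm{bits/g} \;\approx\; 4.6\times10^{20}\ \mathrm{bytes/g} \;\approx\; 455\ \mathrm{EB/g}.$$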
In contrast to traditional (passive) DNA storage, schemes for dynamic DNA storage allow access and modification of data without sequencing and/or synthesis. Upon binding to molecular probes, files can be accessed selectively (4) and modified through PCR amplification (5). Introducing or inhibiting binding of molecular probes with existing data barcodes can rename or delete files (6). Information encoded in the hybridization pattern of DNA can be written and erased (7) and can even undergo basic logic operations such as AND and OR (8) using strand displacement. By encoding information in the nicks of naturally occurring DNA [a.k.a. native DNA (9)], data can be modified through ligation or cleavage (10). With image similarities encoded in the binding affinities of DNA query probes and data, similarity searches on databases can be performed through DNA hybridization (11, 12). Although these advances allow information to be directly accessed and edited within the storage medium, they nevertheless demonstrate limited or no capacity for computation in DNA.
Conveniently, beyond its role as a storage medium, DNA has proved to be a programmable medium for computation, primarily via “strand displacement” reactions. With the understanding of the kinetics and thermodynamics of DNA strand displacement (13–15), a variety of rationally designed molecular computing devices have been engineered. These include molecular implementations of logic circuits (16–18), neural networks (19, 20), consensus algorithms (21), dynamical systems including oscillators (22), and cargo-sorting robots (23). Given the achievements of strand displacement systems and their inherent molecular parallelism, DNA computation schemes appear well suited to directly carry out computation on big data stored in DNA.
A research team now proposes a paradigm called SIMD||DNA (Single Instruction Multiple Data computation with DNA) that integrates DNA storage with in-memory computation by strand displacement. A preliminary version of the theoretical results appeared as a conference paper (25). Inspired by methods of storing information in the positions of nicks in double-stranded DNA (9, 10), SIMD||DNA encodes information in a multi-stranded DNA complex with a unique pattern of nicks and exposed single-stranded regions (a register). Although storage density is somewhat reduced (by approximately a factor of 30), encoding information in nicks still achieves orders of magnitude higher density than magnetic and optical technologies. To manipulate information, an instruction (a set of strands) is applied in parallel to all registers, which all share the same sequence space. The strand composition of a register changes when the applied instruction strands trigger strand displacement reactions within that register. Non-reacted instruction strands and reaction waste products are washed away via magnetic bead separation to prepare for the next instruction. Each instruction can change every bit on every register, yielding a high level of parallelism. These experiments routinely manipulated 10^10 registers in parallel. DNA storage recovery studies suggest that, in principle, 10^8 distinct register values can be stored in one such sample.
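To make the SIMD idea concrete in software terms, the sketch below treats each register as an abstract bit pattern and applies one instruction to every register at once. This is a purely conceptual, hypothetical illustration, not the authors' code: the strand-displacement chemistry and wash steps are hidden behind an ordinary function call, and the "increment" instruction is just an example.

```python
# Conceptual sketch only (hypothetical, not from the SIMD||DNA paper):
# every register holds a bit pattern, and a single instruction is applied
# to all registers in parallel (Single Instruction, Multiple Data).
from typing import Callable, List

Register = List[int]                      # abstract stand-in for a nicked DNA complex
Instruction = Callable[[Register], Register]

def apply_instruction(registers: List[Register], instr: Instruction) -> List[Register]:
    """Apply one instruction to every register in parallel."""
    return [instr(reg) for reg in registers]

def increment(reg: Register) -> Register:
    """Toy instruction: add 1 to a register stored least-significant bit first."""
    out, carry = [], 1
    for bit in reg:
        total = bit + carry
        out.append(total % 2)
        carry = total // 2
    return out

if __name__ == "__main__":
    registers = [[0, 0, 0], [1, 0, 0], [1, 1, 0]]     # values 0, 1, 3
    registers = apply_instruction(registers, increment)
    print(registers)                                   # [[1,0,0], [0,1,0], [0,0,1]] -> 1, 2, 4
```

In the wet-lab system, the analogue of `increment` is a set of instruction strands, and the analogue of the list comprehension is simply mixing those strands with every register molecule in the tube at once.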
Scientists are training a gargantuan one-trillion-parameter generative AI system dubbed 'ScienceGPT' on scientific data using the newly established Aurora supercomputer.
The AuroraGPT AI model is being trained by researchers at Argonne National Laboratory (ANL) in Illinois, USA, with backing from the US government; the main computing power comes from Intel's Ponte Vecchio GPUs. Training could take months to complete, according to HPC Wire, and is currently limited to 256 of the roughly 10,000 nodes of the Aurora supercomputer before being scaled up over time. For now, Intel and ANL are testing the model training on a string of just 64 nodes, proceeding cautiously because of Aurora's unique design as a supercomputer.
At one trillion parameters, ScienceGPT will be one of the largest LLMs out there. While it won't quite reach the size of the reported 1.7-trillion-parameter GPT-4, developed by OpenAI, it will be almost twice as large as the 540-billion-parameter Pathways Language Model, which powers Google's Bard. “It combines all the text, codes, specific scientific results, papers, into the model that science can use to speed up research,” said Ogi Brkic, vice president and general manager for data center and HPC solutions, in a press briefing. It will operate like ChatGPT, but it is unclear at the moment whether it will be multimodal, that is, able to generate different kinds of media such as text, images, and video.
Aurora – which will be the second exascale supercomputer in US history – has just established itself on the Top500 list of the most powerful supercomputers after years of development. It is the second-most powerful supercomputer after Frontier, and is powered by 60,000 Intel GPUs while boasting 10,000 computing nodes across 166 racks, alongside more than 80,000 networking nodes. It is still being finished, however, and will likely exceed Frontier's performance once it is fully up to speed and all testing and fine-tuning is complete, said Top500.
The landmark decision could transform the treatment of sickle-cell disease and β-thalassaemia — but the technology is expensive.
In a world first, the UK medicines regulator has approved a therapy that uses the CRISPR–Cas9 gene-editing tool as a treatment. The decision marks another high point for a biotechnology that has been lauded as revolutionary in the decade since its discovery.
The therapy, called Casgevy, will treat the blood conditions sickle-cell disease and β-thalassaemia. Sickle-cell disease, also known as sickle-cell anaemia, can cause debilitating pain, and people with β-thalassaemia often require regular blood transfusions.
“This is a landmark approval which opens the door for further applications of CRISPR therapies in the future for the potential cure of many genetic diseases,” said Kay Davies, a geneticist at the University of Oxford, UK, in comments to the UK Science Media Centre (SMC).
Nature magazine explains the research behind the treatment and explores what’s next.
What research led to the approval?
The approval by the Medicines and Healthcare products Regulatory Agency (MHRA) follows promising results from clinical trials that tested a one-time treatment, which is administered by intravenous infusion. The therapy was developed by the pharmaceutical company Vertex Pharmaceuticals in Boston, Massachusetts, and biotechnology company CRISPR Therapeutics in Zug, Switzerland.
The trial for sickle-cell disease has followed 29 out of 45 participants long enough to draw interim results. Casgevy completely relieved 28 of those people of debilitating episodes of pain for at least one year after treatment. Researchers also tested the treatment for a severe form of β-thalassaemia, which is conventionally treated with blood transfusions roughly once a month. In this trial, 54 people received Casgevy, of whom 42 participated for long enough to provide interim results. Among those 42 participants, 39 did not need a red-blood-cell transfusion for at least one year. The remaining three had their need for blood transfusions reduced by more than 70%.
The Chinese government will accelerate the widespread production of advanced humanoid robots by funding more startups in the robotics field.
China is hoping to welcome robotkind in just two years' time. The country plans to produce its first humanoid robots by 2025, according to an ambitious blueprint published by the Ministry of Industry and Information Technology (MIIT) last week. The MIIT says the advanced bipedal droids have the power to reshape the world, carrying out menial, repetitive tasks in farms, factories, and houses to alleviate our workload.
“They are expected to become disruptive products after computers, smartphones, and new energy vehicles,” the document states. The government will accelerate the development of the robots by funding more young companies in the field, as reported by Bloomberg. Fourier Intelligence is one such Chinese startup hoping to start mass-producing general-purpose humanoid robots by the end of this year. The Fourier GR-1 measures five feet and four inches and weighs around 121 pounds. With 40 joints, the bot reportedly has “unparalleled agility” and human-like movement. It can also walk at roughly 3 mph and complete basic tasks.
China isn’t the only country working on our future robot helpers, of course. In the U.S., Tesla is continuing to refine Optimus. The bipedal humanoid robot has progressed rapidly since the first shaky prototype was revealed at the marque’s AI day in 2022. It can now do yoga, in fact. Tesla has yet to announce a firm timetable for when Optimus will hit the market, but CEO Elon Musk has previously said that the $20,000 robot could be ready in three to five years.
Meanwhile, Boston Dynamics—makers of Spot, the $75,000 robotic dog—has built another decidedly agile bipedal robot. Atlas showed it could move various obstacles earlier this year, after nailing a parkour course in 2021. Boston Dynamics' Atlas is a research platform and not available for purchase, but the robot does show the U.S. is on par with China in terms of droid design.
Integrating neurons into digital systems may enable performance infeasible with silicon alone. A team of neuroscientists has recently developed DishBrain, a system that harnesses the inherent adaptive computation of neurons in a structured environment. In vitro neural networks of human or rodent origin are integrated with in silico computing via a high-density multielectrode array. Through electrophysiological stimulation and recording, cultures are embedded in a simulated game-world mimicking the arcade game “Pong.” Applying implications from the theory of active inference via the free energy principle, the researchers find apparent learning within five minutes of real-time gameplay that is not observed in control conditions. Further experiments demonstrate the importance of closed-loop structured feedback in eliciting learning over time. Cultures display the ability to self-organize activity in a goal-directed manner in response to sparse sensory information about the consequences of their actions, creating a form of synthetic biological intelligence. Future applications may provide further insights into the cellular correlates of intelligence.
Terrible to terrific: A new antifungal molecule tweaks a powerful drug to harness its power against infection while doing away with its toxicity.
A new antifungal molecule, devised by tweaking the structure of prominent antifungal drug Amphotericin B, has the potential to harness the drug’s power against fungal infections while doing away with its toxicity, researchers at the University of Illinois Urbana-Champaign and collaborators at the University of Wisconsin-Madison report in the journal Nature.
Amphotericin B, a naturally occurring small molecule produced by bacteria, is a drug used as a last resort to treat fungal infections. While AmB excels at killing fungi, it is reserved as a last line of defense because it also is toxic to the human patient – particularly the kidneys.
“Fungal infections are a public health crisis that is only getting worse. And they have the potential, unfortunately, of breaking out and having an exponential impact, kind of like COVID-19 did. So let’s take one of the powerful tools that nature developed to combat fungi and turn it into a powerful ally,” said research leader Dr. Martin D. Burke, an Illinois professor of chemistry, a professor in the Carle Illinois College of Medicine and also a medical doctor.
“This work is a demonstration that, by going deep into the fundamental science, you can take a billion-year head start from nature and turn it into something that hopefully is going to have a big impact on human health,” Burke said.
Burke’s group has spent years exploring AmB in hopes of making a derivative that can kill fungi without harm to humans. In previous studies, they developed and leveraged a building block-based approach to molecular synthesis and teamed up with a group specializing in molecular imaging tools called solid-state nuclear magnetic resonance, led by professor Chad Rienstra at the University of Wisconsin-Madison. Together, the teams uncovered the mechanism of the drug: AmB kills fungi by acting like a sponge to extract ergosterol from fungal cells.
In the recent work, Burke’s group worked again with Rienstra’s group to find that AmB similarly kills human kidney cells by extracting cholesterol, the most common sterol in people. The researchers also resolved the atomic-level structure of AmB sponges when bound to both ergosterol and to cholesterol.
“The atomic resolution models were really the key to zoom in and identify these very subtle differences in binding interactions between AmB and each of these sterols,” said Illinois graduate student Corinne Soutar, a co-first author of the paper. “Using this structural information along with functional and computational studies, we achieved a significant breakthrough in understanding how AmB functions as a potent fungicidal drug,” Rienstra said. “This provided the insights to modify AmB and tune its binding properties, reducing its interaction with cholesterol and thereby reducing the toxicity.”
Deep within every piece of magnetic material, electrons dance to the invisible tune of quantum mechanics. Their spins, akin to tiny atomic tops, dictate the magnetic behavior of the material they inhabit. This microscopic ballet is the cornerstone of magnetic phenomena, and it's these spins that a team of JILA researchers—headed by JILA Fellows and University of Colorado Boulder professors Margaret Murnane and Henry Kapteyn—has learned to control with remarkable precision, potentially redefining the future of electronics and data storage.
In a Science Advances publication, the JILA team—along with collaborators from universities in Sweden, Greece, and Germany—probed the spin dynamics within a special material known as a Heusler compound: a mixture of metals that behaves like a single magnetic material.
For this study, the researchers utilized a compound of cobalt, manganese, and gallium, which behaved as a conductor for electrons whose spins were aligned upwards and as an insulator for electrons whose spins were aligned downwards. Using a form of light called extreme ultraviolet high-harmonic generation (EUV HHG) as a probe, the researchers could track the re-orientations of the spins inside the compound after exciting it with a femtosecond laser, which caused the sample to change its magnetic properties. The key to accurately interpreting the spin re-orientations was the ability to tune the color of the EUV HHG probe light.
"In the past, people haven't done this color tuning of HHG," explained co-first author and JILA graduate student Sinéad Ryan. "Usually, scientists only measured the signal at a few different colors, maybe one or two per magnetic element at most." In a monumental first, the JILA team tuned their EUV HHG light probe across the magnetic resonances of each element within the compound to track the spin changes with a precision down to femtoseconds (a quadrillionth of a second).
"On top of that, we also changed the laser excitation fluence, so we were changing how much power we used to manipulate the spins," Ryan elaborated, highlighting that that step was also an experimental first for this type of research.
Established in 1964, the IUCN Red List of Threatened Species has evolved to become the world’s most comprehensive information source on the global conservation status of animal, fungi and plant species.
In addition to species changing status, The IUCN Red List grows larger with each update as newly described species and species from the less well-known groups are assessed for the first time (Figure 1). IUCN and its partners are working to expand the number of taxonomic groups that have full and complete Red List assessments in order to improve our knowledge of the status of the world's biodiversity; see the Barometer of Life page for more information about this work.
Not all taxonomic groups have been completely assessed (see Table 1 and Figure 2). It is very important to consider this when looking at the numbers of species in each Red List Category and the proportions of threatened species within each group; although The IUCN Red List gives a good snapshot of the current status of species, it should not be interpreted as a full and complete assessment of the world's biodiversity. For more information on the work underway to expand taxonomic coverage on The IUCN Red List, see the Barometer of Life page.
How many species are threatened?
Species assessed as Critically Endangered (CR), Endangered (EN), or Vulnerable (VU) are referred to as "threatened" species. However, Extinct in the Wild (EW) species can move into the threatened categories following successful reintroduction. Therefore, EW species should be included when reporting proportions of threatened species.
Reporting the proportion of threatened species on The IUCN Red List is complicated because:
not all species groups have been fully evaluated, and
some species have so little information available that they can only be assessed as Data Deficient (DD).
For many of the incompletely evaluated groups, assessment efforts have focused on those species that are likely to be threatened; therefore any percentage of threatened species for these groups would be heavily biased (i.e., the % threatened species would likely be an overestimate).
For those groups that have been comprehensively evaluated, the proportion of threatened species can be calculated, but the true number of threatened species is often uncertain because it is not known whether DD species are actually threatened or not. Some taxonomic groups are much better known than others (i.e., they will have fewer DD species), and therefore a more accurate figure can be calculated for them. Other, less well-known groups have a large proportion of DD species, which brings uncertainty into the estimate.
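For a fully assessed group, that uncertainty is usually handled by bracketing the proportion of threatened species between two extremes (no DD species are threatened versus all DD species are threatened), with a best estimate that treats DD species as threatened in the same proportion as data-sufficient species. Below is a minimal sketch of that calculation; the code and the example numbers are illustrative, not official IUCN figures.

```python
# Minimal sketch (not official IUCN code) of how Data Deficient (DD) species
# are usually handled when reporting the proportion of threatened species
# in a comprehensively assessed group.

def threatened_bounds(threatened: int, dd: int, assessed: int) -> dict:
    """
    threatened: species assessed as CR, EN or VU
    dd:         species assessed as Data Deficient
    assessed:   all species assessed in the group
    """
    lower = threatened / assessed             # assume no DD species is threatened
    upper = (threatened + dd) / assessed      # assume every DD species is threatened
    best = threatened / (assessed - dd)       # assume DD species are threatened in the
                                              # same proportion as data-sufficient ones
    return {"lower": lower, "best": best, "upper": upper}

# Hypothetical group: 1,000 species assessed, 250 threatened, 150 Data Deficient.
print(threatened_bounds(threatened=250, dd=150, assessed=1000))
# {'lower': 0.25, 'best': 0.294..., 'upper': 0.4}
```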
COVID-19, the disease caused by SARS-CoV-2, has caused significant morbidity and mortality worldwide. The betacoronavirus continues to evolve with global health implications as we race to learn more to curb its transmission, evolution, and sequelae. The focus of this review, the second of a three-part series, is on the biological effects of the SARS-CoV-2 virus on post-acute disease in the context of tissue and organ adaptations and damage. We highlight the current knowledge and describe how virological, animal, and clinical studies have shed light on the mechanisms driving the varied clinical diagnoses and observations of COVID-19 patients. Moreover, we describe how investigations into SARS-CoV-2 effects have informed the understanding of viral pathogenesis and provide innovative pathways for future research on the mechanisms of viral diseases.
No one had ever seen one virus latching onto another virus, until anomalous sequencing results sent a UMBC team down a rabbit hole leading to a first-of-its-kind discovery. It's known that some viruses, called satellites, depend not only on their host organism to complete their life cycle, but also on another virus, known as a "helper," explains Ivan Erill, professor of biological sciences.
The satellite virus needs the helper either to build its capsid, a protective shell that encloses the virus's genetic material, or to help it replicate its DNA. These viral relationships require the satellite and the helper to be in proximity to each other at least temporarily, but there were no known cases of a satellite actually attaching itself to a helper—until now.
In a paper published in The ISME Journal, a UMBC team and colleagues from Washington University in St. Louis (WashU) describe the first observation of a satellite bacteriophage (a virus that infects bacterial cells) consistently attaching to a helper bacteriophage at its "neck"—where the capsid joins the tail of the virus. In detailed electron microscopy images taken by Tagide deCarvalho, assistant director of the College of Natural and Mathematical Sciences Core Facilities and first author on the new paper, 80 percent (40 out of 50) of helpers had a satellite bound at the neck. Some of those that did not had remnant satellite tendrils present at the neck. Erill, senior author on the paper, describes them as appearing like "bite marks." "When I saw it, I was like, I can't believe this," deCarvalho says. "No one has ever seen a bacteriophage—or any other virus—attach to another virus."
A long-term virus relationship
After the initial observations, Elia Mascolo, a graduate student in Erill's research group and co-first author on the paper, analyzed the genomes of the satellite, helper, and host, which revealed further clues about this never-before-seen viral relationship. Most satellite viruses contain a gene that allows them to integrate into the host cell's genetic material after they enter the cell. This allows the satellite to reproduce whenever a helper happens to enter the cell from then on. The host cell also copies the satellite's DNA along with its own when it divides.

A bacteriophage sample from WashU also contained a helper and a satellite. The WashU satellite has a gene for integration and does not directly attach to its helper, similar to previously observed satellite-helper systems. However, the satellite in UMBC's sample, named MiniFlayer by the students who isolated it, is the first known case of a satellite with no gene for integration. Because it can't integrate into the host cell's DNA, it must be near its helper—named MindFlayer—every time it enters a host cell if it is going to survive. Although the team did not directly prove this explanation, "attaching now made total sense," Erill says, "because otherwise, how are you going to guarantee that you are going to enter into the cell at the same time?"

Additional bioinformatics analysis by Mascolo and Julia López-Pérez, another Ph.D. student working with Erill, revealed that MindFlayer and MiniFlayer have been co-evolving for a long time. "This satellite has been tuning in and optimizing its genome to be associated with the helper for, I would say, at least 100 million years," Erill says, which suggests there may be many more cases of this kind of relationship waiting to be discovered.
An experimental gene therapy has restored the hearing of a child who was born deaf, the pharmaceutical giant Eli Lilly said in a statement. The child, who was 11 years old at the time the therapy was administered, experienced restored hearing within 30 days of treatment, Eli Lilly said in a release. The child participating in the clinical study was the first individual in the U.S. to receive the therapy for a genetic form of hearing loss, the company said.
The gene therapy, AK-OTOF, is being developed for the treatment of hearing loss due to mutations in the otoferlin gene. Eli Lilly added AK-OTOF to its portfolio through its acquisition of Akouos, a genetic-medicine company focused on inner-ear conditions.
Doctors and scientists have been pursuing gene therapy for hearing loss for more than 20 years, and “these initial results show that it may restore hearing better than many thought possible,” Dr. John Germiller, director of clinical research in the otolaryngology department at the Children’s Hospital of Philadelphia and principal investigator in the clinical trial, said in a statement.
The child’s hearing was restored across all tested frequencies and was within a normal range at some frequencies at 30 days after the treatment, Eli Lilly said.
Children with hearing problems due to otoferlin gene mutations are often born with profound hearing loss, but most of them have not had the genetic testing needed for a definitive diagnosis, Dr. Oliver Haag, head of otolaryngology at Sant Joan de Deu Hospital in Barcelona and an Akouos study investigator, said in a statement.
In observations of the planets Kepler-1625b and Kepler-1708b from the Kepler and Hubble space telescopes, researchers had previously reported traces of exomoons for the first time. A new study now raises doubts about these claims. As scientists from the Max Planck Institute for Solar System Research and the Sonnenberg Observatory, both in Germany, report in the journal Nature Astronomy, "planet-only" interpretations of the observations are more conclusive. For their analysis, the researchers used their newly developed computer algorithm Pandora, which facilitates and accelerates the search for exomoons. They also investigated what kinds of exomoons can in principle be found in modern space-based astronomical observations. Their answer is quite shocking.
In our Solar System, the fact that a planet is orbited by one or more moons is the rule rather than the exception: apart from Mercury and Venus, all of the planets have such companions; in the case of the gas giant Saturn, researchers have found 140 natural satellites to date. Scientists therefore consider it likely that planets in distant star systems also harbor moons. So far, however, there has only been evidence of such exomoons in two cases: Kepler-1625b and Kepler-1708b. This low yield is not surprising. After all, distant satellites are naturally much smaller than their home worlds - and therefore much harder to find. And it is extremely time-consuming to comb through the observational data of thousands of exoplanets for evidence of moons.
To make the search easier and faster, the authors of the new study rely on an algorithm they developed and optimized specifically for finding exomoons.
They published their method last year and the algorithm is available to all researchers as open source code. When applied to the observational data from Kepler-1625b and Kepler-1708b, the results were astonishing. "We would have liked to confirm the discovery of exomoons around Kepler-1625b and Kepler-1708b," says first author of the new study, MPS scientist Dr. René Heller. "But unfortunately, our analyses show otherwise," he adds.
At age 45, Dr. Lakiea Bailey said, she was the oldest person with sickle cell anemia that she knew. The executive director of the nonprofit patient advocacy group the Sickle Cell Consortium was diagnosed with sickle cell disease at age 3. Because of it, she’s had heart problems, had her hips replaced, and experienced serious pain all her life. Bailey told the US Food and Drug Administration’s independent advisory committee that she believes a cutting-edge therapy that is currently under review offers the sickle cell community something many haven’t ever had before: hope. “Hope is on the horizon, and we are looking toward this hope for a change of the lives that we are living of excruciating pain,” Bailey told the FDA committee.
The independent committee is helping the FDA think through how it should evaluate a treatment called exa-cel that could potentially cure people of sickle cell disease, a painful and deadly disease with no universally successful treatment. This was an ongoing discussion with no vote or decision about the therapy, but the discussion likely moves the US one step closer to approving a groundbreaking new treatment that uses gene editing. If approved, exa-cel, made by Boston-based Vertex Pharmaceuticals and the Swiss company CRISPR Therapeutics, would be the first FDA-approved treatment that uses the gene-editing technology called CRISPR. CRISPR, or clustered regularly interspaced short palindromic repeats, is a technology researchers use to selectively modify DNA, the carrier of genetic information that the body uses to function and develop. The FDA said treatment for severe sickle cell is an “unmet medical need.”
When someone has sickle cell disease their red blood cells don’t function the way they should. Red blood cells are the helper cells that carry oxygen from the lungs to the body’s tissues, which use this oxygen to produce energy. The process also generates waste in the form of carbon dioxide that the red blood cells take to the lungs to be exhaled out.
With sickle cell disease — also called sickle cell anemia — red blood cells take on a folded or sickle shape that can clog tiny blood vessels and cause progressive organ damage and pain, and can lead to organ failure. The sickle cells also die earlier than they should, which means the person is constantly short of red blood cells. One person with sickle cell who testified said she had been hospitalized 100 times just last year. Median life expectancy is only 45 years.

Sickle cell is rare, and it disproportionately impacts African Americans. About 100,000 people in the US are diagnosed with sickle cell and, of those, 20,000 have what’s considered severe disease. Up until now, the only real treatment has been a stem cell or bone marrow transplant. For stem cells, fewer than 20% of patients have an appropriately matched donor, the FDA said, and the transplants are risky and may not work. Sometimes a transplant can kill the patient.

The new exa-cel treatment under FDA consideration can use the patient’s own stem cells. Doctors would alter them with CRISPR to fix the genetic problems that cause sickle cell, and then the altered stem cells are given back to the patient in a one-time infusion. In company studies, the treatment was considered safe, and it had a “highly positive benefit-risk for patients with severe sickle cell disease,” Dr. Stephanie Krogmeier, vice president for global regulatory affairs with Vertex Pharmaceuticals Incorporated, told the panel. Thirty-nine of the 40 people tested with the treatment did not have a single vaso-occlusive crisis, an episode in which misshapen red blood cells block normal circulation and can cause moderate to severe pain. It’s the top reason patients with sickle cell go to the emergency room or are hospitalized. Before the treatment, patients experienced about four of these painful crises a year, resulting in about two weeks in the hospital.

The FDA sought the independent panel’s advice, in part, because this would be the first time the FDA would approve a treatment that uses CRISPR technology, but Dr. Fyodor Urnov, a professor in the Department of Molecular and Cell Biology at the University of California, Berkeley, reminded the committee CRISPR has been around for 30 years and, in that time, scientists have learned a lot about how to use it safely. “The technology is, in fact, ready for primetime,” Urnov said. With this kind of genetic editing, scientists could inadvertently make a change to a patient’s DNA that is off-target, and the therapy could harm the patient. The FDA wanted the experts’ advice so it could understand what criteria it should use to evaluate the treatment and determine how to evaluate long-term safety issues.
An FDA presentation to the panel suggested the agency may have some questions about the data. It called a lack of confirmatory testing “concerning.” It also noted the study’s small size. In a discussion of the company’s methodology, several panel experts said that they believed the data the companies have submitted for FDA approval were reasonable.

Committee member Dr. Gil Wolfe, a distinguished professor in the department of neurology at the University at Buffalo Jacobs School of Medicine and Biomedical Sciences, said he liked that the company promised to monitor patients for 15 years to see if there were any problems down the road. He said, generally, it was “exciting” to see how many patients have been treated and how positive the results have been so far.

As for any concern about what’s called “off-target effects,” meaning the potential unwanted or adverse alterations to the genome that could accidentally happen in this process and cause cancer or other problems down the road, Dr. Daniel Bauer, principal investigator and staff physician at Dana-Farber/Boston Children’s Cancer and Blood Disorders Center, Boston Children’s Hospital, told the panel the risk is “relatively small.” Wolfe thought the depth of analysis the companies used should be good enough to detect any potential problems down the road. “We want to be careful to not let the perfect be the enemy of the good,” Wolfe said. “At some point, you have to just try things out.” “I think, in this case, that there’s a huge unmet need for individuals with sickle cell disease, and it’s important we think about how we can advance therapies that could potentially help them, and I certainly think this is one of them,” Wolfe added.

Asked by the committee how he would advise patients to evaluate the risks of this treatment, Bauer said he would be honest that there is some uncertainty, but most of the human genome is non-coding, meaning it doesn’t provide instructions to the cells to act a certain way. “It might be that many places in the human genome can tolerate an off-target edit and not have a functional consequence,” Bauer told the committee. In other words, if they made an editing error, it might not matter, and may not harm the patient. “My guess is it’s a relatively small risk in the scheme of this risk benefit. But it’s new, it’s unknown,” Bauer said. “We need to be humble and open to learning from these brave patients participating.” The FDA is expected to make an approval decision by December 8, 2023.
Scientists found the cells in mice — and say they could lead to a better understanding of human appetite.
Brain cells that control how quickly mice eat, and when they stop, have been identified. The findings, published in Nature1, could lead to a better understanding of human appetite, the researchers say.
Nerves in the gut, called vagal nerves, had already been shown to sense how much mice have eaten and what nutrients they have consumed2. The vagal nerves use electrical signals to pass this information to a small region in the brainstem that is thought to influence when mice, and humans, stop eating. This region, called the caudal nucleus of the solitary tract, contains prolactin-releasing hormone (PRLH) neurons and GCG neurons. But, until now, studies have involved filling the guts of anaesthetized mice with liquid food, making it unclear how these neurons regulate appetite when mice are awake.
To answer this question, physiologist Zachary Knight at the University of California, San Francisco, and his colleagues implanted a light sensor in the brains of mice that had been genetically modified so that the PRLH neurons released a fluorescent signal when activated by electrical signals transmitted along neurons from elsewhere in the body. Knight and his team infused a liquid food called Ensure — which contains a mixture of fat, protein, sugar, vitamins and minerals — into the guts of these mice. Over a ten-minute period, the neurons became increasingly activated as more of the food was infused. This activity peaked a few minutes after the infusion ended. By contrast, the PRLH neurons did not activate when the team infused saline solution into the mice’s guts.
When the team allowed the mice to freely eat liquid food, the PRLH neurons activated within seconds of the animals starting to lick the food, but deactivated when they stopped licking. This showed that PRLH neurons respond differently, depending on whether signals are coming from the mouth or the gut, and suggests that signals from the mouth override those from the gut, says Knight. By using a laser to activate PRLH neurons in mice that were eating freely, the researchers could reduce how quickly the mice ate.
Further experiments showed that PRLH neurons did not activate during feeding in mice that lacked most of their ability to taste sweetness, suggesting that taste activated the neurons. The researchers also found that GCG neurons are activated by signals from the gut, and control when mice stop eating. “The signals from the mouth are controlling how fast you eat, and the signals from the gut are controlling how much you eat,” says Knight.
“I’m extremely impressed by this paper,” says neuroscientist Chen Ran at Harvard University in Boston, Massachusetts. The work provides original insights on how taste regulates appetite, he says. The findings probably apply to humans, too, Ran adds, because these neural circuits tend to be well conserved across both species.
In Tokyo, engineer Moju Zhao has developed a transformative drone, named DRAGON, that can change its shape mid-flight to navigate tight spaces. The acronym DRAGON stands for "Dual-rotor embedded multilink Robot with the Ability of multi-deGree-of-freedom aerial transformatiON."
Made of multiple modules connected by battery-powered hinged joints, the drone can autonomously decide on its shape formations. Currently, it can stay airborne for three minutes, but the team aims to add more modules and grippers for object manipulation. The long-term vision is for the DRAGON to transition between flying and walking. Its design, reminiscent of serpentine shapes, is inspired by ancient Asian dragon mythology. This aligns with Japan's futuristic drone roadmap which encompasses applications in delivery, farming, and unique solutions such as drone-powered parasols. Meanwhile, MIT has developed insect-sized drones that use soft actuators mimicking the agility of bugs.
These micro-drones are resilient, capable of withstanding collisions, and even perform aerial somersaults. The drones have potential applications ranging from pollinating crops and inspecting machinery to search and rescue missions. The soft actuators are made of thin rubber cylinders coated in carbon nanotubes. When voltage is applied, an electrostatic force is produced, causing the rubber to expand and contract, resulting in wing flapping. MIT's fabrication process involves layering elastomer and electrode, each as thin as a red blood cell, leading to power-efficient and resilient drones.
AI's remarkable abilities, like those seen in ChatGPT, often seem conscious due to their human-like interactions.
The question is whether the language model also perceives our text when we prompt it, or whether it is just a zombie, working on the basis of clever pattern-matching algorithms. Based on the text it generates, it is easy to be swayed into believing that the system might be conscious. However, in this new research, Jaan Aru, Matthew Larkum and Mac Shine take a neuroscientific angle to answer this question.
All three being neuroscientists, these authors argue that although the responses of systems like ChatGPT seem conscious, they are most likely not. First, the inputs to language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architectures of present-day AI algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today.
The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness. Thus, while it is tempting to assume that ChatGPT and similar systems might be conscious, this would severely underestimate the complexity of the neural mechanisms that generate consciousness in our brains.
Researchers do not have a consensus on how consciousness arises in our brains. What we know, and what this new paper points out, is that the mechanisms are likely far more complex than those underlying current language models. For instance, as pointed out in this work, real neurons are not akin to the neurons in artificial neural networks. Biological neurons are real physical entities, which can grow and change shape, whereas neurons in large language models are just meaningless pieces of code. We still have a long way to go to understand consciousness and, hence, a long way to go before we have conscious machines.
For the first time, scientists have demonstrated that the brain’s electrical activity can be decoded and used to reconstruct music. Artificial intelligence has turned the brain’s electrical signals into somewhat garbled classic rock.
Neuroscientists have reconstructed recognizable audio of a 1979 Pink Floyd song by using machine learning to decode electrical activity in the brains of listeners. As study participants undergoing surgery listened to “Another Brick in the Wall (Part 1),” electrodes placed on the surface of the brain captured the activity of regions attuned to the song’s acoustic profile.
Neuroscientists have worked for decades to decode what people are seeing, hearing or thinking from brain activity alone. In 2012 a team that included the new study’s senior author—cognitive neuroscientist Robert Knight of the University of California, Berkeley—became the first to successfully reconstruct audio recordings of words participants heard while wearing implanted electrodes. Others have since used similar techniques to reproduce recently viewed or imagined pictures from participants’ brain scans, including human faces and landscape photographs. But the recent PLOS Biology paper by Knight and his colleagues is the first to suggest that scientists can eavesdrop on the brain to synthesize music.
“These exciting findings build on previous work to reconstruct plain speech from brain activity,” says Shailee Jain, a neuroscientist at the University of California, San Francisco, who was not involved in the new study. “Now we’re able to really dig into the brain to unearth the sustenance of sound.”
To turn brain activity data into musical sound in the study, the researchers trained an artificial intelligence model to decipher data captured from thousands of electrodes that were attached to the participants as they listened to the Pink Floyd song while undergoing surgery. Why did the team choose Pink Floyd—and specifically “Another Brick in the Wall (Part 1)”? “The scientific reason, which we mention in the paper, is that the song is very layered. It brings in complex chords, different instruments and diverse rhythms that make it interesting to analyze,” says Ludovic Bellier, a cognitive neuroscientist and the study’s lead author. “The less scientific reason might be that we just really like Pink Floyd.”
If our eyes could see high-energy radiation called gamma rays, the Moon would appear brighter than the Sun! That’s how NASA’s Fermi Gamma-ray Space Telescope has seen our neighbor in space for the past decade. Gamma-ray observations are not sensitive enough to clearly see the shape of the Moon’s disk or any surface features. Instead, Fermi’s Large Area Telescope (LAT) detects a prominent glow centered on the Moon’s position in the sky.
Mario Nicola Mazziotta and Francesco Loparco, both at Italy’s National Institute of Nuclear Physics in Bari, have been analyzing the Moon’s gamma-ray glow as a way of better understanding another type of radiation from space: fast-moving particles called cosmic rays. “Cosmic rays are mostly protons accelerated by some of the most energetic phenomena in the universe, like the blast waves of exploding stars and jets produced when matter falls into black holes,” explained Mazziotta.
Because the particles are electrically charged, they’re strongly affected by magnetic fields, which the Moon lacks. As a result, even low-energy cosmic rays can reach the surface, turning the Moon into a handy space-based particle detector. When cosmic rays strike, they interact with the powdery surface of the Moon, called the regolith, to produce gamma-ray emission. The Moon absorbs most of these gamma rays, but some of them escape.
Mazziotta and Loparco analyzed Fermi LAT lunar observations to show how the view has improved during the mission. They rounded up data for gamma rays with energies above 31 million electron volts — more than 10 million times greater than the energy of visible light — and organized them over time, showing how longer exposures improve the view.
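As a quick check of that energy comparison, taking visible-light photons to carry roughly 2–3 eV (an assumed typical value):

$$\frac{31\times10^{6}\ \mathrm{eV}}{\sim 3\ \mathrm{eV}} \approx 10^{7},$$

which is indeed on the order of ten million times the energy of visible light.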
“Seen at these energies, the Moon would never go through its monthly cycle of phases and would always look full,” said Loparco. As NASA sets its sights on sending humans to the Moon by 2024 through the Artemis program, with the eventual goal of sending astronauts to Mars, understanding various aspects of the lunar environment takes on new importance. These gamma-ray observations are a reminder that astronauts on the Moon will require protection from the same cosmic rays that produce this high-energy gamma radiation.
While the Moon’s gamma-ray glow is surprising and impressive, the Sun does shine brighter in gamma rays with energies higher than 1 billion electron volts. Cosmic rays with lower energies do not reach the Sun because its powerful magnetic field screens them out. But much more energetic cosmic rays can penetrate this magnetic shield and strike the Sun’s denser atmosphere, producing gamma rays that can reach Fermi.
Although the gamma-ray Moon doesn’t show a monthly cycle of phases, its brightness does change over time. Fermi LAT data show that the Moon’s brightness varies by about 20% over the Sun’s 11-year activity cycle. Variations in the intensity of the Sun’s magnetic field during the cycle change the rate of cosmic rays reaching the Moon, altering the production of gamma rays.
East China Normal University’s Dr. Yi-Hsuan Pan and colleagues showed that human ancestors went through a severe population bottleneck with about 1,280 breeding individuals between around 930,000 and 813,000 years ago.
Today, there are more than 8 billion human beings on the planet. We dominate Earth’s landscapes, and our activities are driving large numbers of other species to extinction. Had a researcher looked at the world sometime between 800,000 and 900,000 years ago, however, the picture would have been quite different. Hu et al. used a newly developed coalescent model to predict past human population sizes from more than 3000 present-day human genomes (see the Perspective by Ashton and Stringer). The model detected a reduction in the population size of our ancestors from about 100,000 to about 1000 individuals, which persisted for about 100,000 years. The decline appears to have coincided with both major climate change and subsequent speciation events. —Sacha Vignieri
Quantum communications have rapidly progressed toward practical, large-scale networks, with quantum key distribution spearheading the process. Quantum key distribution systems typically involve a sender, "Alice," and a receiver, "Bob," who generate a shared secret from quantum measurements for secure communication. Although fiber-based systems are well suited to the metropolitan scale, a suitable fiber infrastructure might not always be in place.
In a new report in npj Quantum Information, Andrej Kržič and a team of scientists developed an entanglement-based, free-space quantum network. The platform offers a practical and efficient alternative for metropolitan applications. The team introduced a free-space quantum key distribution system and demonstrated its use in realistic applications, a step toward establishing free-space networks as a viable solution for metropolitan applications in the future global quantum internet.
Quantum communication network
Quantum communication typically aims to distribute quantum information between two or more parties. A series of revolutionary proposed applications of quantum networks has provided a roadmap towards engineering a full-blown quantum internet: a heterogeneous network of special-purpose sub-networks with diverse links and interconnects. The concept of quantum key distribution networks has driven this development, paving the way for other distributed quantum information processing methods and serving as a benchmark for the technological maturity of quantum networks in general.
In this work, Kržič and colleagues described a metropolitan free-space network architecture to secure communications at summits, conferences and other events, with the added capacity to complement an already existing network infrastructure in the absence of end-to-end fiber connections. The quantum physicists built the architecture around a central entanglement server to stream the entangled photons to the users of the network.
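To make the role of the entanglement server a little more concrete, here is a minimal, highly idealized simulation of the basis-sifting step behind entanglement-based quantum key distribution (in the spirit of BBM92). It is a conceptual sketch, not the authors' protocol or code, and it ignores photon loss, noise, eavesdropping checks, error correction and privacy amplification.

```python
import random

# Idealized sketch: a central server distributes entangled photon pairs;
# Alice and Bob each measure their photon in a randomly chosen basis (Z or X).
# Only rounds where the bases happen to match contribute to the shared key.
def bbm92_sift(n_pairs: int, seed: int = 0):
    rng = random.Random(seed)
    alice_key, bob_key = [], []
    for _ in range(n_pairs):
        alice_basis = rng.choice("ZX")
        bob_basis = rng.choice("ZX")
        if alice_basis == bob_basis:
            # For an ideal maximally entangled pair, matching bases give
            # perfectly correlated (but individually random) outcomes.
            bit = rng.randint(0, 1)
            alice_key.append(bit)
            bob_key.append(bit)
        # Mismatched-basis rounds are discarded after the public basis comparison.
    return alice_key, bob_key

alice, bob = bbm92_sift(1000)
print(len(alice), alice == bob)   # roughly 500 sifted bits, identical for both parties
```

In a real deployment, the correlated outcomes come from measuring the streamed entangled photons rather than from a shared random number, and a fraction of the sifted bits is sacrificed to estimate the error rate before the remainder is distilled into a secret key.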
The largest extinction in Earth’s history marked the end of the Permian period, some 252 million years ago. Long before dinosaurs, our planet was populated with plants and animals that were mostly obliterated after a series of massive volcanic eruptions in Siberia. Fossils in ancient seafloor rocks display a thriving and diverse marine ecosystem, then a swath of corpses. Some 96 percent of marine species were wiped out during the “Great Dying,” followed by millions of years when life had to multiply and diversify once more.
What has been debated until now is exactly what made the oceans inhospitable to life – the high acidity of the water, metal and sulfide poisoning, a complete lack of oxygen, or simply higher temperatures.
New research from the University of Washington and Stanford University combines models of ocean conditions and animal metabolism with published lab data and paleoceanographic records to show that the Permian mass extinction in the oceans was caused by global warming that left animals unable to breathe. As temperatures rose and the metabolism of marine animals sped up, the warmer waters could not hold enough oxygen for them to survive.
COVID-19, the disease caused by SARS-CoV-2, has claimed approximately 5 million lives, with 257 million cases reported globally. This virus and disease have significantly affected people worldwide, whether directly and/or indirectly, with a virulent pathogen that continues to evolve as we race to learn how to prevent, control, or cure COVID-19. The focus of this review is on the SARS-CoV-2 virus’ mechanism of infection and its proclivity for adapting and restructuring the intracellular environment to support viral replication. We highlight current knowledge and how scientific communities with expertise in viral, cellular, and clinical biology have contributed to increasing our understanding of SARS-CoV-2, and how these findings may help explain the widely varied clinical observations of COVID-19 patients.