Amazing Science
867.6K views | +150 today
 
Scooped by Dr. Stefan Gruenwald
onto Amazing Science

Astrophysicists discover planets in galaxies far beyond the Milky Way using microlensing


A University of Oklahoma astrophysics team has discovered for the first time a population of planets beyond the Milky Way galaxy. Using microlensing—an astronomical phenomenon and the only known detection method capable of finding planets at truly great distances from Earth—OU researchers were able to detect objects in an external galaxy that range from the mass of the Moon to the mass of Jupiter.

 

Xinyu Dai, professor in the Homer L. Dodge Department of Physics and Astronomy, OU College of Arts and Sciences, with OU postdoctoral researcher Eduardo Guerras, made the discovery with data from the National Aeronautics and Space Administration's Chandra X-ray Observatory, a telescope in space that is controlled by the Smithsonian Astrophysical Observatory.

 

"We are very excited about this discovery. This is the first time anyone has discovered planets outside our galaxy," said Dai. "These small planets are the best candidate for the signature we observed in this study using the microlensing technique. We analyzed the high frequency of the signature by modeling the data to determine the mass."

 

Microlensing is routinely used to discover planets within the Milky Way; the same gravitational effect means that even small objects in a distant lensing galaxy can produce high magnification, leaving a signature that can be modeled and explained. Until this study, there was no evidence of planets in other galaxies.
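To get a feel for why even planet-mass objects can act as detectable lenses, one can compute the Einstein radius, which sets the size of a lens's magnification pattern. A minimal sketch (the masses are standard values, but the ~1 Gpc effective distance is an assumed illustrative scale, not a figure from the study):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
GPC = 3.086e25         # one gigaparsec in metres
M_JUPITER = 1.898e27   # kg
M_MOON = 7.35e22       # kg

def einstein_radius(mass_kg, d_eff_m):
    """Einstein radius R_E = sqrt(4 G M D / c^2).

    d_eff_m stands in for the effective lensing distance
    D_L * D_LS / D_S; a ~1 Gpc scale is assumed here for illustration.
    """
    return math.sqrt(4.0 * G * mass_kg * d_eff_m / C**2)

# Scale of the magnification pattern for two lens masses:
r_jup = einstein_radius(M_JUPITER, GPC)   # ~1.3e13 m
r_moon = einstein_radius(M_MOON, GPC)     # ~8e10 m
```

Because R_E grows only as the square root of the lens mass, Moon-to-Jupiter-mass objects produce magnification structure on scales of roughly 10^11 to 10^13 m, not far from the size of a lensed quasar's compact X-ray emitting region, which is why statistical microlensing signatures of such small bodies can show up in Chandra data.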

 

"This is an example of how powerful the techniques of analysis of extragalactic microlensing can be. This galaxy is located 3.8 billion light years away, and there is not the slightest chance of observing these planets directly, not even with the best telescope one can imagine in a science fiction scenario," said Guerras. "However, we are able to study them, unveil their presence and even have an idea of their masses. This is very cool science."

 

For this study, OU researchers used data from NASA's Chandra X-ray Observatory, which is operated by the Smithsonian Astrophysical Observatory. The microlensing models were calculated at the OU Supercomputing Center for Education and Research.

Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald

20,000+ FREE Online Science and Technology Lectures from Top Universities



  

 

NOTE: To subscribe to the RSS feed of Amazing Science, copy http://www.scoop.it/t/amazing-science/rss.xml into the URL field of your browser and click "subscribe".

 

This newsletter is aggregated from over 1450 news sources:

http://www.genautica.com/links/1450_news_sources.html

 

All my Tweets and Scoop.It! posts sorted and searchable:

Twitter Feeds

 

You can search through all the articles semantically on my archived twitter feed.

NOTE: All articles in the Amazing Science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all the relevant postings SORTED by TOPIC.

 

You can also type your own query:

 

e.g., if you are looking for articles involving "dna" as a keyword:

 

http://www.scoop.it/t/amazing-science/?q=dna

 

Or click on the little FUNNEL symbol at the top right of the screen.

 


Arturo Pereira's curator insight, August 12, 2017 9:01 AM
The democratization of knowledge!
Nevermore Sithole's curator insight, September 11, 2017 2:42 AM
FREE Online Science and Technology Lectures from Top Universities
Scooped by Dr. Stefan Gruenwald

Google aims for 100% renewable energy all day, every day

Buying enough clean energy to make up for your dirty energy is one thing; using all clean energy 24/7 is another, and it could signal a new approach.

 

Our time to move away from dirty energy to green sources is limited. Federal governments can’t be relied upon to push the conversion, especially not the one in the U.S., which is actively working against large-scale adoption of green energy. Much of the progress we’ve seen so far has come from big corporate energy buyers demanding carbon-free power. There is an ecological motivation, but it’s also driven by a desire to get in on the falling costs and high price predictability of renewable energy sources like wind and solar.

 

The largest corporate buyers are big tech companies that rely heavily on global networks of large power-hungry data centers, storing and serving up most of the internet’s digital content: videos and movies, webpages, search results. No surprise: Google is a gigantic energy hog, but it’s also currently the world’s largest buyer of renewable energy, in its various forms—over 3 gigawatts—according to a March report by Bloomberg New Energy Finance.

 

With its sizable purchase of renewables, Google says it’s currently matching all of its total energy use with clean energy sources. But when you hear a company like Google say, “We’re 100% renewable energy,” it usually means that it is, on balance, buying as much clean, renewable energy (wind, solar, etc.) as it is consuming unclean, non-renewable energy (coal, natural gas, etc.) in a given year. That’s not the same as directly “powering” their operations with all renewables all the time.

 

Companies typically can’t generate enough power for a data center from an onsite solar or wind farm; they have to connect to the local power grid like everyone else. And the local utilities that run the grid get their power from a mix of sources, some dirty, some clean. The energy buyer can’t choose to buy only the electrons from the grid that were generated from clean energy sources.

 

Instead, buyers offset their energy use. Many companies sign virtual power-purchase agreements whereby they buy renewable energy credits, financial instruments that certify that a certain amount of green energy has been added to the electric grid. In some markets, corporate customers can go directly to a green energy wholesaler to get their power.
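The difference between "on balance" matching and round-the-clock matching is easy to make concrete. A toy sketch with made-up hourly numbers (nothing here is Google data, and the function name is my own):

```python
def coverage(load_mwh, renewable_mwh):
    """Compare two renewable-accounting methods over the same hours.

    annual: total renewable purchases vs total load ("on balance").
    hourly: only renewable energy delivered in the same hour counts,
            as in a 24/7 carbon-free goal.
    """
    annual = min(1.0, sum(renewable_mwh) / sum(load_mwh))
    matched = sum(min(l, r) for l, r in zip(load_mwh, renewable_mwh))
    hourly = matched / sum(load_mwh)
    return annual, hourly

# A data center with flat load, supplied by solar that peaks mid-day:
load = [10, 10, 10, 10]    # night, morning, afternoon, night
solar = [0, 25, 15, 0]     # purchases concentrated in daylight hours
annual, hourly = coverage(load, solar)
```

With these numbers the buyer is 100% renewable "on balance" (40 MWh bought against 40 MWh used) yet only 50% matched hour by hour, since nothing covers the night-time load; that gap is exactly what a 24/7 goal targets.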

 

Big buyers like Google, Apple, Microsoft, Amazon, and Facebook actively organize as well as invest in new clean energy projects in markets where they operate, so there's more of the stuff available to buy. But these tactics are just the first moves in a long game. In a new research paper, Google begins to look at an ultimate goal: converting its data centers to 100% green energy—all day, every day. And it provides a framework for achieving the real-world steps needed to get there.

 

“Achieving 100% renewable energy is just the beginning,” Michael Terrell, Google’s head of energy market development, told me. “We’re keeping our eyes on the prize, and that is getting to carbon free for every hour of the day for every location.”

Scooped by Dr. Stefan Gruenwald

Astronomers Find a Cosmic Titanic Structure in the Early Universe

An international team of astronomers has discovered a titanic structure in the early universe, just 2 billion years after the Big Bang. This galaxy proto-supercluster, nicknamed Hyperion, is the largest and most massive structure yet found at such a remote time and distance.

 

The team that made the discovery was led by Olga Cucciati of Istituto Nazionale di Astrofisica (INAF) Bologna, Italy, and project scientist Brian Lemaux of the Department of Physics, College of Letters and Science, at the University of California, Davis, and included Lori Lubin, professor of physics at UC Davis. They used the VIMOS instrument on ESO's Very Large Telescope at Paranal, Chile, to identify a gigantic proto-supercluster of galaxies forming in the early Universe, just 2.3 billion years after the Big Bang.

 

Hyperion is the largest and most massive structure to be found so early in the formation of the Universe, with a calculated mass more than one million billion times that of the Sun. This enormous mass is similar to that of the largest structures observed in the Universe today, but finding such a massive object in the early Universe surprised astronomers.

 

"This is the first time that such a large structure has been identified at such a high redshift, just over 2 billion years after the Big Bang," Cucciati said. "Normally these kinds of structures are known at lower redshifts, which means when the Universe has had much more time to evolve and construct such huge things. It was a surprise to see something this evolved when the Universe was relatively young."

 

Supercluster mapped in three dimensions

Located in the constellation of Sextans (The Sextant), Hyperion was identified by a novel technique developed at UC Davis to analyze the vast amount of data obtained from the VIMOS Ultra-Deep Survey led by Olivier Le Fèvre from Laboratoire d'Astrophysique de Marseille, Centre National de la Recherche Scientifique and Centre National d'Etudes Spatiales. The VIMOS instrument can measure the distance to hundreds of galaxies at the same time, making it possible to map the position of galaxies within the forming supercluster in three dimensions.

 

The team found that Hyperion has a very complex structure, containing at least seven high-density regions connected by filaments of galaxies, and its size is comparable to superclusters closer to Earth, though it has a very different structure.

"Superclusters closer to Earth tend to have a much more concentrated distribution of mass, with clear structural features," Lemaux said. "But in Hyperion, the mass is distributed much more uniformly in a series of connected blobs, populated by loose associations of galaxies."

 

The researchers are comparing the Hyperion findings with results from the Observations of Redshift Evolution in Large Scale Environments (ORELSE) survey, led by Lubin. The ORELSE survey uses telescopes at the W.M. Keck Observatory in Hawaii to study superclusters closer to Earth. Lubin and Lemaux are also using the Keck observatory to map out Hyperion and similar structures more completely.

 

The contrast between Hyperion and less distant superclusters is most likely due to the fact that nearby superclusters have had billions of years for gravity to gather matter together into denser regions -- a process that has been acting for far less time in the much younger Hyperion.

Scooped by Dr. Stefan Gruenwald

This RNA-based technique could make gene therapy more effective


Delivering functional genes into cells to replace mutated genes, an approach known as gene therapy, holds potential for treating many types of diseases. The earliest efforts to deliver genes to diseased cells focused on DNA, but many scientists are now exploring the possibility of using RNA instead, which could offer improved safety and easier delivery.

 

MIT biological engineers have now devised a way to regulate the expression of RNA once it gets into cells, giving them precise control over the dose of protein that a patient receives. This technology could allow doctors to more accurately tailor treatment for individual patients, and it also offers a way to quickly turn the genes off, if necessary.

 

“We can control very discretely how different genes are expressed,” says Jacob Becraft, an MIT graduate student and one of the lead authors of the study, which appears in the Oct. 16 issue of Nature Chemical Biology. “Historically, gene therapies have encountered issues regarding safety, but with new advances in synthetic biology, we can create entirely new paradigms of ‘smart therapeutics’ that actively engage with the patient’s own cells to increase efficacy and safety.”

Scooped by Dr. Stefan Gruenwald

String Theory: Is Dark Energy Even Allowed?

A new conjecture causes excitement in the string theory community. Timm Wrase of the Vienna University of Technology has now published much-discussed results on recent new developments.

 

In string theory, a paradigm shift could be imminent. In June, a team of string theorists from Harvard and Caltech published a conjecture which sounded revolutionary: String theory is said to be fundamentally incompatible with our current understanding of "dark energy" -- but only with "dark energy" can we explain the accelerated expansion of our current universe.

 

Timm Wrase of the Vienna University of Technology quickly realized something odd about this conjecture: it seemed to be incompatible with the existence of the Higgs particle. His calculations, which he carried out together with theorists from Columbia University in New York and the University of Heidelberg, have now been published in Physical Review. At the moment, there are heated discussions about strings and dark energy all around the world. Wrase hopes that this will lead to new breakthroughs in this line of research.

 

The theory of everything

Great hope is placed in string theory. It is supposed to explain how gravity is related to quantum physics and how we can understand the laws of nature that describe the entire physical world, from the smallest particles to the largest structures of the cosmos. String theory has often been accused of merely providing abstract mathematical results and making too few predictions that can actually be verified in an experiment. Now, however, the string theory community all around the world is discussing a question that is closely related to cosmic experiments measuring the expansion of the universe.

In 2011, the Nobel Prize in Physics was awarded for the discovery that the universe is not only constantly growing larger, but that this expansion is actually accelerating. This phenomenon can only be explained by assuming an additional, previously unknown "dark energy." The idea originally came from Albert Einstein, who added it as a "cosmological constant" to his theory of general relativity. Einstein actually did this to construct a non-expanding universe. When Hubble discovered in 1929 that the universe was in fact expanding, Einstein described this modification of his equations as the biggest blunder of his life. But with the discovery of the accelerated expansion of the cosmos, the cosmological constant has been reintroduced, as dark energy, into the current standard model of cosmology.

 

Like an apple in the fruit bowl

"For a long time, we thought that such a dark energy can be well accommodated in string theory," says Timm Wrase from the Institute for Theoretical Physics of the Vienna University of Technology. String theory assumes that there are additional, previously unknown particles that can be described as fields. These fields have a state of minimal energy, much like an apple lying in a bowl: it will always lie at the very bottom, at the lowest point of the bowl. Everywhere else its energy would be higher; if we want to shift the apple, we have to exert energy. But that does not mean that the apple at the lowest point has no energy at all. We can put the bowl with the apple on the ground, or on top of the table; there the apple has more energy, but it still cannot move, because it is still in a state of minimal energy in its bowl.

"In string theory there are fields which could explain dark energy in a similar way -- locally, they are in a state of minimal energy, but still their energy has a value greater than zero," explains Timm Wrase. "So these fields would provide the so-called dark energy, with which we could explain the accelerated expansion of the universe."

But Cumrun Vafa from Harvard University, one of the world's most renowned string theorists, published an article on June 25 that raised many eyebrows: he suggested that such "bowl-shaped" fields of positive energy are not possible in string theory.
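The apple-in-the-bowl picture corresponds to a scalar field resting at a local minimum of its potential with positive vacuum energy. As a hedged sketch in formulas (the notation below is the standard one from the swampland literature, not from this article; c is an unspecified order-one constant and M_Pl the Planck mass):

```latex
% A field resting at a positive minimum ("apple in a bowl"):
V(\phi) \;=\; V_{\min} + \tfrac{1}{2}\, m^{2} \left(\phi - \phi_{0}\right)^{2},
\qquad V_{\min} > 0 .

% The conjecture instead demands, everywhere in field space,
\lvert \nabla V \rvert \;\ge\; \frac{c}{M_{\mathrm{Pl}}}\, V .
```

At the minimum, the gradient on the left vanishes while the right-hand side stays positive, so a "bowl-shaped" field of positive energy violates the bound; that is the sense in which dark energy would not be allowed.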

Scooped by Dr. Stefan Gruenwald

Perfectly round electron: Study supports Standard Model of particle physics, excluding alternative models


In a new study, researchers at Northwestern, Harvard and Yale universities examined the shape of an electron's charge with unprecedented precision to confirm that it is perfectly spherical. A slightly squashed charge could have indicated unknown, hard-to-detect heavy particles in the electron's presence, a discovery that could have upended the global physics community.

 

"If we had discovered that the shape wasn't round, that would be the biggest headline in physics for the past several decades," said Gerald Gabrielse, who led the research at Northwestern. "But our finding is still just as scientifically significant because it strengthens the Standard Model of particle physics and excludes alternative models."

 

The study will be published Oct. 18, 2018 in the journal Nature. In addition to Gabrielse, the research was led by John Doyle, the Henry B. Silsbee Professor of Physics at Harvard, and David DeMille, professor of physics at Yale. The trio leads the National Science Foundation (NSF)-funded Advanced Cold Molecule Electron (ACME) Electric Dipole Moment Search.

 

A longstanding theory, the Standard Model of particle physics describes most of the fundamental forces and particles in the universe. The model is a mathematical picture of reality, and no laboratory experiments yet performed have contradicted it. This lack of contradiction has been puzzling physicists for decades.

 

"The Standard Model as it stands cannot possibly be right because it cannot predict why the universe exists," said Gabrielse, the Board of Trustees Professor of Physics at Northwestern. "That's a pretty big loophole."

Scooped by Dr. Stefan Gruenwald

World’s fastest camera freezes time at 10 trillion frames per second


What happens when a new technology is so precise that it operates on a scale beyond our characterization capabilities? For example, the lasers used at INRS produce ultrashort pulses in the femtosecond range (10⁻¹⁵ s) that are far too short to visualize. Although some measurements are possible, nothing beats a clear image, says INRS professor and ultrafast imaging specialist Jinyang Liang.

 

He and his colleagues, led by Caltech's Lihong Wang, have developed what they call T-CUP: the world's fastest camera, capable of capturing ten trillion (10¹³) frames per second. This new camera literally makes it possible to freeze time to see phenomena -- and even light! -- in extremely slow motion.

In recent years, the junction between innovations in non-linear optics and imaging has opened the door for new and highly efficient methods for microscopic analysis of dynamic phenomena in biology and physics. But to harness the potential of these methods, there needs to be a way to record images in real time at a very short temporal resolution -- in a single exposure.

 

Using current imaging techniques, measurements taken with ultrashort laser pulses must be repeated many times, which is appropriate for some types of inert samples, but impossible for other more fragile ones. For example, laser-engraved glass can tolerate only a single laser pulse, leaving less than a picosecond to capture the results. In such a case, the imaging technique must be able to capture the entire process in real time.

 

Compressed ultrafast photography (CUP) was a good starting point. At 100 billion frames per second, this method approached, but did not meet, the specifications required to integrate femtosecond lasers. To improve on the concept, the new T-CUP system was developed based on a femtosecond streak camera that also incorporates a data acquisition type used in applications such as tomography.

 

"We knew that by using only a femtosecond streak camera, the image quality would be limited," says Professor Lihong Wang, the Bren Professor of Medical Engineering and Electrical Engineering at Caltech and the Director of Caltech Optical Imaging Laboratory (COIL). "So to improve this, we added another camera that acquires a static image. Combined with the image acquired by the femtosecond streak camera, we can use what is called a Radon transformation to obtain high-quality images while recording ten trillion frames per second."
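The Radon transformation mentioned here maps an image to its line integrals at many angles, and reconstruction inverts that mapping. A minimal nearest-neighbour sketch of the forward step (a toy stand-in for illustration, not the T-CUP pipeline):

```python
import math

def radon_projection(image, theta_deg):
    """Sum an n x n grid along parallel lines tilted by theta_deg.

    Each entry of the returned list is one line integral; together,
    projections at many angles form the sinogram that reconstruction
    algorithms invert to recover the image.
    """
    n = len(image)
    theta = math.radians(theta_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    centre = (n - 1) / 2.0
    projection = []
    for i in range(n):                 # one detector bin per offset
        s = i - centre                 # signed distance from centre line
        total = 0.0
        for j in range(n):             # march along the line
            t = j - centre
            x = centre + s * cos_t - t * sin_t
            y = centre + s * sin_t + t * cos_t
            xi, yi = round(x), round(y)
            if 0 <= xi < n and 0 <= yi < n:
                total += image[yi][xi]
        projection.append(total)
    return projection

img = [[1, 2],
       [3, 4]]
radon_projection(img, 0)    # column sums: [4.0, 6.0]
radon_projection(img, 90)   # row sums:    [3.0, 7.0]
```

At 0° the projection reduces to column sums and at 90° to row sums; the extra information from many intermediate angles is what lets a static image sharpen the streak-camera data.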

 

Setting the world record for real-time imaging speed, T-CUP can power a new generation of microscopes for biomedical, materials science, and other applications. This camera represents a fundamental shift, making it possible to analyze interactions between light and matter at an unparalleled temporal resolution.

The first time it was used, the ultrafast camera broke new ground by capturing the temporal focusing of a single femtosecond laser pulse in real time. This process was recorded in 25 frames taken at an interval of 400 femtoseconds and detailed the light pulse's shape, intensity, and angle of inclination.

 

"It's an achievement in itself," says Jinyang Liang, the leading author of this work, who was an engineer in COIL when the research was conducted, "but we already see possibilities for increasing the speed to up to one quadrillion (10¹⁵) frames per second!" Speeds like that are sure to offer insight into as-yet undetectable secrets of the interactions between light and matter.

Scooped by Dr. Stefan Gruenwald

Beyond Limits - How the internet is becoming a part of us


For Professor Yuval Noah Harari from the Hebrew University of Jerusalem, the merging of man and machine will be the “greatest evolution in biology.”

 

“I think it is likely in the next 200 years or so Homo sapiens will upgrade themselves into some idea of a divine being, either through biological manipulation or genetic engineering or by the creation of cyborgs, part organic part non-organic. It will be the greatest evolution in biology since the appearance of life. Nothing really has changed in four billion years biologically speaking. But we will be as different from today’s humans as chimps are now from us.”

 

But what role will the internet and all its devices – ever smaller and ever closer to us – play in this great evolution? Meet E-man…

Rescooped by Dr. Stefan Gruenwald from Alzheimer's Disease R&D Review

Designer proteins activate fluorescent molecules

A method for designing β-barrels that bind to any small molecule.

 

Proteins are the molecular machines of life: they carry out the complex molecular processes required by cells with unrivalled accuracy and efficiency. Many of these processes depend on proteins having the ability to bind specifically to a given small molecule. If we could make proteins from scratch to bind any desired target molecule, it would open the door to a wide range of biotechnological applications that are not currently possible using natural proteins. Writing in Nature, Dou et al.¹ describe a computational method for designing proteins tailored to bind a small molecule of interest, and use it to make ‘fluorescence-activating’ proteins — biotechnological tools that have potential applications in biomedical research.


Via Krishan Maggon
Scooped by Dr. Stefan Gruenwald

New nanotechnology breakthrough uses atmospheric carbon to make useful chemicals


Burning fossil fuels such as coal and natural gas releases carbon into the atmosphere as CO2 while the production of methanol and other valuable fuels and chemicals requires a supply of carbon. There is currently no economically or energy efficient way to collect CO2 from the atmosphere and use it to produce carbon-based chemicals, but researchers at the University of Pittsburgh Swanson School of Engineering have just taken an important step in that direction.

 

The team worked with a class of nanomaterials called metal-organic frameworks or "MOFs," which can be used to take carbon dioxide out of the atmosphere and combine it with hydrogen atoms to convert it into valuable chemicals and fuels. Karl Johnson, the William Kepler Whiteford Professor in the Swanson School's Department of Chemical and Petroleum Engineering, led the research group as principal investigator.

 

"Our ultimate goal is to find a low-energy, low-cost MOF capable of separating carbon dioxide from a mixture of gases and prepare it to react with hydrogen," says Dr. Johnson. "We found a MOF that could bend the CO2 molecules slightly, taking them to a state in which they react with hydrogen more easily."

 

The Johnson Research Group published their findings in the Royal Society of Chemistry (RSC) journal Catalysis Science & Technology (DOI: 10.1039/c8cy01018h). The journal featured their work on its cover, illustrating the process of carbon dioxide and hydrogen molecules entering the MOF and exiting as CH2O2 or formic acid—a chemical precursor to methanol. For this process to occur, the molecules must overcome a demanding energy threshold called the hydrogenation barrier.

 

Dr. Johnson explains, "The hydrogenation barrier is the energy needed to add two H atoms to CO2, which transforms the molecules into formic acid. In other words, it is the energy needed to get the H atoms and the CO2 molecules together so that they can form the new compound. In our previous work we have been able to activate H2 by splitting it into two H atoms, but we have not been able to activate CO2 until now."

 

The key to reducing the hydrogenation barrier was to identify a MOF capable of pre-activating carbon dioxide. Pre-activation is basically preparing the molecules for the chemical reaction by putting it into the right geometry, the right position, or the right electronic state. The MOF they modeled in their work achieves pre-activation of CO2 by putting it into a slightly bent geometry that is able to accept the incoming hydrogen atoms with a lower barrier.
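The practical payoff of lowering the hydrogenation barrier can be estimated with the Arrhenius relation. A sketch with illustrative numbers (the 10 kJ/mol reduction below is an assumed value for demonstration, not one reported by the Johnson group):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def rate_ratio(barrier_drop_j_mol, temperature_k=298.15):
    """Factor by which a reaction speeds up when the activation
    barrier drops by barrier_drop_j_mol, all else being equal:
    k2 / k1 = exp(dEa / (R T)), from k = A * exp(-Ea / (R T)).
    """
    return math.exp(barrier_drop_j_mol / (R * temperature_k))

# Pre-activating (bending) CO2 is, in rate terms, equivalent to
# removing part of the barrier before the hydrogen atoms arrive:
speedup = rate_ratio(10_000)   # a 10 kJ/mol reduction at room temperature
```

Because the dependence is exponential, even a modest barrier reduction of 10 kJ/mol accelerates the reaction by a factor of roughly 50 at room temperature, which is why finding a MOF that bends CO2 into a more reactive geometry matters so much.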

Scooped by Dr. Stefan Gruenwald

Surgery Robot Remotely Hacked by Computer Science Experts


Researchers at the University of Washington in Seattle have demonstrated the ability to remotely hack a research surgical robot, the RAVEN II platform.

 

Before continuing, I'll stop to clarify one thing. The RAVEN II is not a clinically used surgical robot like, say, the Da Vinci surgical robot. It's an "open-source" surgical robot developed at the University of Washington to test and demonstrate advanced concepts in robotic surgery. We contacted Applied Dexterity, which is now in charge of the RAVEN platform.

 

Co-founder David Drajeske explained: "The RAVEN II platform is not approved for use on humans. The system has been placed at 18 robotics research labs worldwide ... that are using it to make advances in surgical robotics technologies ... The low-level software is open-source and it is designed to be “hackable” or readily reprogrammed."

 

Clinically used surgical robots, like the Da Vinci platform, operate on secure local networks using proprietary (i.e. not publicly available) communications protocols between the console and the robot. By contrast, RAVEN II can work on unsecured public networks and uses a publicly available communications protocol (see below). So while some have proclaimed an imminent threat to robotic telesurgery based on this study, that's simply not the case.

 

That said, the work does have interesting implications; as pointed out by Mr. Drajeske and co-founder Blake Hannaford, RAVEN II is a great platform for testing these types of security issues. Tamara Bonaci, a graduate student at the University of Washington, led this study to test the security vulnerabilities that could threaten surgeons using these tools, and their patients. In this simulation, they aimed to recreate an environment more akin to using these robots in remote areas, such as the use of public or unsecured networks to connect the robot to a remote surgeon.

 

They tested a series of attacks on the RAVEN II system while an operator used it to complete a simulated task -- moving rubber blocks around. They found that not only were they able to disrupt the "surgeon" by causing erratic movements of the robot, they were able to hijack the robot entirely. They also discovered they were able to easily access the video feed from the robot.
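The hijacking works because, on an open network, nothing ties a packet to the legitimate console. One standard mitigation is to authenticate every command with a keyed MAC and a sequence number; a minimal sketch using Python's standard library (the key, packet layout, and function names are hypothetical, not part of the RAVEN II protocol):

```python
import hmac
import hashlib

SECRET = b"console-robot-shared-key"   # hypothetical pre-shared key

def sign_command(command: bytes, seq: int) -> bytes:
    """Prefix a sequence number and append an HMAC-SHA256 tag."""
    msg = seq.to_bytes(8, "big") + command
    tag = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return msg + tag

def verify_command(packet: bytes, last_seq: int):
    """Reject forged, corrupted, or replayed packets."""
    msg, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SECRET, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("forged or corrupted packet")
    seq = int.from_bytes(msg[:8], "big")
    if seq <= last_seq:
        raise ValueError("replayed packet")
    return seq, msg[8:]

packet = sign_command(b"MOVE arm=left dx=2", seq=1)
seq, cmd = verify_command(packet, last_seq=0)   # accepted
```

An attacker who does not hold the key cannot mint valid tags for injected commands, and the monotonically increasing sequence number blocks replaying a captured packet; encryption would additionally be needed to keep the video feed private.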

Scooped by Dr. Stefan Gruenwald

Genome-wide association analyses in type 2 diabetes: The gift that keeps on giving


Recently, Nature Genetics published the latest iteration of a series of genome-wide association analyses for type 2 diabetes that has been compiled (as the DIAGRAM consortium) over the past decade. Genome-wide association data from nearly 900,000 individuals across 32 studies, focusing on individuals of European descent, were analyzed. Just under 10% of these participants had type 2 diabetes, making this comfortably the largest such study yet conducted for the disease.

 

In addition to increasing the number of samples tested, this analysis was the first T2D association analysis to take full advantage of the much more detailed imputation reference panels now available. By upgrading from the 1000 Genomes panel (of a few hundred European genomes) to the Haplotype Reference Consortium panel (of around 30,000 genomes), the T2D consortium was able to undertake a much more robust survey of the contribution to T2D risk made by low frequency alleles.

 

It’s worth contemplating for a moment the staggering volume of data that a study such as this generates, and the scale of the advance over the past decade. The type 2 diabetes analysis conducted in 2007 as part of the Wellcome Trust Case Control Consortium featured 500K SNPs and 5K individuals (a total of 2.5 billion genotypes). A decade on, the current study includes 27M SNPs and 900K individuals (25 trillion genotypes). If each genotype was a 1cm marble, 25 trillion would be enough to fill – from pitch to brim – around 20 stadia the size of Wembley.
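The back-of-envelope arithmetic in that comparison is easy to check. A quick sketch (treating one 1 cm marble as occupying about 1 cm³, and assuming a Wembley bowl volume of roughly 1.1 million m³; that stadium figure is my own illustrative assumption):

```python
# 2007 Wellcome Trust Case Control Consortium scan:
genotypes_2007 = 500_000 * 5_000          # SNPs x individuals = 2.5e9

# Current DIAGRAM analysis:
genotypes_now = 27_000_000 * 900_000      # = 2.43e13, the "25 trillion"

# Stack of 1 cm marbles, one per genotype, ~1 cm^3 each:
volume_m3 = genotypes_now * 1e-6          # cm^3 -> m^3
WEMBLEY_M3 = 1.1e6                        # assumed bowl volume, m^3
stadia = volume_m3 / WEMBLEY_M3           # ~22 Wembley-sized stadia
```

The genotype count grew by a factor of roughly 10,000 in a decade, and under these assumptions the marble pile does indeed fill on the order of twenty Wembleys.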

 

These numbers would be expected to bring increased power to detect and characterise novel association signals, and the present study does not disappoint. The details are available in the manuscript, but the main “discovery” findings are these:

  • 243 loci were detected at genome-wide significance, including 135 loci never previously implicated in type 2 diabetes predisposition.
  • Conditional analyses around these primary signals identified a further 160 secondary signals within these loci, for a total of 403 significant signals across the 243 loci. (Note that this study was limited to individuals of European descent, so there is a set of about 40-50 additional signals first described in non-European samples: because of ethnic differences in allele frequency and effect size, not all of these reached genome-wide significance in the present study, so the total number of confirmed T2D signals sits around the 450 mark.)

Ants regulate growth of seemingly 'useless' organ to make huge soldiers


Scientists have found the answer to a question that perplexed Charles Darwin, so much so that it led him to doubt his own theory of evolution. He wondered: if natural selection works at the level of the individual, fighting for survival and reproduction, how can a single colony produce worker ants so dramatically different in size—from the "minor" workers with their small heads and bodies, to the large-headed soldiers with their huge mandibles—especially if, as in the genus Pheidole, the workers are sterile? The answer, according to a paper published in Nature, is that the colony itself generates soldiers and regulates the balance between soldiers and "minor" workers through a seemingly unimportant rudimentary "organ" that appears only briefly during the final stages of larval development, and only in some of the ants—the ones that will become soldiers.

 

"It was a completely unexpected finding. People had noticed that during the development of soldiers a seemingly useless rudimentary "organ" would pop up and then disappear. But they assumed it was just a secondary effect of the hormones and nutrition that were responsible for turning the larvae into soldiers," says Ehab Abouheif of McGill's Biology Department, the senior author on the paper.

 

Rajendhran Rajakumar, the first author, adds: "What we discovered was that these rudimentary "organs" are not a secondary effect of hormones and nutrition, but are instead responsible for generating the soldiers. It is their passing presence that regulates the head and body of soldiers to grow at rapid rates, until you get these big-headed soldiers with huge mandibles and big bodies."

 

Now you see it, now you don't

Abouheif has been studying wings in ants for the past twenty-three years. He was curious about the function of the wing imaginal discs, which appear transiently in the final stages of larval development among the soldier ants, even though soldier ants never actually develop wings. So he and his team spent nine years in the lab, using various surgical and molecular techniques to cut away portions of the rudimentary wing discs from the larvae of soldier ants in the widespread and very diverse Pheidole genus. They discovered that by doing so, they affected the growth of the head and the body. Indeed, they found that they could scale the size of soldier ants by cutting away differing amounts of the imaginal wing discs, with a corresponding decrease in the size of the heads and bodies of the soldier ants. It was clear confirmation that the rudimentary wing discs play a crucial role in the development of soldier ants.


Neural network that securely and efficiently finds potentially useful drug candidates could encourage large-scale pooling of sensitive data

MIT researchers have developed a cryptographic system that could help neural networks identify promising drug candidates in massive pharmacological datasets, while keeping the data private. Secure computation done at such a massive scale could enable broad pooling of sensitive pharmacological data for predictive drug discovery.

 

Datasets of drug-target interactions (DTI), which show whether candidate compounds act on target proteins, are critical in helping researchers develop new medications. Models can be trained to crunch datasets of known DTIs and then, using that information, find novel drug candidates.

 

In recent years, pharmaceutical firms, universities, and other entities have become open to pooling pharmacological data into larger databases that could greatly improve the training of these models. Due to intellectual property and other privacy concerns, however, these datasets remain limited in scope. Existing cryptographic methods for securing the data are so computationally intensive that they don't scale well to datasets beyond, say, tens of thousands of DTIs, which is relatively small.

 

In a paper published in Science, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) describe a neural network securely trained and tested on a dataset of more than a million DTIs. The network leverages modern cryptographic tools and optimization techniques to keep the input data private, while running quickly and efficiently at scale.
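The paper's full protocol is considerably more sophisticated, but the core idea behind this kind of secure computation can be illustrated with additive secret sharing: each party splits its private value into random shares, servers combine shares locally, and only the aggregate is ever reconstructed. A minimal sketch (the labs, servers, and values here are hypothetical):

```python
import random

PRIME = 2_147_483_647  # modulus for share arithmetic (2^31 - 1)

def share(value, n_parties):
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares into the original value."""
    return sum(shares) % PRIME

# Two labs each hold a private interaction score; three servers each see
# only one random-looking share per input, yet summing the servers'
# local totals reconstructs the pooled result.
lab_a, lab_b = 42, 58
shares_a, shares_b = share(lab_a, 3), share(lab_b, 3)
server_sums = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]
pooled = reconstruct(server_sums)  # 100, without revealing 42 or 58
```

No single server ever holds enough information to recover either lab's input, which is the property that makes pooling sensitive DTI data palatable.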

 

The team's experiments show the network performs faster and more accurately than existing approaches; it can process massive datasets in days, whereas other cryptographic frameworks would take months. Moreover, the network identified several novel interactions, including one between the leukemia drug imatinib and an enzyme ErbB4—mutations of which have been associated with cancer—which could have clinical significance.

 

"People realize they need to pool their data to greatly accelerate the drug discovery process and enable us, together, to make scientific advances in solving important human diseases, such as cancer or diabetes. But they don't have good ways of doing it," says corresponding author Bonnie Berger, the Simons Professor of Mathematics and a principal investigator at CSAIL. "With this work, we provide a way for these entities to efficiently pool and analyze their data at a very large scale."


Dandelion seeds fly using ‘impossible’ method never before seen in nature

The seeds contain a lot of open space, which seems to be the key to sustaining flight.

 

The extraordinary flying ability of dandelion seeds is possible thanks to a form of flight that has not been seen before in nature, research has revealed. The discovery, which confirms the common plant as one of the natural world's best fliers, shows that the movement of air around and within its parachute-shaped bundle of bristles enables seeds to travel great distances—often a mile or more—kept afloat entirely by wind power.

 

Researchers from the University of Edinburgh carried out experiments to better understand why dandelion seeds fly so well, despite their parachute structure being largely made up of empty space. Their study revealed that a ring-shaped air bubble forms as air moves through the bristles, enhancing the drag that slows each seed's descent to the ground.

 

This newly found form of air bubble -- which the scientists have named the separated vortex ring -- is physically detached from the bristles and is stabilized by air flowing through it. The amount of air flowing through, which is critical for keeping the bubble stable and directly above the seed in flight, is precisely controlled by the spacing of the bristles. This flight mechanism of the bristly parachute underpins the seeds' steady flight. It is four times more efficient than what is possible with conventional parachute design, according to the research.
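The reported four-fold drag efficiency can be put in perspective with the standard quadratic-drag model of descent, where the terminal speed scales as the inverse square root of the drag-area product. A rough sketch (the seed mass and drag-area values are illustrative assumptions, not measurements from the study):

```python
import math

def terminal_velocity(mass_kg, cd_area_m2, rho_air=1.2, g=9.81):
    """Speed at which quadratic aerodynamic drag balances weight."""
    return math.sqrt(2 * mass_kg * g / (rho_air * cd_area_m2))

# Illustrative values only -- not figures from the paper
seed_mass = 6e-7    # ~0.6 mg seed
cd_area = 2e-5      # assumed effective Cd * A for a conventional parachute

v_conventional = terminal_velocity(seed_mass, cd_area)
v_bristly = terminal_velocity(seed_mass, 4 * cd_area)  # 4x drag efficiency

# Quadrupling the drag efficiency halves the descent speed,
# letting the seed drift farther on the same wind.
```

Whatever the absolute numbers, the square-root scaling means a four-fold gain in drag efficiency cuts the sink rate in half.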

 

Researchers suggest that the dandelion's porous parachute might inspire the development of small-scale drones that require little or no power consumption. Such drones could be useful for remote sensing or air pollution monitoring.

 

The study, published in Nature, was funded by the Leverhulme Trust and the Royal Society. Dr Cathal Cummins, of the University of Edinburgh's Schools of Biological Sciences and Engineering, who led the study, said: "Taking a closer look at the ingenious structures in nature -- like the dandelion's parachute -- can reveal novel insights. We found a natural solution for flight that minimises the material and energy costs, which can be applied to engineering of sustainable technology."


Dying Star Emits a Whisper: Death of a massive star and birth of compact neutron star binary

The unexpectedly gentle death of a massive star suggests that it was being robbed by a dense companion lurking out of sight.

 

A Caltech-led team of researchers has observed the peculiar death of a massive star that exploded in a surprisingly faint and rapidly fading supernova. These observations suggest that the star has an unseen companion, gravitationally siphoning away the star's mass to leave behind a stripped star that exploded in a quick supernova. The explosion is believed to have resulted in a dead neutron star orbiting around its dense and compact companion, suggesting that, for the first time, scientists have witnessed the birth of a compact neutron star binary system.

 

The research was led by graduate student Kishalay De and is described in a paper appearing in the October 12 issue of the journal Science. The work was done primarily in the laboratory of Mansi Kasliwal (MS '07, PhD '11), assistant professor of astronomy. Kasliwal is the principal investigator of the Caltech-led Global Relay of Observatories Watching Transients Happen (GROWTH) project.

 

When a massive star—at least eight times the mass of the sun—runs out of fuel to burn in its core, the core collapses inwards upon itself and then rebounds outward in a powerful explosion called a supernova. After the explosion, all of the star's outer layers have been blasted away, leaving behind a dense neutron star—about the size of a small city but containing more mass than the sun.

 

A teaspoon of a neutron star would weigh as much as a mountain.

During a supernova, the dying star blasts away all of the material in its outer layers, usually a few times the mass of the sun. However, the event that Kasliwal and her colleagues observed, dubbed iPTF 14gqr, ejected matter amounting to only one-fifth the mass of the sun.

 

"We saw this massive star's core collapse, but we saw remarkably little mass ejected," Kasliwal says. "We call this an ultra-stripped envelope supernova and it has long been predicted that they exist. This is the first time we have convincingly seen core collapse of a massive star that is so devoid of matter."

 

The fact that the star exploded at all implies that it must have previously been enveloped in lots of material, or its core would never have become heavy enough to collapse. But where, then, was the missing mass?

 

The researchers inferred that the mass must have been stolen—the star must have some kind of dense, compact companion, either a white dwarf, neutron star, or black hole—close enough to gravitationally siphon away its mass before it exploded. The neutron star that was left behind from the supernova must have then been born into orbit with that dense companion. Observing iPTF 14gqr was actually observing the birth of a compact neutron star binary. Because this new neutron star and its companion are so close together, they will eventually merge in a collision similar to the 2017 event that produced both gravitational waves and electromagnetic waves. 


Efficient method for making single-atom-thick, wafer-scale materials opens up opportunities in flexible electronics

MIT researchers' efficient method for making single-atom-thick, wafer-scale materials opens up opportunities in flexible electronics.

 

Since the 2004 isolation of the single-atom-thick carbon material known as graphene, there has been significant interest in other types of 2-D materials as well. These materials could be stacked together like Lego bricks to form a range of devices with different functions, including operating as semiconductors. In this way, they could be used to create ultra-thin, flexible, transparent and wearable electronic devices. However, separating a bulk crystal material into 2-D flakes for use in electronics has proven difficult to do on a commercial scale.

 

The existing process, in which individual flakes are split off from the bulk crystals by repeatedly stamping the crystals onto an adhesive tape, is unreliable and time-consuming, requiring many hours to harvest enough material and form a device.

 

Now researchers in the Department of Mechanical Engineering at MIT have developed a technique to harvest 2-inch-diameter wafers of 2-D material within just a few minutes. The wafers can then be stacked to form an electronic device within an hour.

The technique, which they describe in a paper published in the journal Science, could open up the possibility of commercializing electronic devices based on a variety of 2-D materials, according to Jeehwan Kim, an associate professor in the Department of Mechanical Engineering, who led the research.

 

The paper’s co-first authors were Sang-Hoon Bae, who was involved in flexible device fabrication, and Jaewoo Shim, who worked on the stacking of the 2-D material monolayers. Both are postdocs in Kim’s group.

 

The paper’s co-authors also included students and postdocs from within Kim’s group, as well as collaborators at Georgia Tech, the University of Texas, Yonsei University in South Korea, and the University of Virginia. Sang-Hoon Bae, Jaewoo Shim, Wei Kong, and Doyoon Lee in Kim’s research group equally contributed to this work.  “We have shown that we can do monolayer-by-monolayer isolation of 2-D materials at the wafer scale,” Kim says. “Secondly, we have demonstrated a way to easily stack up these wafer-scale monolayers of 2-D material.”

 

The researchers first grew a thick stack of 2-D material on top of a sapphire wafer. They then applied a 600-nanometer-thick nickel film to the top of the stack. Since 2-D materials adhere much more strongly to nickel than to sapphire, lifting off this film allowed the researchers to separate the entire stack from the wafer. What’s more, the adhesion between the nickel and the individual layers of 2-D material is also greater than that between each of the layers themselves.

 

As a result, when a second nickel film was then added to the bottom of the stack, the researchers were able to peel off individual, single-atom thick monolayers of 2-D material.

That is because peeling off the first nickel film generates cracks in the material that propagate right through to the bottom of the stack, Kim says.


The UK Biobank resource with deep phenotyping and genomic data


The UK Biobank project is a prospective cohort study with deep genetic and phenotypic data collected on approximately 500,000 individuals from across the United Kingdom, aged between 40 and 69 at recruitment. The open resource is unique in its size and scope.

 

A rich variety of phenotypic and health-related information is available on each participant, including biological measurements, lifestyle indicators, biomarkers in blood and urine, and imaging of the body and brain. Follow-up information is provided by linking health and medical records.

 

Genome-wide genotype data have been collected on all participants, providing many opportunities for the discovery of new genetic associations and the genetic bases of complex traits. The study researchers describe the centralized analysis of the genetic data, including genotype quality, properties of population structure and relatedness of the genetic data, and efficient phasing and genotype imputation that increases the number of testable variants to around 96 million. Classical allelic variation at 11 human leukocyte antigen genes was imputed, resulting in the recovery of signals with known associations between human leukocyte antigen alleles and many diseases.


Study documents paternal transmission of epigenetic memory via sperm


Studies of human populations and animal models suggest that a father's experiences such as diet or environmental stress can influence the health and development of his descendants. How these effects are transmitted across generations, however, remains mysterious.

 

Susan Strome's lab at UC Santa Cruz has been making steady progress in unraveling the mechanisms behind this phenomenon, using a tiny roundworm called Caenorhabditis elegans to show how marks on chromosomes that affect gene expression, called "epigenetic" marks, can be transmitted from parents to offspring. Her team's most recent paper, published October 17 in Nature Communications, focuses on transmission of epigenetic marks by C. elegans sperm.

 

In addition to documenting the transmission of epigenetic memory by sperm, the new study shows that the epigenetic information delivered by sperm to the embryo is both necessary and sufficient to guide proper development of germ cells in the offspring (germ cells give rise to eggs and sperm).

 

"We decided to look at C. elegans because it is such a good model for asking epigenetic questions using powerful genetic approaches," said Strome, a distinguished professor of molecular, cell, and developmental biology.


Cell-sized robots can sense their environment


Researchers at MIT have created what may be the smallest robots yet that can sense their environment, store data, and even carry out computational tasks. These devices, which are about the size of a human egg cell, consist of tiny electronic circuits made of two-dimensional materials, piggybacking on minuscule particles called colloids.

 

Colloids, which are insoluble particles or molecules anywhere from a billionth to a millionth of a meter across, are so small they can stay suspended indefinitely in a liquid or even in air. By coupling these tiny objects to complex circuitry, the researchers hope to lay the groundwork for devices that could be dispersed to carry out diagnostic journeys through anything from the human digestive system to oil and gas pipelines, or perhaps to waft through the air measuring compounds inside a chemical processor or refinery.

 

“We wanted to figure out methods to graft complete, intact electronic circuits onto colloidal particles,” explains Michael Strano, the Carbon C. Dubbs Professor of Chemical Engineering at MIT and senior author of the study, which was published today in the journal Nature Nanotechnology. MIT postdoc Volodymyr Koman is the paper’s lead author.

 

“Colloids can access environments and travel in ways that other materials can’t,” Strano says. Dust particles, for example, can float indefinitely in the air because they are small enough that the random motions imparted by colliding air molecules are stronger than the pull of gravity. Similarly, colloids suspended in liquid will never settle out.
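The claim that colloid-scale particles effectively never settle can be sanity-checked with Stokes' law: at micrometre scale, the random thermal (Brownian) displacement per second exceeds the distance gravity can pull the particle down in that time. A sketch with illustrative values (the particle size and densities are assumptions, not figures from the paper):

```python
import math

# Illustrative case: a 1-micrometre silica-like colloid in water
r = 0.5e-6                       # particle radius, m
rho_p, rho_f = 2000.0, 1000.0    # particle / fluid density, kg/m^3
mu = 1.0e-3                      # water viscosity, Pa*s
g = 9.81
kT = 1.38e-23 * 298              # thermal energy at room temperature, J

# Stokes settling velocity: v = 2 r^2 (rho_p - rho_f) g / (9 mu)
v_settle = 2 * r**2 * (rho_p - rho_f) * g / (9 * mu)

# RMS 1-D Brownian displacement over one second: sqrt(2 D t),
# with Stokes-Einstein diffusion coefficient D = kT / (6 pi mu r)
D = kT / (6 * math.pi * mu * r)
brownian_1s = math.sqrt(2 * D * 1.0)

# Thermal jostling moves the particle farther each second than
# gravity settles it, so the suspension persists indefinitely.
```

Both displacements come out below a micrometre per second, with the Brownian term the larger of the two, which is the regime the article describes.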

Rescooped by Dr. Stefan Gruenwald from Cancer Immunotherapy Review

Killer cell immunotherapy offers potential cure for advanced pancreatic cancer

A new approach to treating pancreatic cancer using 'educated killer cells' has shown promise, according to early research by Queen Mary University of London.

 

The new cell-based immunotherapy, which has not yet been tested in humans with pancreatic cancer, left mice completely cancer-free, eliminating even cancer cells that had already spread to the liver and lungs.

 

Each year around 9,800 people in the UK are diagnosed with pancreatic cancer. The disease is particularly aggressive and has one of the lowest survival rates of all cancers. This is because it is often diagnosed at a late and advanced stage, when the tumor has already spread to other organs.

 

In the study, published in the journal Gut, the team used pancreatic cancer cells from patients with late-stage disease, and transplanted them into mice. They then took the patients' immune cells and modified them to specifically identify and eliminate the cancer cells - creating 'educated killer cells', or CAR-T cells. And for the first time, the team introduced a new technology that allowed them to completely control the activity of CAR-T cells, making them potentially safer.

 

First author Dr Deepak Raj from Queen Mary University of London, said: "Immunotherapy using CAR-T cells has been tremendously successful in blood cancers, but unfortunately, there have been toxic side effects in its treatment of solid tumors. Given the dismal prognosis of pancreatic cancer with conventional treatments, it's vitally important that we develop safe and effective CAR-T cell therapies for solid tumors, such as pancreatic cancer. "Our work suggests that our new 'switchable' CAR-T cells could be administered to human patients with pancreatic cancer, and we could control their activity at a level that kills the tumor without toxic side effects to normal tissues."

 

The team's new 'switchable' CAR-T system means the treatment can be turned on and off, or tuned to a desired level of activity, minimizing side effects and improving the therapy's safety. The treatment's activity was controlled through administration or withdrawal of the 'switch' molecule within living mice, without affecting its ability to kill the pancreatic cancers. The team now hopes to bring this promising therapy to the clinic.


Via Krishan Maggon
Omar Castro's curator insight, October 14, 11:11 PM
Reputation - The article comes from a self-described global source for science, and the research itself comes from Queen Mary University of London.

Ability to see - The article demonstrates the ability to see: the university has documented its research and explains its capabilities and uses.

Vested interest - The article doesn't have a particular vested interest in explaining the research.

Expertise - The work covered comes from university scientists pursuing a new treatment approach.

Neutrality - The article leans toward an optimistic view of the study rather than an analytical one; as written, it is very hopeful that this approach will come to fruition.
 

Novel design could help shed excess heat in next-generation fusion power plants


A class exercise at MIT, aided by industry researchers, has led to an innovative solution to one of the longstanding challenges facing the development of practical fusion power plants: how to get rid of excess heat that would cause structural damage to the plant.

 

The new solution was made possible by an innovative approach to compact fusion reactors, using high-temperature superconducting magnets. This method formed the basis for a massive new research program launched this year at MIT and the creation of an independent startup company to develop the concept. The new design, unlike that of typical fusion plants, would make it possible to open the device's internal chamber and replace critical components; this capability is essential for the newly proposed heat-draining mechanism.

 

The new approach is detailed in a paper in the journal Fusion Engineering and Design, authored by Adam Kuang, a graduate student from that class, along with 14 other MIT students, engineers from Mitsubishi Electric Research Laboratories and Commonwealth Fusion Systems, and Professor Dennis Whyte, director of MIT's Plasma Science and Fusion Center, who taught the class.

 

In essence, Whyte explains, the shedding of heat from inside a fusion plant can be compared to the exhaust system in a car. In the new design, the "exhaust pipe" is much longer and wider than is possible in any of today's fusion designs, making it much more effective at shedding the unwanted heat. But the engineering needed to make that possible required a great deal of complex analysis and the evaluation of many dozens of possible design alternatives.


DNA Forensics: Even if you’ve never taken a DNA test, a distant relative’s could reveal your identity


The genetic sleuthing approach that broke open the Golden State Killer case could potentially be used to identify more than half of Americans of European descent from anonymous DNA samples, according to a provocative new study that highlights the unintended privacy consequences of consumer genetic testing for ancestry and health.

 

The idea that people who voluntarily spit into a tube and share their genetic data online to search for relatives could unwittingly aid law enforcement was thrust into the spotlight recently. This spring, genetic genealogy helped California police identify a suspected serial killer and rapist in a grisly, decades-old cold case. But the new study, published in the journal Science, drives home that that case was not an outlier: a majority of Americans of European descent could be matched to a third cousin or closer using an open-access genetic genealogy database.
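The headline figure can be reproduced with a simple Poisson-style coverage model (this is an illustration, not the authors' actual method, and the per-person count of detectable relatives below is a hypothetical input chosen for illustration):

```python
import math

def match_probability(db_size, detectable_relatives, population):
    """Chance that at least one of a person's detectable relatives
    appears in a database of db_size random people drawn from the
    population -- a Poisson-style approximation, 1 - exp(-N*K/P)."""
    return 1 - math.exp(-db_size * detectable_relatives / population)

# Hypothetical inputs: a 1.28M-profile database, ~130 detectable
# third-cousin-or-closer relatives per person, and ~180M Americans
# of European descent. These numbers are illustrative only.
p = match_probability(1_280_000, 130, 180_000_000)   # ~0.6
```

Under these assumed inputs the model lands near the reported "majority" rate, and it shows why coverage grows so quickly: each new database entry is a beacon for all of that person's relatives.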

 

“Each individual in the database is like a beacon of genetic information, and this beacon illuminates hundreds of individuals — distant relatives connected to this person via their family tree,” said Yaniv Erlich, the chief science officer of the direct-to-consumer genetics company MyHeritage, who led the study.

 

Erlich and colleagues then showed how a match, combined with basic information such as age and a reconstructed family tree, could be used to figure out the identity of an anonymous person who participated in a research project. A separate study found that even the minimal DNA kept in law enforcement databases could be cross-referenced with consumer genetic data to identify relatives.

 

“This really brings us to the crossroads of where science and technology and law and policy and ethics meet,” said Frederick Bieber, a medical geneticist at Brigham and Women’s Hospital who consults with crime labs and public defenders’ offices. “Both of these papers are very important because they ... raise the issue that we, collectively, are beginning to face head-on: Where do our privacy expectations interfere with the natural social instinct for public safety?”

 

The public is overwhelmingly supportive of police searches of genetic websites to solve violent crimes, according to a recent survey published in PLOS Biology. Leading consumer genetics companies signed on to guidelines to be transparent about how people’s data is used, and many have policies that do not allow law enforcement to search their databases without explicit approval. The website commonly used in law enforcement cases, GEDmatch, is an open-access genetic genealogy database, and people must voluntarily decide to upload their genetic profile.


Model helps robots to navigate more like humans do


When moving through a crowd to reach some end goal, humans can usually navigate the space safely without thinking too much. They can learn from the behavior of others and note any obstacles to avoid. Robots, on the other hand, struggle with such navigational concepts.

 

MIT researchers have now devised a way to help robots navigate environments more like humans do. Their novel motion-planning model lets robots determine how to reach a goal by exploring the environment, observing other agents, and exploiting what they've learned before in similar situations. A paper describing the model was presented at this week's IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

 

Popular motion-planning algorithms create a tree of possible decisions that branches out until it finds good paths for navigation. A robot that needs to cross a room to reach a door, for instance, will create a step-by-step search tree of possible movements and then execute the best path to the door, considering various constraints. One drawback, however, is that these algorithms rarely learn: robots can't leverage information about how they or other agents acted previously in similar environments.

 

"Just like when playing chess, these decisions branch out until [the robots] find a good way to navigate. But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents," says co-author Andrei Barbu, a researcher at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT's McGovern Institute. "The thousandth time they go through the same crowd is as complicated as the first time. They're always exploring, rarely observing, and never using what's happened in the past."

 

The researchers developed a model that combines a planning algorithm with a neural network that learns to recognize paths that could lead to the best outcome, and uses that knowledge to guide the robot's movement in an environment.
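The idea of guiding a sampling-based planner with a learned model can be sketched in a few lines. In the toy below, the "learned" sampler is a hand-written stand-in that biases samples toward the goal; in the actual work a neural network plays that role (the planner structure, names, and parameters here are illustrative, not the paper's implementation):

```python
import random

def plan(start, goal, sampler, step=1.0, iters=2000):
    """Grow a tree on the open 2-D plane until a node lands near the goal.
    `sampler` decides where to explore -- this is the pluggable piece a
    learned model would replace."""
    tree = [start]
    for _ in range(iters):
        target = sampler(goal)
        # extend the tree node nearest the sample one step toward it
        near = min(tree, key=lambda p: (p[0]-target[0])**2 + (p[1]-target[1])**2)
        dx, dy = target[0] - near[0], target[1] - near[1]
        dist = max((dx*dx + dy*dy) ** 0.5, 1e-9)
        new = (near[0] + step*dx/dist, near[1] + step*dy/dist)
        tree.append(new)
        if (new[0]-goal[0])**2 + (new[1]-goal[1])**2 < step*step:
            return tree   # reached the goal region
    return None

def uniform_sampler(goal):
    # baseline: explore the workspace blindly
    return (random.uniform(0, 20), random.uniform(0, 20))

def learned_bias(goal):
    # stand-in for a trained network: half the time, sample near the goal
    if random.random() < 0.5:
        return (goal[0] + random.gauss(0, 2), goal[1] + random.gauss(0, 2))
    return uniform_sampler(goal)

random.seed(0)
guided = plan((0.0, 0.0), (15.0, 15.0), learned_bias)
```

Swapping `uniform_sampler` for `learned_bias` leaves the planner's guarantees intact while concentrating search effort on promising regions, which is the division of labor the paper describes.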

 

In their paper, "Deep sequential models for sampling-based planning," the researchers demonstrate the advantages of their model in two settings: navigating through challenging rooms with traps and narrow passages, and navigating areas while avoiding collisions with other agents. A promising real-world application is helping autonomous cars navigate intersections, where they have to quickly evaluate what others will do before merging into traffic. The researchers are currently pursuing such applications through the Toyota-CSAIL Joint Research Center.


The experience of mathematical beauty and its neural correlates in the brain


The beauty of mathematical formulas lies in abstracting, in simple equations, truths that have universal validity. Many—among them the mathematicians Bertrand Russell (1919) and Hermann Weyl (Dyson, 1956; Atiyah, 2002), the physicist Paul Dirac (1939) and the art critic Clive Bell (1914)—have written of the importance of beauty in mathematical formulations and have compared the experience of mathematical beauty to that derived from the greatest art (Atiyah, 1973). Their descriptions suggest that the experience of mathematical beauty has much in common with that derived from other sources, even though mathematical beauty has a much deeper intellectual source than visual or musical beauty, which are more “sensible” and perceptually based. Past brain imaging studies exploring the neurobiology of beauty have shown that the experience of visual (Kawabata and Zeki, 2004), musical (Blood et al., 1999; Ishizu and Zeki, 2011), and moral (Tsukiura and Cabeza, 2011) beauty all correlate with activity in a specific part of the emotional brain, field A1 of the medial orbito-frontal cortex, which probably includes segments of Brodmann Areas (BA) 10, 12 and 32 (see Ishizu and Zeki, 2011 for a review).

 

It is hypothesized that the experience of beauty derived from so abstract an intellectual source as mathematics will correlate with activity in the same part of the emotional brain as that of beauty derived from other sources. Plato (1929) thought that “nothing without understanding would ever be more beauteous than with understanding,” making mathematical beauty, for him, the highest form of beauty. The premium thus placed on the faculty of understanding when experiencing beauty creates both a problem and an opportunity for studying the neurobiology of beauty.

 

Several studies of the neurobiology of musical or visual beauty, in which participating subjects were neither experts nor trained in these domains, were recently performed. If the experience of mathematical beauty is not strictly related to understanding (of the equations), what can the source of mathematical beauty be? That is perhaps more difficult to account for in mathematics than in visual art or music. Whereas the source for the latter can be accounted for, at least theoretically, by preferred harmonies in nature or preferred distribution of forms or colors (see Bell, 1914Zeki and Stutters, 2011Zeki, 2013), it is more difficult to make such a correspondence in mathematics.

 

The Platonic tradition would emphasize that mathematical formulations are experienced as beautiful because they give insights into the fundamental structure of the universe (see Breitenbach, 2013). For Immanuel Kant, by contrast, the aesthetic experience is as well grounded in our own nature because, for him, “Aesthetic judgments may thus be regarded as expressions of our feeling that something makes sense to us” (Breitenbach, 2013). Dirac (1939) wrote: “the mathematician plays a game in which he himself invents the rules while the physicist plays a game in which the rules are provided by Nature, but as time goes on it becomes increasingly evident that the rules which the mathematician finds interesting are the same as those which Nature has chosen” and therefore that in the choice of new branches of mathematics, “One should be influenced very much… by considerations of mathematical beauty” (ellipsis added).

 

Several recent studies highlight further the extent to which even future mathematical formulations may, by being based on beauty, reveal something about our brain on the one hand, and about the extent to which our brain organization reveals something about our universe on the other.


Never forget a face? Research suggests people know an average of 5,000 faces

For the first time, scientists have been able to put a figure on how many faces people actually know: a staggering 5,000 on average.

The research team, from the University of York, tested study participants on how many faces they could recall from their personal lives and the media, as well as the number of famous faces they recognized.

Humans have typically lived in small groups of around one hundred individuals, but the study suggests our facial recognition abilities equip us to deal with the thousands of faces we encounter in the modern world—on our screens as well as in social interactions.

The results provide a baseline with which to compare the "facial vocabulary" size of humans against the facial recognition software that is increasingly used to identify people at airports and in police investigations.

Dr. Rob Jenkins, from the Department of Psychology at the University of York, said: "Our study focused on the number of faces people actually know; we haven't yet found a limit on how many faces the brain can handle.

"The ability to distinguish different individuals is clearly important—it allows you to keep track of people's behavior over time, and to modify your own behavior accordingly."
