Is it possible for anything living to cause an earthquake? — Megan
Yes, people can cause an earthquake through human activity. The most common way is by building a dam. It's very common to get small earthquakes after filling a dam, firstly because of the extra load due to the weight of the water; and then secondly because water seeping down into faults can cause them to move if they're at breaking point. Liquid acts as a lubricant enabling faults to slide more easily.
Another way humans can cause earthquakes is through mining: taking material out of the ground also creates small stresses, which can result in earthquakes.
Pumping oil out can cause earthquakes by changing the stresses underground or because water pumped down to flush the oil out can have a lubricating effect.
Another human-related cause of earthquakes is when water is pumped through hot rocks several kilometres underground in order to harness geothermal energy. This can cause little tremors, up to magnitude 3 on the Richter Scale. Scientists use these small earthquakes to trace what is happening underground - they can follow exactly where the water is by following the little earthquakes.
— Clive Colins, seismologist, Geoscience Australia
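The geothermal tracing described above relies on locating many small tremors precisely. Below is a minimal sketch of one common approach: a grid search that finds the source position best explaining the relative arrival times of seismic waves at several stations. The station layout, wave speed, and grid are illustrative assumptions, not data from any real survey.

```python
import math

# Toy 2D localization of a microearthquake from P-wave arrival times
# at several stations. Station positions and wave speed are invented
# for illustration.

WAVE_SPEED = 5.0  # km/s, a typical crustal P-wave speed

def travel_time(src, station):
    """Straight-line travel time from source to station."""
    return math.dist(src, station) / WAVE_SPEED

def locate(stations, arrivals, grid_step=0.1, extent=10.0):
    """Grid-search for the source that best explains relative arrivals."""
    best, best_err = None, float("inf")
    steps = int(extent / grid_step)
    for i in range(steps + 1):
        for j in range(steps + 1):
            src = (i * grid_step, j * grid_step)
            preds = [travel_time(src, s) for s in stations]
            # Compare arrival-time *differences* so the unknown
            # origin time cancels out.
            err = sum(((preds[k] - preds[0]) - (arrivals[k] - arrivals[0])) ** 2
                      for k in range(1, len(stations)))
            if err < best_err:
                best, best_err = src, err
    return best

# Synthetic example: true source at (3, 4) km.
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_src = (3.0, 4.0)
arrivals = [travel_time(true_src, s) for s in stations]
est = locate(stations, arrivals)
print(est)
```

In practice, tracing injected water as the article describes means repeating this kind of inversion for thousands of tremors and watching the cloud of located events migrate over time.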
Parasites that take over hosts, effectively turning them into zombies, are far from rare. But only recently have scientists started to work out the sophisticated biochemistry that the parasites use.
In the rain forests of Costa Rica lives Anelosimus octavius, a species of spider that sometimes displays a strange and ghoulish habit. From time to time these spiders abandon their own webs and build radically different ones, a home not for the spider but for a parasitic wasp that has been living inside it. Then the spider dies — a zombie architect, its brain hijacked by its parasitic invader — and out of its body crawls the wasp’s larva, which has been growing inside it all this time.
There are many such examples of zombies in nature. Viruses, fungi, protozoans, wasps, tapeworms and a vast number of other parasites can control the brains of their hosts and get them to do their bidding. But only recently have scientists started to work out the sophisticated biochemistry that the parasites use.
“The knowledge that parasites can manipulate their hosts is old. The new part is how they do it,” said Shelley Adamo of Dalhousie University in Nova Scotia, a co-editor of the new issue. “The last 5 to 10 years have really been exciting.”
In the case of the Costa Rican spider, the new web is splendidly suited to its wasp invader. Unlike the spider’s normal web, mostly a tangle of threads, this one has a platform topped by a thick sheet that protects it from the rain. The wasp larva crawls to the edge of the platform and spins a cocoon that hangs down through an opening that the spider has kindly provided for the parasite.
To manipulate the spiders, the wasp must have genes that produce proteins that alter spider behavior, and in some species, scientists are now pinpointing this type of gene. Such is the case with the baculovirus, a virus sprinkled liberally on leaves in forests and gardens. (The cabbage in a serving of coleslaw carries 100 million baculoviruses.) Baculoviruses infect caterpillars and compel them to climb to the tops of trees, where they die and liquefy, raining virus particles onto the foliage below.
David P. Hughes of Penn State University and his colleagues have found that a single gene, known as egt, is responsible for driving the caterpillars up trees. The gene encodes an enzyme. When the enzyme is released inside the caterpillar, it destroys a hormone that signals a caterpillar to stop feeding and molt.
Dr. Hughes suspects that the virus goads the caterpillar into a feeding frenzy. Normally, gypsy moth caterpillars come out at night to feed and then return to crevices near the bottom of trees to hide from predators. The zombie caterpillars, on the other hand, cannot stop searching for food.
“The infected individuals are out there, just eating and eating,” Dr. Hughes said. “They’re stuck in a loop.”
Whether humans are susceptible to this sort of zombie invasion is less clear. It is challenging enough to figure out how parasites manipulate invertebrates, which have a few hundred thousand neurons in their nervous systems. Vertebrates, including humans, have millions or billions of neurons, and so scientists have made fewer advances in studying their zombification.
Cell death can be classified according to its morphological appearance (which may be apoptotic, necrotic, autophagic or associated with mitosis), enzymological criteria (with and without the involvement of nucleases or of distinct classes of proteases, such as caspases, calpains, cathepsins and transglutaminases), functional aspects (programmed or accidental, physiological or pathological) or immunological characteristics (immunogenic or non-immunogenic).
The Nomenclature Committee on Cell Death (NCCD) formulated a first round of recommendations in 2005, in Cell Death and Differentiation. Since then, the field of cell death research has continued its expansion, significant progress has been made and new putative cell death modalities have been described. The NCCD provides a forum in which names describing distinct modalities of cell death are critically evaluated and recommendations on their definition and use are formulated, in the hope that a non-rigid yet uniform nomenclature will facilitate communication among scientists and ultimately accelerate the pace of discovery.
As it stands now, three distinct routes of cellular catabolism can be defined according to morphological criteria, namely apoptosis, which is a form of cell death, autophagy, which causes the destruction of a part of the cytoplasm, but mostly avoids cell death, and necrosis, which is another form of cell death. Although frequently employed in the past, the use of Roman numerals (i.e., type I, type II and type III cell death, respectively) to indicate these catabolic processes should be abandoned.
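The triad above, together with the recommendation to retire the Roman-numeral labels, can be encoded schematically. The class below is a sketch for illustration only; the deprecated "type I/II/III" aliases are kept solely to translate older literature into the named modes.

```python
from enum import Enum

# Schematic encoding of the morphological triad described above.
class CellDeathMode(Enum):
    APOPTOSIS = "apoptosis"  # a form of cell death
    AUTOPHAGY = "autophagy"  # destroys part of the cytoplasm, mostly avoids death
    NECROSIS = "necrosis"    # another form of cell death

# Legacy Roman-numeral labels, whose continued use the NCCD discourages:
LEGACY_LABELS = {
    "type i": CellDeathMode.APOPTOSIS,
    "type ii": CellDeathMode.AUTOPHAGY,
    "type iii": CellDeathMode.NECROSIS,
}

def translate_legacy(label: str) -> CellDeathMode:
    """Map a deprecated Roman-numeral label to the named mode."""
    return LEGACY_LABELS[label.lower()]

print(translate_legacy("Type II").value)
```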
Moreover, several critiques can be formulated against the clear-cut distinction between the different cell death types of the triad of apoptosis, autophagic cell death and necrosis. First, although this vocabulary was originally introduced based on observations of developing animals, it has rapidly been adopted to describe the results of in vitro studies performed on immortalized cell lines, which reflect very poorly the physiology of cell death in vivo. In tissues, indeed, dying cells are usually engulfed well before signs of advanced apoptosis or necrosis become detectable. Thus, it may be acceptable - if the irreversibility of these phenomena is demonstrated - to assess caspase activation and/or DNA fragmentation to diagnose apoptotic cell death in vivo.
Second, there are numerous examples in which cell death displays mixed features, for instance with signs of both apoptosis and necrosis, a fact that led to the introduction of terms like ‘necroapoptosis’ and ‘aponecrosis’ (whose use is discouraged by the NCCD to avoid further confusion). Similarly, in the involuting D. melanogaster salivary gland, autophagic vacuolization is synchronized with signs of apoptosis, and results from genetic studies indicate that caspases and autophagy act in an additive manner to ensure cell death in this setting. Altogether, these data argue against a clear-cut and absolute distinction between different forms of cell death based on morphological criteria.
Third and most importantly, it would be a desideratum to replace morphological aspects with biochemical/functional criteria to classify cell death modalities. Unfortunately, there is no clear equivalence between morphology and biochemistry, suggesting that the ancient morphological terms are doomed to disappear and to be replaced by truly biochemical definitions. In this context, ‘loss-of-function’ and ‘gain-of-function’ genetic approaches (e.g., RNA interference, knockout models and plasmid-driven overexpression systems) represent invaluable tools to characterize cell death modes with more precision, but only if such interventions truly reduce/augment the rate of death, instead of changing its morphological appearance, as is often the case.
Present cell death classifications are reminiscent of the categorization of tumors that pathologists have elaborated over the last century and a half. Just as old morphological categorizations of tumors are progressively being complemented, and will presumably be replaced, by molecular diagnostics, which allow for a more sophisticated stratification of cancer subtypes based on molecular criteria, the current catalog of cell death types is destined to lose its value as compared with biochemical/functional tests. In the end, such efforts of classification are only justified when they have a prognostic and/or predictive impact, allowing the matching of each individual cancer with the appropriate therapy.
Similarly, a cell death nomenclature will be considered useful only if it predicts the possibilities to pharmacologically/genetically modulate (induce or inhibit) cell death and/or if it predicts the consequences of cell death in vivo, with regard to inflammation and recognition by the immune system.
If you've ever wondered where — and why — earthquakes happen the most, look no further than a new map, which plots more than a century's worth of nearly every recorded earthquake strong enough to at least rattle the bookshelves.
The map shows earthquakes of magnitude 4.0 or greater since 1898; each is marked in a lightning-bug hue that glows brighter with increasing magnitude.
The overall effect is both beautiful and arresting, revealing the silhouettes of Earth's tectonic boundaries in stark, luminous swarms of color.
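The "lightning-bug" encoding the article describes, brighter glow for stronger quakes, can be sketched as a simple mapping. The scaling constants below are guesses for illustration, not IDV Solutions' actual palette.

```python
# Map each quake's magnitude to a 0..1 glow brightness, clipped to the
# plotted range of the map (magnitude 4.0 and above).

def glow_brightness(magnitude, min_mag=4.0, max_mag=9.0):
    """Linear brightness in [0, 1]; the constants are illustrative."""
    t = (magnitude - min_mag) / (max_mag - min_mag)
    return max(0.0, min(1.0, t))

quakes = [4.0, 5.5, 7.0, 9.1]  # hypothetical magnitudes
brightness = [glow_brightness(m) for m in quakes]
print(brightness)
# These values could feed, e.g., the alpha channel of a scatter plot.
```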
The map's maker, John Nelson, the user experience and mapping manager for IDV Solutions, a data visualization company, said the project offered several surprises.
"First, I was surprised by the sheer amount of earthquakes that have been recorded," Nelson told OurAmazingPlanet. "It's almost like you could walk from Seattle to Wellington [New Zealand] if these things were floating in the ocean, and I wouldn't have expected that."
In all, 203,186 earthquakes are marked on the map, which is current through 2003. And it reveals the story of plate tectonics itself.
The long volcanic seams where Earth's crust is born appear as faint, snaking lines cutting through the world's oceans. The earthquakes along these so-called spreading centers tend to be rather mild. The best studied spreading center, called the Mid-Atlantic Ridge, bisects the Atlantic Ocean, on the right side of the image.
Its Pacific counterpart wanders along the eastern edge of the Pacific Ocean, cutting a wide swath offshore of South America. Another spreading center makes a jog though the Indian Ocean and up through the Red Sea.
But one glance at the map shows that the real earthquake action is elsewhere. Subduction zones, the places where tectonic plates overlap and one is forced to dive deep beneath the other and into the Earth's crushing interior — a process that generates the biggest earthquakes on the planet — stand out like a Vegas light show.
Nelson said this concept hit home particularly for the Ring of Fire, the vast line of subduction zones around the northern and western edge of the Pacific Ocean.
"I have a general sense of where it is, and a notion of plate tectonics, but when I first pulled the data in and started painting it in geographically, it was magnificent," Nelson said. "I was awestruck at how rigid those bands of earthquake activity really are."
Chemistry, a branch of physical science, is the study of the composition, properties and behavior of matter. As it is a fundamental component of matter, the atom is the basic unit of chemistry. Chemistry is concerned with atoms and their interactions with other atoms, with particular focus on the properties of the chemical bonds formed between species. Chemistry is also concerned with the interactions between atoms or molecules and various forms of energy. Chemistry is sometimes called "the central science" because it bridges other natural sciences like physics, geology and biology.
The solid iron core is actually crystalline, surrounded by liquid.
But the temperature at which that crystal can form had been a subject of long-running debate.
Experiments outlined in Science used X-rays to probe tiny samples of iron at extraordinary pressures to examine how the iron crystals form and melt.
Seismic waves captured after earthquakes around the globe can give a great deal of information as to the thickness and density of layers in the Earth, but they give no indication of temperature.
That has to be worked out either in computer models that simulate the Earth's insides, or in the laboratory. Measurements in the early 1990s of iron's "melting curves" - from which the core's temperature can be deduced - suggested a core temperature of about 5,000˚C.
"It was just the beginning of these kinds of measurements so they made a first estimate... to constrain the temperature inside the Earth," said Agnes Dewaele of the French research agency CEA and a co-author of the new research.
"Other people made other measurements and calculations with computers and nothing was in agreement. It was not good for our field that we didn't agree with each other."
The core temperature is crucial to a number of disciplines that study regions of our planet's interior that will never be accessed directly - guiding our understanding of everything from earthquakes to the Earth's magnetic field.
By Charles Q. Choi OurAmazingPlanet updated 7/17/2011 3:17:27 PM ET
Half of the extraordinary heat of the Earth that erupts on its surface volcanically and drives the titanic motions of the continents is due to radioactivity, scientists find. This new discovery shows that the planet still retains an extraordinary amount of heat it had from its primordial days.

To better understand the sources of the Earth's heat, scientists studied antineutrinos, elementary particles that, like their neutrino counterparts, only rarely interact with normal matter. Using the Kamioka Liquid-scintillator Antineutrino Detector (KamLAND) located under a mountain in Japan, they analyzed geoneutrinos — ones emitted by decaying radioactive materials within the Earth — over the course of more than seven years.

The specific amount of energy an antineutrino packs on the rare occasions one does collide with normal matter can tell scientists about what material emitted it in the first place — for instance, radioactive material from within the Earth, as opposed to in nuclear reactors. If one also knows how rarely such an antineutrino interacts with normal matter, one can then estimate how many antineutrinos are being emitted and how much energy they are carrying in total.

The researchers found the decay of radioactive isotopes uranium-238 and thorium-232 together contributed 20 trillion watts to the amount of heat Earth radiates into space, about six times as much power as the United States consumes. U.S. power consumption in 2005 averaged about 3.34 trillion watts. As huge as this value is, it only represents about half of the total heat leaving the planet. The researchers suggest the remainder of the heat comes from the cooling of the Earth since its birth.

Knowing what the sources of heat from Earth are "is a very important issue in geophysics," researcher Itaru Shimizu, an elementary particle physicist at Tohoku University in Miyagi, Japan, told OurAmazingPlanet.
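The comparison quoted in the passage is easy to check directly from the two figures given:

```python
# Checking the article's arithmetic: radiogenic heat from U-238 and
# Th-232 decay versus average U.S. power consumption, both in
# trillions of watts (terawatts).
radiogenic_heat_tw = 20.0   # KamLAND estimate quoted above
us_consumption_tw = 3.34    # average U.S. consumption in 2005

ratio = radiogenic_heat_tw / us_consumption_tw
print(round(ratio, 1))  # roughly six times U.S. consumption
```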
For instance, the heat from Earth's primordial days is thought to be bound to the planet's core, while the heat from radioactive decay is thought to be distributed in the crust and mantle layers of the planet, greatly influencing currents in the mantle, "which drive plate tectonics and geophysical activity," Shimizu said. The scientists at the KamLAND Collaboration detailed their findings online July 17 in the journal Nature Geoscience.
We watch the ongoing COP19 climate talks in Warsaw with our usual hope that perhaps this year will be the year the global community wakes up to the crisis of climate change. Historically, there is no reason for optimism. The Kyoto agreement excluded major polluters such as the United States and China, and we still have the dynamic of those most responsible for the current crisis refusing to be part of the solution.
Only one global climate treaty has had a record of success: the Montreal Protocol, enacted in 1987 to protect the Earth’s thinning ozone layer. The treaty has had the unintended benefit of helping to slow the rate of global warming since the mid-1990s, according to a new study. The study, published in the journal Nature Geoscience, relies on a statistical analysis of global average temperatures as well as greenhouse gas emission trends, including chlorofluorocarbons, or CFCs, which both break down ozone in the upper atmosphere and help warm the climate.
The study provides evidence that the Montreal Protocol was an effective climate treaty, albeit an accidental one, and it is the first to link the treaty to the recent slowdown in warming. At the time the treaty was negotiated, CFCs were known to be greenhouse gases, but the treaty was not initially meant to address global warming, an issue that was just starting to gain public attention.
According to the study, the phase-down in the use of CFCs during the 1990s and into the early twenty-first century, which was solely intended to reverse the loss of Earth’s protective ozone layer in the upper atmosphere, has shaved nearly 0.2 degrees Fahrenheit off global warming since that time. While that may seem small, considering that the world warmed by an average of about 1.6 degrees Fahrenheit between 1901 and 2012, it is not a trivial amount.
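Putting the two figures quoted above side by side shows why the avoided warming is not trivial:

```python
# Avoided warming as a share of observed warming, using the figures
# quoted above (both in degrees Fahrenheit).
avoided_f = 0.2    # warming avoided by the CFC phase-down
observed_f = 1.6   # warming observed between 1901 and 2012

share = avoided_f / observed_f
print(share)  # about an eighth of the century's warming
```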
New model of Earth's interior reveals clues to hotspot volcanoes
Berkeley — Scientists at the University of California, Berkeley, have detected previously unknown channels of slow-moving seismic waves in Earth's upper mantle, a...
The biological information that makes us unique is encoded in our DNA. DNA damage is a natural biological occurrence that happens every time cells divide and multiply. External factors such as overexposure to sunlight can also damage DNA.
Michael Feig, professor of biochemistry and molecular biology at Michigan State University, studies the proteins MutS and MSH2-MSH6, which recognize defective DNA and initiate DNA repair. Natural DNA repair occurs when proteins like MutS (the primary protein responsible for recognizing a variety of DNA mismatches) scan the DNA, identify a defect, and recruit other enzymes to carry out the actual repair.
“The key here is to understand how these defects are recognized,” Feig explained. “DNA damage occurs frequently and if you couldn’t repair your DNA, then you won’t live for very long.” This is because damaged DNA, if left unrepaired, can compromise cells and lead to diseases such as cancer.
DNA bending is believed to facilitate the initial recognition of the mismatched base for repair. The repair efficiencies are dependent on both the mismatch type and neighboring nucleotide sequence. We have studied bending of several DNA duplexes containing canonical matches: A:T and G:C; various mismatches: A:A, A:C, G:A, G:G, G:T, C:C, C:T, and T:T; and a bis-abasic site: X:X. Free-energy profiles were generated for DNA bending using umbrella sampling. The highest energetic cost associated with DNA bending is observed for canonical matches while bending free energies are lower in the presence of mismatches, with the lowest value for the abasic site. In all of the sequences, DNA duplexes bend toward the major groove with widening of the minor groove.
For homoduplexes, DNA bending is observed to occur via smooth deformations, whereas for heteroduplexes, kinks are observed at the mismatch site during strong bending. In general, pyrimidine:pyrimidine mismatches are the most destabilizing, while purine:purine mismatches lead to intermediate destabilization, and purine:pyrimidine mismatches are the least destabilizing. The ease of bending is partially correlated with the binding affinity of MutS to the mismatch pairs and subsequent repair efficiencies, indicating that intrinsic DNA bending propensities are a key factor of mismatch recognition.
The biological repair machinery seems to take advantage of this propensity by ‘testing’ DNA to determine whether it can be bent easily. If that is the case, the protein has found a mismatch and repair is initiated.
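The trend described above, that a mismatched duplex is softer and therefore bends more readily, can be illustrated with a toy elastic model. The stiffness values and the simple Metropolis sampler below are invented for illustration; the actual study used umbrella sampling on all-atom DNA models, not this one-parameter sketch.

```python
import math
import random

# Toy model: the bending angle theta of a duplex with elastic energy
# E = 0.5 * k * theta**2 (in units of kT). A matched duplex is assigned
# a higher stiffness k than a mismatched one, so thermal sampling finds
# the mismatched duplex bent more strongly on average.

def sample_mean_abs_angle(stiffness, n_steps=20000, seed=0):
    """Metropolis sampling of the bending angle; returns mean |theta|."""
    rng = random.Random(seed)
    theta, total = 0.0, 0.0
    for _ in range(n_steps):
        trial = theta + rng.uniform(-0.5, 0.5)
        d_e = 0.5 * stiffness * (trial**2 - theta**2)
        if d_e <= 0 or rng.random() < math.exp(-d_e):
            theta = trial
        total += abs(theta)
    return total / n_steps

k_match, k_mismatch = 10.0, 3.0  # hypothetical stiffnesses, match > mismatch
bend_match = sample_mean_abs_angle(k_match)
bend_mismatch = sample_mean_abs_angle(k_mismatch)
print(bend_match < bend_mismatch)  # the softer (mismatched) duplex bends more
```

The point of the sketch is qualitative: if a repair protein "tests" DNA by bending it, a site that yields easily stands out against the stiffer matched background.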
“When the MutS protein is deficient in certain people, they have a high propensity to develop certain types of cancer,” Feig said. “We’re interested in understanding, first of all, how exactly this protein works. The long-term idea is to develop strategies for compensating for this protein, basically substituting some other mechanism for recognizing defective DNA and enabling repair.”
The strongest link between diseases and defects from the MutS protein has been made for a specific type of genetically inherited colon cancer.
“If an essential protein like MutS is missing or less than adequate, then the cells will not behave in a normal way,” he explained. “They will turn cancerous. The cells will refuse to die and proliferate in an uncontrollable state.”
In these cases, cancer is not a result of damaged DNA, but occurs because of a problem in the DNA repair mechanism itself.
“It probably has effects on many other cancers as well, because all the cancers are ultimately linked to defective DNA,” he said. “If DNA damage is not recognized and repaired in time then it can lead to any type of cancer. It is a fairly generic mechanism.”
According to Matt Cowperthwaite, TACC’s medical informatics programs coordinator, Feig’s research is enormously important for advancing our understanding of how cells repair the mistakes that inevitably occur during DNA replication. “For the first time, we have a mechanistic insight of how MutS finds mutations. This is extremely important research because the process of mutation underlies some of the deadliest diseases to affect humans, such as cancer.”
Research in this area is fundamental in nature and poses many challenges, but Feig believes its potential future impact is tremendous.
Chemists have produced the first high resolution structure of a nano-scale square made from ribonucleic acid, or RNA.
The structure was published in a paper in this week's early online edition of the Proceedings of the National Academy of Sciences by a team of chemists headed by Thomas Hermann, an assistant professor of chemistry and biochemistry at UCSD.
The scientists said the ability to carry structural information encoded in the sequence of the constituent building blocks is a characteristic trait of RNA, a key component of the genetic code. The nano square self-assembles from four corner units directed by the sequence that was programmed into the RNA used for preparing the corners.
Hermann said the RNA square has potential applications as a self-assembling nano platform for the programmed combination of molecular entities that are linked to the corner units.
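Sequence-directed self-assembly of the kind described above boils down to Watson-Crick complementarity: two strands join only where one reads as the reverse complement of the other. The sketch below illustrates the principle; the sequences are invented, not the actual corner-unit sequences from the UCSD study.

```python
# Minimal check of Watson-Crick pairing between RNA "sticky ends".
RNA_PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Reverse complement of an RNA sequence (A<->U, G<->C)."""
    return "".join(RNA_PAIR[base] for base in reversed(seq))

def can_pair(end_a: str, end_b: str) -> bool:
    """True if end_b can anneal to end_a by Watson-Crick pairing."""
    return end_b == reverse_complement(end_a)

corner_out = "GGCAUC"                   # hypothetical sticky end
corner_in = reverse_complement(corner_out)
print(can_pair(corner_out, corner_in))  # True: these corners join
print(can_pair(corner_out, "GGCAUC"))   # False: identical ends don't pair
```

Programming four such complementary pairs, one per corner, is what lets the square assemble itself without any external template.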
The Earth's magnetic field has reversed many times at an irregular rate throughout its history. Long periods without reversal have been interspersed with eras of frequent reversals. What is the reason for these reversals and their irregularity?
Researchers from CNRS and the Institut de Physique du Globe, France, have shed new light on the issue by demonstrating that, over the last 300 million years, reversal frequency has depended on the distribution of tectonic plates on the surface of the globe. This result does not imply that terrestrial plates themselves trigger the switch over of the magnetic field.
Instead, it establishes that although the reversal phenomenon ultimately takes place within the Earth's liquid core, it is nevertheless sensitive to what happens outside the core, and more specifically in the Earth's mantle.
This work is published on 16 October 2011 in Geophysical Research Letters.
"The surface of the Earth is broken up into large plates. It’s easy to confuse these plates with the Earth’s crust – the thin outermost layer of the Earth. But there is more to the structure of the Earth than this simple image of a ‘cracked egg-shell’.
The Earth’s layers can be defined in two different ways – based on the chemical composition or the mechanical properties of the rock. To understand what plates are, it is important to understand both of ..."