In experiments at the GSI Helmholtzzentrum für Schwerionenforschung accelerator facility, scientists have discovered a total of six new elements.
Chemical elements are produced in stars and in stellar explosions. Ultimately, these elements are the building blocks of all materials that surround us — including every atom of our bodies. However, the universe is also home to a large number of other atoms that do not occur on the Earth.
One of the key tasks of the researchers at GSI is to attempt to create previously unknown elements in the laboratory. To create a new element, scientists take two elements that exist on Earth whose atomic nuclei together contain as many protons as the element they are after. They try to fuse the nuclei of the two elements to create a new atomic nucleus much larger and heavier than either of the originals. For this purpose, the scientists use a 120-m-long linear accelerator to bring charged atoms (so-called ions) of one element to extremely high velocities of roughly 30,000 kilometers per second. The accelerated ions are then “fired” at a very thin foil of the other element. In very rare cases, perhaps once a week, the two nuclei fuse to form a new element.
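The proton arithmetic behind this recipe is simple enough to sketch in a few lines of code. The pairings in the example below (lead plus zinc for element 112, bismuth plus nickel for element 111) are the reactions GSI is known to have used; the snippet itself is only an illustration of the bookkeeping.

```python
# Proton bookkeeping for fusion experiments: the atomic numbers of the
# beam and target nuclei must sum to the atomic number of the element
# being sought. The pairings printed below are GSI's documented reactions.
ELEMENTS = {"Ni": 28, "Zn": 30, "Pb": 82, "Bi": 83}

def fused_atomic_number(target: str, projectile: str) -> int:
    """Atomic number of the nucleus formed if the two nuclei fuse."""
    return ELEMENTS[target] + ELEMENTS[projectile]

print(fused_atomic_number("Pb", "Zn"))  # 82 + 30 = 112 (copernicium)
print(fused_atomic_number("Bi", "Ni"))  # 83 + 28 = 111 (roentgenium)
```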
The new element is identified by means of a very sensitive detector. Crucially, the new element is not stable: after fractions of a second it decays into another, lighter element, emitting a characteristic alpha particle in the process. This decay repeats several times, producing a chain of ever lighter elements. The detector can precisely measure the emitted alpha particles and thus identify the new element unambiguously.
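Since each alpha decay carries away two protons and two neutrons, the measured chain pins the new nucleus to a ladder of already known lighter nuclei. A minimal sketch of that bookkeeping, starting from the element-112 nucleus of the example above (illustrative only, not a physics simulation):

```python
# Each alpha decay removes 2 protons and 4 nucleons, so a chain of
# measured alpha particles links a new nucleus to known lighter elements.
def alpha_chain(z: int, a: int, steps: int):
    chain = [(z, a)]
    for _ in range(steps):
        z, a = z - 2, a - 4  # an alpha particle carries off 2p + 2n
        chain.append((z, a))
    return chain

for z, a in alpha_chain(112, 277, steps=4):
    print(f"Z = {z:3d}, A = {a:3d}")
```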
In these experiments, scientists at the GSI Helmholtzzentrum discovered the chemical elements with atomic numbers 107 to 112. The heaviest element discovered to date has atomic number 118. It is not clear whether any element can exist beyond that.
Korean researchers use semiconductor manufacturing processes rather than chemical synthesis to build better nanowires faster.
Nanowires don’t quite get the recognition that their high-profile nanomaterial cousins, carbon nanotubes and graphene, receive. But nanowires are quietly leading to big improvements in a new generation of photovoltaics, plastic OLEDs (organic light-emitting diodes), and a bunch of other applications.
Nanowires have suffered from the same manufacturing issues that other nanomaterials have endured, namely achieving large-scale production while maintaining quality. One of the key problems nanowire developers have had to overcome is getting the nanowires to orient themselves in perfectly even arrays.
Researchers at the Korea Advanced Institute of Science and Technology (KAIST) in cooperation with LG Innotek have found a solution to that problem. And that solution moves away from traditional chemical synthesis toward tricks common in semiconductor manufacturing.
In research published in the journal Nano Letters (“High Throughput Ultralong (20 cm) Nanowire Fabrication Using a Wafer-Scale Nanograting Template”), the Korean team leveraged semiconductor processes to produce highly ordered arrays of long (up to 20-centimeter) nanowires, eliminating the need for post-production arrangement.
The process involves a photoengraving technique on a 20-centimeter-diameter silicon wafer. First the researchers created a template on the wafer consisting of an ultrafine 100-nanometer linear grid pattern. Then they used this pattern to lay down the nanowires with a sputtering process. The method produces nanowires in bulk, with uniform 50-nm widths and lengths of up to 20 cm.
“The significance is in resolving the issues in traditional technology, such as low productivity, long manufacturing time, restrictions in material synthesis, and nanowire alignment,” commented Professor Jun-Bo Yoon of KAIST in a press release. “Nanowires have not been widely applied in the industry, but this technology will bring forward the commercialization of high performance semiconductors, optic devices, and biodevices that make use of nanowires.”
Because the process doesn’t require a long synthesis time and results in perfectly aligned nanowires, the industrial partners in the research believe that it’s a technique that should lend itself to commercialization.
Throughout the history of medicine, many devastating illnesses were first treated with dangerous, sometimes even barbaric, methods that initially seemed almost as bad as the sickness. Over time, those treatments were refined, and it now seems hard to believe that people once died of epilepsy, for example, or from what appear today to be minor heart conditions.
Many types of arrhythmia that used to kill patients, for example, no longer do. More than 4 million people around the world wear pacemakers. But having a life-saving pacemaker installed is still a serious matter, and living with one imposes limitations.
But in the coming year, it will likely become significantly easier to receive and live with a pacemaker. Developed by Silicon Valley startup Nanostim, a device about the size of a AAA battery, or one-tenth the size of a conventional pacemaker, was recently approved for use in Europe. It is installed through a catheter in the femoral vein in a minimally invasive procedure. Then, for about 10 years it sits inside the ventricle of the heart and delivers its regulatory electrical pulses wirelessly.
“For the past 40 years the therapeutic promise of leadless [or wireless] pacing has been discussed, but until now, no one has been able to overcome the technical challenges,” Dr. Johannes Sperzel of the Kerckhoff Clinic in Bad Nauheim, Germany, said in a news release.
The first pacemaker was the size of an ice hockey puck and had to be installed in the abdomen. Currently, most are about the size of a watch and are installed in a “surgical pocket” under the skin near the collarbone. The insulated wires, or leads, that feed down to the heart to stimulate it cause many patients discomfort.
After the approval came through, Minnesota-based St. Jude Medical acquired Nanostim for $123.5 million. The company, which made the first pacemaker in 1958, had funded Nanostim’s work, and the Nanostim pacemaker uses a St. Jude Medical electrode.
All animals have to make decisions every day. Where will they live and what will they eat? How will they protect themselves?
For the first time, Arizona State University researchers have discovered that, at least in ants, animals can change their decision-making strategies based on experience. They can also use that experience to weigh different options.
The findings are featured today in the early online edition of the scientific journal Biology Letters, as well as in its Dec. 23 edition.
Co-authors Taka Sasaki and Stephen Pratt, both with ASU’s School of Life Sciences, have studied insect collectives, such as ants, for years. Sasaki, a postdoctoral research associate, specializes in adapting psychological theories and experiments that are designed for humans to ants, hoping to understand how the collective decision-making process arises out of individually ignorant ants.
“The interesting thing is we can make decisions and ants can make decisions – but ants do it collectively,” said Sasaki. “So how different are we from ant colonies?”
To answer this question, Sasaki and Pratt gave a number of Temnothorax rugatulus ant colonies a series of choices between two nests of differing quality. In one treatment, the entrances of the nests varied in size, and in the other, the exposure to light was manipulated. Since these ants prefer both a smaller entrance and a lower level of light exposure, they had to prioritize.
“It’s kind of like a human buying a house,” said Pratt, an associate professor with the school. “There’s so many options to consider – the size, the number of rooms, the neighborhood, the price, if there’s a pool. The list goes on and on. And for the ants it’s similar, since they live in cavities that can be dark or light, big or small. With all of these things, just like with a human house, it’s very unlikely to find a home that has everything you want.”
A team of researchers at Singapore's Nanyang Technological University believes it has solved the mystery of why warm water can freeze faster than cooler water. It has to do with the way energy is stored in the hydrogen bonds between water molecules, the team suggests in a paper uploaded to the preprint server arXiv.
The researchers argue that the Mpemba paradox arises intrinsically from the rate at which energy initially stored in the covalent H-O part of the O:H-O bond in water is released, whatever the experimental conditions. Generally, heating raises the energy of a substance by lengthening and softening all of the bonds involved. However, the O:H nonbond in water follows the general rule of thermal expansion and drives the H-O covalent bond to relax oppositely in length and energy, because of inter-electron-pair coupling [J Phys Chem Lett 4, 2565 (2013); ibid 4, 3238 (2013)].
Heating thus stores energy in the H-O bond by shortening and stiffening it. On cooling, with the water as the heat source and the refrigerator as the drain, the H-O bond releases that energy at a rate that depends exponentially on how much was initially stored, and this is why the Mpemba effect happens.
The effect is formulated in terms of a relaxation time tau that represents all possible processes of energy loss. Consistency between predictions and measurements showed that tau drops exponentially with the initial temperature of the water being cooled.
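A toy Newton-cooling model makes the claimed mechanism concrete. In the sketch below the relaxation time shrinks exponentially with the initial temperature; the functional form and the constants are our own illustrative assumptions, not the authors' fitted values.

```python
import numpy as np

T_ENV = -5.0                 # freezer temperature, deg C
TAU0, T_SCALE = 30.0, 40.0   # hypothetical constants (minutes, deg C)

def tau(T0):
    """Relaxation time that drops exponentially with initial temperature T0."""
    return TAU0 * np.exp(-T0 / T_SCALE)

def time_to_reach(T0, T_target=0.0):
    """Time for exponential relaxation from T0 to T_target in a bath at T_ENV."""
    return tau(T0) * np.log((T0 - T_ENV) / (T_target - T_ENV))

for T0 in (20.0, 50.0, 80.0):
    print(f"start {T0:4.1f} C -> hits 0 C after {time_to_reach(T0):5.1f} min")
# With these toy constants the hottest sample reaches 0 deg C first,
# reproducing the qualitative Mpemba behaviour.
```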
UC Berkeley and University of Hawaii astronomers analyzed all four years of Kepler data in search of Earth-size planets in the habitable zones of sun-like stars, and then rigorously tested how many planets they may have missed.
A major question is whether planets suitable for biochemistry are common or rare in the universe. Small rocky planets with liquid water enjoy key ingredients for biology. Astronomers used NASA's Kepler telescope to survey 42,000 Sun-like stars for the periodic dimmings that occur when a planet crosses in front of its host star. They found 603 planets, 10 of which are Earth size and orbit in the habitable zone, where conditions permit surface liquid water. They then measured the detectability of these planets by injecting synthetic planet-caused dimmings into the Kepler brightness measurements, and concluded that 22% of Sun-like stars harbor Earth-size planets orbiting in their habitable zones. The nearest such planet may be within 12 light-years.
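The correction logic can be sketched in a few lines. The completeness and transit-probability values below are invented placeholders (the study derives its own from the injection tests), chosen only to show how a raw count of 10 detections among 42,000 stars can imply an occurrence rate of order 20%.

```python
# Occurrence-rate correction in miniature: divide the raw detection count
# by the number of stars times the fraction of planets the survey could
# actually have found. Both fractions below are hypothetical placeholders.
n_stars = 42_000
n_detected = 10        # Earth-size, habitable-zone detections
completeness = 0.20    # fraction recovered in injection tests (assumed)
p_transit = 0.005      # chance a habitable-zone orbit transits (assumed)

occurrence = n_detected / (n_stars * completeness * p_transit)
print(f"inferred occurrence: {occurrence:.0%} of Sun-like stars")  # ~24%
```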
"It's been nearly 20 years since the discovery of the first extrasolar planet around a normal star. Since then we have learned that most stars have planets of some size and that Earth-size planets are relatively common in close-in orbits that are too hot for life," said Andrew Howard, a former UC Berkeley post-doctoral fellow who is now on the faculty of the Institute for Astronomy at the University of Hawaii. "With this result we've come home, in a sense, by showing that planets like our Earth are relatively common throughout the Milky Way galaxy."
The Arctic covers around 5% of the planet's surface, but it is capturing a disproportionate amount of attention. With temperatures rising at twice the global rate, the region's summer sea ice is shrinking rapidly, making access easier than ever before. At the same time, countries are racing to claim parts of the Arctic's sea floor and the vast deposits of hydrocarbons that lie beneath it.
Since satellite observations started in 1979, the September sea-ice extent has declined by 12% per decade, and the lowest extents on record have all occurred in the past five years. The ice cover is thinning, making it more vulnerable to warmer temperatures. Forecasts by climate models suggest that summer sea ice will largely disappear in the second half of the century, but the current rate of ice loss exceeds the models' forecasts, suggesting that ice-free conditions could arrive sooner.
Under the United Nations Convention on the Law of the Sea, countries can claim rights to seabed resources in the Arctic Ocean, depending on their coastline and the sea-floor geology. Each nation has an existing exclusive economic zone that extends up to 370 kilometres from its coastline, and beyond it may be eligible to claim extended regions. Russia and Norway are the only Arctic nations to have submitted their bids.
University of Cincinnati researchers have developed a first-of-its-kind nanostructure, unusual because it can carry a variety of cancer-fighting materials on its double-sided (Janus) surface and within its porous interior.
Because of its unique structure, the nano carrier can do all of the following:
• Transport cancer-specific detection nanoparticles and biomarkers to a site within the body, e.g., the breast or the prostate. This promises earlier diagnosis than is possible with today’s tools.
• Attach fluorescent marker materials to illuminate specific cancer cells, so that they are easier to locate for treatment, whether drug delivery or surgery.
• Deliver anti-cancer drugs for pinpoint, targeted treatment of cancer cells, which should result in fewer drug side effects. Currently, a cancer treatment like chemotherapy affects not only cancer cells but healthy cells as well, leading to serious and often debilitating side effects.
This recently developed Janus nanostructure is unusual in that, normally, such super-small structures (much smaller than a single cell) have limited surface area, which makes it difficult to carry multiple components, e.g., both cancer-detection and drug-delivery materials. The Janus nanocomponent, by contrast, has functionally and chemically distinct surfaces that allow it to carry multiple components in a single assembly and function in an intelligent manner.
“In this effort, we’re using existing basic nano systems, such as carbon nanotubes, graphene, iron oxides, silica, quantum dots and polymeric nano materials in order to create an all-in-one, multidimensional and stable nano carrier that will provide imaging, cell targeting, drug storage and intelligent, controlled drug release,” said UC’s Shi, adding that the nano carrier’s promise is currently greatest for cancers that are close to the body’s surface, such as breast and prostate cancer.
Researchers at Oregon State University have discovered that one gene in a common fungus acts as a master regulator, and deleting it has opened access to a wealth of new compounds that have never before been studied – with the potential to identify new antibiotics.
Scientists succeeded in flipping a genetic switch that had silenced more than 2,000 genes in this fungus, the cereal pathogen Fusarium graminearum. Until now this had kept it from producing novel compounds that may have useful properties, particularly for use in medicine but also perhaps in agriculture, industry, or biofuel production.
"About a third of the genome of many fungi has always been silent in the laboratory," said Michael Freitag, an associate professor of biochemistry and biophysics in the OSU College of Science. "Many fungi have antibacterial properties. It was no accident that penicillin was discovered from a fungus, and the genes for these compounds are usually in the silent regions of genomes.
"What we haven't been able to do is turn on more of the genome of these fungi, see the full range of compounds that could be produced by expression of their genes," he said. "Our finding should open the door to the study of dozens of new compounds, and we'll probably see some biochemistry we've never seen before."
In the past, the search for new antibiotics was usually conducted by changing the environment in which a fungus or other life form grew, and seeing whether those changes triggered the formation of a compound with antibiotic properties.
Few genes have made the headlines as much as FOXP2. The first gene associated with language disorders, it was later implicated in the evolution of human speech. Girls make more of the FOXP2 protein, which may help explain their precociousness in learning to talk. Now, neuroscientists have figured out how one of its molecular partners helps Foxp2 exert its effects.
The findings may eventually lead to new therapies for inherited speech disorders, says Richard Huganir, a neurobiologist at Johns Hopkins University School of Medicine in Baltimore, Maryland, who led the work. Foxp2 controls the activity of a gene called Srpx2, he notes, which helps some of the brain's nerve cells beef up their connections to other nerve cells. By establishing what SRPX2 does, researchers can look for defective copies of it in people who have trouble talking or learning to talk.
Until 2001, scientists were not sure how genes influenced language. Then Simon Fisher, a neurogeneticist now at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, and his colleagues fingered FOXP2 as the culprit in a family with several members who had trouble with pronunciation, putting words together, and understanding speech. These people cannot move their tongue and lips precisely enough to talk clearly, so even family members often can’t figure out what they are saying. It “opened a molecular window on the neural basis of speech and language,” Fisher says.
A few years later, other researchers showed that the FOXP2 gene in humans differed from the chimp version by only two bases, the "letters" that make up DNA. That small difference may have affected Foxp2 performance such that animal calls could eventually transform into the human gift of gab. In 2009, a team put the human version of the gene in mice and observed that the rodents produced more frequent and complex alarm calls, suggesting these mutations may have been involved in the evolution of more complex speech. But how Foxp2 works has largely remained a mystery.
Huganir didn't start out trying to solve this mystery. He was testing 400 proteins to see if they helped or hindered the development of specialized junctions between nerve cells, called synapses, which allow nerve cells to communicate with one another. A single neuron can have up to 10,000 synapses, or connections to other neurons, Huganir says. Of the 10 proteins he identified, one that strongly promoted synapse formation was Srpx2, a gene other researchers had linked to epilepsy and language problems.
Huganir and his colleagues examined Srpx2 activity in isolated nerve cells, determining that it stimulated the formation of "excitatory" connections, ones where a "turn on" message is conveyed to the receiving nerve cell. Srpx2 also enhanced the number of excitatory connections in the part of the developing mouse brain that is the equivalent of the human language center, the researchers report online today in Science. Because Foxp2 regulates the activity of several genes, including Srpx2, Huganir and his team took a closer look at how Foxp2 affected this gene. When Foxp2 is around, Srpx2 makes fewer excitatory synapses, they report. It may be that the right balance of excitatory synapses and other connections is necessary for complex vocalizations, Huganir suggests.
As a final test, the researchers looked to see how changing the activity of the Srpx2 gene affected alarm calls of baby mice. Mice pups separated from their moms call for help with squeals too high-pitched for humans to hear. When the researchers artificially inhibited Srpx2's activity, the mice squealed less. But the pups squealed normally again when gene activity was restored, Huganir and his colleagues report.
The work "shows that Foxp2 affects synapse formation through Srpx2," says Svante Pääbo, a paleogeneticist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, who has studied Foxp2 in primates and in mice. "It is the first target gene of Foxp2 that has a clear function with respect to neuronal function."
Our hardy ancestors survived north of the Arctic Circle as far back as the last ice age, unearthed tools now reveal. The mammoth hunters braved sub-zero temperatures on desolate tundra at least 20,000 years earlier than was thought, the remains suggest, although whether the people were Neanderthals or modern humans is a mystery.
The artefacts, dug up in an Arctic riverbed, show that humans once lived as far north as Siberia and Alaska, say archaeologist John Svendsen of the University of Bergen in Norway and his team. The stone tools, horse and reindeer bones, and a mammoth tusk with hand-made markings were found at Mamontovaya Kurya in European Russia.
Radiocarbon dating puts the finds between 35,000 and 40,000 years old. Previously, humans were thought to have colonized this northern region in the last stages of the ice age only some 13,000 years ago.
The 40,000-year date "marks a turning point in the history of human evolution in Europe," says Svendsen's team. Around that time, roaming Neanderthals are thought to have given way to anatomically modern humans migrating northwards out of Africa and into Europe. The new haul does not reveal the identity of the Arctic dwellers to be either Neanderthal or modern.
Either way, the result is exciting, says archaeologist John Gowlett of the University of Liverpool, UK. Either Neanderthals travelled further north than was thought, or modern humans moved and adjusted to northern extremes very quickly, within a few thousand years of leaving hotter climes.
Temperatures seem to have fluctuated markedly at that time, pushing populations north or south. Early modern humans may have followed herds of mammoths, wild horses or reindeer northwards during a warmer period, speculates Gowlett. During colder spells, freezing steppes extended as far south as Greece. "Humans had a hold on the north, if only for a short time," he says.
To survive at these latitudes, humans have to be well adapted, explains anthropologist Chris Stringer of the Natural History Museum in London. Temperatures fall to -40°C and there is 24-hour darkness for part of the year. "You've got to have clothing, housing and fire," he says. Eating meat and fat would have been important, as there are few plants. Cold-dwelling populations such as the Inuit also have physiological differences that make them more tolerant of the cold. Whereas modern humans are known for their ability to survive in extreme conditions, Neanderthals were thought to lack such skills. If the remains are Neanderthal, then "they were not a load of numbskulls," says Gowlett.
Our view of the historic landscape in which the hunters lived is also changing. The animal bones add to mounting evidence that this region of the Arctic, although cold, was not ice-bound 35,000 years ago. Instead, it probably consisted of open, grassy steppes.
A team of astrophysicists at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR), together with German and European colleagues, has discovered the most extensive planetary system to date. Seven planets circle the star KOI-351 – more than in any other known planetary system.
They are arranged in a similar fashion to the eight planets in the Solar System, with small rocky planets close to the parent star and gas giant planets at greater distances. Although the planetary system around KOI-351 is packed together more tightly, it provides an interesting comparison to our cosmic home.
Astrophysicists around the world have been searching for a star system similar to our own for a long time. Now, the team led by Juan Cabrera, an astrophysicist at the DLR Institute of Planetary Research in Berlin-Adlershof, has taken a major step in this direction. Three of the seven planets in orbit around the star KOI-351 were discovered in recent years; they have periods of 331, 211 and 60 days, similar to those of Earth, Venus and Mercury.
The planets discovered by Cabrera and his team are even closer to the star and have orbital periods of 7, 9, 92 and 125 days. The outermost planet orbits the star at a distance of about 150 million kilometres, or roughly one Astronomical Unit (AU), so the entire planetary system is compressed into a space corresponding to the distance between Earth and the Sun.
In the article published in the Astrophysical Journal, Juan Cabrera and his colleagues emphasise the similarities between KOI-351 and the Solar System: “No other planetary system shows such a similar ‘architecture’ to that of our cosmic home as does the planetary system around KOI-351,” says Cabrera. “Just as in the Solar System, rocky planets roughly the size of Earth are found close to the star, while ‘gas giants’ similar to Jupiter and Saturn are found as you move away from the star.”
“We cannot stress just how important this discovery is. It is a big step in the search for a ‘twin’ to the Solar System, and thus also in finding a second Earth,” said Cabrera. Heike Rauer, head of the Extrasolar Planets and Atmospheres working group at the DLR Institute of Planetary Research and professor at the Centre for Astronomy and Astrophysics at the University of Berlin, adds: “The discovery of this complex planetary system helps us to better understand the processes that give rise to such planetary systems.” Tilman Spohn, Head of the DLR Institute of Planetary Research, states: “DLR is proud to have made a significant contribution to the discovery of new planetary systems.”
The development of a special computer algorithm enabled Juan Cabrera and his team to detect the four new planets around KOI-351. Using this algorithm, the DLR astrophysicists were able to filter out of the Kepler measurements the light curves that reveal the ‘transit’ of a planet across its parent star. A transit is inferred from the small, periodic dimming of the star’s light as the planet crosses the star’s disc. This technological development is likely to be crucial in the search for similar multiple systems using large data sets from future space telescopes. The discovery was confirmed shortly afterwards by a US group led by Joseph R. Schmitt of Yale University, by visual inspection of the light curves recorded by Kepler.
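The principle can be seen in a compact, self-contained sketch: inject a box-shaped dimming into a noisy synthetic light curve, then fold the data at trial periods and ask where the in-transit points are systematically faint. This is a deliberately naive stand-in for the team's algorithm, with all values synthetic.

```python
import numpy as np

# Toy transit search: fold a light curve at trial periods and look for
# a periodic dip. Real pipelines are far more elaborate.
rng = np.random.default_rng(0)
t = np.arange(0.0, 200.0, 0.02)              # observation times, days
flux = 1.0 + rng.normal(0.0, 3e-4, t.size)   # normalised flux with noise
P_TRUE, DUR, DEPTH = 9.0, 0.12, 1e-3         # injected transit parameters
flux[(t % P_TRUE) < DUR] -= DEPTH            # box-shaped transits

def dip_strength(period):
    """How much fainter the in-transit points are when folded at `period`."""
    in_transit = (t % period) < DUR
    return flux[~in_transit].mean() - flux[in_transit].mean()

periods = np.arange(2.0, 15.0, 0.01)
best = periods[np.argmax([dip_strength(p) for p in periods])]
print(f"strongest periodic dimming near P = {best:.2f} days")  # ~9.00
```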
KOI is the abbreviation for ‘Kepler Object of Interest’, which means the star was observed by NASA’s Kepler space telescope between 2008 and 2013 and classified as a candidate for hosting exoplanets. At present, KOI-351 is the star known to host the most extrasolar planets, or exoplanets for short. The star is 2500 light years away from Earth.
It doesn't take a Watson to realize that even the world's best supercomputers are staggeringly inefficient and energy-intensive machines.
Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits but also continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.
Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.
Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer.
“There’s extraordinary interest in building energy-efficient electronics these days,” says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS.
“Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume.”
The human mind, for all its phenomenal computing power, runs on roughly 20 watts of energy (less than a household light bulb), so it offers a natural model for engineers.
“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”
A University of Alabama at Birmingham (UAB) surgical team has performed one of the first surgeries using a telepresence augmented reality technology from VIPAAR in conjunction with Google Glass.
The combination of the two technologies could be an important step toward the development of useful, practical telemedicine.
VIPAAR (Virtual Interactive Presence in Augmented Reality) is commercializing a UAB-developed technology that provides real-time, two-way, interactive video conferencing.
UAB orthopedic surgeon Brent Ponce, M.D., performed a shoulder replacement surgery Sept. 12 at UAB Highlands Hospital in Birmingham. Watching and interacting with Ponce via the VIPAAR technology was Phani Dantuluri, M.D., from his office in Atlanta.
The VIPAAR technology allowed Dantuluri to see exactly what Ponce saw in the operating room and introduce his hands or instruments into the virtual surgical field.
At the same time, Ponce saw Dantuluri’s hands and instruments in his Google Glass display, along with his own field of view, as a merged-reality environment.
The two surgeons were able to discuss the case in a truly interactive fashion since Dantuluri could watch Ponce perform the surgery and simultaneously introduce his hands or instruments into Ponce’s view as if they were standing next to each other during the case.
“It’s real-time, real-life, right there, as opposed to a Skype or video conference call, which allows for dialogue back and forth but is not really interactive,” said Ponce.
UAB physicians say this kind of technology could greatly enhance patient care by allowing a veteran surgeon to remotely provide valuable expertise to less experienced surgeons.
German researchers have developed a new gelatin bio-ink that can be used by 3D printing technology to produce various types of tissue and organs.
Scientists have long been working to improve methods and procedures for artificially producing tissue. In the current work, researchers at the Fraunhofer Institute for Interfacial Engineering and Biotechnology (IGB) in Stuttgart, Germany, developed a suitable bio-ink for 3D printing that consists of gelatin-based components from the natural tissue matrix and living cells. Gelatin is a well-known biological material derived from collagen, the main constituent of native tissue.
The IGB researchers were able to chemically modify the gelling behavior of the gelatin to adapt the biological molecules for printing. This allowed the bio-ink to remain fluid during printing, instead of gelling like unmodified gelatin. Once the bio-inks are irradiated with UV light, they crosslink and cure to form hydrogels – polymers containing a large amount of water (just like native tissue), but which are stable in aqueous environments and when heated to 98.6 degrees Fahrenheit (37 degrees Celsius), the average temperature of the human body.
The chemical modification of these biological molecules can be controlled so that the resulting gels have differing strengths and swelling characteristics, allowing researchers to imitate various properties of natural tissue – from solid cartilage to soft adipose tissue.
The IGB research facility also prints synthetic raw materials that can serve as substitutes for the extracellular matrix, such as systems that cure to a hydrogel devoid of by-products, which can immediately be populated with genuine cells.
“We are concentrating at the moment on the ‘natural’ variant. That way we remain very close to the original material. Even if the potential for synthetic hydrogels is big, we still need to learn a fair amount about the interactions between the artificial substances and cells or natural tissue. Our biomolecule-based variants provide the cells with a natural environment instead, and therefore can promote the self-organizing behavior of the printed cells to form a functional tissue model,” said Dr. Kirsten Borchers in describing the approach at IGB.
"Cancers develop in complex tissue environments, which they depend on for sustained growth, invasion and metastasis. Unlike tumor cells, stromal cell types within the tumor microenvironment (TME) are genetically stable and thus represent an attractive therapeutic target with reduced risk of resistance and tumor recurrence. However, specifically disrupting the pro-tumorigenic TME is a challenging undertaking, as the TME has diverse capacities to induce both beneficial and adverse consequences for tumorigenesis. Furthermore, many studies have shown that the microenvironment is capable of normalizing tumor cells, suggesting that re-education of stromal cells, rather than targeted ablation per se, may be an effective strategy for treating cancer. Here we discuss the paradoxical roles of the TME during specific stages of cancer progression and metastasis, as well as recent therapeutic attempts to re-educate stromal cells within the TME to have anti-tumorigenic effects."
Bidirectional communication between cells and their microenvironment is critical for both normal tissue homeostasis and tumor growth. In particular, interactions between tumor cells and the associated stroma represent a powerful relationship that influences disease initiation and progression and patient prognosis. The link between chronic inflammation and tumorigenesis was first proposed by Rudolf Virchow in 1863 after the observation that infiltrating leukocytes are a hallmark of tumors. Since then, a plethora of studies have contributed to the characterization of the TME, further complicating the already challenging task of understanding and treating cancer. Whereas cancer was previously viewed as a heterogeneous disease involving aberrant mutations in tumor cells, it is now evident that tumors are also diverse by nature of their microenvironmental composition and their stromal cell proportions or activation states. In response to evolving environmental conditions and oncogenic signals from growing tumors, the TME continually changes over the course of cancer progression, underscoring the need to consider the influences of the TME on metastasis as a dynamic process and understand how tumor cells drive the construction of their own niche.
Using a 3D printer, people can already determine the length, width and depth of an object that they create. Thanks to research being conducted at the University of Colorado, Boulder, however, a fourth dimension can now be included – time. And no, we're not talking about how long it takes to 3D-print an item. Instead, it's now possible to print objects that change their shape at a given time.
The scientists, led by Prof. H. Jerry Qi, have developed a "4D printing" process in which shape-memory polymer fibers are deposited in key areas of a composite material item as it's being printed. By carefully controlling factors such as the location and orientation of the fibers, those areas of the item will fold, stretch, curl or twist in a predictable fashion when exposed to a stimulus such as water, heat or mechanical pressure.
The concept was proposed earlier this year by MIT's Skylar Tibbits, who used his own 4D printing process to create a variety of small self-assembling objects. "We advanced this concept by creating composite materials that can morph into several different, complicated shapes based on a different physical mechanism,” said Martin L. Dunn of the Singapore University of Technology and Design, who collaborated with Qi on the latest research.
This means that one 4D-printed object could change shape in different ways, depending on the type of stimulus to which it was exposed. That functionality could make it possible (for example) to print a photovoltaic panel in a flat shape, expose it to water to cause it to fold up for shipping, and then expose it to heat to make it fold out to yet another shape that's optimal for catching sunlight.
Scientists at Northwestern University say they were able to demonstrate the successful delivery of a drug that turns off a critical gene in glioblastoma multiforme (GBM), increasing survival rates significantly in animals with the deadly disease. This form of brain cancer, which ended Sen. Edward Kennedy’s life, kills approximately 13,000 Americans a year.
According to the investigators, the novel therapeutic, which is based on nanotechnology, is small and nimble enough to cross the blood-brain barrier and get to where it is needed—the brain tumor.
Designed to target a specific cancer-causing gene in cells, the drug flips the switch of the oncogene to “off,” silencing the gene, they added. This knocks out the proteins that keep cancer cells immortal.
In a study of mice (“Spherical Nucleic Acid Nanoparticle Conjugates as an RNAi-Based Therapy for Glioblastoma”), the nontoxic drug was delivered by intravenous injection. In animals with GBM, the survival rate increased nearly 20%, and tumor size was reduced three- to fourfold compared with the control group. The results were published October 30 in Science Translational Medicine.
“We preclinically evaluate an RNA interference (RNAi)–based nanomedicine platform, based on spherical nucleic acid (SNA) nanoparticle conjugates, to neutralize oncogene expression in GBM,” wrote the scientists. “In vivo, the SNAs penetrated the blood-brain barrier and blood-tumor barrier to disseminate throughout xenogeneic glioma explants. SNAs targeting the oncoprotein Bcl2Like12 (Bcl2L12)—an effector caspase and p53 inhibitor overexpressed in GBM relative to normal brain and low-grade astrocytomas—were effective in knocking down endogenous Bcl2L12 mRNA and protein levels, and sensitized glioma cells toward therapy-induced apoptosis by enhancing effector caspase and p53 activity.”
“This is a beautiful marriage of a new technology with the genes of a terrible disease,” said Chad A. Mirkin, Ph.D., a nanomedicine expert and a senior co-author of the study.
“This proof-of-concept further establishes a broad platform for treating a wide range of diseases, from lung and colon cancers to rheumatoid arthritis and psoriasis.”
The power of gene regulation technology is that a disease with a genetic basis can be attacked and treated if scientists have the right tools, pointed out Dr. Mirkin. Thanks to the Human Genome Project and genomics research over the last two decades, there is an enormous number of genetic targets; having the right therapeutic agents and delivery materials has been the challenge, he explained.
“The RNA interfering-based SNAs are a completely novel approach in thinking about cancer therapy,” said Alexander H. Stegh, Ph.D., a co-author on the study. “One of the problems is that we have large lists of genes that are somehow dysregulated in glioblastoma, but we have absolutely no way of targeting all of them using standard pharmacological approaches.
That's where we think nanomaterials can play a fundamental role in allowing us to implement the concept of personalized medicine in cancer therapy.”
Delays in the installation of key parts of ITER, a multibillion-euro international nuclear-fusion experiment, are forcing scientists to change ITER’s research programme to focus exclusively on the key goal of generating power by 2028. As a result, much research considered non-essential to that target, including some basic physics and studies of plasmas aimed at better understanding industrial-scale fusion, will be postponed. A 21-strong expert panel of international plasma scientists and ITER staff was convened to reassess the project’s research plan in the light of the construction delays, and its plans were discussed at a meeting of ITER’s Science and Technology Advisory Committee (STAC). The meeting is the start of a year-long review by ITER to try to keep the experiment on track to generate 500 MW of power from an input of 50 MW by 2028, and so hit its target of attaining the so-called Q ≥ 10, where power output is ten times input or more.
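Spelled out, the fusion gain Q is simply the ratio of output power to heating power, so the figures quoted above amount to:

```latex
Q = \frac{P_{\text{output}}}{P_{\text{input}}} = \frac{500~\text{MW}}{50~\text{MW}} = 10
```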
ITER, which will be the world’s largest tokamak thermonuclear reactor (see ‘A fusion of ideas’), is being built in St-Paul-lez-Durance in southern France by the European Union, China, India, Japan, South Korea, Russia and the United States at a cost of €15 billion (US$20.3 billion). Q ≥ 10 is seen as its raison d’être, and achieving it would be likely to revitalize public and political interest in fusion. Crucial to that is getting to the point, scheduled for 2027, when the first nuclear fuel would be injected into the reactor. The fuel will be a plasma of two heavy hydrogen isotopes, deuterium and tritium (DT).
The original 2010 research plan foresaw the entire reactor being built by 2020, when ITER was also scheduled to produce its first plasma, using hydrogen as a test fuel. But cost-cutting and cash-flow problems in member states mean that while the reactor is likely to be operating by then, the delivery of some parts is being deferred until several years later. These include some diagnostics devices for analysing the physics of plasmas at the very large scales of ITER, and elements of the heating system that will eventually take the plasmas to 150,000,000 °C.
Life scientists from UCLA's College of Letters and Science have discovered fundamental rules of leaf design that underlie plants' ability to produce leaves that vary enormously in size. In their mathematical design, leaves are the "perfect machines," said Lawren Sack, a professor of ecology and evolutionary biology and senior author of the research.
The UCLA team discovered the mathematical relationships using "allometric analysis," which looks at how the proportions of parts of an organism change with differences in total size. This approach has been used by scientists since Galileo but had never before been applied to the interior of leaves. Reporting in the October issue of the American Journal of Botany, the biologists focused on how leaf anatomy varies across leaves of different sizes. They examined plant species from around the world, all grown on the UCLA campus.
While it is easy to observe major differences in leaf surface area among species, they said, differences in leaf thickness are less obvious but equally important. "Once you start rubbing leaves between your fingers, you can feel that some leaves are floppy and thin, while others are rigid and thick," said Grace John, a UCLA doctoral student in ecology and evolutionary biology and lead author of the research. "We started with the simplest questions — but ones that had never been answered clearly — such as whether leaves that are thicker or larger in area are constructed of different sizes or types of cells."
The researchers embedded pieces of leaf in plastic and cut cross-sections thinner than a single cell to observe each leaf's microscopic layout. This allowed them to test the underlying relationship between cell and tissue dimensions and leaf size across species. Leaves are made up of three basic tissues, each containing cells with particular functions: the outer layer, or epidermis; the mesophyll, which contains cells that conduct photosynthesis; and the vascular tissue, whose cells are involved in water and sugar transport.
The team found that the thicker the leaf, the larger the size of the cells in all of its tissues — except in the vascular tissue. These relationships also applied to the components of the individual cells. Plant cells, unlike animal cells, are surrounded by carbohydrate-based cell walls, and the scientists discovered that the larger cells of thicker leaves are surrounded by thicker cell walls, in a strict proportionality.
The team was surprised by the "extraordinary" strength of the relationships linking cell size, cell-wall thickness and leaf thickness across diverse and distantly related plant species. These relationships can be described by new, simple mathematical equations, effectively allowing scientists to predict the dimensions of cells and cell walls from the thickness of a leaf. In most cases, the relationships the team found were what is known as "isometric": the quantities stay in strict proportion as size changes.
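As a sketch of what an allometric analysis looks like in practice, one can regress log cell size against log leaf thickness across species; a fitted slope near 1 signals isometry. The data below are invented for illustration, not the study's measurements.

```python
import numpy as np

# Allometric analysis in miniature: fit log(cell size) against
# log(leaf thickness). A slope of ~1 means the two scale isometrically
# (in strict proportion). All numbers are made up for illustration.
rng = np.random.default_rng(1)
leaf_thickness_um = np.array([120.0, 180.0, 250.0, 340.0, 480.0, 650.0, 900.0])
cell_size_um = 0.05 * leaf_thickness_um                      # isometric core
cell_size_um *= rng.lognormal(0.0, 0.05, cell_size_um.size)  # species scatter

slope, intercept = np.polyfit(np.log(leaf_thickness_um),
                              np.log(cell_size_um), 1)
print(f"fitted allometric slope: {slope:.2f} (1.0 = isometric)")
```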
A new study finds that the Pacific Ocean is taking in heat at the most rapid rate in many thousands of years, providing yet another indicator of global warming.
From about 7,000 years ago until the start of the Medieval Warm Period in northern Europe, at about AD 1100, the water cooled gradually, by almost 1 degree C, or almost 2 degrees F. The rate of cooling then picked up during the so-called Little Ice Age that followed, dropping another 1 degree C, or 2 degrees F, until about 1600. The authors attribute the cooling from 7,000 years ago until the Medieval Warm Period to changes in Earth's orientation toward the sun, which affected how much sunlight fell on both poles. In about 1600, temperatures started gradually going back up. Then, over the last 60 years, water-column temperatures, averaged from the surface to 2,200 feet, increased 0.18 degrees C, or 0.32 degrees F.
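Converting those figures to a common per-century rate makes the contrast plain; the period lengths below are rounded from the dates given above.

```python
# Rough per-century rates implied by the temperature changes quoted above.
# Period lengths are rounded from the article's dates and are approximate.
eras = {
    "7,000 yr ago to ~AD 1100": (-1.0, 6100),   # gradual cooling
    "~AD 1100 to ~AD 1600":     (-1.0, 500),    # Little Ice Age cooling
    "last 60 years":            (+0.18, 60),    # modern warming
}
for label, (delta_c, years) in eras.items():
    rate = 100.0 * delta_c / years
    print(f"{label:>26}: {rate:+.3f} deg C per century")
# The modern warming rate (~+0.30 C/century) is more than ten times faster
# in magnitude than the long pre-medieval cooling trend.
```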
"As the industrial age is drawing to a close, I think that we're witnessing the dawn of the era of biological design. DNA, as digitized information, is accumulating in computer databases. Thanks to genetic engineering, and now the field of synthetic biology, we can manipulate DNA to an unprecedented extent, just as we can edit software in a computer. We can also transmit it as an electromagnetic wave at or near the speed of light and, via a "biological teleporter", use it to recreate proteins, viruses and living cells at another location, changing forever how we view life."
"At this point in time we are limited to making protein molecules, viruses, phages and single microbial cells, but the field will move to more complex living systems. I am confident that we will be able to convert digitised information into living cells that will become complex multicellular organisms or functioning tissues."
"We could send sequence information to a digital-biological converter on Mars in as little as 4.3 minutes, that's at the closest approach of the red planet, to provide colonists with personalised drugs. Or, if Nasa's Mars Curiosity rover were equipped with a DNA-sequencing device, it could transmit the digital code of a Martian microbe back to Earth, where we could recreate the organism in the laboratory. We can rebuild the Martians in a P4 spacesuit lab -- that is, a maximum-containment lab -- instead of risking them crash-landing on the surface. I am assuming that Martian life is, like life on Earth, based on DNA. I think that because we know that Earth and Mars have continually exchanged material, in the order of 100kg a year, making it likely that Earth microbes have travelled to and populated Martian oceans long ago and that Martian microbes have survived to thrive on Earth. Simple calculations indicate that there is as much biology and biomass in the subsurface of our Earth as in the entire visible world on the planet's surface. The same could be true for Mars."
"If the life-digitalizing technology works, then we will have a new means of exploring the universe and the Earth-sized exoplanets and super Earths. To get a sequencer to them soon is out of the question with present-day rocket technology -- the planets orbiting the red dwarf Gliese 581 are "only" about 22 light-years away -- but it would take only 22 years to get the beamed data back. And that if advanced DNA-based life does exist in that system, perhaps it has already been broadcasting sequence information."
"Creating life at the speed of light is part of a new industrial revolution. Manufacturing will shift from centralised factories to a distributed, domestic manufacturing future, thanks to the rise of 3D printer technology. Since my own genome was sequenced, my software has been broadcast into space in the form of electromagnetic waves, carrying my genetic information far beyond Earth. Whether there is any creature out there capable of making sense of the instructions in my genome, well, that's another question."
Beads of haematite can pick up and carry other particles more than 10 times their size with the flick of a switch
Ant-like beads of haematite could be the giants of nanoscale construction. Tiny particles of the iron mineral have been made to pick up and carry cargo more than 10 times their size. The feat could be used in targeted drug delivery or building artificial muscles.
Iron-based nanoparticles are ideal cargo-carriers because they can be steered using magnetic fields or by following a thinly etched track. Previous versions relied on chemical glues to pick up stuff, but getting them to drop it has proved difficult.
To tackle that problem, Jérémie Palacci at New York University and his colleagues started by suspending haematite nano-beads and a variety of cargo particles in a hydrogen peroxide solution. Shining a light gave the haematite an electrical charge, which broke chemical bonds in the neighbouring solution.
The resulting halo of water and oxygen was not in chemical balance with its surroundings, a disturbance which drew larger particles to the beads. A bead and its cargo could then be steered together. To make the bead release its load, the team simply turned off the light.
"The drop-off has been problematic in other papers. We had to come up with really jerry-rigged situations in order to do it," says Ayusman Sen at Pennsylvania State University in University Park, who was not involved in the new work. "They have a better way of picking up and dropping particles than anyone else." The same iron bead can even be used repeatedly to round up a whole flock of larger particles.
Palacci's team envision using the nano-beads in future micro-manufacturing plants, for instance, to create artificial muscles by laying down the required particles and building fibres along tiny tracks. "That would be really cool," he says. "If you can make that, you can start thinking about everything muscles are used for in biology and try to see if you can mimic it."
Jonathan Rothberg is on the hunt for the genes that code for mathematical prowess. He founded two genetic-sequencing companies and sold them for hundreds of millions of dollars, and he helped to sequence the genomes of a Neanderthal man and of James Watson, who co-discovered DNA's double helix. Now the entrepreneur has set his sights on another milestone: finding the genes that underlie mathematical genius.
Rothberg and physicist Max Tegmark, who is based at the Massachusetts Institute of Technology in Cambridge, have enrolled about 400 mathematicians and theoretical physicists from top-ranked US universities in a study dubbed ‘Project Einstein’. They plan to sequence the participants’ genomes using the Ion Torrent machine that Rothberg developed.
The team will be wading into a field fraught with controversy. Critics have assailed similar projects, such as one at the BGI (formerly the Beijing Genomics Institute) in Shenzhen, China, that is sequencing the genomes of 1,600 people identified as mathematically precocious children in the 1970s. The critics say that the sizes of these studies are too small to yield meaningful results for such complex traits. And some are concerned about ethical issues. If the projects find genetic markers for maths ability, these could be used as a basis for the selective abortion of fetuses or in choosing between embryos created through in vitro fertilization, says Curtis McMullen. A mathematician at Harvard University in Cambridge, Massachusetts, and a 1998 winner of the prestigious Fields Medal, McMullen was asked to participate in Project Einstein and declined.
Rothberg is pushing ahead. “I’m not at all concerned about the critics,” he says, adding that he does not think such rare genetic traits could be useful in selecting for smarter babies. Influenced by a college class he took from a pioneer in artificial intelligence, and by the diagnosis of his daughter with tuberous sclerosis complex, a disease that can cause mental retardation and autism, Rothberg has long been interested in cognition. He is also in awe of the abilities of famous scientists. “Einstein said ‘the most incomprehensible thing about the Universe is that it is comprehensible’,” he says. “I’d love to find the genes that make the Universe comprehensible.”