Learning to read Chinese might seem daunting to Westerners used to an alphabetic script, but brain scans of French and Chinese native speakers show that people harness the same brain centers for reading across cultures.
Intel has been working on a 3D scanner small enough to fit in the bezel of even the thinnest tablets. The company aims to have the technology in tablets from 2015, with CEO Brian Krzanich telling the crowd at MakerCon in New York on Thursday that he hopes to put the technology in phones as well.
"Our goal is to just have a tablet that you can go out and buy that has this capability," Krzanich said. "Eventually within two or three years I want to be able to put it on a phone."
Krzanich and a few of his colleagues demonstrated the technology, which goes by the name "RealSense," on stage using a human model and an assistant who simply circled the model a few times while pointing a tablet at the subject. A full 3D rendering of the model slowly appeared on the screen behind the stage in just a few minutes. The resulting 3D models can be manipulated with software or sent to a 3D printer.
"The idea is you go out, you see something you like and you just capture it," Krzanich explained. He said consumer tablets with built in 3D scanners will hit the market in the third or fourth quarter of 2015, with Intel also working on putting the 3D scanning cameras on drones.
The predecessor to the 3D-scanning tablets demonstrated on stage was announced earlier this month: the Dell Venue 8 7000 series Android tablet sports Intel's RealSense snapshot depth camera, which brings light-field-camera-like capabilities to a tablet. It will be available later this year.
A growing body of evidence suggests that environmental stresses can cause changes in gene expression that are transmitted from parents to their offspring, making "epigenetics" a hot topic. Epigenetic modifications do not affect the DNA sequence of genes, but change how the DNA is packaged and how genes are expressed. Now, a study by scientists at UC Santa Cruz shows how epigenetic memory can be passed across generations and from cell to cell during development.
"There has been ongoing debate about whether the methylation mark can be passed on through cell divisions and across generations, and we've now shown that it is," said corresponding author Susan Strome, a professor of molecular, cell and developmental biology at UC Santa Cruz.
Strome's lab created worms with a mutation that knocks out the enzyme responsible for making the methylation mark, then bred them with normal worms. Using fluorescent labels, they were able to track the fates of marked and unmarked chromosomes under the microscope, from egg cells and sperm to the dividing cells of embryos after fertilization. Embryos from mutant egg cells fertilized by normal sperm had six methylated chromosomes (from the sperm) and six unmarked or "naked" chromosomes (from the egg).
As embryos develop, the cells replicate their chromosomes and divide. The researchers found that when a marked chromosome replicates, the two daughter chromosomes are both marked. But without the enzyme needed for histone methylation, the marks become progressively diluted with each cell division.
"The mark stays on the chromosomes derived from the initial chromosome that had the mark, but there's not enough mark for both daughter chromosomes to be fully loaded," Strome said. "So the mark is bright in a one-cell embryo, less bright after the cell divides, dimmer still in a four-cell embryo, and by about 24 to 48 cells we can't see it anymore."
The researchers then did the converse experiment, fertilizing normal egg cells with mutant sperm. The methylation enzyme (called PRC2) is normally present in egg cells but not in sperm, which don't contribute much more than their chromosomes to the embryo. So the embryos in the new experiment still had six naked chromosomes (this time from the sperm) and six marked chromosomes, but now they also had the enzyme.
"Remarkably, when we watch the chromosomes through cell divisions, the marked chromosomes remain marked and stay bright, because the enzyme keeps restoring the mark, but the naked chromosomes stay naked, division after division," Strome said. "That shows that the pattern of marks that was inherited is being transmitted through multiple cell divisions."
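The two experiments can be captured in a toy model: the mark dilutes by half at each replication, and is restored only when the PRC2 enzyme is around. This is an illustrative sketch, not the study's actual quantification; the halving assumption and visibility threshold are invented for the example.

```python
# Toy model of histone-mark inheritance across cell divisions.
# Assumptions (illustrative only): a marked chromosome starts with a
# fixed amount of "mark"; on replication the mark is split between the
# two daughter chromosomes; if the PRC2 enzyme is present, the mark is
# restored to full strength after each division.

def mark_intensity(divisions: int, enzyme_present: bool, initial: float = 1.0) -> float:
    """Mark intensity on chromosomes descended from a marked chromosome."""
    intensity = initial
    for _ in range(divisions):
        intensity /= 2.0          # mark diluted between daughters
        if enzyme_present:
            intensity = initial   # PRC2 restores the mark each cycle
    return intensity

# Without the enzyme the mark fades with each division, consistent with
# it becoming invisible by roughly the 24-48 cell stage; with the enzyme
# it stays at full brightness, division after division.
for d in range(6):
    print(d, mark_intensity(d, enzyme_present=False),
             mark_intensity(d, enzyme_present=True))
```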
A first-ever standard “operating system” for drones, developed by a startup with MIT roots, could soon help manufacturers easily design and customize unmanned aerial vehicles (UAVs) for multiple applications.
Today, hundreds of companies worldwide are making drones for infrastructure inspection, crop- and livestock-monitoring, and search-and-rescue missions, among other things. But these are built for a single mission, so modifying them for other uses means going back to the drawing board, which can be very expensive.
Now Airware, founded by MIT alumnus Jonathan Downey ’06, has developed a platform — hardware, software, and cloud services — that lets manufacturers pick and choose various components and application-specific software to add to commercial drones for multiple purposes.
The key component is the startup’s Linux-based autopilot device, a small red box that is installed into all of a client’s drones. “This is responsible for flying the vehicle in a safe, reliable manner, and acts as a hub for the components, so it can collect all that data and display that info to a user,” says Downey, Airware’s CEO, who researched and built drones throughout his time at MIT.
To customize the drones, customers use software to select third-party drone vehicles and components — such as sensors, cameras, actuators, and communication devices — configure settings, and apply their configuration to a fleet. Other software helps them plan and monitor missions in real time (and make midflight adjustments), and collects and displays data. Airware then pushes all data to the cloud, where it’s aggregated and analyzed, and available to designated users.
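The pick-and-choose workflow the article describes can be imagined as a fleet configuration object. The sketch below is entirely hypothetical: the class names, fields, and validation rule are invented for illustration and are not Airware's actual API.

```python
# Hypothetical sketch of a drone fleet configuration; all names are
# invented for illustration, not taken from Airware's platform.
from dataclasses import dataclass, field

@dataclass
class Component:
    kind: str          # e.g. "camera", "sensor", "radio"
    vendor: str
    settings: dict = field(default_factory=dict)

@dataclass
class DroneConfig:
    airframe: str
    components: list = field(default_factory=list)

    def validate(self) -> bool:
        # A real platform would check power, weight, and bus limits;
        # here we only require at least one communication device.
        return any(c.kind == "radio" for c in self.components)

crop_survey = DroneConfig(
    airframe="quadrotor-x",
    components=[
        Component("camera", "AcmeOptics", {"band": "near-infrared"}),
        Component("radio", "AcmeLink", {"freq_mhz": 915}),
    ],
)
print(crop_survey.validate())
```

A configuration like this could then be applied to every drone in a fleet, which is the "configure once, push to many" idea the platform is built around.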
If a company decides to use a surveillance drone for crop management, for instance, it can easily add software that stitches together different images to determine which areas of a field are overwatered or underwatered. “They don’t have to know the flight algorithms, or underlying hardware, they just need to connect their software or piece of hardware to the platform,” Downey says. “The entire industry can leverage that.”
Clients have trialed Airware’s platform over the past year — including researchers at MIT, who are demonstrating delivery of vaccines in Africa. Delta Drone in France is using the platform for open-air mining operations, search-and-rescue missions, and agricultural applications. Another UAV maker, Cyber Technology in Australia, is using the platform for drones responding to car crashes and other disasters, and inspecting offshore oil rigs.
Now, with its most recent $25 million funding round, Airware plans to launch the platform for general adoption later this year, viewing companies that monitor crops and infrastructure — with drones that require specific cameras and sensors — as potential early customers.
There is something primal in a mother's response to a crying infant. So primal, in fact, that mother deer will rush protectively to the distress calls of other infant mammals, such as fur seals, marmots and even humans. This suggests such calls might share common elements – and perhaps that these animals experience similar emotions.
Researchers – and, indeed, all pet owners – know that humans respond emotionally to the distress cries of their domestic animals, and there is some evidence that dogs also respond to human cries. However, most people have assumed this is a by-product of domestication.
However, Susan Lingle, a biologist at the University of Winnipeg, Canada, noticed that the infants of many mammal species have similar distress calls: simple sounds with few changes in pitch. She decided to test whether cross-species responses occur more widely across the evolutionary tree.
So, Lingle and her colleague Tobias Riede, now at Midwestern University in Glendale, Arizona, recorded the calls made by infants from a variety of mammal species when separated from their mother or otherwise threatened. They then played the recordings through hidden speakers to wild mule deer (Odocoileus hemionus) out on the Canadian prairies. They found that deer mothers quickly moved towards the recordings of infant deer, but also towards those of infant fur seals, dogs, cats and humans, all of which call at roughly the same pitch. Even the ultrasonic calls of infant bats attracted the deer mothers if Lingle used software to lower their pitch to match that of deer calls. In contrast, they found the deer did not respond to non-infant calls such as birdsong or the bark of a coyote (American Naturalist, DOI: 10.1086/677677).
The DNA of every organism on Earth is a right-handed double helix, but why that would be has puzzled scientists since not long after Francis Crick and James Watson announced the discovery of DNA's double-helical structure in 1953. It's a puzzle because no one has been able to think of a fundamental reason why DNA couldn't also be left-handed.
New research by University of Nebraska-Lincoln physicists and published in the Sept. 12 online edition of Physical Review Letters now gives support to a long-posited but never-proven hypothesis that electrons in cosmic rays -- which are mostly left-handed -- preferentially destroyed left-handed precursors of DNA on the primordial Earth.
The hypothesis, called the Vester-Ulbricht model, was proposed by Frederic Vester of the University of Saarbrucken in Germany and Tilo L.V. Ulbricht of the University of Cambridge in England in 1961 in response to the 1957 discovery that most of the electrons spewing from radioactive beta decay were left-handed.
Joan M. Dreiling and Timothy J. Gay of UNL focused circularly polarized laser light on a specially prepared crystal of gallium-arsenide to produce electrons whose spins were either parallel or anti-parallel to their direction of motion upon emission from the crystal -- essentially artificial beta rays. They then directed these electrons to strike target molecules of a substance called bromocamphor, which comes in both right- and left-handed varieties.
They found that at the lowest electron energies they studied, left-handed electrons preferentially destroyed left-handed molecules and vice versa. This sensitivity to molecular handedness has a mechanical analog: the inability of a left-handed bolt to screw into a right-handed nut. The molecular experiment proves the principle underlying the Vester-Ulbricht hypothesis.
"The circular polarization of the laser light effectively transferred to the spin (handedness) of the electrons emitted by the gallium-arsenide crystal," said Dreiling, a postdoctoral research assistant who received her doctorate from UNL in May. "We are able to reverse the spin-polarization of the electrons just by reversing the circular polarization of the light."
The effect they saw was quite small, they said -- like "looking for an electronic needle in a haystack," Gay said -- but they said they're highly confident in their result. "We have done several different checks with our experiment and I am totally confident that the asymmetry exists," Dreiling said. "The checks all came out showing that this asymmetry is real."
While some people have successfully 3D printed buildings, others have taken the same approach to the car manufacturing business, as a company has just come out with a car called the Strati that’s the first 3D-printed car in the world. Scientific American reveals that it took Local Motors only 45 hours to build the Strati, a two-seater “neighborhood” electric car that has a range of up to 120 miles and a maximum speed of 40 mph.
Interestingly, the company plans to start selling Stratis for anywhere between $18,000 and $30,000 later this year, as it further refines its 3D-printing procedure.
“We expect in the next couple of months [printing a complete car] to be below 24 hours and then eventually get it below 10 hours, [down from 45 hours currently]” Local Motors CEO John Rogers said. “This is in a matter of months. Today, the best Detroit or Germany can do is 10 hours on a [production] line, after hundreds of years of progress.”
The car’s design was chosen from over 200 proposals submitted by Local Motors’ online community and Rogers says that the main advantage of 3D printed cars is that local communities may adopt such procedures to build cars best fitted to the resources available to them.
“In the future, you’ll still have … your Detroits that make one product the same over a million units,” the exec said. “And then I think you’ll have examples of microfactories that do things profitably at lower volumes—10,000 units, 15,000 units per year—and show the mass factories what they ought to build next.”
Local Motors chose an electric engine for the Strati because an electric powertrain was simpler to construct. Another advantage the Strati has is that it’s made from thermoplastic, a fully recyclable material, using a “Big Area Additive Manufacturing” (BAAM) machine — meaning the car can be easily “chopped up and reprocessed back into another car.”
Even so, while using 3D printing technology to build a car might lead to less wasted material, a lot of energy might actually be required to print such vehicles.
Although cardiac pacemakers have saved countless lives, they do have at least one shortcoming – like other electronic devices, their batteries wear out. When this happens, of course, surgery is required in order to replace the pacemaker. While some researchers are looking into ideas such as drawing power from blood sugar, Swiss scientists from the University of Bern have taken another approach. They’ve developed a wristwatch-inspired device that can power a pacemaker via the beating of the patient’s own heart.
Bern cardiologist Prof. Rolf Vogel first came up with the idea four years ago, and it has been in development ever since. The resulting prototype device wasn’t just inspired by an auto-winding wristwatch, but actually incorporates the mechanism of a commercially-available model. Such watches rely on the user’s arm movements to wind a mechanical spring. Once that spring is fully wound, it then unwinds to power a micro-generator inside the watch.
In the case of the Bern device, it’s sutured onto the heart’s myocardial muscle instead of being worn on the wrist, and its spring is wound by heart contractions instead of arm movements. When that spring unwinds, the resulting energy is buffered in a capacitor. That capacitor then powers a pacemaker, to which it is electrically wired.
According to the research team, the system has demonstrated a mean output power of 52 microwatts when implanted in a live 60-kg (132-lb) pig – that’s more than enough for most modern pacemakers, which consume about 10 microwatts.
They now hope to further miniaturize the technology, make it more sensitive to the motion of the heart, and build both its energy-harvesting and capacitor functions into a pacemaker. This all-in-one setup would do away with the need for electrical leads, which can fail in conventional pacemakers.
The research was presented this Sunday at the ESC (European Society of Cardiology) Congress, by PhD candidate and team member Adrian Zurbuchen. A similar device is being developed at the University of Michigan.
Euler is easily the most prolific mathematician of all time. The range and volume of his output is simply staggering. He published over 850 papers, almost all of substantial length, and more than 25 books and treatises. In 1907 the Swiss Academy of Sciences established the Euler Commission with the charge of publishing the complete body of work consisting of all of his papers, manuscripts, and correspondence. This project, known as Opera Omnia, began in 1911 and is still ongoing. His scientific publications, not counting his correspondence, run to over 70 volumes, each between approximately 300 and 600 pages. Thousands of pages of handwritten manuscripts are still not in print. Euler was in constant communication with all the great scientists of his day, and his correspondence covers several thousand pages.
Euler's powers of memory and concentration were legendary. He could recite the entire Aeneid word-for-word. He was not troubled by interruptions or distractions; in fact, he did much of his work with his young children playing at his feet. He was able to do prodigious calculations in his head, a necessity after he went blind. The contemporary French mathematician Condorcet tells the story of two of Euler's students who had independently summed seventeen terms of a complicated infinite series, only to disagree in the fiftieth decimal place; Euler settled the dispute by recomputing the sum in his head.
Further reading: http://www.ams.org/bookstore/pspdf/euler-prev.pdf
Researchers at Rice University and the University of Kansas Medical Center are making genetic circuits that can perform complex tasks by swapping protein building blocks — a step toward programming synthetic cells for tasks such as production of biofuels, environmental remediation, and treatments for human diseases.
The modular genetic circuits, which are engineered from parts of otherwise unrelated bacterial genomes, can be set up to handle multiple chemical inputs simultaneously with a minimum of interference from their neighbors.
The work, reported in the American Chemical Society journal ACS Synthetic Biology, gives scientists more options as they design synthetic cells for specific tasks, such as production of biofuels, environmental remediation, or treatments for human diseases.
The researchers are creating complex genetic logic circuits similar to those used to build traditional computers and electrical devices. In a simple circuit, if one input and another input are both present (AND gate), the circuit carries out its instruction. With genetic circuitry based on this type of Boolean logic, a genetic logic circuit might prompt the creation of a specific protein when it senses two chemicals — or prompt a cell’s DNA to repress the creation of that protein.
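The Boolean logic behind such circuits is the same as in electronics. A minimal sketch, with the two gate behaviors the paragraph describes (expression triggered by both inputs, or expression repressed by both inputs); the function names are illustrative, not from the paper:

```python
# Boolean behavior of two simple genetic logic gates (illustrative).

def and_gate(inducer_a: bool, inducer_b: bool) -> bool:
    """Express the output protein only if BOTH chemicals are sensed."""
    return inducer_a and inducer_b

def repressor_gate(inducer_a: bool, inducer_b: bool) -> bool:
    """NAND-style circuit: both inputs together repress expression."""
    return not (inducer_a and inducer_b)

# Truth table for the AND gate: protein made only in the last row.
for a in (False, True):
    for b in (False, True):
        print(a, b, and_gate(a, b))
```

Chaining gates like these — the output protein of one acting as the input signal of the next — is what lets simple parts compose into the more complex circuits the researchers are after.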
Simple circuits have become easier to create as synthetic biologists develop more tools, but they require more sophisticated tools for complex problems. Rice’s Matthew Bennett and his colleagues are intent upon following a path similar to that of computer programmers, whose capabilities grew from simple Pong to the immersive worlds of modern games.
The first definitive defeat for a classical computer by a quantum computer could one day be achieved with a quantum device that runs an algorithm known as “boson sampling,” recently developed by researchers at MIT.
Boson sampling uses single photons of light and optical circuits to take samples from an exponentially large probability distribution, which has been proven to be extremely difficult for classical computers.
The snag: how to generate the dozens of single photons needed to run the algorithm.
Now researchers at the Centre for Quantum Photonics (CQP) at the University of Bristol with collaborators from the University of Queensland (UQ) and Imperial College London say they have discovered how.
“We realized we could chain together many standard two-photon sources in such a way as to give a dramatic boost to the number of photons generated,” said CQP research leader Anthony Laing, a research fellow at the Centre for Quantum Photonics in the University of Bristol’s School of Physics.
Details of the research are in a paper published in Physical Review Letters.
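The reason boson sampling is hard to simulate classically is that each output probability is proportional to the squared modulus of the permanent of a submatrix of the interferometer's unitary, and computing permanents is #P-hard. A deliberately naive sketch of the permanent (this brute-force version is just for intuition; it is exponentially slow by construction):

```python
# Naive O(n * n!) matrix permanent. It looks like the determinant but
# has no alternating sign, which destroys the cancellations that make
# determinants cheap and leaves permanents hard to compute.
from itertools import permutations
from math import prod

def permanent(matrix):
    n = len(matrix)
    return sum(
        prod(matrix[i][sigma[i]] for i in range(n))
        for sigma in permutations(range(n))
    )

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```

No known classical algorithm computes this quantity efficiently, whereas photons traversing the optical circuit "sample" from the associated distribution physically — which is why a modest number of single photons would suffice to outpace a classical machine.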
Archaeologists set out Monday to use a revolutionary new deep sea diving suit to explore the ancient shipwreck where one of the most remarkable scientific objects of antiquity was found. The so-called Antikythera Mechanism, a 2nd-century BC device known as the world's oldest computer, was discovered by sponge divers in 1900 off a remote Greek island in the Aegean.
The highly complex mechanism of up to 40 bronze cogs and gears was used by the ancient Greeks to track the cycles of the solar system. It took another 1,500 years for an astronomical clock of similar sophistication to be made in Europe.
A growing “dead zone” in the middle of the Arabian Sea has allowed plankton uniquely suited to low-oxygen water to take over the base of the food chain. Their rise to dominance over the last decade could be disastrous for the predator fish that sustain 120 million people living on the sea’s edge.
“These blooms are massive, appear year after year, and could be devastating to the Arabian Sea ecosystem over the long-term,” said the study’s lead author, Helga do Rosario Gomes, a biogeochemist at Lamont-Doherty.
Until recently, photosynthetic diatoms supported the Arabian Sea food chain. Zooplankton grazed on the diatoms, a type of algae, and were in turn eaten by fish. In the early 2000s, it all changed. The researchers began to see vast blooms of Noctiluca and a steep drop in diatoms and dissolved oxygen in the water column. Within a decade, Noctiluca had virtually replaced diatoms at the base of the food chain, marking the start of a colossal ecosystem shift.
Wouldn't it be great if you could just call up a supercomputer and ask it to do your data-wrangling for you? Actually, scratch that, no-one uses the phone anymore. What'd be really cool is if machines could respond to your queries straight from Twitter. It's a belief that's shared by Wolfram Research, which has just launched the Tweet a Program system to its computational knowledge engine, Wolfram Alpha. In a blog post, founder Stephen Wolfram explains that even complex queries can be executed within the space of 140 characters, including data visualizations.
In the Wolfram Language, a little code can go a long way, and Wolfram is using that fact to let everyone have some fun with the introduction of Tweet-a-Program. Compose a tweet-length Wolfram Language program and tweet it to @WolframTaP. The Twitter bot will run your program in the Wolfram Cloud and tweet the result back to you. One can do a lot with Wolfram Language programs that fit in a tweet. It’s easy to make interesting patterns or even complicated fractals, and putting in some math makes it easy to get all sorts of elaborate structures and patterns.
The Wolfram Language not only knows how to compute π, as well as a zillion other algorithms; it also has a huge amount of built-in knowledge about the real world. So right in the language, you can talk about movies or countries or chemicals or whatever. And here’s a 78-character program that makes a collage of the flags of Europe, sized according to country population. There are many, many kinds of real-world knowledge built into the Wolfram Language, including some pretty obscure ones. The Wolfram Language does really well with words and text and deals with images too.
As many stars as there are in our galaxy (100 to 400 billion), there are roughly an equal number of galaxies in the observable universe — so for every star in the colossal Milky Way, there's a whole galaxy out there. Altogether, that comes out to the typically quoted range of between 10^22 and 10^24 total stars, which means that for every grain of sand on Earth, there are 10,000 stars out there.
The science world isn't in total agreement about what percentage of those stars are "sun-like" (similar in size, temperature, and luminosity) -- opinions typically range from 5 percent to 20 percent. Going with the most conservative side of that (5 percent), and the lower end for the number of total stars (10^22), gives us 500 quintillion, or 500 billion billion, sun-like stars.
There's also a debate over what percentage of those sun-like stars might be orbited by an Earth-like planet (one with similar temperature conditions that could have liquid water and potentially support life similar to that on Earth). Some say it's as high as 50 percent, but let's go with the more conservative 22 percent that came out of a recent PNAS study. That suggests that there's a potentially-habitable Earth-like planet orbiting at least 1 percent of the total stars in the universe -- a total of 100 billion billion Earth-like planets.
So there are 100 Earth-like planets for every grain of sand in the world. Think about that next time you're on the beach. Moving forward, we have no choice but to get completely speculative. Let's imagine that after billions of years in existence, 1 percent of Earth-like planets develop life (if that's true, every grain of sand would represent one planet with life on it). And imagine that on 1 percent of those planets, the life advances to an intelligent level like it did here on Earth. That would mean there were 10 quadrillion, or 10 million billion intelligent civilizations in the observable universe.
Moving back to just our galaxy, and doing the same math on the lowest estimate for stars in the Milky Way (100 billion), we'd estimate that there are 1 billion Earth-like planets and 100,000 intelligent civilizations in our galaxy.
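The chain of estimates above is just repeated multiplication, and it is easy to reproduce (and to vary the speculative fractions):

```python
# Reproducing the back-of-envelope arithmetic from the passage.
total_stars = 1e22              # conservative lower bound on total stars
sunlike_frac = 0.05             # conservative "sun-like" fraction
earthlike_frac = 0.22           # fraction with an Earth-like planet (PNAS)

sunlike = total_stars * sunlike_frac      # 5e20 = 500 billion billion
earthlike = sunlike * earthlike_frac      # ~1e20 = 100 billion billion

life_frac = 0.01                # speculative: planets that develop life
intelligent_frac = 0.01         # speculative: life that becomes intelligent
civilizations = earthlike * life_frac * intelligent_frac  # ~1e16

print(f"sun-like stars:      {sunlike:.1e}")
print(f"Earth-like planets:  {earthlike:.1e}")
print(f"civilizations:       {civilizations:.1e}")
```

Swapping in the Milky Way's low-end count of 1e11 stars gives the per-galaxy figures quoted above: about a billion Earth-like planets and 100,000 intelligent civilizations.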
So where is everybody?
Welcome to the Fermi Paradox. One proposed answer is "The Great Filter": the theory that somewhere on the path from pre-life to Type III intelligence there is a wall that all, or nearly all, attempts at life hit — some stage in that long evolutionary process that is extremely unlikely or impossible for life to get beyond. If the theory is true, the big question is: where in the timeline does the Great Filter occur? This article gives different possibilities and scenarios.
Einstein is most famous for general relativity, which is really a theory of gravity. But his theory of special relativity has been just as important. Special relativity is all about how to interpret measurements: if you measure the speed of an object from a moving vehicle, how do I reconcile that number with a measurement I make from the side of the road? At low speeds this is a fairly simple task, but at very high speeds things start to get strange. This strangeness arises as a consequence of the speed of light being constant.
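The "reconciling measurements" problem has a concrete form: velocities don't simply add. The standard special-relativity velocity-addition formula (textbook physics, not from the article itself) shows why low speeds feel simple while high speeds get strange:

```python
# Relativistic velocity addition: w = (u + v) / (1 + u*v / c^2).
C = 299_792_458.0  # speed of light in m/s

def add_velocities(u: float, v: float) -> float:
    """Combine a speed measured from a moving frame with the frame's speed."""
    return (u + v) / (1 + u * v / C**2)

# At everyday speeds the correction is utterly negligible:
print(add_velocities(30.0, 25.0))            # ~55.0 m/s, as intuition says
# At high speeds the combined velocity never exceeds c:
print(add_velocities(0.8 * C, 0.8 * C) / C)  # ~0.976 c, not 1.6 c
```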
Tests of the validity of special relativity abound, but they've been limited to a few classes of objects. The ones done in the lab are usually very sensitive experiments performed on relatively slow-moving objects, while natural tests use the motion of the Earth or other astronomical objects.
Now, a German facility has measured time dilation very accurately. But in a twist, these measurements were performed on things moving at just under 40 percent of the speed of light in the laboratory. The researchers tested how clocks slow down when they are in motion. For example, if you are in motion relative to me, and I can see the watch on your hand, I should observe that it runs slightly slow compared to the one I'm wearing. Indeed, if you put an atomic clock in an airplane and fly it around the world, it will end up with a slightly different time than an identical clock that remained at the airport.
This time dilation is a consequence of a feature of physics called Lorentz invariance. Lorentz invariance is a way of saying that no matter where we are in the Universe, or how fast we are traveling, the Universe and its rules are basically the same.
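How much a moving clock slows is set by the Lorentz factor, gamma = 1/sqrt(1 - v^2/c^2) — standard relativity, not a formula quoted in the article. At the roughly 40-percent-of-light-speed regime of the German experiment the effect is large; at airliner speed it is tiny:

```python
# Lorentz factor and the resulting clock slowdown.
from math import sqrt

def lorentz_gamma(beta: float) -> float:
    """beta = v/c; a moving clock ticks slower by a factor of 1/gamma."""
    return 1.0 / sqrt(1.0 - beta**2)

gamma = lorentz_gamma(0.40)
print(f"gamma at 0.4c:        {gamma:.4f}")       # ~1.0911
print(f"moving clock rate:    {1/gamma:.4f}")     # ~0.9165 of the lab clock

# At airliner speed (~250 m/s) the deviation from 1 is ~3.5e-13,
# which is why only atomic clocks can detect it.
beta_plane = 250.0 / 299_792_458.0
print(f"gamma - 1 at 250 m/s: {lorentz_gamma(beta_plane) - 1:.2e}")
```

Detecting deviations from this prediction at the one-part-in-a-billion level is what the experiment described above accomplished.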
In a very elegant experiment, the scientists verified that special relativity and Lorentz invariance hold to one part in a billion. The results were also used to test some extensions to the Standard Model of physics, but they were not precise enough to provide much insight there. There are, however, competing models that may predict much stronger deviations from Lorentz invariance; in those cases, the fact that these experiments saw no deviations will certainly be able to tell us something.
More importantly, though, the whole experiment is Earth-based, so we are not relying on any assumptions about astronomical objects. And even cooler, the experiment is in a regime where the objects actually have a speed that is quite high compared to normal lab experiments, which offers a whole new window on special relativity and Lorentz invariance.
The population of Earth is unlikely to stabilize this century, according to a new analysis published in the 19 September issue of the journal Science. The findings are contrary to past studies, which have predicted that the world population will peak around 2050 and then level off or decline.
The first flexible display device based on graphene has been unveiled by scientists in the UK, who say it is the first step on the road towards next generation gadgets that can be folded, rolled or crumpled up without cracking the screen.
Researchers at UT Arlington have created the first electronic device that can cool electrons to -228 degrees Celsius (-375F), without any kind of external cooling. The chip itself remains at room temperature, while a quantum well within the device cools the electrons down to cryogenic temperatures. Why is this exciting? Because thermal excitation (heat) is by far the biggest problem when it comes to creating both high-performance and ultra-low-power computers. These cryogenic, quantum well-cooled electrons could allow for the creation of electronic devices that consume 10 times less energy than current devices, according to the researchers.
What, you may ask, is a quantum well? In essence, a quantum well is a very narrow gap between two semiconducting materials. Electrons are happily bouncing along the piece of semiconductor when they hit the gap (the well). Only electrons that have very specific characteristics can cross the boundary. In this case, only electrons with very low energy (i.e. cold electrons) are allowed to pass, while hot electrons are sent back from whence they came. The well is created by sandwiching a narrow-bandgap semiconductor between two semiconductors with a wider bandgap – it’s basically the quantum equivalent of the neck between the two bulbs of an hourglass.
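The cooling effect of such a filter can be illustrated with a toy simulation: draw electron energies from a thermal-like distribution, keep only those below a cutoff, and compare mean energies. The numbers and the perfectly sharp cutoff are illustrative assumptions, not the device's actual parameters.

```python
# Toy model of energy filtering by a quantum well: only low-energy
# ("cold") electrons are transmitted, so the transmitted population
# has a much lower mean energy than the incoming one.
import random
random.seed(0)

def sample_energies(n: int, mean_ev: float) -> list:
    """Thermal-like energy distribution (exponential tail)."""
    return [random.expovariate(1.0 / mean_ev) for _ in range(n)]

room_temp_ev = 0.025     # ~kT at room temperature, in eV
cutoff_ev = 0.005        # assumed well transmission cutoff (illustrative)

electrons = sample_energies(100_000, room_temp_ev)
transmitted = [e for e in electrons if e < cutoff_ev]  # the "cold" ones

mean_in = sum(electrons) / len(electrons)
mean_out = sum(transmitted) / len(transmitted)
print(f"mean energy in:  {mean_in:.4f} eV")
print(f"mean energy out: {mean_out:.4f} eV")
```

The trade-off the hourglass analogy hints at is also visible here: most electrons are reflected, so the filter buys a colder population at the cost of throughput.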
With a new therapeutic product, researchers have managed to cure arthritis in mice for the first time. The scientists are now planning to test the efficacy of the drug in humans. Rheumatoid arthritis is a condition that causes painful inflammation of several joints in the body. The joint capsule becomes swollen, and the disease can also destroy cartilage and bone as it progresses. Rheumatoid arthritis affects 0.5% to 1% of the world's population.
Antibody–cytokine fusion proteins (immunocytokines) are innovative biopharmaceutical agents, which are being considered for the therapy of cancer and chronic inflammatory conditions. Immunomodulatory fusion proteins capable of selective localization at the sites of rheumatoid arthritis (RA) are of particular interest, as they may increase the therapeutic index of the cytokine payload. The F8 antibody recognizes the alternatively spliced extra domain A of fibronectin, a marker of angiogenesis, which is strongly overexpressed at sites of arthritis.

In this study, scientists investigated the targeting and therapeutic activity of the immunocytokine F8-IL4 in the mouse model of collagen-induced arthritis. Different combination regimes were tested and evaluated by the analysis of serum and tissue cytokine levels. They were able to show that F8-IL4 selectively localizes to neovascular structures at sites of rheumatoid arthritis in the mouse, leading to high local concentrations of IL4. When used in combination with dexamethasone, F8-IL4 was able to cure mice with established collagen-induced arthritis. Response to treatment was associated with an elevation of IL13 levels and decreased IL6 plasma concentrations. A fully human version of F8-IL4 is currently being developed for clinical investigations, and clinical trials in humans will hopefully start soon.
"As a result of combination with the antibody, IL-4 reaches the site of the disease when the fusion molecule is injected into the body," says pharmacist Teresa Hemmerle, who has just completed her dissertation in the group of Dario Neri, a professor at the Institute of Pharmaceutical Sciences. Together with Fabia Doll, also a PhD pharmacist at ETH, she is the lead author of the study. "It allows us to concentrate the active substance at the site of the disease. The concentration in the rest of the body is minimal, which reduces side-effects," she says.
Two prominent U.S. hospitals are preparing to launch trials with diabetics and chronic disease patients using Apple Inc's (AAPL.O) HealthKit, offering a glimpse of how the iPhone maker's ambitious take on healthcare will work in practice.
HealthKit, which is still under development, is at the center of a new healthcare system from Apple. Regulated medical devices, such as glucose monitors with accompanying iPhone apps, can send information to HealthKit. With a patient's consent, Apple's service gathers data from various health apps so that it can be viewed by doctors in one place.
Stanford University Hospital doctors said they are working with Apple to let physicians track blood sugar levels for children with diabetes. Duke University is developing a pilot to track blood pressure, weight and other measurements for patients with cancer or heart disease.
The goal is to improve the accuracy and speed of reporting data, which is often still done by phone and fax. Potentially, doctors would be able to warn patients of an impending problem. The pilot programs will be rolled out in the coming weeks.
Apple last week mentioned the trials in a news release announcing the latest version of its operating system for phones and tablets, iOS 8, but this is the first time any details have been made public. Apple declined to comment for this article.
Apple aims eventually to work with healthcare providers across the United States, including hospitals that are experimenting with technology to improve preventive care, lower healthcare costs and keep patients healthier.
Reuters previously reported that Apple is in talks with other U.S. hospitals. Stanford Children's Chief Medical Information Officer Christopher Longhurst told Reuters that Stanford and Duke were among the furthest along.
Longhurst said that in the first Stanford trial, young patients with Type 1 diabetes will be sent home with an iPod touch to monitor blood sugar levels between doctor's visits.
HealthKit makes a critical link between measuring devices, including those used at home by patients, and medical information services relied on by doctors, such as Epic Systems Corp, a partner already announced by Apple.
Medical device makers are taking part in the Stanford and Duke trials.
DexCom Inc (DXCM.O), which makes blood sugar monitoring equipment, is in talks with Apple, Stanford, and the U.S. Food and Drug Administration about integrating with HealthKit, said company Chief Technical Officer Jorge Valdes.
DexCom's device measures glucose levels through a tiny sensor inserted under the skin of the abdomen. That data is transmitted every five minutes to a hand-held receiver, which works with a blood glucose meter. The glucose measuring system then sends the information to DexCom's mobile app, on an iPhone, for instance.
Under the new system, HealthKit can scoop up data from DexCom as well as from other app and device makers.
Data can be uploaded from HealthKit into Epic's "MyChart" application, where it can be viewed by clinicians in Epic's electronic health record.
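The pipeline described above -- a device app pushing readings into a consent-gated aggregator, which clinicians then view in one place -- can be sketched as a toy model. All class and method names below are hypothetical illustrations, not Apple's actual HealthKit API or Epic's:

```python
# Toy model of the consent-gated flow described above:
# device app -> HealthKit-style aggregator -> clinician-facing chart.
# All names are hypothetical; this is not the real HealthKit or Epic API.

class HealthAggregator:
    """Collects readings from many source apps, gated by patient consent."""

    def __init__(self):
        self.consented_sources = set()
        self.readings = []  # list of (source, kind, value) tuples

    def grant_consent(self, source):
        # The patient explicitly approves each source app.
        self.consented_sources.add(source)

    def push(self, source, kind, value):
        # Readings from non-consented sources are silently dropped.
        if source in self.consented_sources:
            self.readings.append((source, kind, value))

    def export_chart(self, kind):
        # One place for clinicians to view all readings of one kind.
        return [v for s, k, v in self.readings if k == kind]


hub = HealthAggregator()
hub.grant_consent("glucose_app")
hub.push("glucose_app", "blood_glucose_mg_dl", 112)
hub.push("untrusted_app", "blood_glucose_mg_dl", 999)  # dropped: no consent
print(hub.export_chart("blood_glucose_mg_dl"))  # [112]
```

The key design point the article describes is the consent gate: the aggregator sits between many apps and one clinician view, and nothing crosses it without the patient's approval.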
Via Ray and Terry's
Astronomers using data from NASA’s Hubble Space Telescope and ground-based observations have found an unlikely object in an improbable place -- a monster black hole lurking inside one of the tiniest galaxies ever known.
The black hole is five times the mass of the one at the center of our Milky Way galaxy. It is inside one of the densest galaxies known to date -- the M60-UCD1 dwarf galaxy that crams 140 million stars within a diameter of about 300 light-years, which is only 1/500th of our galaxy’s diameter.
If you lived inside this dwarf galaxy, the night sky would dazzle with at least 1 million stars visible to the naked eye. By comparison, our night sky as seen from Earth’s surface shows only about 4,000.
The finding implies there are many other compact galaxies in the universe that contain supermassive black holes. The observation also suggests dwarf galaxies may actually be the stripped remnants of larger galaxies that were torn apart during collisions with other galaxies rather than small islands of stars born in isolation.
“We don’t know of any other way you could make a black hole so big in an object this small,” said University of Utah astronomer Anil Seth, lead author of an international study of the dwarf galaxy published in Thursday’s issue of the journal Nature.
Seth’s team of astronomers used the Hubble Space Telescope and the Gemini North 8-meter optical and infrared telescope on Hawaii’s Mauna Kea to observe M60-UCD1 and measure the black hole’s mass. The sharp Hubble images provide information about the galaxy’s diameter and stellar density. Gemini measures the stellar motions as affected by the black hole’s pull. These data are used to calculate the mass of the black hole.
Black holes are gravitationally collapsed, ultra-compact objects with a gravitational pull so strong that even light cannot escape. Supermassive black holes -- those with at least a million times the mass of a star like our sun -- are thought to sit at the centers of many galaxies.
The black hole at the center of our Milky Way galaxy has the mass of four million suns. As heavy as that is, it is less than 0.01 percent of the Milky Way’s total mass. By comparison, the supermassive black hole at the center of M60-UCD1, which has the mass of 21 million suns, is a stunning 15 percent of the small galaxy’s total mass.
“That is pretty amazing, given that the Milky Way is 500 times larger and more than 1,000 times heavier than the dwarf galaxy M60-UCD1,” Seth said.
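The mass figures quoted above are easy to sanity-check with a little arithmetic (all values in solar masses, taken directly from the article):

```python
# Sanity check of the black-hole masses quoted above (solar masses).
MW_BH = 4.0e6    # Milky Way central black hole
UCD_BH = 21.0e6  # M60-UCD1 central black hole

# "five times the mass" of the Milky Way's black hole:
print(UCD_BH / MW_BH)  # 5.25

# If the black hole is 15% of M60-UCD1's total mass, the galaxy weighs:
total = UCD_BH / 0.15
print(f"{total:.2e}")  # 1.40e+08 -- consistent with its ~140 million stars
```

The implied total of about 1.4e8 solar masses lines up neatly with the 140 million stars the galaxy is said to contain, assuming roughly sun-like average stellar masses.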
One explanation is that M60-UCD1 was once a large galaxy containing 10 billion stars, but then it passed very close to the center of an even larger galaxy, M60, and in that process all the stars and dark matter in the outer part of the galaxy were torn away and became part of M60.
The team believes that M60-UCD1 may eventually be pulled in to fully merge with M60, which has its own monster black hole weighing a whopping 4.5 billion solar masses -- more than 1,000 times the mass of the black hole in our galaxy. When that happens, the black holes in both galaxies also likely will merge. Both galaxies are 50 million light-years away.
A group of scientists in Chile has created* artificial biomembranes on silicon surfaces that mimic those found in living organisms, a step toward creating bio-silicon interfaces, where biological “sensor” molecules can be printed onto a cheap silicon chip with integrated electronic circuits.
Described in The Journal of Chemical Physics from AIP Publishing, the artificial membranes have potential applications such as detecting bacterial contaminants in food, toxic pollution in the environment, and dangerous diseases.
The idea is to create a “biosensor that can transmit electrical signals through the membrane,” said María José Retamal, a Ph.D. student at Pontificia Universidad Católica de Chile and first author of the paper.
Lipid membranes separate distinct spaces within cells and define walls between neighboring cells — a functional compartmentalization that serves many physiological processes, protecting genetic material, regulating what comes in and out of cells, and maintaining the function of separate organs.
Synthetic membranes that mimic nature offer the possibility of containing membrane proteins — biological molecules that could be used for detecting toxins, diseases and many other biosensing applications.
More work is needed to standardize the process by which proteins are to be inserted in the membranes, to define the mechanism by which an electrical signal would be transmitted when a protein binds its target, and to calibrate how that signal is detected by the underlying circuitry, Retamal said.
* Retamal and her colleagues created the first artificial membrane without using solvents on a silicon support base. They chose silicon because of its low cost, wide availability and because its “hydrophobicity” (how much it repels water) can be controlled chemically, allowing them to build membranes on top.
Next they evaporated a chemical known as chitosan onto the silicon. Chitosan is derived from chitin, a sugar found in the shells of certain crustaceans, like lobsters or shrimp. Whole bags of the powder can be bought from chemical companies worldwide. They chose this ingredient for its ability to form a moisturizing matrix: chitosan is insoluble in water but porous, so it is capable of retaining water.
Finally they evaporated a phospholipid molecule known as dipalmitoylphosphatidylcholine (DPPC) onto the chitosan-covered silicon substrate and showed that it formed a stable “bilayer,” the classic form of a membrane. Spectroscopy showed that these artificial membranes were stable over a wide range of temperatures.
It is difficult to find fault with a process that can create food from sunlight, water and air, but for many plants, there is room for improvement. Researchers have taken an important step towards enhancing photosynthesis by engineering plants with enzymes from blue-green algae that speed up the process of converting carbon dioxide into sugars. The results, published today in Nature, surmount a daunting hurdle on the path to boosting plant yields — a goal that is taking on increasing importance as the world’s population grows.
“With the limited ability to increase land use for agriculture, there’s a huge interest in trying to improve yield across all the major crops,” says Steven Gutteridge, a research fellow at chemical firm DuPont’s crop-protection division in Newark, Delaware.
Researchers have long wanted to increase yields by targeting Rubisco, the enzyme responsible for converting carbon dioxide into sugar. Rubisco is possibly the most abundant protein on Earth, and can account for up to half of all the soluble protein found in a leaf.
But one reason for its abundance is its inefficiency: plants produce so much Rubisco in part to compensate for its slow catalysis. Some have estimated that tinkering with Rubisco and ways to boost the concentration of carbon dioxide around it could generate up to a 60% increase in the yields of crops such as rice and wheat. Plant geneticist Maureen Hanson of Cornell University in Ithaca, New York, and her colleagues decided to borrow a faster Rubisco from the cyanobacterium Synechococcus elongatus.
A team including Hanson and plant physiologist Martin Parry of Rothamsted Research in Harpenden, UK, shuttled bacterial Rubisco genes into the genome of the chloroplast — the cellular organelle where photosynthesis takes place — in the tobacco plant (Nicotiana tabacum), a common model organism for genetic-engineering research. In some of the plants the researchers also added a bacterial protein that is thought to help Rubisco to fold properly. In others, they added a bacterial protein that structurally supports Rubisco.
Both lines of tobacco were able to use the bacterial Rubisco for photosynthesis, and both converted CO2 to sugar faster than normal tobacco. The work provides an important foundation for testing the hypothesis that a faster Rubisco can yield a more productive plant, says Donald Ort, a plant biologist at the University of Illinois at Urbana–Champaign.
Human herpesvirus 6, pictured above, is just one of numerous viruses found living in and on the bodies of healthy humans. The virus commonly causes illness in young children but is found in the mouths of some healthy young adults, where its presence indicates an active viral infection despite a lack of symptoms.
On average, healthy individuals carry about five types of viruses on their bodies, the researchers report online in BioMed Central Biology. The study is the first comprehensive analysis to describe the diversity of viruses in healthy people.
The research was conducted as part of the Human Microbiome Project, a major initiative funded by the National Institutes of Health (NIH) that largely has focused on cataloging the body's bacterial ecosystems. "Most everyone is familiar with the idea that a normal bacterial flora exists in the body," said study co-author Gregory Storch, MD, a virologist and chief of the Division of Pediatric Infectious Diseases. "Lots of people have asked whether there is a viral counterpart, and we haven't had a clear answer. But now we know there is a normal viral flora, and it's rich and complex."
In 102 healthy young adults ages 18 to 40, the researchers sampled up to five body habitats: nose, skin, mouth, stool and vagina. The study's subjects were nearly evenly split by gender. At least one virus was detected in 92 percent of the people sampled, and some individuals harbored 10 to 15 viruses.
"We were impressed by the number of viruses we found," said lead author Kristine M. Wylie, PhD, an instructor of pediatrics. "We only sampled up to five body sites in each person and would expect to see many more viruses if we had sampled the entire body."
Scientists led by George Weinstock, PhD, at Washington University's Genome Institute, sequenced the DNA of the viruses recovered from the body, finding that each individual had a distinct viral fingerprint. (Weinstock is now at The Jackson Laboratory in Connecticut.) About half of people were sampled at two or three points in time, and the researchers noted that some of the viruses established stable, low-level infections.
The researchers don't know yet whether the viruses have a positive or negative effect on overall health but speculate that in some cases, they may keep the immune system primed to respond to dangerous pathogens while in others, lingering viruses increase the risk of disease.
The modern European gene pool was formed when three ancient populations mixed within the last 7,000 years, Nature reports.
Blue-eyed, swarthy hunters mingled with brown-eyed, pale-skinned farmers as the latter swept into Europe from the Near East. But another, mysterious population with Siberian affinities also contributed to the genetic landscape of the continent. The findings are based on analysis of genomes from nine ancient Europeans. Agriculture originated in the Near East - in modern Syria, Iraq and Israel - before expanding into Europe around 7,500 years ago.
Multiple lines of evidence suggested this new way of life was spread by a wave of migrants, who interbred with the indigenous European hunter-gatherers they encountered on the way. But assumptions about European origins were based largely on the genetic patterns of living people. The science of analysing genomic DNA from ancient bones has put some of the prevailing theories to the test, throwing up a few surprises.
In the new paper, Prof David Reich from the Harvard Medical School and colleagues studied the genomes of seven hunter-gatherers from Scandinavia, one hunter whose remains were found in a cave in Luxembourg and an early farmer from Stuttgart, Germany. The hunters arrived in Europe thousands of years before the advent of agriculture, hunkered down in southern refuges during the Ice Age and then expanded during a period called the Mesolithic, after the ice sheets had retreated from central and northern Europe.
Their genetic profile is not a good match for any modern group of people, suggesting they were caught up in the farming wave of advance. However, their genes live on in modern Europeans, to a greater extent in the north-east than in the south.
The early farmer genome showed a completely different pattern, however. Her genetic profile was a good match for modern people in Sardinia, and was rather different from the indigenous hunters.
But, puzzlingly, while the early farmers share genetic similarities with Near Eastern people at a global level, they are significantly different in other ways. Prof Reich suggests that more recent migrations in the farmers' "homeland" may have diluted their genetic signal in that region today.
Prof Reich explained: "The only way we'll be able to prove this is by getting ancient DNA samples along the potential trail from the Near East to Europe... and seeing if they genetically match these predictions or if they're different."