anti dogmanti
discoveries based on the scientific method
Curated by Sue Tamani
Scooped by Sue Tamani!

Origin of intelligence and mental illness linked to ancient genetic accident | KurzweilAI

Like mice, humans with mutations in the DLG2 gene made significantly more errors than healthy control subjects in tests of visual discrimination acquisition...

Researchers have identified the moment in history when the genes that enabled us to think and reason evolved.

This point 500 million years ago provided our ability to learn complex skills, analyze situations and have flexibility in the way in which we think.

According to Professor Seth Grant of the University of Edinburgh, who led the research, intelligence in humans developed as the result of an increase in the number of brain genes in our evolutionary ancestors: a simple invertebrate animal living in the sea 500 million years ago experienced a “genetic accident,” which resulted in extra copies of these genes being made.

Mice and humans share limitations of higher mental functions

This animal’s descendants benefited from these extra genes, leading to behaviorally sophisticated vertebrates — including humans.

The research team studied the mental abilities of mice and humans, using comparative tasks that involved identifying objects on touch-screen computers.

Researchers then combined results of these behavioral tests with information from the genetic codes of various species to work out when different behaviors evolved.

They found that higher mental functions in humans and mice were controlled by the same genes.

Genetic causes of brain disorders

The study also showed that when these genes were mutated or damaged, they impaired higher mental functions. “Our work shows that the price of higher intelligence and more complex behaviors is more mental illness,” said Professor Grant.

“This groundbreaking work has implications for how we understand the emergence of psychiatric disorders and will offer new avenues for the development of new treatments,” said John Williams, Head of Neuroscience and Mental Health at the Wellcome Trust, one of the study funders.

The researchers had previously shown that more than 100 childhood and adult brain diseases are caused by gene mutations.

“We can now apply genetics and behavioral testing to help patients with these diseases,” said Dr Tim Bussey from Cambridge University, which was also involved in the study.

The study was funded by the Wellcome Trust, the Medical Research Council and European Union.


This radical discovery could turn semiconductor manufacture inside out | KurzweilAI

A completely new method of manufacturing the smallest structures in electronics could make their manufacture thousands of times quicker, allowing for cheaper semiconductors.

Instead of starting from a silicon wafer or other substrate, the idea is to grow gallium arsenide semiconductor structures from freely suspended nanoparticles of gold in a flowing gas. Semiconductor nanowires are key building blocks for the next generation of light-emitting diodes, solar cells, and batteries, according to Lund University researchers.

Behind the discovery is Lars Samuelson, Professor of Semiconductor Physics at Lund University, Sweden, and head of the University’s Nanometer Structure Consortium. He believes the technology will be ready for commercialization in two to four years. A prototype for solar cells is expected to be completed in two years.

“When I first suggested the idea of getting rid of the substrate, people around me said ‘you’re out of your mind, Lars; that would never work.’ When we tested the principle in one of our converted ovens at 400°C, the results were better than we could have dreamed of,” he says.


“The basic idea was to let nanoparticles of gold serve as a substrate from which the gallium arsenide semiconductors grow. This means that the accepted concepts really were turned upside down!”

Since then, the technology has been refined, patents have been obtained and further studies have been conducted. In the article in Nature, the researchers show how the growth can be controlled using temperature, time and the size of the gold nanoparticles.

Recently, they have also built a prototype machine with a specially built oven. Using a series of ovens, the researchers expect to be able to “bake” the nanowires, as the structures are called, and thereby develop multiple variants, such as p-n diodes. A further advantage of the technology is avoiding the cost of expensive semiconductor wafers.

“In addition, the process is not only extremely quick, it is also continuous. Traditional manufacture of substrates is batch-based and is therefore much more time-consuming,” adds Samuelson.


At the moment, the researchers are working to develop a good method to capture the nanowires and make them self-assemble in an ordered manner on a specific surface. This could be glass, steel or another material suited to the purpose. The reason why no one has tested this method before, in the view of Professor Samuelson, is that today’s method is so basic and obvious. Such things tend to be difficult to question.

However, the Lund researchers have a head start, thanks to their parallel research based on an innovative method in the manufacture of nanowires on semiconductor wafers, known as epitaxy — so the researchers have chosen to call the new method aerotaxy. Instead of sculpting structures out of silicon or another semiconductor material, the structures are instead allowed to develop, atomic layer by atomic layer, through controlled self-organization.

The breakthrough for these semiconductor structures came in 2002 and research on them is primarily carried out at Lund, Berkeley and Harvard universities.

The Lund researchers specialize in developing the physical and electrical properties of the wires, which helps create better and more energy-saving solar cells, LEDs, batteries and other electrical equipment that is now an integrated part of our lives.


Google’s Searches for UnGoogleable Information to Make Mobile Search Smarter | MIT Technology Review

The company wants to improve its mobile search services by automatically delivering information you wouldn’t think to search for online.

For three days last month, at eight randomly chosen times a day, my phone buzzed and Google asked me: “What did you want to know recently?” The answers I provided were part of an experiment involving me and about 150 other people. It was designed to help the world’s biggest search company understand how it can deliver information to users that they’d never have thought to search for online.

Billions of Google searches are made every day—for all kinds of things—but we still look elsewhere for certain types of information, and the company wants to know what those things are.

“Maybe [these users are] asking a friend, or they have to look up a manual to put together their Ikea furniture,” says Jon Wiley, lead user experience designer for Google search. Wiley helped lead the research exercise, known as the Daily Information Needs Study.

If Google is to achieve its stated mission to “organize the world’s information and make it universally accessible,” says Wiley, it must find out about those hidden needs and learn how to serve them. And he says experience sampling—bugging people to share what they want to know right now, whether they took action on it or not—is the best way to do it. “Doing that on a mobile device is a relatively new technology, and it’s getting us better information that we really haven’t had in the past,” he says.

Wiley isn’t ready to share results from the study just yet, but as a participant I found plenty of examples of relatively small pieces of information that I’d never turn to Google for — for example, how long the line currently is in a local grocery store. Some offline activities, such as reading a novel or cooking a meal, generated questions that I hadn’t turned to Google to answer, mainly due to the inconvenience of having to grab a computer or phone in order to sift through results.

Wiley’s research may take Google in new directions. “One of the patterns that stands out is the multitude of devices that people have in their lives,” he says. Just as mobile devices made it possible for Google to discover unmet needs for information through the study, they could also be used to meet those needs in the future.

Contextual information provided by mobile devices—via GPS chips and other sensors—can provide clues about a person and his situation, allowing Google to guess what that person wants. “We’ve often said the perfect search engine will provide you with exactly what you need to know at exactly the right moment, potentially without you having to ask for it,” says Wiley.

Google is already taking the first steps in this direction. Google Now offers unsolicited directions, weather forecasts, flight updates, and other information when it thinks you need them. Google Glass (eyeglass frames with an integrated display) could also provide an opportunity to preëmptively answer questions or provide useful information. “It’s the pinnacle of this hands-free experience, an entirely new class of device,” Wiley says of Google Glass, and he expects his research to help shape this experience.

Google may be heading toward a new kind of search, one that is very different from the service it started with, says Jonas Michel, a researcher working on similar ideas at the University of Texas at Austin. “In the future you might want to search very new information from the physical environment,” Michel says. “Your information needs are very localized to that place and event and moment.”

Finding the data needed to answer future queries will involve more than just crawling the Web. Google Now already combines location data with real-time feeds, for example, from U.S. public transit authorities, allowing a user to walk up to a bus stop and pull out his phone to find arrival times already provided.

Michel is one of several researchers working on an alternative solution—a search engine for mobile devices dubbed Gander, which communicates directly with local sensors. A pilot being installed on the University of Texas campus will, starting early next year, allow students to find out wait times at different cafés and restaurants, or find the nearest person working on the same assignment.
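Gander's actual interfaces aren't described in the article; as a purely hypothetical sketch of the idea, a mobile client might filter a registry of local sensor readings by distance from the user. All names, coordinates, and wait times below are made up for illustration:

```python
import math

# Hypothetical sensor registry; Gander's real data model is not public.
SENSORS = [
    {"name": "Cafe A wait", "lat": 30.286, "lon": -97.739, "minutes": 12},
    {"name": "Cafe B wait", "lat": 30.289, "lon": -97.736, "minutes": 3},
    {"name": "Library queue", "lat": 30.283, "lon": -97.738, "minutes": 7},
]

def nearby(lat, lon, radius_km=0.5):
    """Return sensor readings within radius_km of the user,
    sorted by shortest wait (flat-earth distance approximation)."""
    def dist_km(s):
        dx = (s["lon"] - lon) * 111.32 * math.cos(math.radians(lat))
        dy = (s["lat"] - lat) * 111.32
        return math.hypot(dx, dy)
    return sorted((s for s in SENSORS if dist_km(s) <= radius_km),
                  key=lambda s: s["minutes"])

for s in nearby(30.285, -97.738):
    print(s["name"], s["minutes"], "min")
```

The point of the sketch is the architectural inversion: instead of crawling the Web, the query runs against live readings tied to a place and a moment, exactly the "localized to that place and event" need Michel describes.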

Back at Google, Wiley is more focused on finding further evidence that many informational needs still go unGoogled. The work may ultimately provide the company with a deeper understanding of the value of different kinds of data. “We’re going to continue doing this,” he says. “Seeing how things change over time gives us a lot of information about what’s important.”

Tom Simonite IT Editor, Software & Hardware


Google has officially eaten the newspaper industry | KurzweilAI


(Credit: Statista/Google) Newspapers have continued to churn out the same content while watching their advertisers steadily flee for sites like Craigslist, Yahoo, the Huffington Post/AOL, Facebook, and Google, says writer Will Oremus in Slate Future Tense.


The chart above, from Statista’s Felix Richter, plots Google’s digital advertising revenue against the print advertising revenue of all U.S. newspapers and magazines. The Guardian‘s Roy Greenslade estimates that Google’s total revenue also now exceeds that of the entire U.S. newspaper industry even when you count digital ads.



How to Tell a Good Website from a Crap Website

When you find a science article on the web, how do you know whether it's reliable or not?


It's been said that searching for information on the Internet is like drinking from a firehose. There is a mind-boggling amount of information published that's freely available to anyone and everyone. The Internet grows so quickly that every time you open your web browser, you've got direct access to the largest compilation of information in history, bigger than all the books in all the libraries in all the world; and at current rates, it's growing by 5% every month. Search for information on any given subject, and you're presented with more options than anyone can know what to do with. So when the average person wants to learn some decent information, how can you tell whether the website you've found is giving you good info, or giving you crap? Today we'll find out.
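To get a feel for the 5%-per-month growth figure quoted above, compound growth at that rate nearly doubles the total within a year; a quick calculation:

```python
import math

monthly_rate = 0.05  # 5% growth per month, as stated in the article

# Months needed to double at compound monthly growth
doubling_months = math.log(2) / math.log(1 + monthly_rate)

# Total growth factor over one year of compounding
annual_factor = (1 + monthly_rate) ** 12

print(f"Doubling time: {doubling_months:.1f} months")   # ~14.2 months
print(f"Growth over one year: {annual_factor:.2f}x")    # ~1.80x
```

So at current rates the pile of online information roughly doubles every 14 months, which is why any manual approach to sorting good sources from bad has to scale.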


We're going to look at three categories of tools for appraising the validity of the information presented on a website. First, we're going to go through some general rules of thumb, pertaining to the website's style of presentation, that most laypeople should be able to spot. Next, we're going to look at a handful of software tools designed to give you an objective assessment. And finally, we're going to quickly review the "How to Spot Pseudoscience" guide to give you a pretty darned good idea of any given piece of information you're curious about.


Style of Presentation

There actually is a certain amount of "judging a book by its cover" that makes sense, particularly for websites. Websites can be published by anyone, whether they have a large staff of editors and researchers behind them or not. Big slick presentations are found everywhere, from university websites to science journals to mass media consumer portals promoting who knows what. But there are important differences between a science article and a pseudoscience article, even on the slickest website, that you can learn to spot.


Often the most obvious is the list of references at the end of the article. If there isn't one, then you're probably reading a reporter's interpretation of the research, and should try to click through to find the original. If there are no references at all, then it's a big red flag that what you're reading is unlikely to be legitimate science research. If it's not referenced, pass and move on. A lack of references doesn't mean the article is wrong, it just means that there's a better, more original source out there that you haven't found yet.


If there are references, be aware that oftentimes, cranks will list references as well, so there are some things you need to look out for. Be wary of someone who cites himself. Be especially wary of a study or article that's cited, but once you click on it, you find that it actually says something different than what the author described. It's very important to look at what those citations are: Are they articles in legitimate science journals, or are they published in a journal dedicated to the promotion of something?


Many Google searches will return not a page on a slick big-budget website, but an obscure page. For example, university professors will often have a little website on the university's server, describing their research or whatever. Often, those little websites look terrible, because they're not made by a professional web person. A crank who churns out his own website might superficially look really similar. How do you know whether you're looking at an amateurish site made by a crank, or an amateurish site made by a real science expert?


One way is that real science professionals know that there are ways to establish proper credibility, and they generally follow those rules. The citation of sources is important here as well. A proper research scientist knows that he must list sources to be taken seriously. A crank often skips it, or cites himself, or makes vague references to famous names like Einstein (probably the only names he knows).

Grammatical errors are a case where it's appropriate to judge a book by its cover. Bad spelling and grammar left uncorrected are a sign that you're probably reading the page of a crank, who works in isolation and has nobody double-checking his work. A professor's personal website, however, is often checked over and corrected by undergrads or associates. So do be wary of bad grammar.


So we've been dancing around the subject of who the author is. First of all, if the author is anonymous, dismiss the article out of hand. If the author is a reporter, which is often the case, then you need to click through to find the lead researcher's name. If he's a legitimate scientist, he'll have plenty of publications out there, and it's easy to look him up by going to Google Scholar and typing in his name. This doesn't prove anything, but having publications in recognized journals gives the author more credibility than someone who doesn't. Be aware that most indexes like Google Scholar also list crap publications, even mass market paperback books that are not vetted in any way, so you do have to be careful about looking exactly at what those publications are.


If the website teases you with a bit of titillating information but then requires a purchase to get the rest of the story, you could be dealing with a crank sales portal, or you could be dealing with a paywall which is still (unfortunately) common for science journals.


Universities almost always have accounts that allow them past the paywalls. You should be able to easily tell whether you're looking at a paywall where researcher credentials can be input to download the full article, or whether you're looking at a sales page trying to pitch you on buying the book to learn "the secrets" or whatever it is. A journal paywall is a good indicator that you're probably looking at real science; the sales page is a good indicator that you're probably looking at crap.


A braggadocious domain name is just like a used car salesman calling himself Honest John. Websites like that are not typical of the way proper science reporting is done. The website should represent an actual, real-world organization, academic institution, or publication, and not be just some random web compilation.


Software Tools

It would be great if there were such a thing as a web browser plugin that would simply give you a red X or a green check to tell a layperson whether a website is reliable or crap. But despite a number of efforts to build just such a thing, no great headway has been made.


One good tool is the Quackometer, which uses an automated algorithm to scan a website's pages, looking at its use of language. It comes back with a score telling you how likely it is that the site is misusing scientific sounding language, and is promoting quackery, or whether it generally appears good. Obviously this is an imperfect solution; but when I've used it on sites that I know, I've found that its results are generally correct, with its biggest flaw being that it often gives a little too good of a score to sites that deserve lambasting.
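The Quackometer's actual algorithm is not public, but the general idea of scoring a page by its use of language can be sketched with a weighted phrase count. The phrase list and weights below are invented purely for illustration; they are not the tool's real word lists:

```python
import re

# Hypothetical red-flag phrases with weights (illustrative only).
QUACK_TERMS = {
    "quantum healing": 3,
    "detox": 2,
    "miracle": 2,
    "energy field": 2,
    "ancient wisdom": 2,
    "toxins": 1,
}

def quack_score(text: str) -> int:
    """Sum the weights of every red-flag phrase occurrence in the text."""
    lowered = text.lower()
    return sum(weight * len(re.findall(re.escape(term), lowered))
               for term, weight in QUACK_TERMS.items())

page = "This miracle detox bracelet uses quantum healing energy fields."
print(quack_score(page))
```

A real scorer would need far more nuance (context, negation, genuine physics pages that mention "energy fields"), which is exactly why such tools can only flag likely problems rather than render verdicts.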


The Web of Trust is a crowdsourced rating system that gives a trustworthiness score for sites. It's a browser plugin that gives you a little icon next to every link on the page, plus a bigger one for the page you're on, that ranges from green to yellow to red based on ratings given by users. In my experience, it's less useful for gauging the reliability of scientific articles on sites, and more useful for metrics like the site's customer service and security; more for detecting spam than bad reporting.
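The green-to-yellow-to-red banding can be pictured as a simple threshold mapping over an aggregate user rating; the cutoffs below are illustrative, not WOT's actual values:

```python
def trust_color(score: float) -> str:
    """Map a 0-100 crowdsourced trust score to a traffic-light band.
    Thresholds are assumptions for illustration, not WOT's real cutoffs."""
    if score >= 60:
        return "green"
    if score >= 40:
        return "yellow"
    return "red"

for score in (80, 50, 20):
    print(score, trust_color(score))
```

Note that whatever the exact thresholds, the score aggregates user opinions of the *site*, which is why it tracks spamminess and security better than the accuracy of any individual article.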


Rbutr is a browser plugin that lets users link articles that rebut whatever's written on the current page. So, if you're reading something that's been rebutted somewhere, rbutr will link you right to it. The downside is that it cuts both ways: it rebuts a bad article with a good one, and that same good article with the bad one, according to someone else's judgment. There's not really a way for the end user to know which is better, just that they rebut each other.


Somewhat surprisingly, online trustworthiness services, of which TRUSTe is the best known, allow sites to pay for a privacy certification that they can put on their websites. It turns out that sites who pay for these logos are actually more likely to not be very trustworthy; people with less honorable motives are often more highly motivated to convince you that they are honorable. And, in any case, site privacy has nothing to do with the quality of the site's articles. If you see some sort of a logo or certification on a website, it proves nothing whatsoever by itself. By no means should you assume that it makes the information likely to be good.


The best roundup of tools for assessing the validity of online data is Tim Farley's Skeptical Software Tools. You should keep it as a bookmark, and if anything new comes along for helping laypeople evaluate websites, Tim will be among the first to report on it.


How to Spot Pseudoscience

Skeptoid followers may recognize this list from episode 37, way back long ago. This is an abbreviated version that you can apply to the contents of a website. These common red flags don't prove anything, but they're characteristics of pseudoscience. Watch out any time you see these on a website:


Ancient knowledge, ancient wisdom, statements that ancient people believed or knew about this, or that it's stood the test of time. To test whether an idea's true, we test whether it's true; we don't ask if ancient people believed it.


Claims of suppression by authorities, an old dodge to explain away why you've never heard of this before. The biggest red flag of all is that somebody "Doesn't want you to know" this, or "What doctors won't tell you".


Anything that sounds too good to be true probably is. Miraculously easy solutions to complicated problems should always set off your skeptical radar.


Is the website dedicated to promotion or sales pertaining to a particular product or claim? If so, you're probably reading a sales brochure disguised as a research report.


Be especially aware of websites that cite great, famous, well-known names as their inspiration. Albert Einstein, Nikola Tesla, and Stephen Hawking are three of the most abusively co-opted names in history. Real research instead tends to cite current researchers in the field, names that few people have ever heard of. The famous names are mentioned mainly in sales pitches.


Always watch out for the all-natural fallacy, in its many guises. If a website trumpets the qualities of being all-natural, organic, green, sustainable, holistic, or any other of the popular marketing buzzwords of the day, it's more likely that you're reading pseudoscience than science.


Does the article fit in with our understanding of the world? Is it claiming a revolutionary development or idea — free energy, super health — things everyone wants but that don't actually exist? Be skeptical.


Real research always cites weaknesses and conflicting evidence, which is always present in science. Pseudoscience tends to dismiss all such evidence. If a website claims that scientists or experts all agree on this new discovery, you're probably reading unscientific nonsense.


In general, the word "revolution" is something of an old joke in science fields, along with the phrases "scientists are baffled" and "what they don't want you to know". If the website promises to revolutionize anything, you're almost certainly dealing with a crank who has little connection with genuine science.


Anytime someone puts on their web page that they're smart, or that they are a renowned intellectual or thinker, they're not. Click your way elsewhere.


Finally, always run screaming from a website by One Guy with All the Answers. The claim to have solved or explained everything with a new, pioneering theory is virtually certain to be crankery.


So there you have it; it's neither perfect nor comprehensive, but it should give most laypeople a fair start on evaluating a website's quality of information. If nothing else, it shows what a difficult task this is, and highlights yet another reason why so many people believe weird things. Bad information is easy to sell, and not always so easy to spot.

Meagan Lucas's curator insight, October 2, 2014 11:05 AM

This is a good article that cites 3 quick and easy ways to spot red flags amongst the millions of online sources.

1. Style of Presentation 

- no references 

- always check cited items to make sure information is portrayed correctly 

- Google it!

2. Software Tools

- Quackometer..... Scans a website's use of language

- Rbutr.... A browser plugin that links articles rebutting the current page 

- The Web of Trust..... crowdsourced rating system of the site's trustworthiness

3. Spot Pseudoscience

- claims of suppression by authorities.... Biggest flag!!!

- watch out for all-natural fallacy

- those that cite great, well-known, famous people as their inspiration (Albert Einstein & Stephen Hawking)


Cisco Blog » Blog Archive » How the Internet of Everything Will Change the World…for the Better #IoE [Infographic]

As a futurist and technologist, I’m an optimist. I view technology through the lens of how it can help people.

From this perspective, there is no better time to be alive than now. That’s because we are entering an era where the Internet has the potential to dramatically improve the lives of everyone on our planet—from curing disease, to understanding climate change, to enhancing the way companies do business, to making every day more enjoyable.


Already, the Internet has benefited many individuals, businesses, and countries by improving education through the democratization of information, allowing for economic growth through electronic commerce, and accelerating business innovation by enabling greater collaboration.


So what will the next decade of the Internet bring?


From the Internet of Things (IoT), where we are today, we are just beginning to enter a new realm: the Internet of Everything (IoE), where things will gain context awareness, increased processing power, and greater sensing abilities. Add people and information into the mix and you get a network of networks where billions or even trillions of connections create unprecedented opportunities and give things that were silent a voice.


Cisco defines IoE as bringing together people, process, data, and things to make networked connections more relevant and valuable than ever before—turning information into actions that create new capabilities, richer experiences, and unprecedented economic opportunity for businesses, individuals, and countries.

Within this definition, an important aspect of IoE (and how it differs from IoT) emerges—the concept of “network effects,” on which my Cisco IBSG colleague James Macaulay has done a lot of work.


As more things, people, and data become connected, the power of the Internet (essentially a network of networks) grows exponentially. This thinking (“Metcalfe’s law”) comes from Robert Metcalfe, the well-known technologist and founder of 3Com, who stated that the value of a network increases in proportion to the square of the number of users. In essence, the power of the network is greater than the sum of its parts, making the Internet of Everything incredibly powerful.
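Metcalfe's observation can be made concrete with a few lines of code: the number of pairwise connections an n-user network can support grows with the square of n, so doubling the users roughly quadruples the potential connections.

```python
def metcalfe_value(n_users: int) -> int:
    """Metcalfe's law proxy: network value scales with the n*(n-1)/2
    possible pairwise connections, i.e. on the order of n squared."""
    return n_users * (n_users - 1) // 2

# Each doubling of users roughly quadruples the connection count.
for n in (10, 20, 40):
    print(n, metcalfe_value(n))   # 45, 190, 780
```

This is only a proxy for "value," of course; the IoE argument is that adding sensors, people, and data as nodes pushes n (and so n squared) up far faster than adding people alone.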


Given the tremendous anticipated growth of the Internet over the next 10 years, it is critical for business and government leaders, as well as citizens, to begin preparing for what is to come. Here are some questions to get you started:


How do I set priorities to match the opportunities that will exist in the connected world of IoE?
Given the impact the Internet already has had on my business, what happens when new categories of things are connected at exponential rates?
What are the potential benefits and risks of IoE for my business or government organization?
How should organizations be structured around information and processes?
How will governance, control, and responsibility change in an IoE world?

In my next blog, I will cover some of the ways IoE is already benefiting businesses, people, and governments, as well as how the Internet will be able to address some of humanity’s most pressing issues.


Let me know what you think. Is IoE just another buzzword, will it change the world, or is it somewhere in between?

You can also join the discussion at:

#IoE and #InternetofEverything


Researchers battle storm’s wrath

New York University lost crucial mouse colonies, but students and staff helped to save equipment and patients.

For Benjamin Bartelle, the first sign that Hurricane Sandy was no ordinary storm came when each of the lab’s windows popped open, scattering papers across the floor. It was about 7.30 p.m. on 29 October, and Bartelle was on the fifth floor of the Skirball Institute of Biomolecular Medicine, part of the New York University (NYU) Langone Medical Center in Manhattan. Outside, in exposed parts of the city, winds were gusting at up to 160 kilometres per hour as the storm made landfall.


Bartelle, a recent PhD finishing his last experiments in protein engineering, braced the windows shut with 20-litre water bottles. Soon after, an alarm sounded at the fish facility down the hall, and Jesus Torres-Vasquez, who studies blood-vessel formation in zebrafish, came up to check. That’s when the building went dark. Sixteen blocks to the south, a record storm surge had caused the East River to break its banks, flooding a substation and triggering a blackout across the downtown area. But the Langone Medical Center, also located alongside the river, was threatened in a more direct way.


As Hurricane Sandy battered the US eastern seaboard that night, the many universities, labs and research stations in its path would feel the effects of power outages, damaging winds and flooding. None was hit as badly as the Langone.


The medical centre was within the evacuation zone that had been declared the day before, but hospitals and nursing homes were exempt because of the risks of moving patients. The 705-bed Tisch Hospital and three connected research buildings at the Langone are equipped with backup generators and meet all safety standards, according to NYU. Sandbags were stacked around the buildings in preparation, and maintenance workers were on call. Staff at underground mouse facilities would be working through the night to monitor tens of thousands of mice used in research projects from cancer to neurobiology.


At the Skirball, the backup power kicked in after a few minutes, but something was still amiss. Neurobiologist Wenbiao Gan and his lab staff took the lift down to the basement to find it more than ankle deep in water. They waded in to retrieve lasers and other equipment. When Bartelle saw them return with wet trouser legs, he looked out of the window. The other medical centre buildings were dark, including the Joan and Joel Smilow Research Center, a 13-storey glass and brick edifice that is also part of the Langone centre, and nearest to the river. If the Skirball was getting wet, the Smilow centre was in even bigger trouble; its basement, housing about 10,000 mice and rats, is almost 10 metres below water level. The flood waters had surged into that building so forcefully that animal-care workers had to evacuate. The mutant and transgenic mice housed in quarantine there were left to their fates.


Bartelle headed for a residence hall but was soon dragged into a different drama, when a member of the hospital staff came in shouting: “We have to evacuate the patients from Tisch Hospital! We need all the hands we can get!” By 9 p.m., hundreds of medical and graduate students had assembled in the hospital lobby. Under the direction of the New York City Fire Department, they scaled 16 flights of stairs and brought 215 patients down on plastic sleds. On the ground floor, the patients — some of them in a coma, others recovering from surgery — were transferred to gurneys and ambulances and on to other hospitals. The students were still working 12 hours later.


By then, it was clear that much of the Langone had flooded, with freezer outages and water damage affecting the labs of 30–50 principal investigators. Worst affected was the Smilow, where severe flooding in the basement disabled the pumps feeding fuel to backup generators on the roof. A leak also spilled diesel inside the animal facility, and all the mice there drowned or died from inhaling diesel fumes. Neurobiologist Gordon Fishell lost about 2,500 mice representing 40 genetic variants, which he had developed for studies of forebrain development over more than a decade.


As NYU officials tally the damage, they will inevitably have to address the issue of whether the disaster could have been avoided or minimized. “Putting animals (or electrical control equipment) in a basement within a stone’s throw from a tidal river is not a wise idea,” immunologist Alan Frey wrote in an e-mail to Nature after losing all of his mice, which were housed at the Smilow. At the Texas Medical Center in Houston in 2001, Tropical Storm Allison destroyed millions of dollars of equipment and killed thousands of lab animals ranging from mice to monkeys. In the aftermath, engineers constructed flood gates and moved animal facilities and crucial components of the power system out of the basement.


The Smilow, which opened in 2006, can withstand a storm surge of about 3.7 metres — 20% higher than that expected from a once-per-century flood, according to NYU. Now that Sandy has overtopped those defences, officials say that they will be assessing what they can do differently in the future.


Bartelle, whose work was spared by the disaster, says that he won’t forget the efforts made that night to get patients out of harm’s way — especially by students and researchers at the Smilow who knew they were facing disastrous losses to their work. “Why does the tragedy happen to the person right next to you? They don’t deserve it any more than you do,” he says. “It’s going to be difficult moving forward for everybody.”


In defence of pseudoscience › Opinion (ABC Science)


Believe it or not; pseudoscience provides the perfect launch pad for logical thought, says Dr Paul Willis.

Recently, I was watching an old MythBusters with my son. It was the one where they investigated the Hindenburg myth, that what caught fire was not the hydrogen but a thermite mixture in the paints used to dope the fabric covering of the behemoth airship. (They concluded that it was a mixture of both the thermite and the hydrogen).

Watching their models of the Hindenburg burn with varying intensities took me back to my high school days and a bout of paranormal fakery and investigations that developed my abilities to think rationally.

In hindsight I started out with hoaxes, pseudoscience and pulp-science as a training ground for rational investigation. And it hasn't surprised me that over the years as I've met thousands of scientists and science communicators, their ranks are filled with people who started thinking in the rational paradigms of science by dabbling in the pseudosciences.

It all started when a school mate whose name has long since been forgotten came to school one day with some colour photographs of the last minute of the Hindenburg as she crashed to the ground in that infamous fireball. We gathered around realising that there was something wrong — we'd never seen colour pics of the incident before and there was something about the pose that rang alarm bells in the backs of our minds.

They were, of course, hoaxes; he'd spent the weekend making a scale model of the airship then photographed it after setting it alight. But what he also ignited was a wave of hoaxing across the school that lasted for months. I got in on the craze hoaxing photos of UFOs and ghosts. It was all good fun and some of the results I remember were very convincing. But we were learning something important; that it is relatively easy to fake data and that the kudos of creating a convincing fake is intoxicating!

Testing claims
Up until that point I had dabbled in all kinds of pseudoscientific mumbo jumbo: UFOs, ghosts, Bigfoot, past-life regressions, psychic abilities and more. You name it and I was interested in it. But the hoaxing fad started me down the path of devising ways of testing the claims made by the paranormal, and questioning the evidence that had been put forward. I had already dispatched creationism as a corrupted belief system by conducting some simple investigations of the evidence for an old earth and the evolution of life that could be found all around me. And as I investigated all the other non-science and 'fringe' science that had held my attention to that point, one by one my interest in them dropped as they each revealed themselves to be based on bunkum, probable fraud and impossibilities.

I tested past life regressions by planting questions that could not have been answered by someone who was genuinely from another period ("You enter the room where you died in 1760, where is the light switch?" "On the left of the door").

I covered three identical buckets, only one of which had water in it, and got dowsers to tell me which one was full; they couldn't do it. I once taped a large quartz crystal under a chair and got a crystal freak to sit on it without knowing; they detected nothing until I told them they were sitting on a crystal. (I actually told them it was a garnet taped to the back of the chair, and they started prattling on about how they could feel 'garnet energy' in their back; they were shocked to learn they had actually been sitting on a large quartz crystal the whole time.)

Conversely, I even had the science teacher at school convinced that his hand was tingling because of the radiation coming from a lump of uranium ore that a friend brought to school — the powers of suggestion!

Throughout my career as a scientist and science communicator I've been struck by how many of my colleagues also shared an early interest in the pseudosciences and how examining and testing those beliefs led them to appreciate rational thinking and dispatch the pseudosciences as implausibilities that are often riddled with fraud. In this way pseudoscience has been a scratching post for many young scientists upon which they can hone their logical abilities to investigate and test claims. In some cases this has been their introduction to science but often, as in my experience, I had a pre-existing interest in science and used the pseudosciences as a testing ground for my abilities as a budding rational thinker.

Thus I wish to praise the pseudosciences as useful in the ontogeny of so many of our scientific minds. Not as a constructive contribution to their knowledge but as a convenient foil to test their growing powers of reason.


Dark matter pioneer nets PM's science prize › News in Science (ABC Science)

A renowned astronomer who introduced the revolutionary concept of dark matter has been awarded one of Australia's top science prizes.

Craig Venter Imagines a World with Printable Life Forms | Wired Science |

NEW YORK CITY — Craig Venter imagines a future where you can download software, print a vaccine, inject it, and presto! Contagion averted.

“It’s a 3-D printer for DNA, a 3-D printer for life,” Venter said here today at the inaugural Wired Health Conference in New York City.

The geneticist and his team of scientists are already testing out a version of his digital biological converter, or “teleporter.”

Why should you care? Well, because the machine has “really good anti-viral software,” he quipped.

His team is working through scenarios where they have less than 24 hours to make a new vaccine with this gadget.

He recalled working with Mexico City Mayor Marcelo Ebrard during the H1N1 outbreak in 2009. They couldn’t get the virus out of the metropolis because authorities wouldn’t allow it, he said. That delayed efforts to stem the spread of the virus, and thousands of people died.

Had they been able to digitize it, they could have e-mailed it, and “it could have gone around the world digitally,” allowing researchers to study it and to build a vaccine more quickly, Venter said.

Venter is not the first to try to print biological ware. Scientists have tried to print blood vessels, organs and even burgers.

But whether regulators will allow this futuristic approach to public health is another story. “Regulation will be an interesting aspect of this,” Venter conceded. “We get a lot of spam e-mail. People making fake drugs and selling them for profit. It’s a nasty world out there,” he said.

Mistaking an American Express bill for a scam and deleting it might decrease your credit rating, but downloading, printing and injecting a dangerous retrovirus masquerading as a vaccine is potentially life-threatening. Perhaps printable life technologies might spur the development of better spam filters or e-mail validation software as well.

If Venter’s printer becomes widely available, scientists and engineers would also have to ensure that molecules are printed accurately. Small changes could tweak the structure and make a printed protein behave in ways its designers didn’t intend.

Venter is also experimenting with synthetic life, taking DNA from one type of cell, injecting it into another, and letting that “genetic software” reprogram its host. What that means in the context of DNA desktop manufacturing isn’t clear either, especially when it comes to questions of privacy.

Venter isn’t concerned. “Privacy with medical information is a fallacy,” Venter said. “If everyone’s information is out there, it’s part of the collective.”

He joked that he’s been beaming his genome into space for years, and perhaps the real fear is that an army of genetically engineered Craig Venters would come back to take over the planet.

But the reality is that the debate over whether consumers have the right to know and own their genetic data is a very real one. Many in the scientific establishment, including the government, want to keep genetic data in the hands of experts, said Dr. Eric Topol in a later session at the conference.

“Many doctors … don’t like the idea of Aunt Betty mucking around with her macular degeneration alleles,” said geneticist Misha Angrist of the Institute for Genome Science and Policy at Duke University in an interview with Wired before the conference. “Of course, if we continue to extol the virtues of willful ignorance, then we will never stop thinking of our own genomes as the bogeyman.”


Ant slaves trapped in their oppressors' nests covertly kill off the offspring they are left to care for in acts of rebellion that are part of an evolutionary ant "arms race" (Wired UK)

According to a study published by a team of biologists, ant slaves trapped in their oppressors' nests covertly kill off the offspring they are left to care for in acts of rebellion that are part of an evolutionary ant "arms race"...


The study, published in the journal Evolutionary Ecology, reveals that earlier recorded instances of the behaviours were not isolated acts, but a symptom of a common tendency by enslaved Temnothorax longispinosus worker ants to rebel against their Protomognathus americanus oppressors by means of sabotage.


Lead author on the paper Susanne Foitzik of Johannes Gutenberg University Mainz witnessed the acts in ant populations located in the US in West Virginia, New York and Ohio.


The sabotage resulted in an average survival rate among the captors' offspring of just 45 per cent -- in ordinary circumstances, around 85 per cent of the pupae (a life stage of ants that follows the larval stage) should survive. Instances of the slaves neglecting and tearing apart the vulnerable pupae -- alone, or in gang attacks -- were recorded.


The study deduced that, since the workers cannot reproduce, the clever tactic is designed to weaken the parasitic colony, thus giving opposing colonies a fighting chance. It is a militaristic tactic, rather than a brood defence.


"Growth of social parasite nests is reduced, which leads to fewer raids and likely increases fitness of neighbouring related host colonies," write the paper's authors.


Species such as the Protomognathus americanus, a notorious slavemaker in North America that relies on its subjects to survive, have driven neighbouring populations to devise counter measures that ensure their own survival. No longer could the common worker ant idly sit by and allow the slavemaker to reduce it to a species of day care providers -- what is referred to as "brood parasitism", where slaves are forced to tend to their captors' young.


"Parasite pressure has led to the development of defensive strategies in hosts," says the study, "which, in turn, resulted in the evolution of counter-adaptations in parasites, a process which may lock both species in a coevolutionary dynamic, potentially escalating in an evolutionary arms race."


The ants are enslaved when their colony is attacked and expelled, or their brood stolen. The workers continue with their usual behaviour, despite now being in the Protomognathus americanus' master nest -- they continue to feed and clean the larvae, and the stolen brood become new slaves. When the captors' larvae start to pupate, however, something is triggered in the slaves.


"Probably at first the slaves cannot tell that the larvae belong to another species," Foitzik said in a statement. "The pupae, which already look like ants, bear chemical cues on their cuticles that can apparently be detected."


In West Virginia, New York and Ohio, 27, 49 and 58 per cent of the pupae survived, respectively -- the study explains that the variations are most likely due to varying defensive and offensive tactics developed by different colonies. For instance, the New York host colony was more aggressive. Prior studies have focused on these factors -- how colonies evolve to protect themselves from attack. However, the curious question of what a worker ant that cannot reproduce can do to target a stronger host community that holds all the cards is what drove Foitzik's investigation.
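As a quick sanity check, the three per-site figures quoted here are consistent with the 45 per cent average survival rate reported earlier. A minimal sketch (the site figures come straight from the article; the dictionary and variable names are just for illustration):

```python
# Per-site survival rates of the captors' pupae, as reported in the study
# (unparasitised nests would normally see around 85 per cent survival).
site_survival = {"West Virginia": 27, "New York": 49, "Ohio": 58}

# The mean across the three sites matches the ~45 per cent average quoted above.
average = sum(site_survival.values()) / len(site_survival)
print(round(average, 1))  # 44.7, i.e. roughly 45 per cent
```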


"Based on theoretical considerations it was long thought that defense behaviours of enslaved workers are unlikely to evolve, because slaves cannot escape and reproduce, hence no behaviour could increase their direct fitness," explains the paper.


This study proves that when a Temnothorax longispinosus looks as though it has been defeated and entirely assimilated to its new life, it still has a few tricks up its sleeve. By targeting its captors' brood, the colony is weakened and thus conducts fewer raids on its neighbours, which could be relatives to the Temnothorax longispinosus workers.

Rescooped by Sue Tamani from Geography Education!

Where Does Your Water Come From?


This interactive map documents where 443 million people around the world get their water (although the United States data is by far the most extensive). Most people can't answer this question. A recent poll by The Nature Conservancy discovered that 77% of Americans (not on private well water) don't know where their water comes from; they just drink it. This link has videos, infographics and suggestions to promote cleaner water. This is also a fabulous example of an embedded map using ArcGIS Online to share geospatial data with a wider audience.


Tags: GIS, water, fluvial, environment, ESRI, pollution, development, consumption, resources, mapping, cartography, geospatial.

Via Seth Dixon
Nic Hardisty's comment, October 15, 2012 9:01 AM
I was definitely unaware of where my drinking water came from. This is a nice, user-friendly map... Hopefully it gets updated regularly, as it will be interesting to see how these sources change over time.
Bonnie Bracey Sutton's curator insight, July 1, 2013 3:55 PM

Water is a resource we all depend on. Some of my best studies were on local Chesapeake Bay issues.


The meaning of life › Photos (ABC Science)

Science photographer Malcolm Ricketts' stunning images of plants and animals are the result of almost thirty years documenting the work of University of Sydney scientists.


The craft of scientific photography was very different when Malcolm Ricketts started work at Sydney University nearly 30 years ago.

"When I started I used a 4 x 5 inch camera; there were even still a few glass plates lying around in stock.

"All the graphs and illustrations were done by a school artist in Letraset, then photographed, then multiple copies made and sent off to the editors and reviewers.


"And the same for photographs. Everything was done in black and white," says Ricketts, who documents the work of scientists at the University's School of Biological Sciences.


Today, all Ricketts' photographs — both general and photomicroscopy — are taken with digital cameras, and most are in colour, except those taken on electron microscopes, which are captured in black and white and artificially coloured.

And while the physics of taking a photo hasn't changed, significant changes in microscopy techniques over the last decade have enabled plant and molecular scientists to capture cellular detail in high resolution using green fluorescent proteins. (See photos #2 and #10 in the photo gallery above)

Information, information, information

Capturing a good scientific image is very different to taking commercial and artistic photographs that hit you between the eyes, says Ricketts, who studied photography at TAFE after completing a biology degree.


Detail is critical.


"The photographs back up what the scientists describe."

This work is especially important in areas such as fluorescence microscopy, where scientists identify structures and chromosome banding, and in work on gene signalling and silencing in tobacco plants by Professor Peter Waterhouse (See photo #2 in photo gallery).


"You're conveying the maximum information of what the person who wants the image is trying to obtain. So if it's cellular detail, you go for the maximum detail. You're not looking for that advertising interocular jolt unless you are producing a PR image."


"Ultimately you go for your information, your detail and your art and pleasantness of an image."


One of Ricketts' favourite images — of the Sydney Harbour coral Plesiastrea versipora (See photo #4 in photo gallery) — illustrates his point. The coral is coloured green by algae, so it's not as conventionally beautiful as other images (See photo #3 in the photo gallery), but the shot focuses on a single polyp.

"It's similar to orange soft coral, but it's got a little bit more detail in it."


Capturing the moment

Photographing an image that reflects what the scientist is trying to illustrate takes patience, says Ricketts.


A great example of this is a photo of anarchist bees laying eggs in honeycomb cells (See photo #11 in photo gallery), which Ricketts took for Professor Ben Oldroyd, who studies bee behaviour.


"That's a four second opportunity over the space of four or five hours of just concentrating looking down the camera waiting for an anarchist bee to wander around, shove its bum into a honeycomb cell and lay an egg, and then it's gone."


But science photography is a dying craft, he says.


"The science these days doesn't need the pictures as much as it did in the past.


"More and more and more science is becoming so molecular that scientists have lots of other evidence to back up the story.

"There would be very few scientific photographers left."


Ricketts' images appear as part of a new exhibition, The Meaning of Life, at the Macleay Museum, which chronicles some of Australia's most significant advances in the biological sciences in the last 50 years.


The exhibition is open until 8 March 2013.


Waterloo researchers create ‘world’s largest functioning model of the brain’ | KurzweilAI


A team of researchers from the University of Waterloo has built what they claim is the world’s largest simulation of a functioning brain.

The purpose is to help scientists understand how the complex activity of the brain gives rise to the complex behavior exhibited by animals, including humans.

The model is called Spaun (Semantic Pointer Architecture Unified Network). It consists of 2.5 million simulated neurons. The model captures biological details of each neuron, including which neurotransmitters are used, how voltages are generated in the cell, and how they communicate.
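The article doesn't give the equations, but large spiking models such as Spaun are typically built from leaky integrate-and-fire (LIF) neurons: voltage leaks toward the input current, and the cell fires and resets when it crosses a threshold. A generic, illustrative sketch follows; the parameter values are arbitrary, and this is not the Waterloo team's actual code.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustrative sketch of the
# kind of spiking unit large brain models like Spaun are built from.
# Parameter values here are arbitrary, not those used by the Waterloo team.

def simulate_lif(input_current, dt=0.001, tau_rc=0.02, tau_ref=0.002,
                 v_threshold=1.0):
    """Integrate a constant input current for one second; return spike times."""
    v = 0.0            # membrane voltage
    refractory = 0.0   # time remaining in the refractory period
    spikes = []
    for step in range(int(1.0 / dt)):
        t = step * dt
        if refractory > 0:
            refractory -= dt           # neuron cannot fire yet
            continue
        # Leaky integration: voltage decays toward the driving current
        v += dt / tau_rc * (input_current - v)
        if v >= v_threshold:           # threshold crossed: emit a spike
            spikes.append(t)
            v = 0.0                    # reset voltage
            refractory = tau_ref       # enter refractory period
    return spikes
```

As expected of a LIF unit, a stronger input current yields a higher firing rate, and a sub-threshold current (here, below 1.0) produces no spikes at all.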

Spaun uses this network of neurons to process visual images to control an arm that draws Spaun’s answers to perceptual, cognitive and motor tasks.


Spaun functional architecture. Thick black lines indicate communication between elements of the cortex; thin lines indicate communication between the action selection mechanism (basal ganglia) and the cortex. Boxes with rounded edges indicate that the action selection mechanism can use activity changes to manipulate the flow of information into a subsystem. The open-square end of the line connecting reward evaluation and action selection denotes that this connection modulates connection weights. (Credit: Chris Eliasmith et al./Science)

The claim may appear misleading, since IBM Research – Almaden recently simulated 530 billion neurons and 100 trillion synapses on a supercomputer. But, as the Waterloo researchers note in a Science paper, “although impressive scaling has been achieved, no previous large-scale spiking neuron models have demonstrated how such simulations connect to a variety of specific observable behaviors.”

Human-like multitasking

“The model can perform a wide variety of behaviorally relevant functions. We show results on eight different tasks that are performed by the same model, without modification.

“This is the first model that begins to get at how our brains can perform a wide variety of tasks in a flexible manner — how the brain coordinates the flow of information between different areas to exhibit complex behavior,” said Professor Chris Eliasmith, Director of the Center for Theoretical Neuroscience at Waterloo, Canada Research Chair in Theoretical Neuroscience, and professor in Waterloo’s Department of Philosophy and Department of Systems Design Engineering.

Unlike other large brain models, Spaun can perform several tasks. All inputs to the model are 28 by 28 images of handwritten or typed characters. All outputs are the movements of a physically modeled arm that has mass, length, inertia, etc.

Researchers can show patterns of digits and letters to the model’s eye, which it then processes, causing it to write its responses to any of eight tasks. And, just like the human brain, it can shift from task to task, recognizing an object one moment and memorizing a list of numbers the next. Because of its biological underpinnings, Spaun can also be used to understand how changes to the brain affect changes in behavior, the researchers suggest.

“Spaun provides a distinct opportunity to test learning algorithms in a challenging but biologically plausible setting,” say the researchers in Science. “More generally, Spaun provides an opportunity to test any neural theory that may be affected by being embedded in a complex, dynamical context, reminiscent of a real neural system.”

“In related work, we have shown how the loss of neurons with aging leads to decreased performance on cognitive tests,” said Eliasmith. “More generally, we can test our hypotheses about how the brain works, resulting in a better understanding of the effects of drugs or damage to the brain.”

In addition, the model provides new insights into the sorts of algorithms that might be useful for improving machine intelligence. For instance, it suggests new methods for controlling the flow of information through a large system attempting to solve challenging cognitive tasks.

Professor Eliasmith has written a book on the research: How To Build A Brain will be available soon.



Study tests whether ants can take the heat › News in Science (ABC Science)

Temperature increases expected due to climate change are likely to change the foraging behaviour of the humble ant.

This could have a knock-on effect for global biodiversity, scientists report today.

The common meat ant (Iridomyrmex purpureus) lives in temperate regions of southeastern Australia in such great numbers they have the highest biomass of any animal species in the country.

These ants provide an abundant food source for other species, and play a crucial role in predation, seed dispersal, pollination and nutrient recycling.

Until recently, very little research had addressed how they might be affected by a rapidly changing climate.

Entomologist Nigel Andrew, from the University of New England in Armidale, led a review of more than 1700 studies from 1985 to 2012 into the climate change response of various insect groups.

The review reveals that the response of ants to climate change was significantly under-represented, prompting Andrew and his team to conduct their own study.

The team observed how 1500 meat ant workers responded to artificial rises in temperature.

Pushed over the threshold
As ectotherms, ants cannot regulate their internal body heat, so must exchange heat with their surroundings to maintain an optimal temperature.

Andrew and colleagues found that an increase of just 2°C in an ant's body temperature was the difference between a fully functioning ant and one that was disoriented and constantly falling over; their critical thermal limit is around 46°C. At just 4°C above that limit, the ants could not move at all.

High levels of activity on top of the nest were observed up to 42°C, but as temperatures continued to rise, ants would run back faster to their insulated nest to recover.

When the ground temperature, which during the hottest part of the day can be 40°C hotter than the air temperature, reached 50°C, the ants would reduce their activity. At a ground temperature of 63°C, foraging stopped altogether.
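The body-temperature thresholds reported above can be summarised as a simple classifier. The threshold values and states come from the article; the function itself is purely illustrative:

```python
# Illustrative restatement of the thermal thresholds reported for meat ants.
# The numbers are those quoted in the article; the function is hypothetical.

def meat_ant_state(body_temp_c, critical_limit_c=46.0):
    """Classify a meat ant's condition from its body temperature (°C)."""
    if body_temp_c >= critical_limit_c + 4.0:
        return "immobile"        # 4°C past the limit: cannot move at all
    if body_temp_c >= critical_limit_c:
        return "disoriented"     # at or above the ~46°C critical thermal limit
    return "fully functioning"   # a rise of just 2°C from here can cross the limit
```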

The results will be reported today at the Australian Entomological Society Conference in Hobart.

Changing behaviours
The researchers suggest that rising temperatures could cause meat ants to reduce their daytime foraging, instead foraging at earlier or later parts of the day when they are less efficient, or when their nocturnal predators are more active.

Andrew says other risks to their survival could lie within the nest.

"Are the queen ants digging their nests deeper to deal with the climate? And if the workers are being stressed out, they might not be bringing enough food back. They still need to forage for hundreds of thousands of individuals," says Andrew.

He adds that recovery times for worker ants could also eat into their foraging work and lifespan. "[Their survival] depends on how adaptable they are."

The study concludes, "Ants are one of the most ubiquitous animals in terrestrial ecosystems and play crucial roles in ecosystem functioning on all continents except Antarctica."

"Therefore understanding how ants will respond to climate variation is of fundamental importance in understanding, and sustaining, global biodiversity."


Nanoscale probes light up when they detect specific diseases | KurzweilAI

Nanostructures called BRIGHTs seek out biomarkers on cells and then beam brightly to reveal their locations.

Washington University researchers have developed nanoscale probes that bind to biomarkers of disease and, when hit by an infrared laser, light up to reveal their location.


The probes, called BRIGHTs (Bi-layered Raman-Intense Gold nanostructures with Hidden Tags), comprise gold nanoparticles 20 nm in diameter, covered with molecules called Raman reporters, which are in turn covered by a thin shell of gold that spontaneously forms a dodecahedron (a polyhedron with 12 regular pentagonal faces).

Raman reporters are molecules whose atoms respond to a probe laser by scattering light at characteristic wavelengths. The shell and core create an electromagnetic hotspot (concentrated energy) in the gap between them that boosts the reporters’ emission by a factor of nearly a trillion.

BRIGHTs shine about one hundred billion times more brightly than isolated Raman reporters and about 20 times more intensely than the next-closest competitor probe, says Srikanth Singamaneni, PhD, assistant professor of mechanical engineering and materials science in the School of Engineering & Applied Science at Washington University.


Singamaneni’s lab has worked for years with Raman spectroscopy, a spectroscopic technique that is used to study the vibrational modes (bending and stretching) of molecules.

Laser light interacts with these modes and the molecule then emits light at higher or lower wavelengths that are characteristic of the molecule.

Spontaneous Raman scattering, as this phenomenon is called, is by nature very weak, but 30 years ago scientists accidentally stumbled on the fact that it is much stronger if the molecules are adsorbed on roughened metallic surfaces. Then they discovered that molecules attached to metallic nanoparticles shine even brighter than those attached to rough surfaces.

The intensity boost from surface-enhanced Raman scattering, or SERS, is potentially huge. “It’s well-known that if you sandwich Raman reporters between two plasmonic materials, such as gold or silver, you are going to see dramatic Raman enhancement,” Singamaneni says.


Since different Raman reporter molecules respond at different wavelengths, Singamaneni says, it should be possible to design BRIGHTs targeted to different biomolecules, each with a different Raman reporter, and then monitor them all simultaneously with the same light probe.

And he and Gandra would like to combine BRIGHTs with a drug container, so that the containers could be tracked in the body; the drug would be released only when it reached the target tissue, thus avoiding many of the side effects patients dread.


Future generations will thank us: World's largest network of marine reserves now law


16 November 2012.
The Australian Government has set in law the largest network of marine reserves on earth and shown the world that Australia means business when it comes to protecting our oceans, says Australia's peak marine conservation group, the Australian Marine Conservation Society (AMCS). The government today formally proclaimed 44 marine reserves in a network now covering a third of Australia's ocean territory. 


"Today we have witnessed one of the most significant days in Australia's environmental history," said Fiona Maxwell, AMCS Marine Campaigner.


"The government has bequeathed a legacy by setting in law the world's largest network of marine reserves. Future generations will thank us for our foresight in taking the action needed to protect the richest oceans on the planet."


"Australia has an ocean territory twice the size of our land. We are islanders, coastal people. We love the beach, are proud of our coral reefs, and love to get out on the water, boating, swimming, surfing, diving and fishing. It's a part of our lifestyle. Our oceans provide us with food, oxygen and a place to work and play. Today's announcement will help us keep it that way,"

continued Ms Maxwell. 


Places like the Coral Sea - the 'jewel in the crown' of the reserve network - will now be safeguarded from damaging activities such as oil and gas exploration.


The Coral Sea is a global biodiversity hot spot, recognised for the number and diversity of large ocean predators such as sharks, tunas, marlin, swordfish and sailfish. Protecting this special part of Australia will provide a safe haven for marine life and a globally significant ocean legacy for generations to come.


Maxwell continued, "A national network of marine reserves has been in the making since the 1990s, but the process is not quite over. The Government will soon determine how to manage these important marine areas, and AMCS will be advocating improvements to zoning in a number of critical areas, adequate resources, and best-practice management and surveillance. In developing management plans for the newly proclaimed reserves, the government needs to fully protect critical areas that have been left vulnerable to the impacts of mining and unsustainable fishing practices."


"This is not only an important day for ocean conservation but also for the future of our fisheries. We know that marine reserves help manage and protect marine wildlife and their habitats, but they also provide significant benefits to fisheries," concluded Maxwell.


The public has overwhelmingly supported this process. Over 500,000 people have expressed their support for the creation of a network of marine reserves in Australia.



Our oceans - worth protecting

Oceans cover over 70% of our planet and contain a richer diversity of life than the land does.

Our oceans connect every continent and shape every coast. They control our climate and produce half of the breathable oxygen on Earth.

85% of all global fish stocks are overfished, recovering from historic depletion or fished to their limit (UN FAO, 2010).
A quarter of the world's coral reefs have been destroyed and two-thirds are in serious trouble.


Australia's oceans

Australia has the third largest marine jurisdiction on Earth, giving us a duty to manage our seas responsibly.
The Global Census of Marine Life has demonstrated that Australia's oceans hold the greatest diversity of marine life in the world.
While Australia claims to manage our fisheries better than most, we still have a poor track record, with 13 Commonwealth-managed fish stocks either overfished or subject to overfishing.
Oil and gas exploration remain a major threat to precious marine areas like Ningaloo and the Rowley Shoals off the Western Australian coast.


The science is proven

The science supporting the effectiveness and benefits of marine reserves is clear. In 2010 over 150 Australian scientists wrote an open letter to the Prime Minister outlining the vital role that marine reserves can play in restoring the health of our oceans and their wildlife. The proposed reserve network would also provide long-term benefits to tourism and recreation.

AMCS supports the Australian government's policy of providing fair financial assistance to fishing businesses directly affected by the creation of marine reserves.


This is great news for everyone, not just Aussies!

Sue Tamani

Scooped by Sue Tamani!

How Science Can Build a Better You

How far would you go to modify yourself using the latest medical technology?


IF a brain implant were safe and available and allowed you to operate your iPad or car using only thought, would you want one? What about an embedded device that gently bathed your brain in electrons and boosted memory and attention? Would you order one for your children?


In a future presidential election, would you vote for a candidate who had neural implants that helped optimize his or her alertness and functionality during a crisis, or in a candidates’ debate? Would you vote for a commander in chief who wasn’t equipped with such a device?


If these seem like tinfoil-on-the-head questions, consider the case of Cathy Hutchinson. Paralyzed by a stroke, she recently drank a canister of coffee by using a prosthetic arm controlled by thought. She was helped by a device called Braingate, a tiny bed of electrodes surgically implanted on her motor cortex and connected by a wire to a computer.


Working with a team of neuroscientists at Brown University, Ms. Hutchinson, then 58, was asked to imagine that she was moving her own arm. As her neurons fired, Braingate interpreted the mental commands and moved the artificial arm and humanlike hand to deliver the first coffee Ms. Hutchinson had raised to her own lips in 15 years.


Braingate has worked on only a handful of people so far, and it is years away from being broadly useful. Yet it’s an example of nascent technologies that in the next two to three decades may transform life not only for the impaired, but also for the healthy.

Other medical technologies that might break through the enhancement barrier range from genetic modifications and stem-cell therapies that might make people cognitively more efficient to nano-bots that could one day repair and optimize molecular structures in cells.


Many researchers, including the Brown neuroscientist John Donoghue, leader of the Braingate team, adamantly oppose the use of their technologies for augmenting the nonimpaired. Yet some healthy Americans are already availing themselves of medical technologies. For years millions of college students and professionals have been popping powerful stimulants like Adderall and Provigil to take exams and to pull all-nighters. These drugs can be highly addictive and may not work for everyone. While more research is needed, so far no evidence has emerged that legions of users have been harmed. The same may be true for a modest use of steroids for athletes.


Which leads us to the crucial question: How far would you go to modify yourself using the latest medical technology?


Over the last couple of years during talks and lectures, I have asked thousands of people a hypothetical question that goes like this: “If I could offer you a pill that allowed your child to increase his or her memory by 25 percent, would you give it to them?”


The show of hands in this informal poll has been overwhelming, with 80 percent or more voting no.

Then I asked a follow-up question: “What if this pill was safe and increased your kid’s grades from a B average to an A average?” People tittered nervously and looked around to see how others were voting as nearly half said yes. (Many didn’t vote at all.)

“And what if all of the other kids are taking the pill?” I asked. The tittering stopped and nearly everyone voted yes.


No pill now exists that can boost memory by 25 percent. Yet neuroscientists tell me that pharmaceutical companies are testing compounds in early stage human trials that may enable patients with dementia and other memory-stealing diseases to have better recall. No one knows if these will work to improve healthy people, but it’s possible that one will work in the future.


More intriguing is the notion that a supermemory or attention pill might be used someday by those with critical jobs like pilots, surgeons, police officers — or the chief executive of the United States. In fact, we may demand that they use them, said the bioethicist Thomas H. Murray. “It might actually be immoral for a surgeon not to take a drug that was safe and steadied his hand,” said Mr. Murray, the former president of the Hastings Center, a bioethics research group. “That would be like using a scalpel that wasn’t sterile.”


HERE is a partial checklist of cutting-edge medical-technology therapies now under way or in an experimental phase that might lead to future enhancements.


More than 200,000 deaf people have had their hearing partially restored by an implant that receives sound waves and uses a minicomputer to process and deliver them directly to the brain via the cochlear (auditory) nerve. New and experimental technologies could lead to devices that allow people with or possibly without hearing loss to hear better, possibly much better.


The Israel-based company Nano Retina and others are developing early-stage devices and implants that restore partial sight to the blind. Nano Retina uses a tiny sensor backed by electrodes embedded in the back of the eye, on top of the retina. They replace connections damaged by macular degeneration and other diseases. So far images are fuzzy and gray-scale and a long way from restoring functional eyesight. Scientists, however, are currently working on ways to mimic and improve eyesight in people and in robots that could lead to far more sophisticated technologies.


Engineers at companies like Ekso Bionics of Richmond, Calif., are building first-generation exoskeletons that aim to allow patients with paralyzed legs to walk, though the devices are still in the baby-step phase. This summer the sprinter Oscar Pistorius of South Africa proved he could compete at the Olympics using artificial half-leg blades called Cheetahs that some worried might give him an advantage over runners with legs made of flesh and blood. Neuroscientists are developing more advanced prosthetics that may one day be operated from the brain via fiber optic lines embedded under the skin.


For years, scientists have been manipulating genes in animals to make improvements in neural performance, strength and agility, among other augmentations. Directly altering human DNA using “gene therapy” remains dangerous and fraught with ethical challenges. But it may be possible to develop drugs that alter enzymes and other proteins associated with genes for, say, speed and endurance or dopamine levels in the brain connected to improved neural performance.


Synthetic biologists contend that re-engineering cells and DNA may one day allow us to eliminate diseases; a few believe we will be able to build tailor-made people. Others are convinced that stem cells might one day be used to grow fresh brain, heart or liver cells to augment or improve cells in these and other organs.


Not all enhancements are high-tech or invasive. Neuroscientists are seeing boosts from neurofeedback and from video games designed to teach and develop cognition, as well as from meditation and improvements in diet, exercise and sleep. “We may see a convergence of several of these technologies,” said the neurologist Adam Gazzaley of the University of California at San Francisco. He is developing brain-boosting games with developers and engineers who once worked for LucasArts, founded by the “Star Wars” director George Lucas.


Which leads to another question: How far would you go to augment yourself? Would you replace perfectly good legs with artificial ones if they made you faster and stronger? What if a United States Agency for Human Augmentation had approved this and other radical enhancements? Would that persuade you?


Ethical challenges for the coming Age of Enhancement include, besides basic safety questions, the issue of who would get the enhancements, how much they would cost, and who would gain an advantage over others by using them. In a society that is already seeing a widening gap between the very rich and the rest of us, the question of a democracy of equals could face a critical test if the well-off also could afford a physical, genetic or bionic advantage. It also may challenge what it means to be human.


Still, the enhancements are coming, and they will be hard to resist. The real issue is what we do with them once they become irresistible.

David Ewing Duncan is a journalist who has contributed to the science section of The New York Times.

Scooped by Sue Tamani!

The most important education technology in 200 years | KurzweilAI

Four of the 19 Coursera courses on AI and robotics (credit: Coursera)

Education is about to change dramatically, says Anant Agarwal, who heads edX, a $60 million MIT-Harvard effort to stream a college education over the Web, free, with plans to teach a billion students, Technology Review reports.


“Massive open online courses,” or MOOCs, offered by new education ventures like edX, Coursera, and Udacity, to name the most prominent (see “The Crisis in Higher Education”) will affect markets so large that their value is difficult to quantify.


A quarter of the American population, 80 million people, is enrolled in K–12 education, college, or graduate school. Direct expenditures by government exceed $800 billion. Add to that figure private education and corporate training.


At edX, Agarwal says, the same three-person team of a professor plus assistants that used to teach analog circuit design to 400 students at MIT now handles 10,000 online and could take a hundred times more.


Coursera, an alliance between Stanford and two dozen other schools, says that 1.5 million students have signed up.


Changing the world


The rise of the MOOCs means we can begin thinking about how free, top-quality education could change the world.


Khan’s videos are popular in India, and the MOOC purveyors have found that 60 percent of their sign-ups are self-starters from knowledge-hungry nations like Brazil and China. Nobody knows what a liberal application of high-octane educational propellant might do. Will it supersize innovation globally by knocking away barriers to good instruction? Will frightened governments censor teachers as they have the Web?


The eventual goal isn’t to stream videos but to perfect education through the scientific use of data. Just imagine software that maps an individual’s knowledge and offers a lesson plan unique to him or her.

Scooped by Sue Tamani!

Woman’s Ear Regrown In Her Forearm

In one of the more bizarre, but nevertheless miraculous, feats of modern medicine, doctors at Johns Hopkins University were able to give a woman her ear back by first growing it beneath the skin of her forearm.


Sherrie Walter visited her dermatologist because of a pain in her left ear. It turned out that the pain was being caused by basal cell carcinoma. By October 2011 the cancer had spread to her ear canal, making the amount of tissue that needed to be removed extensive. In a 16-hour procedure, doctors removed not just her entire ear but also neck glands, lymph node tissue, and a portion of her skull.

Dr. Patrick Byrne, associate professor in otolaryngology-head and neck surgery at Hopkins, explained to Walter that prosthetic ears don’t always stay fixed to the head – that, sometimes, they fall off. And given the extent of the procedure, her situation was worse. “Sherrie’s skull bone had been removed,” he told ABC News, “so the only way of attaching a prosthetic would be through tape and glue. We both agreed that wasn’t an option.”

The second option was to reconstruct the ear. But the skin needed for reconstruction comes from the face and neck, and the initial surgery had already removed most of the skin from those areas. So Byrne suggested a procedure seemingly out of a science fiction movie: reconstruct an incomplete ear using the skin they had, then plant the ear beneath the skin of her forearm, where it could survive and be given time to grow.

Cartilage from Walter’s rib cage, along with skin and arteries from other areas of her body, was placed in her arm, and the skin was left to grow around the tissue. After four months in her arm, the new ear was taken out and reattached this past March. Since the reattachment, the team has been sculpting the skin and cartilage so that the shape matches the right ear. The sculpting process ended Oct. 2, the last of a series of surgeries spanning 20 months.

Walter with the regrown and reattached ear before doctors had a chance to sculpt it to match the other ear.

According to the Baltimore Sun, the procedure is believed to be the most complicated ear reconstruction ever performed in North America.

The bizarre procedure had actually been conceived of a while ago. But, according to Dr. Byrne, they were waiting for a patient who was the right age, in good health and had a “good support system.” Walter turned out to be that patient.

The procedure did have its share of difficulties, however, both for the patient and for doctors. The skin over Walter’s forearm was initially too tight to implant the ear. To loosen the skin, a balloon filled with saline was placed beneath the skin to stretch it out. It was a painful process for Walter who had to wear the balloon for several weeks.

Basal cell carcinoma is the most common form of cancer in the United States. In 2006 an estimated 2,152,500 people were treated for nonmelanoma skin cancers, three-quarters of which are basal cell carcinomas. To put that in perspective, the number of cases for all other types of cancer that year was estimated at 1.4 million. Until we have a miracle patch cure, reconstructive surgery is going to continue to be extremely important for people with skin cancer. Thanks to daring physicians like Dr. Byrne, they now have one more option.

Scooped by Sue Tamani!

Was Hurricane Sandy caused by climate change? › News in Science (ABC Science)

Experts say they cannot give a black-or-white answer for one of the most complex issues in meteorology.
Scooped by Sue Tamani!

Scientists unearth 'ostrich' dinosaurs › News in Science (ABC Science)

Ostrich-like dinosaurs roamed the Earth millions of years ago using feathers to attract a mate or protect offspring rather than for flight.
Scooped by Sue Tamani!

Harvard launches two free online courses, more than 100,000 sign up worldwide | KurzweilAI

Harvard University’s first two courses on the new edX digital education platform launched this week, as more than 100,000 learners worldwide began taking dynamic online versions of CS50, the College’s popular introductory computer science class, and PH207, a Harvard School of Public Health course in epidemiology and biostatistics.


For Marcello Pagano, a professor of statistical computing who is co-teaching PH207x, the potential to teach so many students at once is amazing. “I figure I’d have to teach another 200 years to reach that many students in person,” he said.


In May, Harvard and the Massachusetts Institute of Technology (MIT) announced the launch of the not-for-profit educational enterprise edX, which features learning designed specifically for interactive study via the Web.


Since then, Harvard has established HarvardX, a University-based organization that supports Harvard faculty as they develop content for the edX platform.


Although online courses have been around for years, the professors working with the platform say that HarvardX forces them to get creative about crafting more-active learning environments.




Not streaming lectures


Rather than just broadcasting full lectures on the Internet, the HarvardX classes incorporate short video-lesson segments, along with embedded quizzes, immediate feedback, student-ranked questions and answers, online laboratories, and student-paced learning.


Certificates of mastery will be available for those motivated and able to demonstrate their knowledge of the course material.


“This is the future,” Pagano said. “What you have in classrooms today, I think of as a play. What we have now, with HarvardX, is the movie. You can swap out scenes, edit, perfect it. This is the way we communicate now.


“They can stop a lecture in the middle and ponder a concept,” he said. “They can replay if they don’t understand something, and they can speed up when they grasp something quickly. You can’t do that in a lecture hall with 100 students. This is much more individualized.”


For CS50x instructor David Malan, director of educational innovation and manager of pedagogical innovation, being able to produce short videos on key concepts means that students get a more consistent, polished experience than might work in a lecture hall. The library of lectures also liberates him to explore other concepts and go more in-depth in his on-campus class.


“I don’t see the lecture or section going away,” he said. “Rather, students can choose the learning process that works best for them, and have the option of exploring other topics even if we don’t have the time to cover them in class.”


Improving instructional methods


But students aren’t the only ones who stand to learn through edX. The platform will provide a trove of data for Harvard and MIT researchers, who will study patterns of student achievement in the hope of making course material and methods more effective for students both on and off campus. Concrete data from large samples will be a lot easier to read than the puzzled expressions an instructor might see in a lecture hall.


“It’s very exciting to have tools to give us insights into patterns of behavior,” Malan said. “The data will allow us to make statistically significant inferences that aren’t always possible with smaller samples.”


Members of the HarvardX leadership team are enthusiastic about the edX partnership’s potential to transform pedagogy in classrooms and living rooms in Massachusetts, across the nation, and around the globe.


“As well as expanding access to high-quality, online learning content for new communities of learners, we believe that edX will strengthen the on-campus learning experience,” said Dean Michael D. Smith of the Faculty of Arts and Sciences (FAS) and a member of the edX board of directors. “For instance, if you sit in the back of a classroom today, you’ll see that students are already using technology to learn.

“EdX gives faculty the tools to think in new ways about the role technology plays in their teaching and creates new opportunities for research that can form the basis of more effective teaching and learning methods.”

Scooped by Sue Tamani!

Cryonics, avatars or medicine: a transhumanist's dilemma (Wired UK)

Life-extending technologies are getting more lab time and investment than ever before, and with experts in the field proclaiming the knowledge is just a few decades away, you'll want to be around for it...


Over the past decade, the main areas of research -- brain emulation, regenerative medicine and cryonics -- have gradually been departing the realms of science fiction and making a name for themselves in scientific journals.


Back in 2009, when Avatar suggested that people could one day upload their brains to an invincible body-double, it seemed like something only James Cameron could dream up. Then a student in Israel controlled a robot with his mind from 2,000 km away.


In 2009 Aubrey de Grey announced -- to more than a few raised eyebrows -- that the first person to live to 1,000 thanks to regenerative medicine was probably already alive -- and by 2012 a four-year old became the first person to receive a life-saving blood vessel made from her own cells.


And at around the same time the horrendous 1997 film Batman & Robin painted cryonics as a field best reserved for psychotic villains, Gregory Fahy and William Rall announced the development of the first cryoprotectant able to vitrify the human body slowly enough that ice crystals don't form and cause tissue damage. Wired spoke with leading proponents of each field to find out if we could be convinced to fork out £50,000 to have our brains put on ice. (Wired and Tired by Luke Robert Mason, director of Virtual Futures and advisor to Humanity Plus.)

Scooped by Sue Tamani!

Behind the Nobel Prize: the stem cell revolution - The Drum Opinion (Australian Broadcasting Corporation)

While they might not be household names, Gurdon and Yamanaka have helped kick-start an entire new field of regenerative medicine, writes Tim Dean.


Regenerative medicine, as they say, is a growth industry, although it's still in its embryonic stages.


Puns aside, many see a not-too-distant future where the ability to grow bespoke cells to replace damaged or diseased tissue is another key tool in our medical kit. One day we might even be able to clone entire organs for transplantation should our original ones fail.


That said, one day we may also be able to clone entire humans too, a prospect that is as existentially and ethically troubling as it is scientifically intriguing.


The technical and regulatory hurdles to overcome are not insubstantial, and there are still some gnarly ethical issues to manage, but the tremendous therapeutic potential of stem cells means there are many people beavering away to make regenerative medicine a reality.


And this vision would have been impossible without the contributions made by two pioneering scientists, who were awarded the Nobel Prize in Physiology or Medicine last week.


The first is Sir John Gurdon, who helped overturn some established scientific dogma of his day and in the process opened the door to the possibility of creating stem cells - and cloning whole organisms. Before delving into his key discovery from half a century ago, it's worth stepping back and reflecting on the state of play at the time to reinforce how genuinely revolutionary Gurdon's discovery was.


We know that human beings are made up of well over 200 different specialised cell types - everything from skin cells, bone cells, neurons, liver cells, immune cells and so on. Their diversity is truly startling.


Yet we all start out as just a single fertilised egg cell: a zygote. Incredibly, we manage to transform from that single cell into a fully developed human being, with all the right specialised cells in all the right places. And at the core of (almost) every one of our 50 trillion-odd cells is a nucleus containing an identical genetic blueprint.


It'd be like giving 1,000 workers in a corporation copies of every job description in the company without telling them which one is theirs, and expecting them to spontaneously figure out where they should work. Only scaled up by a few million times.


By the 1960s, scientists had already gone a long way towards understanding how a zygote develops into a whole organism. They had also uncovered some of the processes that enable cells to differentiate, transforming from early 'pluripotent' (from the Latin plurimus, meaning 'very many,' and potens, meaning 'having power') stem cells to increasingly more specialised types.

However, a series of experiments from the 1950s indicated that the specialisation process was a one-way street: once a cell had differentiated, it could never be turned back. The experiments looked solid, so that became the dogma and few even thought to question it.


In stepped John B. Gurdon, who was blessed with two characteristics that make for truly great scientists: a healthy maverick streak and a reverence for the empirical method.

His Nobel Prize-winning experiment was actually conducted in the late 1950s when he was a graduate student, and wasn't published until 1962 - precisely 50 years ago. He began with the observation that the nucleus of most cells contains the genetic blueprint for the whole organism, and wondered what would happen if you put that blueprint in an appropriate egg cell.


So he took egg cells from the frog species Xenopus laevis and stripped them of their nuclei. He then took the nucleus from an intestinal cell of an adult frog and placed it in an enucleated egg. The end result, remarkably enough, was a bunch of wriggling little tadpoles, each one genetically identical to the adult 'parent' that had donated the intestinal cell.


Gurdon had created the world's first clones by means of a technique called somatic cell nuclear transfer (SCNT), and in the process had rewritten the textbook on cellular development and differentiation.


Gurdon's discovery would eventually lead to the cloning of more complex organisms, including the infamous Dolly the sheep in 1997, and since then a menagerie of other species, including mice, cows, pigs, wolves and African wildcats. It could, in principle, also be used to clone a human, although such an experiment has been outlawed in most countries around the world, including here in Australia.


However, it is legal to use SCNT to create new early-stage embryos from human eggs, a process called 'therapeutic cloning'. Embryonic stem cells can then be gathered from the embryo to be used for research or, potentially, therapeutic treatments. There are strict guidelines around how such embryos can be created and how they can be used, but the creation and destruction of human embryonic material causes consternation to many and is an ongoing ethical issue.


Thus 1962 proved to be a big year in stem cell biology, for more reasons than one: in that year Gurdon's key paper was published; and Shinya Yamanaka was born.


Decades later, in the early 2000s, Yamanaka added another piece to the cellular differentiation puzzle by figuring out how to take fully differentiated adult cells and step them back to an earlier undifferentiated state.


Yamanaka and his team discovered that while all cells are built upon the same genetic blueprint, specific 'transcription factors' govern which bits of the blueprint are called upon to direct the cell. By meddling with these transcription factors, he could effectively turn back the clock, transforming adult cells into pluripotent cells - known as induced pluripotent stem cells (iPSCs).


Where Gurdon's method required an egg cell, and the harvesting of stem cells from an early stage embryo, one benefit of Yamanaka's approach is that (virtually) any old adult cell would do the trick, thus bypassing one potential ethical and regulatory hurdle in producing stem cells for research or therapeutic applications.


Together these two discoveries opened the door to the possibility of producing pluripotent stem cells on demand in order to treat injury or disease. However, we're still a fair way from seeing any stem cell therapies employing either technique become available at our local clinic. Before that happens there are some substantial hurdles to overcome.


For one, SCNT might now be a well-established process, but it is still rather inefficient: it took 277 attempts to produce Dolly. The process also has a tendency to produce abnormalities in the cloned organism, although the technology is constantly improving. There's also the ethical problem of requiring eggs and the destruction of embryos to harvest stem cells.


iPSCs avoid the ethical concerns of SCNT because they use adult cells and don't involve embryos, but the technique is still in its infancy, so to speak. Producing a population of healthy iPSCs remains a tremendous technical challenge, and it appears that iPSCs are prone to forming tumours. Clearly, more work must be done before iPSCs are ready for the clinic.


What iPSCs can be used for right now, though, is gaining a better understanding of disease. Basically, you can take a diseased cell - say a malfunctioning insulin-producing beta cell - and revert it to a pluripotent state. You can then 'recapitulate' the disease and see how it unfolds, hopefully pinpointing where the cell begins to malfunction. You can also use iPSCs for drug development, applying various potentially therapeutic agents to them to see how they respond, all in the lab rather than in the body.


A final stage would be to use the process to produce healthy cells, say by growing happy new beta cells, which could be transplanted back into the patient - although even here there are challenges in making sure the new cells aren't rejected by the immune system. Yet, presuming the technical barriers can be overcome, regenerative medicine could radically transform many areas of healthcare.


As a rule of thumb (Peace Prizes notwithstanding) the Nobel committee doesn't hand out its gongs lightly. It often takes decades for a discovery to be deemed of sufficiently lasting impact to get the nod. In fact, it's a rarity for a researcher to receive a Nobel so soon after publishing their breakthrough research, as Yamanaka has.


While they might not be household names, Gurdon and Yamanaka have helped kick-start an entire new field of regenerative medicine. Science being the fickle and unpredictable process it is, it might take a decade or more before stem cell technology translates into real therapeutic benefits, but whether it's us, our children or our grandchildren, we'll owe Gurdon and Yamanaka a great deal of thanks that a Nobel Prize can only partly express.


Tim Dean is a science journalist and editor of Australian Life Scientist magazine.
