OMICs for R&D
Scooped by Chatu Jayadewa!

Crowdsourcing and Crowdfunding Explained

A short animated film, narrated by founder Carl Esposti, describing the four different ways crowdsourcing works and...
Rescooped by Chatu Jayadewa from Digital Health!

What separates the mobile health app "millionaires" from the rest? - MedCity News


A global survey focusing on mobile health apps shows that the majority of the companies and developers who produce them are dissatisfied with the reception their apps receive on the market, and say performance falls short of their goals. The report by German market research company research2guidance also indicated a changing profile of the developers and businesses behind these apps, along with their priorities. The survey also sought to explore some of the distinguishing characteristics of successful mobile health apps.

Via Alex Butler
Denise Silber's curator insight, November 26, 2015 5:18 AM

As you know, many mobile apps for health fall short of their creators' expectations in terms of downloads. A recent study shows that distributing via hospitals is more successful than reaching out to individuals.


Solving The Problem Of Too Much Cancer Genomics Data


Cancer is not one disease – that’s why the effort to find a cure has been so challenging. Each type of cancer is different, and tumors vary genetically not just from type to type but from patient to patient. To better understand the molecular abnormalities that drive cancers, scientists have turned to next-generation sequencing (NGS), new technology that sequences DNA much more quickly and cheaply than previously used sequencing techniques.

Genomic sequencing generates a lot of data. In some cases, this data has been used to inform doctors about which cancer drug might work better for a certain patient. But the technology is still in its infancy, and genomic sequencing produces so much information that physicians don’t yet know how to best analyze and interpret it. That’s why seven major U.S. and European cancer institutions are banding together to aggregate the massive amounts of data produced by patients’ sequencing tests in an effort to improve treatment decisions. A year and a half after the idea was hatched, the project already has more than 15,000 genomic profiles.

Announced by the American Association for Cancer Research (AACR), the pilot project is dubbed GENIE – short for Genomics, Evidence, Neoplasia, Information, Exchange – and is the result of 18 months of discussions with cancer centers that are amassing genomic data at a faster rate than they know what to do with. The initiative is the brainchild of Dr. Charles Sawyers, a researcher at Memorial Sloan Kettering Cancer Center in New York City and past president of AACR.

“We’ve got amazing technology being used in the clinic ahead of our knowledge of how to use it. So the solution is to gather evidence as fast as possible on all these experiences happening at cancer centers,” Sawyers said in an interview.

Several projects on the research side have made it their mission to understand the various molecular aberrations that drive cancer progression, most notably the Cancer Genome Atlas and the International Cancer Genome Consortium. But Sawyers said GENIE is unique in that it’s attempting to translate those basic science findings to a clinical setting to spot trends in cancer treatment and survival rates. To do that, GENIE will pool certified sequencing data from the participating institutions into a single registry and link these data to certain clinical outcomes, which are measures of a patient’s health or quality of life. (See this infographic developed by AACR for more details.)

Each institution will have exclusive access to its own data for a six-month period before the rest of the consortium gains access. This proprietary period allows researchers at GENIE member institutions to submit any significant discoveries they gleaned from tumor sequencing data for publication. Then, after an additional six months, all data generated by the project will be made available to the broader research community, with the first release of data tentatively scheduled for Nov. 6, 2016.
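As a rough illustration of the tiered-access schedule described above (six months exclusive to the submitting institution, six more for the consortium, then public release), the windows can be sketched as simple date arithmetic. This is a hypothetical helper, not part of any GENIE tooling, and six months is approximated as 183 days:

```python
from datetime import date, timedelta

# Six months approximated as 183 days; the real policy presumably
# counts calendar months, so treat these dates as illustrative only.
SIX_MONTHS = timedelta(days=183)

def access_windows(submitted: date) -> dict:
    """Return the date at which each audience gains access to a dataset."""
    consortium_from = submitted + SIX_MONTHS      # end of exclusive period
    public_from = consortium_from + SIX_MONTHS    # broader research community
    return {
        "institution": submitted,   # immediate access to its own data
        "consortium": consortium_from,
        "public": public_from,
    }

windows = access_windows(date(2015, 11, 6))
```

With a late-2015 submission, the public tier opens roughly a year later, in line with the tentative first-release timing mentioned above.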


Ultimately, the GENIE registry could be used in a few significant ways. For one, it could help confirm or refute that specific mutations predict patient responses to certain drugs, revealing which patients are likely to do better or worse over time. The registry could also help identify new patient populations or expand existing patient groups that could benefit from drugs already approved by the U.S. Food and Drug Administration. In addition, the project could pinpoint new drug targets and markers of disease, or biomarkers.

The current participating institutions include: The Center for Personalized Cancer Treatment in the Netherlands; Dana-Farber Cancer Institute in Boston; Institut Gustave Roussy in France; Johns Hopkins University’s Sidney Kimmel Comprehensive Cancer Center in Baltimore; Memorial Sloan Kettering Cancer Center in New York; Princess Margaret Cancer Centre in Canada; and Vanderbilt-Ingram Cancer Center in Nashville.

Sawyers said these initial seven cancer centers were chosen because they have invested in NGS technology and are already performing routine testing for cancer patients. “We’re all generating this data anyway, why shouldn’t we put it on the table and mine it for its potential value?” Sawyers said.

So far, the seven institutions have collected more than 17,000 genomic profiles. That number will grow as new patients are seen at each cancer center and subsequently added to the registry. Sawyers hopes to collect 100,000 genomic records within five years.

The database will include patients who are undergoing regular treatment for metastatic disease – that is, cancer that has spread from its original site to another part of the body. Patients have to give their consent in order for their data to be included in the registry. Sawyers said patients should not be concerned with privacy issues though, because the raw sequence data won’t be made public for anyone to see. Instead, what will be housed in the registry is a limited data set that will not include identifying patient information. A second layer of information, which includes gender, age, location and general health history, would only be available to qualified researchers that request it.
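A minimal sketch of the two-tier model described above, using hypothetical field names rather than GENIE's actual schema: direct identifiers never leave the institution, the registry tier carries only the de-identified profile, and quasi-identifiers such as gender, age and location sit in a second tier reserved for qualified researchers:

```python
# Hypothetical field names, for illustration only.
IDENTIFIERS = {"name", "address", "medical_record_number"}
RESTRICTED = {"gender", "age", "location", "health_history"}

def split_tiers(record: dict) -> tuple[dict, dict]:
    """Drop direct identifiers, and split what remains into an open
    limited dataset and a restricted tier for qualified researchers."""
    open_tier = {k: v for k, v in record.items()
                 if k not in IDENTIFIERS | RESTRICTED}
    restricted_tier = {k: v for k, v in record.items() if k in RESTRICTED}
    return open_tier, restricted_tier

record = {
    "name": "Jane Doe",
    "medical_record_number": "MRN-0001",
    "gender": "F",
    "age": 54,
    "location": "Boston",
    "health_history": "stage IV NSCLC",
    "variants": ["KRAS G12D", "TP53 R175H"],
}
open_tier, restricted_tier = split_tiers(record)
```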

As it grows, GENIE promises to help scientists make big breakthroughs with Big Data. Benefiting “the greater good,” as Sawyers puts it, is pretty good motivation.


Time for pharma to think big on genomics research

Sir John Chisholm, executive chair of Genomics England, is very clear about what the project he is overseeing is – and what it is not.

“We are not a clinical trial or a research project. We are formally classified, including by Government, as a ‘transformational project’,” he tells Pharmafocus.

It’s this theme of transformation that runs through our meeting with him, at the Genomics England office in central London, in the run-up to the project’s launch. Sir John’s task is to transform several traditionally conservative aspects of the NHS, R&D and the pharma industry in one fell swoop.

“We are on the leading edge of the transformation of the NHS, from one that is largely mediated by ‘greybeards’ that have been in the business for years,” he tells Pharmafocus.

“When the digital revolution goes through an industry it changes things; because then you have digitally-driven expertise and you can capture knowledge and then dispense it in a very orderly way – this for sure will transform the health industry, and indeed the pharma industry that supports it.”

As its head, Sir John is leading Genomics England – which was set up and established as a company by the Department of Health – in its promise to deliver a flagship project that will sequence 100,000 whole genomes from NHS patients and their families by the end of 2017.

At its inception in 2013 – ten years after the completion of the Human Genome Project - health secretary Jeremy Hunt promised the venture would “kickstart the development of a UK genomics industry.” The project is a collaboration with the National Institute for Health Research, NHS England, Public Health England and Health Education England.

Two years later and Sir John, a former chair of the Medical Research Council who was appointed to lead Genomics England after a career in engineering and the defence industry, says the time is right for patients to start seeing the fruits of these kinds of Big Data projects.

We are, it seems, at a stage where it might just be financially feasible to gather together enough genomes to make a major effort to begin to understand the health significance of our genes and their regulation.

Delivering on the promise

“The distinctive feature of the 100,000 Genomes Project is that it’s not just genomes; they are fully integrated with routine clinical data drawn from the NHS. We are fortunate to be able to draw on these as we have very strong genomic science here in the UK.”

This unique combination is of particular interest to the pharma industry, Sir John notes. “It’s well understood in the pharma world that getting hold of routine clinical data is not so easy. So this tackles two big problems for pharma: potentially providing access to 100,000 genomes, and putting that together with quality clinical data.”

The UK is not the only country with ambitious Big Data plans, although Genomics England does eclipse them in scale – it is around twice as large as any other database. For example in the US, President Obama has pledged to spend $215 million on a personalised medicine initiative, which includes commitments to scale up efforts to identify genomic drivers in cancer.

But Sir John says the availability of NHS records offers the UK a significant head start in this area.

“Obama won’t be able to deliver it without buy-in from different people, healthcare insurers, or going back to patients and getting consent – it’s a nightmare if you don’t have a single system to operate from. We have the NHS and very strong genomic science in the UK, in the golden triangle and in Edinburgh.”

This has already been tapped into to create 11 genomic medicine centres that have so far sequenced around 3,000 full human genomes, at a rate of about 60 a day, with a focus on cancers, rare diseases and some pathogens.

Importance of commercial gains

“Our aim is to generate patient benefit from the pertinent findings we intend to make from our analyses. Another aim, which is just as important, is to use the project to generate industrial and commercial activity.” This was a clear directive from the start of the project, and makes collaboration with pharma – who can deliver commercial gains from the outputs of the project – a necessity, not just a nice-to-have.

“The project has to be used to generate and ferment industrial and commercial activity. That was very much part of the remit from the very beginning. The knowledge that we derive from what we’re doing must have a good chance of being translated into either therapeutics or diagnostics, with material financial gain.”

In March, Genomics England announced that it will partner with 10 pharma and biotech companies to develop new diagnostics and treatments for people with rare diseases and cancers, using the genomic sequences gathered through the 100,000 Genomes Project.

The companies – including AbbVie, Alexion Pharmaceuticals, AstraZeneca, Biogen, GSK, Roche, Takeda and UCB – will work in a public-private partnership called the GENE Consortium to analyse the DNA sequences and find new research targets.

The companies are working on a pilot programme to look at the first 5,000 linked datasets, and received the first tranche of the data in August. The collaboration will be crucial in driving the project forward and delivering tangible results in the form of new drugs and diagnostics, Sir John says.

“The companies in the consortium bring a huge knowledge of candidate molecules, understanding of the regulatory environment, and pharmacological demand that wouldn’t naturally come to us; the things that would add the greatest value to the health community.

“Having people from pharma, who live the translation model, engaged right at the start will enable us to shape what we’re doing, to be maximally translatable and useful in terms of therapies and diagnostics.”

Changing the pharma business model

However, not all of the industry is geared up to take advantage of a Big Data project of Genomics England’s size and scale. And while there are big, global companies in the GENE Consortium, many of them may lack the nimbleness, or perhaps willingness, to be able to seize upon a radically different business model.

A 2013 report by McKinsey and Company found that nearly a third of drugs under clinical development were associated with a genomic or proteomic marker, a rise of 50% over the previous two years. But within the top 15 pharma companies, there was significant variation in investment in biomarker research.

The leaders allocated 3-4% of their R&D spend to these biomarkers, compared to 0.5% in the lowest spenders – an eight-fold difference. It also found that – with the exception of a few companies – the organisational effectiveness needed to maximise this way of working was poor, with companies organised in an ad hoc, patchwork manner. This is something Sir John recognises.

“Not many pharma companies have been constructed around the model of the potential for genomic medicine,” he says. “It doesn’t fit neatly into most pharma companies’ business model – the idea that it’s feasible to get a molecular diagnosis for a patient. It’s not something that companies have previously based their models on or has formed part of their thinking.

“At executive level, pharma companies are all aware that yesterday’s business model has to change,” he adds. “There’s no shortage of understanding that this is something that really has to be explored. But do companies – today – have a blueprint of how to make that happen, how to organise their business and the skills they are going to have to have? Of course they don’t. This change is going to affect the whole of the industry, as well as patients and regulators. But pharma companies can only go as fast as their customers want and is right for them; there’s work to do on all sides.”

Need for change in genomic medicine

The change in the system will also extend to drug regulators, and the NHS – which Sir John feels still has plenty of work to do to meet another of the Project’s aims: to introduce genomic medicine into the health service. Time will tell whether it is ready, and of course there will be problems.

He says: “To say that everyone in the NHS is already an expert in genomic medicine would be an outrageous claim, but those parts of the system that are most closely aligned – the NHS genetics service, for example – are well engaged. The genomics centres exist now where they didn’t before. That in itself is quite a revolutionary step, although to be fair they are only just now getting themselves up to speed.

“It probably wasn’t the most popular announcement ever made, because it involves a change. But the Five-Year Forward View [published by Sir Simon Stevens on the future of the health service] is formed around the idea that technologies like genomic medicine will provide an information base for the NHS to provide more personalised diagnoses.”

Short-circuiting pharma R&D efforts

After changing minds in the pharma industry and the NHS on the value of genomics and big data, Sir John has drug regulators on his list of people to convert to a new way of thinking, to embrace the genomics revolution.

He envisions a time when regulators require fewer or different regulatory trials to the ‘gold standard’ randomised controlled trials typically demanded of pharma companies today, to trials that focus instead on evaluating the accuracy of a molecular diagnosis.

“We are working with regulators so they might understand the model better. Clearly if you can understand enough at the molecular level to have worked out how to treat a condition – because we know what molecule, enzyme, or protein we have to target – and you can bring that out of the dataset, then you’re short-circuiting the whole process.”

He feels that by ‘short-circuiting’ the drug R&D process in this way, pharma companies who adapt to a new business model could drastically reduce their R&D costs – and ultimately reduce drug prices by taking advantage of ‘potentially vast economies of scale’.

Although most companies are not geared up for the change, the one example he does cite as an exception is Alexion’s Soliris (eculizumab) – commonly dubbed ‘the world’s most expensive drug’. Soliris was developed according to a molecular diagnostics model yet costs £330,000 for each of the estimated 200 people with the extremely rare disease, atypical Haemolytic Uraemic Syndrome (aHUS), who will be treated every year.

“Alexion have produced a therapy for a very rare disease, therefore all their patients have a molecular diagnosis for their disease. In a sense that’s a harbinger of how you might imagine the future unfolding.

“But we’re a very long way from being there at the moment. The reason drugs are so expensive is that they take years to get to market, and you need to cover the cost of all that. The cost of Soliris is high, but it’s a one-off. It did not fall out of a huge programme like we’re doing; it didn’t fall out of a system set up in the way that Genomics England is, designed to provide that level of insight.”

So will pharma companies be able to use the genomes to streamline R&D and ultimately deliver cheaper drugs? There will be future opportunities for companies to add expressions of interest in joining the GENE consortium, which points towards greater collaboration between Genomics England and the industry.

But Sir John is under no illusions about the amount of effort it will take him and his colleagues to deliver not only 100,000 genomes, but the transformation in genomics and R&D that the project has promised. Indeed, as he says, 100,000 genomes may not even be enough to create a revolution in the industry, and it may require input and consolidation from other databases globally. However this is the best start the industry and the NHS will ever have, he says.

“There’s a reason no one’s done this; that’s because it’s bloody difficult and we are bumping up against challenges every day. No one knows what the data we are collecting means at the moment; that’s the consequence of working at the edge of science. But what I do know, and I’m very confident of, is that by the time we’ve done 100,000 we will know a hell of a lot more than we know at the moment.”


Xenon Pharmaceuticals Achieves Milestone in Genentech Collaboration to Discover Novel Pain Targets Nasdaq:XENE

BURNABY, British Columbia, Sept. 21, 2015 (GLOBE NEWSWIRE) -- Xenon Pharmaceuticals Inc. (Nasdaq:XENE), a clinical-stage biopharmaceutical company, today announced that the Company has achieved a milestone in its pain genetics discovery collaboration with Genentech, a member of the Roche Group, triggering a milestone payment. Xenon and Genentech have successfully discovered and identified a novel pain target by leveraging Xenon's Extreme Genetics™ platform based on the study of rare phenotypes of individuals who have either an inability to perceive pain or have non-precipitated spontaneous severe pain.

Pharma company publicly released preclinical data from more than 50 of its medicines in order to find new drug combinations


Pharma company AstraZeneca has publicly released preclinical data from more than 50 of its medicines in order to find new drug combinations for cancer treatments. The data AstraZeneca released will be used in a competition it created in partnership with the DREAM Challenge, a non-profit, collaborative community that runs crowdsourcing efforts for biology.

People who participate in the challenge will develop computer models that identify the properties of drugs that make them powerful when combined. Anyone who has the training or expertise to work with these models is invited to participate. The winners of AstraZeneca’s challenge will be able to submit their prediction for publication in the journal Nature Biotechnology.

Other organizations that partnered with AstraZeneca for the challenge, which is called the AstraZeneca-Sanger Drug Combination Prediction DREAM Challenge, include the Wellcome Trust Sanger Institute, the European Bioinformatics Institute, and Sage Bionetworks.

“AstraZeneca has a deep and broad oncology development program assessing combinations of immunotherapies and small molecules to address the significant unmet need across a wide range of cancers,” Susan Galbraith, head of the Oncology Innovative Medicines Unit at AstraZeneca, said in a statement. “This open innovation research initiative complements our own efforts brilliantly and we are delighted that the findings could be published for the benefit of the global scientific community.”

Combining cancer therapies can often be more effective than monotherapy, AstraZeneca explained, and this increases the possibility that a patient will overcome drug resistance.

AstraZeneca has released approximately 10,000 tested combinations that measure whether a drug can destroy cancer cell lines from different tumors, like colon, lung, and breast cancer tumors. The Wellcome Trust Sanger Institute is making genomic data available to DREAM Challenge participants for all of these cell lines as well.
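To make the modelling task concrete: a common baseline for scoring such combination screens (my illustration, not necessarily the challenge's own metric) is Bliss independence. If drugs A and B each kill fractions fa and fb of a cell line on their own, independence predicts a combined kill of fa + fb - fa*fb; a combination that kills more than that is scored as synergistic.

```python
def bliss_expected(fa: float, fb: float) -> float:
    """Expected fractional cell kill if drugs A and B act independently."""
    return fa + fb - fa * fb

def bliss_excess(fa: float, fb: float, fab: float) -> float:
    """Observed combination kill minus the independence expectation;
    positive values suggest synergy, negative values antagonism."""
    return fab - bliss_expected(fa, fb)

# Drug A kills 30% of cells alone and drug B kills 40%; independence
# predicts 0.3 + 0.4 - 0.12 = 0.58, so an observed 70% kill is synergistic.
excess = bliss_excess(0.30, 0.40, 0.70)
```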

This isn’t the first time AstraZeneca has invested in crowdsourcing for its medicines. In April, Boston-based patient network PatientsLikeMe announced a five-year research collaboration deal with AstraZeneca. As part of the deal, AstraZeneca will have full access to PatientsLikeMe’s global network, and the company will use the data to shape future medicine development and work to improve outcomes in different therapeutic areas, with an initial focus on respiratory disease, lupus, diabetes and oncology.


Crowdsourcing the cloud to find cures for rare and “orphaned” diseases

Volunteer computers—and tablets and phones—could cure diseases big pharma won't.

40 Examples of Open Innovation & Crowdsourcing

We can call it open innovation, crowdsourcing or co-creation – or something else. In short, it is about bringing external input to an innovation process and this is no longer a buzzword.

How do Specialized Intermediaries Facilitate Creative Crowdsourcing? | Innovation Management


One of the most obvious benefits of crowdsourcing is its ability to stimulate creativity and accelerate innovation on a global scale. Leading companies such as Dell, Starbucks or Frito-Lay have pioneered this trend by building platforms (respectively IdeaStorm, MyStarbucksIdea and Doritos’ Crash the Super Bowl) that connect them to a crowd of passionate individuals. These success stories paint a very positive picture of crowdsourcing, but the reality is that connecting with the crowd is not as easy as it seems. In this post, we will present the advantages and drawbacks of using crowdsourcing to source creative ideas, and explain how specialized intermediaries can help companies by providing crowds, platforms and experience.

Published: August 13, 2012

The concept of crowdsourcing is well-documented and (fairly) well-defined. One of its applications is creative crowdsourcing, by which organizations outsource some ideation tasks to an undefined and large crowd via the internet. Today, these creative tasks are very often linked to new product development, product design, or video production, as on eYeka, a company that is leveraging peoples’ everyday and artistic creativity. In a way, even scientific challenges, like those on Innocentive, appeal to peoples’ creativity, that is “scientific creativity”.

Creative crowdsourcing happens when an organization uses the internet to externalize the execution of a creative task to a crowd.

Crowdsourcing has both advantages and drawbacks. Academic research has extensively covered the benefits and the risks of this type of open externalization:

Benefits: Accessing a large talent pool at affordable cost, lowering in-house investments, getting fresh ideas from outside, getting fast and authentic insights from individuals, benefiting from the wisdom of the crowd…

Risks: Uncertain participation from the crowd, potentially low-quality output, divulging internal and/or confidential information, internal resistance against knowledge from outside the organization (“not-invented-here” effect), PR risk associated with reactions from the crowd…

We can see that there are risks in addressing a problem or a task to a crowd. Players in the field are increasingly underlining these risks. Johann Füller recently talked about some of the dangers of crowdsourcing in a blog post, in which he states that crowds can get out of control when they feel the project is unfair, when there is a lack of trust or even manipulation within a community. As Jeff Howe said in 2007…

“90% of everything is crap. If you want something that’s current and what’s hot, [crowdsourcing intermediaries] are good for that.”

In line with Howe’s and Füller’s statements, I think that crowdsourcing intermediaries can play a fundamental role in making a crowdsourcing project a success. Discussions with fellow doctoral colleagues and my personal experience tell me that specialized intermediaries facilitate creative crowdsourcing in three ways: they curate a community, they provide a platform, and they contribute with their experience and know-how. Here is why:

Crowdsourcing intermediaries curate communities

Crowdsourcing creative ideas is basically issuing a creative brief to a crowd of potential participants. But what if no one responds? You fail to stimulate the crowd’s creativity and you end up having invested a lot of money for nothing. To reduce this risk, specialized crowdsourcing intermediaries curate “their own crowds” and stimulate them with a constant stream of challenges. As individuals might be part of several different communities (for example, a designer can participate in different innovation projects on Innocentive, eYeka and Atizo), their experience allows them to assess which of these communities provides the most value to them. They also get used to these platforms over time, and so are motivated and efficient when they participate.


Crowdsourcing a Cure for Breast Cancer - $100 million and Game on at GE - Forbes

GE has followed up its highly successful Ecomagination campaign with one called healthymagination, and its first target is breast cancer. O’Reilly Media wrote about it here. I find it hard to commend GE highly enough. They’ve teamed up with four leading VC firms (Kleiner Perkins Caufield & Byers, Mohr Davidow Ventures, Venrock, and MPM Capital) to help speed any new ideas in early detection and cure through to market. But for any developer, data geek or designer with a new perspective, you’ve got 28 days left.

So far there are 37 ideas up on the site and by and large they look as though they come from the existing diagnostics and treatment community. Hopefully that will change and more people from the developer community will pitch in.

A more open ended call might help people outside the core area of oncology and diagnostics to gear up and get involved. The crowd might not have the specialist reach of the oncology profession but there’s something in the cancer experience that really begs for new perspectives.

So the use of a crowdsourced model is the most inspiring part of the GE initiative, and any criticism is just picking over the detail.

And there’s more.

GE has set up a Facebook page where women can talk about the detection process. That part of the challenge is there to help designers get a better insight into what women go through before detection and onwards. It’s a crowdsourced user-experience document, and the idea is to help designers optimise the patient journey.

As part of a more social approach to cancer GE is also aggregating online patient conversation around cancer.

But at the heart of the initiative is detection and cure.

Here’s how GE describes the core research challenge:

GE’s first healthymagination Challenge is an open call to action for oncology researchers, businesses, students, and healthcare innovators to submit ideas that will accelerate innovation in breast cancer. Through the Challenge, GE and its venture capital partners will award up to $100 million to fund these breakthrough ideas that advance early detection, more accurate diagnoses and targeted treatment of breast cancer.

For people who’ve been through the cancer mill this ought to be an inspirational moment. But perhaps that depends on your view of alternatives to the current oncology regime. There’s an important debate waiting to be had about how our current peer review and esteem systems exclude novel approaches to cure.

There’s also an opportunity that might go missing – to fully involve patients in the search and discovery process.

Patients are experts at what it is like to experience cancer and at whether or not lifestyle choices may have contributed to onset. I still can’t help feeling that this data is overlooked. And that a truly elastic approach to innovation would incorporate patient experience not just in redesigning mammography but also in understanding recovery, longevity and cure.

Crowdsourcing has the power to bring totally new perspectives to the battle. With luck it will create a breakthrough and if it does it will change how we set about addressing major societal problems. My only reservation is that November 20th deadline. I hope GE think again about that.


The Top Five Crowdsourcing Mega-trends

I had my eyes opened to the massive growth of the crowdsourcing industry at a SXSW panel earlier this year.  Ever since then, I have been looking for an opportunity to bring more information on this trend to {grow}. I’m fortunate today to have an expert on the subject, David Bratvold, provide a guest post:

If you’re not yet familiar with crowdsourcing, it’s a new work process that involves getting a crowd of people to help with a task typically performed by one employee or contractor.  Imagine needing a new logo for your business.  Rather than hire a freelancer, agency, or in-house designer, with crowdsourcing you can post your need and several designers will compete and create a custom logo just for you.

While this is a common example, today crowdsourcing extends far beyond simple graphic design and can be broken down into four main subcategories:

Microtasks -
Taking a project and breaking it into tiny bits, as seen on Amazon's Mechanical Turk ("the online marketplace for work"). Each crowd worker sees only his little bit of the project. You could hire one person to label 1,000 photos, or hire 1,000 people to each label one photo.

Macrotasks -
Similar to microtasks; however, workers can see more, if not all, of the project and can get involved with any portions they are knowledgeable in. This form is most common in solving complex problems, such as the X-Prize or seeking a better recommendation algorithm for Netflix.

Crowdfunding -
Getting a crowd to help fund your cause or project. It's unique because you set a monetary goal and a deadline, and you must get fully funded by your deadline or you'll get nothing. Here is a list of 13 crowdfunding sites.

Crowd Contests -
Asking a crowd for work and compensating only the chosen entries. Commonly seen on design sites like 99designs, and in the graphic design example in the opening paragraph.

(For a more thorough explanation, read “What is Crowdsourcing.”)
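The microtask/macrotask distinction above comes down to how finely a job is partitioned across the crowd. A minimal Python sketch of the photo-labeling example (the function name and file names are invented for illustration):

```python
# Sketch: splitting one big labeling job into microtasks, as in the
# Mechanical Turk example above. Names are illustrative, not a real API.

def split_into_microtasks(items, batch_size):
    """Break a list of work items into small batches, one batch per worker."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

photos = [f"photo_{n}.jpg" for n in range(1000)]

# One worker labels everything...
macrotask = split_into_microtasks(photos, 1000)   # 1 batch of 1000
# ...or 1,000 workers each label a single photo.
microtasks = split_into_microtasks(photos, 1)     # 1000 batches of 1

print(len(macrotask), len(microtasks))  # 1 1000
```

The trade-off the article describes follows directly: smaller batches parallelize better but give each worker less context about the overall project.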

As the early stages of crowdsourcing continue to gain momentum, there are a few megatrends worth keeping your eye on.

1) Curated Crowds

A bigger crowd doesn't necessarily mean better output when it comes to crowdsourcing.  This has been made apparent in the early days of crowdsourcing design sites.  A design contest yielding 1,000 designs can become simply unmanageable.  If you offer a prize large enough, any monkey with a crayon could contribute.  I'm not saying a large crowd produces bad results, I'm simply stating there will be bad among the good.  Luckily, there are almost always a lot of great designs, but it takes extra time to sift out the bad.

Sites like Genius Rocket have begun shifting to a curated crowd model.  Anyone can request to join their crowd, however, they must prove they’re talented before being able to participate in some projects, or even at all.   LogoTournament has been silently curating their crowd since the early days.

2) Quality Improvements

As microtasking gains adoption, more crowdsourcing platforms are seeing success by adding an extra level of quality control on top of the basic input-output model made popular by MTurk.  If you've used MTurk, you're fully aware the results you get may be less than correct.  Sites such as Microtask have added extra redundancy and QA checks to ensure high levels of accuracy, and can maintain perfect accuracy when a client requests it.  As this option becomes more available, people will start demanding 99.9%-100% accuracy, since it doesn't incur much extra expense.
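The redundancy idea can be sketched as a simple majority vote: send the same microtask to several workers and keep the most common answer. This is an illustrative toy, not any platform's actual API:

```python
# Sketch of redundancy-based QA: the same microtask goes to several
# workers; the majority answer wins. Data is hypothetical.
from collections import Counter

def majority_answer(answers):
    """Return the most common answer and the share of workers who agree."""
    (best, votes), = Counter(answers).most_common(1)
    return best, votes / len(answers)

# Three workers transcribe the same form field; one makes a mistake.
worker_answers = ["ACME Corp", "ACME Corp", "ACME Co"]
answer, agreement = majority_answer(worker_answers)
print(answer, round(agreement, 2))  # ACME Corp 0.67
```

In practice a platform would escalate low-agreement items to more workers or to a reviewer, which is where the extra (but modest) expense comes from.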

3) The Standardization of Crowdsourcing

As it’s been pointed out, crowdsourcing is not an industry; it’s currently an undefined space.  The current leaders in crowdsourcing are working to define this space and standardize as much as they can.  Groups like the Crowdsortium give players within crowdsourcing a forum to discuss what’s going on.  Daily Crowdsource, along with David Alan Grier, is leading the pack toward standardization.  Grier has been pushing for a trade association for quite some time, and has recently begun discussing it publicly.  Daily Crowdsource, Grier, and other leaders are working to define the official taxonomy of crowdsourcing.  All these moves aim to standardize crowdsourcing and ensure it has a healthy future.

4) Corporate Acceptance

Crowdsourcing isn’t just a fad for early adopters.  In fact, several Fortune 100 corporations have taken a big step into crowdsourcing.  General Electric is leading the charge with multiple million-dollar open innovation projects. Others like General Motors, Procter & Gamble, and PepsiCo continue to execute crowdsourcing projects (not just one-off publicity stunts).  Amazon even built one of the largest crowdsourcing platforms.  It’s not often a new process is adopted so quickly by large corporations, but this will make it easier for other Fortune 100 corporations to begin crowdsourcing, which will trickle down to smaller corporations.

5) Early Adoption

Although you may be familiar with the term, crowdsourcing is still in the early adoption phase.  A very small percentage of people are familiar with everything  crowdsourcing can do.  Sure, any tech geek can name 99designs, but can you list 10 other uses of crowdsourcing?  Were you aware you could build a car, stress test your website, or volunteer your “waiting in line” minutes to a charity all with the help of crowdsourcing?


Why cloud could make crowdsourcing the norm for scientists

If you’re tired of hearing about cloud computing and big data, you might want to wear earplugs for the next year or so. These two trends are only going to get hotter, in large part because they’re also becoming ideal bedfellows. This is especially true in the world of science, where the cloud provides an ideal platform for crowdsourcing scientific problems across the whole world of researchers, giving them access to data sets and the computing resources to analyze them.

Generally speaking, we’ve already seen how crowdsourcing can be an effective method for solving big data problems. The Netflix Prize challenge in 2009 attracted more than 50,000 participants trying to improve Netflix’s Cinematch algorithm, and today we have Kaggle — an entire company dedicated to hosting competitions for companies trying to crowdsource their own analytical challenges. And it’s the cloud, with its centralized nature, virtually unlimited and on-demand resources, that makes it possible to have so many people access and work with the same data sets at the same time.


It’s true, of course, that big data doesn’t necessarily connote scientific workloads, but scientific workloads do increasingly rely on big data techniques. Some refer to data as the fourth paradigm of science because the sheer amount of data available and the new technologies and techniques for working with it are fundamentally changing how scientists go about their research. This has been going on for quite a while, actually, hence the massive research networks connecting supercomputers and research centers across the world. Researchers needed a way to transfer massive data sets to their peers to run on their systems, so they built networks such as the National LambdaRail, XSEDE and CERN’s Large Hadron Collider network.

However, while this arrangement might work fine for researchers working on projects for national labs or universities, who also happen to have time reserved on supercomputing systems, it’s not entirely democratic. Enter cloud computing. Now, anyone can have access to supercomputer-like processing power and, equally important, centralized data sets that don’t require a 40 Gbps connection to download. Companies such as DNAnexus rely on the cloud to host massive genomic data sets on which scientists can collaborate, and also to power those scientists’ computations on the data.


And although companies such as DNAnexus focus more on collaboration than on crowdsourcing, the tools for crowdsourcing are in place. Today, for example, I read about Life Technologies, a company that makes semiconductor chips that carry out a variety of genome-sequencing workloads. Life is hosting a competition within its online community to improve the speed, scalability and accuracy of the chips. Contestants will have access to the raw data as well as cloud-based resources for running computations.

Critics can call cloud computing overblown until they’re blue in the face — they might even be right when it comes to certain business applications — but there’s no denying the effects it could have in the scientific world. By giving virtually anybody access to relevant scientific data sets and the resources necessary to analyze them in a timely manner, cloud computing could result in real answers to some previously perplexing questions.


Just Published New England Journal of Medicine Paper From Geisinger and Regeneron Highlights Value Of Integrating Genetic and EHR Data on DNAnexus

Traditionally, clinical genetic studies have involved deliberate recruitment of patients with specific medical conditions, a process that tends to be lengthy and cumbersome, and generally must be repeated anew for each disease researchers want to study. Moreover, once the patients are finally recruited, the researchers still need to collect and analyze the data on each of these subjects.

Imagine how useful it would be to leverage the knowledge that already exists in a large health system, so that after you designed a study, and decided on the characteristics of patients you wanted to include, you could identify matching patients (and controls) immediately – essentially at the push of a button.

Furthermore, imagine that each of these patients already had rich genetic data, already sitting in an integrated database alongside information from each patient’s electronic health record (EHR).

This is the happy situation that Geisinger Health System and the Regeneron Genetics Center have deliberately created, powered by the DNAnexus platform. De-identified EMR data from consented Geisinger patients participating in Geisinger’s MyCode Community Health Initiative is integrated with whole exome sequencing data from these same patients (an effort known as the “DiscovEHR Project”) and used to drive medical discovery and inform clinical care (see this slide deck, and this front-page New York Times article).

The power (and really, the genius) of this approach was apparent in a paper published this week by researchers from Regeneron and Geisinger in the New England Journal of Medicine (NEJM), revealing a genetic variant that appears to result in reduced levels of triglycerides and a lower risk of coronary artery disease. These results dovetailed with another nice paper published in the same issue of the NEJM by a large academic collective.

In the Regeneron/Geisinger paper, researchers were able to use the genetic information in their integrated database to rapidly identify patients with a suspicious mutation, and use the EHR data to evaluate a range of parameters, including lipid levels and coronary artery disease status, in both patients with mutations as well as in appropriate controls. The Regeneron group also performed subsequent studies in several animal models to further substantiate the biological findings suggested by the human studies.
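The carrier-versus-control workflow described above can be sketched on toy records: find patients carrying the variant in the genetic data, then compare an EHR-derived parameter between carriers and controls. All field names and values here are invented; the real DiscovEHR analysis is far more involved:

```python
# Toy sketch of a carrier-vs-control comparison over an integrated
# genomic + EHR dataset. Records and fields are entirely hypothetical.
from statistics import mean

patients = [
    {"id": 1, "has_variant": True,  "triglycerides": 88},
    {"id": 2, "has_variant": True,  "triglycerides": 95},
    {"id": 3, "has_variant": False, "triglycerides": 150},
    {"id": 4, "has_variant": False, "triglycerides": 132},
    {"id": 5, "has_variant": False, "triglycerides": 160},
]

# "Push of a button": select carriers and controls from the same table.
carriers = [p["triglycerides"] for p in patients if p["has_variant"]]
controls = [p["triglycerides"] for p in patients if not p["has_variant"]]

print(mean(carriers), round(mean(controls), 1))  # 91.5 147.3
```

The point of the integrated database is that this selection step, which traditionally required recruiting a fresh cohort per disease, becomes a query.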

Not only do these findings point to a potential drug target, but the work represents just one of many similar studies that could be done with equal ease using the approach Geisinger and Regeneron have established. If the researchers want to look at a different gene, or a different condition, the basic process would be almost identical. Moreover, as the partnership adds more and more patients (I’ve heard Regeneron founder and President George Yancopoulos say he is aiming for half a million) with associated EHR data and sequenced exomes, the power of such studies will only increase.

This approach also highlights the insights that might be achieved through integrative data efforts such as the President’s Precision Medicine Initiative, if executed in a similarly effective fashion.

The Geisinger/Regeneron collaboration is a brilliant vision for medical science and for drug discovery, and there are a number of key success factors that we shouldn’t take for granted.

First, on the Geisinger side, the foundational aspect of this entire effort is Geisinger’s trusted relationship with its patients, and Geisinger’s demonstrated commitment to treating patients as partners. Geisinger was at the leading edge of Open Notes (sharing physician notes with patients), for example. Geisinger has put considerable thought into the process of patient consent, and has also ensured that most patients who join the DiscovEHR cohort are recontactable.

Geisinger was also an early adopter of EHRs; consequently, Geisinger’s EHRs harbor unusually good longitudinal data, and often contain data from several generations of family members. Geisinger also systematically reviews and curates the EMR data used in clinical studies, to ensure adequate quality.

Regeneron, for its part, has a clear vision for the use of genetics in drug discovery, which in their hands seems to be a very deliberate, very dynamic process. Regeneron researchers aren’t randomly collecting information, stirring it in a pot, and asking a computer to sort it all out. To the contrary, they are pursuing an approach that seems generally hypothesis-oriented: either evaluating specific candidate genes and variants (as they did here) and then looking at the phenotypes, or starting from specific phenotypes of interest and asking whether there are particular genetic patterns to be found.

Two additional important elements of Regeneron’s strategy that may not be immediately obvious are: (a) the exceptional team of data scientists they’ve brought together to prosecute the analytics, and (b) their ability to quickly pressure-test suggestive results by rapidly creating both targeted antibodies and relevant mouse models – both of which were utilized in the work described in the recent NEJM paper.

Finally, of course, the success of this approach relies upon a powerful, secure, and intuitive platform (DNAnexus) where the data integration can occur, where distributed stakeholders can collaborate, and where a range of analyses can be run.

At DNAnexus, we feel privileged to contribute so foundationally to such great integrative science, and look forward to the next discovery – and to the one after that.


Companion Diagnostics? For Cancer Care, We Need Better Ones


The Role of Metagenomic Analysis in Drug Development

In recent years Next Generation Sequencing instrumentation has become much more accurate, efficient and affordable, enabling researchers to collect, sequence and analyse genomes with increasing ease. Consequently researchers are developing novel techniques for exploring the link between the human microbiome and disease, with the aim of utilising metagenomic analysis as a means to further the drug development process.

Metagenomic modelling of the human microbiome is an emerging area of research that will require global collaboration to collect the required volume of genetic data required for analysis. “Presently we only have a limited understanding of the relationship between microbiome flora, the human immune system and environmental factors,” notes Evolution Director Dr. Frank Rinaldi. “Metagenomics will greatly aid in this understanding, and scientists are hopeful that further research will result in therapeutic applications for a broad range of diseases.”

Dr. Rinaldi also notes that “the number of genes in our intestinal microbiome is 150 times greater than the number in the human genome. Despite this challenge the human microbiome represents one of the most exciting areas of research this decade, and will inevitably lead to the creation and proliferation of novel treatments for conditions that challenge humanity in the 21st century. It will help to shine a light on the stark rise in autoimmune disorders that have occurred over the past 50 years, including type 1 diabetes, rheumatoid arthritis, multiple sclerosis, celiac disease and asthma. Collectively these disorders are at pandemic levels globally.”

Beyond autoimmune disorders, current literature supports an association between human obesity and an increased prevalence of mental health disorders such as depression, dementia, anxiety and adult learning deficiencies. Whilst direct causation has not been established, researchers seek to further elucidate the nature of this relationship with the subsequent aim of developing novel dietary or pharmacological options.

As methods for studying the human microbiome are refined, metagenomics researchers are progressing towards developing predictive models that will outline the composition and function of microbial communities. Dr. Elhanan Borenstein, a researcher at the University of Washington, has ambitious plans for the collected metagenomics data. “Ultimately, this will allow us to design microbiomes, to tailor specific microbiome interventions, and to test interventions intended to change the microbiome.”

Commercial entities such as QuantiHealth have also identified the opportunities presented by metagenomic analysis of the human microbiome. Considering the combination of CEO Zhao Bowen’s genomic expertise and the company’s status as an early mover within the metagenomics space, QuantiHealth have established themselves as the frontrunners in what will soon become a highly-competitive commercial space.

Follow Evolution Global Talent Attraction on Twitter, Facebook and LinkedIN to keep up-to-date with news and trends from the biotechnology, biosciences, medical device, IT and Intellectual Property industries.


‘Your Genome Isn’t Really Secret,’ Says Google Ventures’s Bill Maris - Bloomberg Business

The venture capitalist wants to extend human life expectancy, and he says fears over privacy and the security of DNA data shouldn’t stand in the way.

Garvan Institute, Genomics England Partner on Genomic Medicine Data Management

NEW YORK (GenomeWeb) – The Garvan Institute of Medical Research and Genomics England are planning to share resources and expertise in order to advance genomic medicine in Australia and the UK, the organizations said today.

Under the agreement, Australia-based Garvan and Genomics England will work on making genomic information more accessible, meaningful, and usable, in particular by developing better ways to capture clinical information and integrate it with genomic data. In addition, they will build new genomics databases and establish ethical and legal frameworks around the use of genomic information. The partners will also design educational resources about clinical genomics for health professionals and the public.

Genomics England is running the 100,000 Genomes Project, which will sequence the genomes of 100,000 patients in the UK, focusing on cancer and rare diseases.

Three years ago, the Garvan Institute established the Kinghorn Centre for Clinical Genomics, which it says is the largest sequencing center in the southern hemisphere.

"This partnership will allow us to share tools and approaches to harmonize datasets in Australia and the UK," said Mark Caulfield, chief scientist at Genomics England, in a statement. "Our aim is to support other countries in establishing similar programs to the 100,000 Genomes Project around the world."


Crowdsourced methods to identify big data roadblocks to transform healthcare

The Bipartisan Policy Center, Heritage Provider Network, and the Advisory Board Company have announced a new national competition to develop ways to use big data to transform care delivery and solve some of the most pressing problems facing healthcare providers today. 

“Hospitals, health plans, physician practices, and post-acute care providers are being asked to provide higher quality care while lowering costs,” says the Care Transformation Prize website. “Successfully doing so requires access to and analysis of large data sets to predict, identify interventions for, and assess cost and quality outcomes for patient populations. Most health care organizations, however, have little knowledge or expertise on how to leverage and analyze the clinical data sets now being developed as the result of an increasingly digitized U.S. health care system.”

The program will offer at least three quarterly prizes of $100,000 to the teams that develop the best solutions to selected challenges, including data analytics and data use.  The three as-yet-unannounced questions will be posed and answered over the next sixteen months.  Interested teams can register for the challenge here.


Crunching Complex Data to verify and analyze complex systems biology data.

A global research team led by scientists from Philip Morris International (PMI) and IBM Research is preparing to launch the second of four challenges spanning three to four years, part of a larger project designed to harness the wisdom of the broader science community and the power of high-performance computing to verify and analyze complex systems biology data.

During the second quarter, the project, Systems Biology Verification Industrial Methodology for Process Verification in Research (SBV IMPROVER), will launch its Species Translation Challenge. Scientists will work to tackle a longstanding challenge in preclinical science: translating insights gleaned from rodent models into greater knowledge of how humans function.

“We’re asking the question, ‘Which bit of the rodent biology can actually be translated into the human biology?’” Manuel Peitsch, Ph.D., vp, biological systems research with PMI Research & Development, told GEN. “We are talking about rat bronchial epithelial cells versus human bronchial epithelial cells, rat aortic endothelium vs. human aortic endothelium, etc. By comparing the behavior of the biological networks in those systems, we expect to come back with a translatability factor between rodent and human.”

Among key questions to be answered in the species challenge: Which gene expression regulatory processes, such as biological pathways and functions, are translatable and therefore predictable between species? Which are too divergent? How much translatability is there between species, and how can that be quantified? Also, how well can mathematical models using gene expression data predict protein phosphorylation and cytokine responses?

Researchers will pursue answers by looking across all available pathways, though it has yet to be decided whether the species challenge will narrow its focus to a few areas, J. Jeremy Rice, Ph.D., a research staff member who focuses on functional genomics and systems biology at IBM’s T.J. Watson Research Center, told GEN.

“In this challenge, there will be gene expression data. There will be data on phosphoprotein levels. There will also be some data on cytokines. And we’ll look at responses toward a panel of different stimuli,” Dr. Rice said. “The real opportunity here is that the quality of the datasets will be great. Their uniformity and consistency is something that’s not available anywhere else, and it’s going to get a lot of people excited.”

As in last year’s challenge, SBV IMPROVER aims to analyze complex systems biology data quicker and more accurately than traditional peer review methods, thus enhancing the data’s reliability while lowering development costs. SBV IMPROVER consists of about 20 researchers combined from PMI and IBM, as well as biopharma giants Roche and Merck & Co.; smaller biopharmas like software developer and biodata curator Nebion, and biomarker-based diagnostics developer Selventa; and seven academic researchers.

PMI hopes data from the project can enhance efforts to produce safer cigarettes, which FDA labels “modified-risk tobacco products.” “Part of the challenge of demonstrating risk reduction of a tobacco-based product is that conventional cigarettes take 30 to 40 years to wait for disease manifestation. We’re trying to find an approach where with one year’s clinical study, we can actually say something about that risk,” Dr. Peitsch said, in part by pursuing potential biomarkers of disease onset.

SBV IMPROVER’s third challenge will entail construction and verification of a biological network describing chronic obstructive pulmonary disease (COPD). The fourth will involve verifying the identification of biomarkers of disease onset in a translatable manner using animal and human data on early-onset COPD.

“We will have a biological network for the mouse as well as for the human COPD, with all the subnetworks reaching into the confines of inflammation and cell stress,” Dr. Peitsch said, as well as the reversibility of changes caused by smoking: “We’ll be uniquely poised to identify biomarkers that could be used later on in clinical trials for modified-risk tobacco products.”

For IBM Research, best-known for its Jeopardy!-winning natural-language computer system Watson, SBV IMPROVER follows other IBM-led efforts to crunch systems biology data via crowdsourcing. These include the ongoing Dialogue on Reverse Engineering Assessment and Methods (DREAM), which explores how theory and experiment interact in the study of cellular network inference and quantitative model building.

In the first challenge, completed last year, 54 teams worldwide established predictive signatures on unlabeled gene expression data sets in four disease areas: COPD, lung cancer, multiple sclerosis, and psoriasis. Participants developed prediction models using public data in the four disease areas, then applied their models on blinded samples generated by the challenge organizers.

“People working in isolation tend to think their algorithms are better than the rest. When you put them up head-to-head, in a real double-blind comparison, you find out that not everybody can be better than the rest,” Dr. Rice said. “These problems are very deep, they’re complex, and it’s really not feasible to do verification on very simple things. You really need the ability to reach out to the community and see how it can solve complex problems. Then you get a good grasp for what the state of the art is, and what can be done with today’s technology.”


Crowdfunding A Treatment For The Cancer That Killed Steve Jobs

When Steve Jobs died last year at age 56, we heard a lot about the pancreatic cancer that killed him. What few news reports mentioned is that the Apple founder had a neuroendocrine tumor--a kind of tumor that affects hormone-producing cells.

Games That Solve Real Problems: Crowdsourcing Biochemistry - Forbes

Adrien Treuille, an assistant professor of computer science at Carnegie Mellon University, creates online challenges that tap gamers to solve complex scientific problems. Players of Foldit, an Internet video game he co-developed as a biochemistry postdoc at the University of Washington, recently solved in under three weeks a protein-folding problem that had stumped scientists for more than a decade. Knowing how a polypeptide folds into a three-dimensional protein structure is key to identifying its role in disease and targeting drugs. By determining the structure of a protein that replicates an AIDS-like virus found in monkeys, the Foldit players provided new insights for the design of antiretroviral drugs.

Another game, EteRNA, developed by Treuille’s graduate student Jeehyung Lee and introduced in early 2010, invites gamers to cross over from the virtual world to reality. Their best new designs for ribonucleic acids (RNAs)—molecules fundamental to life—are tested in a Stanford lab. The endgame? The first large-scale library of synthetic RNAs, which are expected to lead to new ways to control living cells and cure diseases. Techonomy contributor Adrienne Burke spoke with Treuille during the recent PopTech conference in Camden, Maine.

People refer to Foldit as a crowdsourcing tool. What made you think crowdsourcing would work for such a complicated scientific problem?

When we created Foldit, we really didn’t know if it was going to work. The concept was to engage the brains of gamers to solve biochemistry problems. You might call it crowdsourcing, but the complexity of the task is far beyond anything demonstrated before in crowdsourcing. Before Foldit, crowdsourcing largely meant engaging people in simple tasks, things that humans are reflexively good at. Foldit and EteRNA ask players to solve highly complex puzzles. The incredible thing is that it works! These games have radically confounded my own understanding of what “expertise” is, and who in society has it.

Can someone with no biochemistry education play?

Of course! My games are designed to teach players the relevant science. I think that they work partially because I didn’t know anything about molecular biology when I created them. I tried to make games that would make sense, even to me. When I joined David Baker’s biochemistry lab as a postdoc, I went to talk to him about creating a game, and he said, “We’ll teach you everything you need to know about protein folding.” Once you get immersed in that world, it’s something that can be understood. The goal with Foldit and EteRNA was to create interfaces that make proteins and RNAs like toys that you can pick up and play with without a PhD. We tell players, “By advancing and testing hypotheses about when RNAs correctly fold in vitro, you are helping scientists understand the mysteries surrounding RNA folding and eventually paving the way towards new, complex, and medically useful biomolecules out of RNA.”


What made you think that nonscientists wanted to contribute?

For years, the Baker Lab has been using volunteers’ idle computers to run computationally intensive protein-structure prediction tasks through a program called Rosetta@home. After a year watching proteins fold on their screensavers, volunteers could see the mistakes the computer was making and would write to the lab suggesting tweaks.

The idea behind Foldit is to train people to understand when a protein is well folded and to work collaboratively with computers to push it in that direction. Players are helping computers fold proteins.

EteRNA is a more complicated game, isn’t it?

While Foldit is about searching for needles in a haystack, EteRNA is about discovering new things that weren’t known before. RNAs build proteins and carry out important functions. But, unlike proteins, RNAs can be readily synthesized. That was the basis for a new idea: that we could build a game around high-throughput experimental science, not just computational science.

Tutorials teach players the basics of RNA folding, and then they are asked to solve hundreds of practice challenges before they’re allowed to go into the virtual EteRNA Lab. The game challenges players to design molecules, and then vote on each other’s designs, agreeing on which are most likely to work. The top designs are synthesized in a Stanford lab. You can submit your RNA and see a week later what it looks like in the real world. Your score is based on how well your RNA folds into the target shape. The game’s tagline is “played by humans, scored by nature.” These are the most complex things we’ve asked nonexperts to do on the Internet, and the players are systematically and soundly outperforming all of the standard RNA-folding computer algorithms.

Who are these nonexpert EteRNA players?

We have about 30,000 players all over the world, of all ages. I’ve seen a player write, “I have to leave, my Mom’s telling me to do my homework.” This month we had about 3,000 returning visitors, and 1,000 elite players in the lab. Some players are anonymous, but some post their profile and a photo. The fourth-ranked player in the world is an IBM software product manager in Minnesota with an MBA from the University of Chicago. He’s been playing since January 2011. Number nine is a high school biology teacher and chiropractor.

Most players are not scientists, but a lot of them probably should have been. Like scientists, they’ve made up their own esoteric language to talk about things that are not in the ordinary experience of man. And like scientists, they play around to figure out solutions and modify each other’s designs.

Our elite players have been discovering rules of RNA folding that were completely unknown to the scientific community, catalyzing a change in our understanding of what an expert is. There’s no financial reward for designing an RNA molecule. The reward is a high score and social recognition within the community. We’re working on a method to have the players create and publish their own scientific hypotheses in an open access journal.

You’ve suggested that the success of EteRNA has broad implications for global productivity.

There’s a huge global inefficiency in connecting people to the tasks they’re best at. Put someone in their dream job and they’ll be ten times more productive than they were doing a job that wasn’t a good fit. Games allow us to identify people who are good at a task and let them do it. On a global scale, this could allow us to increase productivity by identifying people’s skills. I think we’ve just scratched the surface.

What’s next?

There’s another game coming, but it’s top secret!

Techonomy 2011 will explore citizen science in a session entitled “Democratizing DNA and the BioPunk Revolution.” For more information about the conference, visit the Techonomy website. You can also follow Techonomy on Twitter and Facebook.


Crowdsourcing reveals life-saving potential in global health research


A growing trend in collaborative health research is creating potentially life-saving global partnerships between pharmaceutical companies, academic researchers, disease advocates and even the general public, who are drawn into the world of science through crowdsourcing.

Dwindling money for research and development, and waning donor patience have forced global health players to change how they innovate new products and processes.

"For years, pharmaceutical companies and research institutes … have contributed to fighting neglected tropical diseases, but often independently or through smaller partnerships," said Don Joseph, chief executive of the California-based NGO BIO Ventures for Global Health, which encourages biotechnology firms to develop drugs, vaccines and diagnostics for neglected diseases.

Finding an elusive disease solution independently could mean individual glory, but also long-term research and development commitments and higher financial risk. "Generally, drug development is expensive, takes a long time and most things don't work," Joseph said. Risks have grown exponentially, with clinical trial costs rising by an estimated 70% between 2008 and 2011. Partnerships help spread the burden.

"The challenge is to create projects that are simple and allow a streamlined process for organisations to participate," Joseph told IRIN. "[Open innovation partnerships could] significantly reduce trial and error, and lead neglected disease researchers to that 'Eureka moment' more quickly and effectively."

Partners – who might once have been competitors – are increasingly sharing expertise, intellectual property and financing. Henry Chesbrough, executive director of the programme in open innovation at the University of California, coined the term "open innovation" in 2003 to describe this shift. "The prevailing logic was … if you want something done, do it yourself," Chesbrough said in 2011. "This new logic of open innovation turns that completely on its head."

Researchers are realising that in the race to discover the next big cure, strength lies in numbers. "Competitive advantage now comes from having more people working with you than with anyone else," Chesbrough said.

Global health initiatives

"We have been encouraged by the willingness of industry to consider and participate creatively in open innovation initiatives for neglected diseases and other devastating illnesses," said Joseph.

The Re:Search project, a partnership launched in 2011 between BIO Ventures and the World Intellectual Property Organisation (Wipo), which comprises 185 UN member states, calls for a more global interpretation of intellectual property to spur health innovation and development, and the collaboration of biotechnology firms, pharmaceutical companies and academia.

For example, the project will make it easier for a researcher in Tanzania to connect with pharmaceutical giants for additional biomedical information, resources and detailed product knowhow, Joseph said. Such information has often been carefully guarded because of intellectual property rights, but transparency between partners will be the key.

Crowdsourcing science

To meet health challenges more quickly and with tight budgets, an increasing number of organisations are turning to crowdsourcing competitions to outsource innovation to the general public.

In 2009, the international scientific journal Nature teamed with InnoCentive to use online crowdsourcing to invite solutions and proposals to medical and scientific problems. InnoCentive began hosting global health challenges in 2006, linking organisations looking for solutions with problem-solvers who can earn tens of thousands of dollars. The organisations give prizes for winning solutions in return for the intellectual property rights.

In 2008, a challenge by the Global Alliance for TB Drug Development (TB Alliance) to simplify the manufacturing processes of an advanced-stage TB drug earned the two winning problem-solvers $20,000 (£12,750) each for their ideas.

The electronics company Nokia recently partnered with the California-based educational NGO X Prize Foundation, to offer $2.25m to encourage the innovative use of digital tools, particularly mobile health applications.

"This competition will enable us to realise the full potential of mobile-sensing devices, leading to advances in … [the] technology, which can play a major role in transforming the lives of billions of people around the world," said Nokia's executive vice-president and chief technology officer, Henry Tirri. Sensing technologies detect disease and measure health indicators such as temperature and blood pressure.

Product development partnerships

In the 1990s, decades before crowdsourcing was applied to humanitarian response, product-development partnerships (PDPs) tried to accelerate the development of technologies to fight TB, Aids, malaria and neglected diseases. The TB Alliance, a PDP launched in 2000, says there are more than 140 partnership projects either developing or investigating drugs, diagnostics and vaccines for neglected diseases.

Among these, the Gavi Alliance, formerly known as the Global Alliance for Vaccines and Immunisation, aims to get more vaccines to poorer countries, and the EU's Innovative Medicines Initiative is developing new drugs and tests for diseases, including TB.


Crowdsourcing Innovation: How To Make Sure You Spot The Best Ideas


The collective is a powerful thing. Crowdsourcing competitions can draw huge results, but the most valuable end game might not be free inspiration for a multi-million-dollar Super Bowl spot or a best-selling gadget. Rather, the most valuable prize might be the insights lurking in the collection of the submissions. The wisdom of the crowds isn’t just about picking a single winner.

How can you reap insights from the collection of ideas? Often in innovation, we welcome novelty and dismiss sameness. However, rather than thinking about repetition in the submitted ideas as being a nuisance, it is important to recognize that the repetition contains clues. Likewise, even if the ideas are relatively mundane, or at least not earth-shattering, the company can learn from that too. Collectively, the submitted ideas are a window into consumers’ minds.
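One concrete way to treat repetition as signal rather than nuisance is to group near-duplicate submissions and rank the groups by size, so the most-repeated themes surface first. The sketch below is a minimal illustration under assumed inputs: the word-overlap similarity measure, the 0.5 threshold, and the sample ideas are all hypothetical, not drawn from any real idea platform.

```python
# Minimal sketch of mining repetition in crowdsourced ideas: greedily
# group submissions whose word sets overlap heavily (Jaccard similarity),
# then rank the groups by size. Repeated themes bubble to the top.

def word_set(text: str) -> frozenset:
    return frozenset(text.lower().split())

def jaccard(a: frozenset, b: frozenset) -> float:
    return len(a & b) / len(a | b)

def cluster_ideas(ideas, threshold=0.5):
    """Single-pass grouping: join the first cluster similar enough, else start one."""
    clusters = []  # each entry: (representative word set, member ideas)
    for idea in ideas:
        ws = word_set(idea)
        for rep, members in clusters:
            if jaccard(ws, rep) >= threshold:
                members.append(idea)
                break
        else:
            clusters.append((ws, [idea]))
    return sorted((members for _, members in clusters), key=len, reverse=True)

ideas = [
    "please keep the caramel brulee latte",
    "keep the caramel brulee latte all year",
    "keep caramel brulee latte please",
    "gluten free pastries please",
    "offer gluten free pastries",
]
ranked = cluster_ideas(ideas)
```

The largest group here is the latte requests, which is exactly the “more than 50 similar requests” pattern a company would want surfaced automatically rather than read one by one.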

It wasn’t just awhite60, the My Starbucks Idea (MSI) community member who posted “Please keep the Caramel Brulée Latte” in December 2011, who drove Starbucks’ decision to keep the flavor going past the holiday season. Starbucks surely also noticed that the idea had nearly 4,500 net votes and more than 50 similar requests, spanning four years. The community has spoken.

Likewise, Starbucks may very well have had gluten-free foods on the horizon. But hearing repeated, unsolicited requests in forums like MSI is easy research. The idea doesn’t have to strike like a bolt from the blue to be really valuable.

But how do you sense patterns that might matter to the direction of your business when it’s not immediately obvious? Napkin Labs, a company I advise, worked with a large electronics provider on a project to find out how people consumed television online. For this particular brainstorming session, it wasn’t incredibly important to emerge with a magic bullet type of idea to merge TV and online viewing experiences. It was more about hearing what customers thought about merging the Internet with their physical TV sets.

Consumers continually talked about the fact that they watched TV on their computers or tablets, but didn’t really feel the need to surf the web on their TV. A few expressed a desire to keep some browsing habits, like checking Facebook, more private, rather than display them on TV in front of the entire living room. A collection of sentiments like this led the electronics provider to revisit how it was approaching the Internet-meets-TV revolution.

As in any market research, we have to be careful in drawing conclusions. Using crowdsourced ideas, we need to pay attention to the composition of the community. Are these people a representative sample of consumers? No. The participants will tend to be more engaged and more rabid fans of the brand than the average customer. We also have to be on the lookout for factions of the community who might organize and mobilize participation for their cause. (Barack Obama’s crowdsourced press conference fell into that trap.) Companies aren’t going to play detective on the source of activity on their site. Simple rules of reason will have to do: If my rabid fans want to drink Caramel Brulée Lattes all year, maybe I should let them.

In traditional product development practice, there is a separation between gathering data from customers about their needs--via interviews, observations, focus groups, and surveys--and proposing new product concepts. The classic example of this separation is, "People don’t want to buy a quarter-inch drill. They want a quarter-inch hole!" (Theodore Levitt’s example from Harvard Business Review, 1960). In that view, managers guiding innovation should consult the users for what needs to happen, not rely on the users themselves to dream big about how it can happen. Crowdsourcing in innovation allows us to rethink this separation. We can ask the crowd directly to provide the solutions, then work backward to infer what is on their minds.

One final example comes from my own teaching. One year I asked my students to suggest ways that technology could be used to improve the classroom experience. There were clear clusters in the ideas: texting-facilitated classroom interaction, game show formats, online study aids, and dozing prevention. (I tried not to take the last one too personally!) Out of the hundreds of submissions, there were very few that were truly unlike any of the others. But that lack of novelty doesn’t mean the ideas don’t have value. The clustering in the students’ proposed solutions contained information about their underlying needs for interaction, engagement, challenge, and stimulation. The students have spoken: Now I just have to see if the university will install heavy-eyelid detectors and seats with electric shocks.

Laura Kornish is an Associate Professor of Marketing at the Leeds School of Business at CU Boulder. She is on sabbatical this semester, immersing herself in Boulder startups as a “professor-in-residence” at Napkin Labs and Red Idea Partners.


Why Crowdsourcing and Open Innovation Can Rule Big Data

Big Data: a term that you will hear over and over again for many years to come. In a recent article, we discussed several strategic technologies that would dominate 2013, and our prediction was simple: if Big Data was big in 2012, it’s even bigger in 2013.

Let’s Start Small in Big Data

Perhaps you’re questioning the term “Big Data” and wondering what it represents, so let’s start small. You can find definitions through a Google search, of course, but I happen to appreciate this Quora submission by David Vellante, which states:

“…data that is too large to process and manage using conventional database management technologies. Big data has numerous attributes in addition to its large size, including it is typically unstructured and often dispersed.”

With the emergence of Web 3.0 – a chip embedded in every “thing” – the data we humans are set to collect will dwarf the amazing amount of data we already collect. There are industries that are interested in this data, and I think their interest innately makes sense.

Healthcare companies want more real-time data on your wellness.

Apparel companies yearn for more real-time data on your performance.

Retailers want to study emotional responses that trigger consumption.

The list goes on and the hyper-connected future will deliver to these companies streams of real-time data like never before.

Making Sense of Big Data

This is the “golden goose” of Big Data. Collecting the data will become easier and easier, but understanding two important things – how to filter it all effectively, and what to do with the data once collected (in other words, how to monetize it) – will be where the winners in Big Data leave the losers in the electronic dust.

Algorithms will power the Big Data revolution. The beauty of algorithms is that they are never about a final solution; instead, they are focused on becoming better, faster, more productive, and more efficient – and the era of Big Data will be all about “better”: filtering information better, and pairing seemingly disparate data sets in unique ways that breed a better understanding of behavior. The study of Big Data is the pursuit of getting better at understanding, and advanced algorithms will power that pursuit.


Scientists crowdsource atmospheric data from smartphones to improve weather forecasts

Meteorologists may become better at their jobs thanks to your smartphone.

Atmospheric scientists at the University of Washington are collecting data from people using PressureNet, a free app that works on certain Android devices.

Pressure sensors installed on those devices can estimate the phone’s elevation and location, but scientists can also take advantage of those sensors to measure atmospheric pressure. Those measurements provide more precise information about pressure changes and readings, which in turn can help better predict storms.
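The link between pressure and elevation that these sensors exploit is the barometric formula: air pressure falls predictably with altitude. The sketch below shows that relation using standard-atmosphere constants; it is an illustration only, and real forecasting pipelines like this project's apply far more careful corrections for temperature, sensor bias, and the weather itself.

```python
# Estimate altitude from a barometer reading via the standard-atmosphere
# barometric formula. Constants are the usual standard-atmosphere values.

P0 = 1013.25    # sea-level standard pressure, hPa
T0 = 288.15     # sea-level standard temperature, K
L = 0.0065      # temperature lapse rate, K/m
G = 9.80665     # gravitational acceleration, m/s^2
M = 0.0289644   # molar mass of dry air, kg/mol
R = 8.31446     # universal gas constant, J/(mol*K)

def altitude_from_pressure(p_hpa: float) -> float:
    """Altitude in metres implied by a pressure reading in hPa."""
    return (T0 / L) * (1.0 - (p_hpa / P0) ** (R * L / (G * M)))
```

Run in reverse, the same relation is why a phone at a known elevation yields a usable pressure observation: a reading of about 900 hPa corresponds to roughly 1,000 m of altitude, so small pressure changes at a fixed location reflect the weather rather than the terrain.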

Cliff Mass, a UW professor of atmospheric sciences, has been working with the Canadian app company Cumulonimbus to develop PressureNet. As Mass details in his blog, the UW is now acquiring the data and plotting the locations of the smartphone pressure observations every hour.


“I think this could be one of the next major revolutions in weather forecasting, really enhancing our ability to forecast at zero to four hours,” Mass said in a press release.

Mass will spend the next few months aggregating data and then comparing the smartphone data results to traditional forecasts to see if there are real differences. The project is funded by Microsoft and the National Weather Service.

These are the devices that have pressure sensors: Galaxy Nexus, Galaxy S3, Galaxy Note, Galaxy Note II, Nexus 4, Nexus 10 and Xoom.