healthcare technology
The ways in which technology benefits healthcare
Curated by nrip
Scooped by nrip!

How big data is beginning to change how medicine works


The face of medical care is rapidly changing thanks to major advancements in the capture, proliferation, and analysis of medical data. Technologies like electronic health records (EHRs) and personal health records (PHRs) are drastically improving the way data is aggregated and shared.

Now the hope is that big data analytics will help to make sense of seemingly endless streams of medical information.

As many doctors are painfully aware, outcome-oriented care is no longer a buzzword but a reality. The Centers for Medicare & Medicaid Services (CMS) has started to implement a program where payments are based on providers' ability to meet key National Quality Strategy Domains (e.g., care criteria). Public payers are testing this new methodology, and private payers are expected to follow soon.

These big data analytics applications can also be relevant for the FDA, which may want to see how drugs perform in a non-test environment to ensure the appropriate patient populations are receiving the drug. I also expect pharmaceutical companies to actively scour this data to track drug efficacy post-release or identify markets that could “benefit” from increased penetration.

I am eager to see how the data evolution improves outcomes for doctors and patients.


Can Computing Keep Up With the Neuroscience Data Deluge?


When an imaging run generates 1 terabyte of data, analysis becomes the problem

Today's neuroscientists have some magnificent tools at their disposal. They can, for example, examine the entire brain of a live zebrafish larva and record the activation patterns of nearly all of its 100,000 neurons in a process that takes only 1.5 seconds.

The only problem: One such imaging run yields about 1 terabyte of data, making analysis the real bottleneck as researchers seek to understand the brain.

To address this issue, scientists at Janelia Farm Research Campus have come up with a set of analytical tools designed for neuroscience and built on a distributed computing platform called Apache Spark. In their paper in Nature Methods, they demonstrate their system's capabilities by making sense of several enormous data sets. (The image above shows the whole-brain neural activity of a zebrafish larva when it was exposed to a moving visual stimulus; the different colors indicate which neurons activated in response to a movement to the left or right.)

The researchers argue that the Apache Spark platform offers an improvement over a more popular distributed computing model known as Hadoop MapReduce, which was originally based on Google's search engine technology. 

The researchers have made their library of analytic tools, which they call Thunder, available to the neuroscience community at large. With U.S. government money pouring into neuroscience research for the new BRAIN Initiative, which emphasizes recording from the brain in unprecedented detail, this computing advance comes just in the nick of time. 
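Thunder's actual API isn't shown in the article, so the sketch below only illustrates the underlying pattern: each record is one neuron's activity trace, and a per-neuron summary is computed record by record, the kind of embarrassingly parallel map that Spark distributes across a cluster. Plain Python stands in for the distributed runtime here, and all names are illustrative, not Thunder's real interface.

```python
# Sketch of the map-style pattern a Spark-based tool like Thunder
# applies at terabyte scale: each record is one neuron's activity
# trace, and a summary statistic is computed independently per record.
from statistics import fmean

def neuron_stats(record):
    # record = (neuron_id, time series of activity values)
    neuron_id, trace = record
    return neuron_id, {"mean": fmean(trace), "peak": max(trace)}

# Tiny stand-in for a whole-brain imaging dataset.
records = [
    ("n1", [0.1, 0.4, 0.2]),
    ("n2", [0.0, 0.9, 0.3]),
]
stats = dict(map(neuron_stats, records))
```

Because each record is processed independently, the same function could be handed to a distributed engine unchanged; that independence is what makes the workload a good fit for Spark.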



Big Data Peeps At Your Medical Records To Find Drug Problems


It's been tough to identify the problems that only turn up after medicines are on the market. An experimental project is now combing through data to get earlier, more accurate warnings.

No one likes it when a new drug in people's medicine cabinets turns out to have problems — just remember the Vioxx debacle a decade ago, when the painkiller was removed from the market over concerns that it increased the risk of heart attack and stroke.

To do a better job of spotting unforeseen risks and side effects, the Food and Drug Administration is trying something new — and there's a decent chance that it involves your medical records.

It's called Mini-Sentinel, and it's a $116 million government project to actively go out and look for adverse events linked to marketed drugs. This pilot program is able to mine huge databases of medical records for signs that drugs may be linked to problems.

The usual system for monitoring the safety of marketed drugs has real shortcomings. It largely relies on voluntary reports from doctors, pharmacists, and just plain folks who took a drug and got a bad outcome.

"We get about a million reports a year that way," says Janet Woodcock, the director of the FDA's Center for Drug Evaluation and Research. "But those are random. They are whatever people choose to send us."


Bringing Big Data Analytics To Health Care


Big data offers breakthrough possibilities for new research and discoveries, better patient care, and greater efficiency in health and health care, as detailed in the July issue of Health Affairs. As with any new tool or technique, there is a learning curve.

Here are some guidelines to help take full advantage of big data's potential:

Acquire the “right” data for the project, even if it might be difficult to obtain.

Many organizations – both inside and outside of health care – tend to stick with the data that’s easily accessible and that they’re comfortable with, even if it provides only a partial picture and doesn’t successfully unlock the value big data analytics may offer. But we have found that when organizations develop a “weighted data wish list” and allocate their resources towards acquiring high-impact data sources as well as easy-to-acquire sources, they discover greater returns on their big data investment.
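The article doesn't define a "weighted data wish list" further; one plausible reading is to score each candidate data source on expected analytic impact versus acquisition effort and rank accordingly. A hypothetical sketch, with made-up sources, scores, and weights:

```python
# Hypothetical "weighted data wish list": score candidate data sources
# by expected impact against acquisition effort, then rank them.
sources = [
    {"name": "claims data",   "impact": 9, "effort": 6},
    {"name": "EHR notes",     "impact": 8, "effort": 8},
    {"name": "public census", "impact": 4, "effort": 1},
]

def score(source, impact_weight=0.7):
    # Higher impact helps, higher effort hurts; weights are illustrative.
    return impact_weight * source["impact"] - (1 - impact_weight) * source["effort"]

ranked = sorted(sources, key=score, reverse=True)
```

The point of the weighting is exactly the article's: a high-impact but hard-to-get source can still outrank an easy one.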

Ensure that initial pilots have wide applicability.

Health organizations will get the most from big data when everyone sees the value and participates. Too often, though, initial analytics projects are so self-contained that it is hard to see how the results might apply elsewhere in the organization.

Before using new data, make sure you know its provenance (where it came from) and its lineage (what’s been done to it).

Often in the excitement of big data, decision-makers and project staff forget this basic advice. They are often in a hurry to immediately start data mining efforts to search for unknown patterns and anomalies. We’ve seen many cases where such new data wasn’t properly scrutinized – and where supposed patterns and anomalies later turned out to be irrelevant or grossly misleading.

Don’t start with a solution; introduce a problem and consult with a data scientist.

Unlike conventional analytics platforms, big data platforms can easily allow subject-matter experts direct access to the data, without the need for database administrators or others to serve as intermediaries in making queries. This provides health researchers with an unprecedented ability to explore the data – to pursue promising leads, search for patterns and follow hunches, all in real time. We have found, however, that many organizations don’t take advantage of this capability.

Health organizations often build a big data platform, but fail to take full advantage of it. They continue to use the small-data approaches they’re accustomed to, or they rush headlong into big data, forgetting best practices in analytics.

It’s important to aim for initial pilots with wide applicability, a clear understanding of where one’s data comes from, and an approach that starts with a problem, not a solution. Perhaps the hardest task is finding the right balance.


Where Will Healthcare's Data Scientists Find The Rich Phenotypic Data They Need?


The big hairy audacious goal of most every data scientist I know in healthcare is what you might call the Integrated Medical Record, or IMR, a dataset that combines detailed genetic data and rich phenotypic information, including both clinical and “real-world” (or, perhaps, “dynamic”) phenotypic data (the sort you might get from wearables).

The gold standard for clinical phenotyping is academic clinical studies (like ALLHAT and the Dallas Heart Study).  These studies are typically focused on a disease category (e.g., cardiovascular), and the clinical phenotyping of these subjects, at least around the areas of scientific interest, is generally superb.  The studies themselves can be enormous, are often multi-institutional, and typically create a database that's independent of the hospital's medical record.

Inevitably, large, prospective studies can take many years to complete.  In addition, there’s generally not much real world/dynamic measurement.

The other obvious source for phenotypic data is the electronic medical record (EMR).  The logic is simple: every patient has a medical record, and increasingly, especially in hospital systems, this is electronic – i.e. an EMR.  EMRs (examples include Epic and Cerner) generally contain lab values, test reports, provider notes, and medication and problem lists.  In theory, this should offer a broad, rich, and immediately available source of data for medical discovery.
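As a toy illustration of using EMR-style data for discovery, consider defining a simple phenotype cohort: patients whose HbA1c lab value exceeds 6.5%. The record layout below is invented for the sketch; real EMR schemas (Epic, Cerner) are far more complex.

```python
# Hypothetical EMR-style records; the schema here is invented for
# illustration. Sketch of a simple phenotype definition: patients
# with an HbA1c lab value above 6.5%.
patients = [
    {"id": "p1", "labs": {"hba1c": 7.2}, "problems": ["hypertension"]},
    {"id": "p2", "labs": {"hba1c": 5.4}, "problems": []},
]

def diabetic_phenotype(patient, threshold=6.5):
    value = patient["labs"].get("hba1c")
    return value is not None and value > threshold

cohort = [p["id"] for p in patients if diabetic_phenotype(p)]
```

In practice even this simple query is harder than it looks, since EMR data is entered for billing and care rather than research, which is the limitation the article goes on to discuss.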

DIY (enabled by companies such as PatientsLikeMe) represents another approach to phenotyping, and allows patients to share data with other members of the community.  The obvious advantages here include the breadth and richness of data associated with what can be an unfiltered patient perspective – to say nothing of the benefit of patient empowerment.  An important limitation is that the quality and consistency of the data is obviously highly dependent upon the individuals posting the information.

Pharma clinical trials would seem to represent another useful opportunity for phenotyping, given the focus on specific conditions and the rigorous attention to process and detail characteristic of pharmaceutical studies.  However, pharma studies tend to be extremely focused, and companies are typically reluctant to expand protocols to pursue exploratory endpoints if there’s any chance this will diminish recruitment or adversely impact the development of the drug.


EHR + Geography = Population Health Management


Duke University Medicine is using geographical information to turn electronic health records (EHRs) into population health predictors. By integrating its EHR data with its geographic information system, Duke can enable clinicians to predict patients' diagnoses.

According to Health Data Management, Sohayla Pruitt was hired by Duke to run this project. “I thought, wow, if we could automate some of this, pre-select some of the data, preprocess a lot and then sort of wait for an event to happen, we could pass it through our models, let them plow through thousands of geospatial variables and [let the system] tell us the actual statistical significance,” Pruitt says. “Then, once you know how geography is influencing events and what they have in common, you can project that to other places where you should be paying attention because they have similar probability.”

iHealth Beat explains that the system works by using an automated geocoding system to verify addresses with a U.S. Postal Service database. These addresses are then passed through a commercial mapping database to geocode them. Finally, the system imports all U.S. Census Bureau data with a block group ID. This results in an assessment of socioeconomic indicators for each group of patients.
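Stripped of the geocoding services themselves, the pipeline described is essentially a pair of key joins: address to census block group ID, then block group ID to socioeconomic indicators. A simplified sketch, with made-up identifiers and values:

```python
# Simplified version of the pipeline: address -> census block group ID
# -> socioeconomic indicators. All IDs and values are illustrative; in
# the real system the first mapping comes from a geocoding service and
# the second from Census Bureau data.
geocoded = {
    "123 Main St, Durham NC": "370630001001",  # block group ID
}
census = {
    "370630001001": {"median_income": 42000, "pct_no_vehicle": 0.18},
}

def enrich(address):
    block_group = geocoded.get(address)
    return census.get(block_group) if block_group else None

profile = enrich("123 Main St, Durham NC")
```

Once every patient record carries a block group profile like this, the geographic variables can be fed into the predictive models Pruitt describes.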

“When we visually map a population and a health issue, we want to give an understanding about why something is happening in a neighborhood,” says Pruitt. “Are there certain socioeconomic factors that are contributing? Do they not have access to certain things? Do they have too much access to certain things like fast food restaurants?”

Duke is working to develop a proof of concept and algorithms that would map locations and patients. They are also working on a system to track food-borne illnesses.


Can Mobile Technologies and Big Data Improve Health?


After decades as a technological laggard, medicine has entered its data age. Mobile technologies, sensors, genome sequencing, and advances in analytic software now make it possible to capture vast amounts of information about our individual makeup and the environment around us. The sum of this information could transform medicine, turning a field aimed at treating the average patient into one that’s customized to each person while shifting more control and responsibility from doctors to patients.

The question is: can big data make health care better?

“There is a lot of data being gathered. That’s not enough,” says Ed Martin, interim director of the Information Services Unit at the University of California San Francisco School of Medicine. “It’s really about coming up with applications that make data actionable.”

The business opportunity in making sense of that data—potentially $300 billion to $450 billion a year, according to consultants McKinsey & Company—is driving well-established companies like Apple, Qualcomm, and IBM to invest in technologies from data-capturing smartphone apps to billion-dollar analytical systems. It’s feeding the rising enthusiasm for startups as well.

Venture capital firms like Greylock Partners and Kleiner Perkins Caufield & Byers, as well as the corporate venture funds of Google, Samsung, Merck, and others, have invested more than $3 billion in health-care information technology since the beginning of 2013—a rapid acceleration from previous years, according to data from Mercom Capital Group. 

Paul's curator insight, July 24, 2014 12:06 PM

Yes - but bad data/analysis can harm it

Vigisys's curator insight, July 27, 2014 4:34 AM

Collecting health data of every kind, even at big data scale, and analyzing large data sets is certainly useful for formulating the initial hypotheses that will guide research, or for optimizing certain processes for better efficiency. But in between, reasoned, human-driven research remains indispensable for making the "real" discoveries. Many studies from the past (well before big data) have demonstrated this...


Genetic researchers have a new tool in API-controlled lab robots


A life-sciences-as-a-service startup called Transcriptic has opened its APIs to the general public, allowing researchers around the world to offload tedious lab work to robots and spend more of their time analyzing the results.

Using a set of APIs, researchers can now command Transcriptic’s purpose-built robots to process, analyze, and store their genetic or biological samples, and receive results in days.

The high-concept idea, says founder and CEO Max Hodak, is cloud computing for life sciences, only with “robotic work cells” instead of servers on the other end. “We see the lab in terms of the devices that make it up,” he said, meaning stuff like incubators, freezers, liquid handlers and robotic arms to replace human arms.

And although Transcriptic’s technology is complex, the process for getting work done is actually pretty simple. Researchers write code to tell the robots exactly what to do with the samples (right now, the company focuses on molecular cloning, genotyping, bacteria-growing and bio-banking), and then they send their samples to the Transcriptic lab.

Alternatively, Transcriptic’s robotic infrastructure can also synthesize samples for users.

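Transcriptic's actual protocol format isn't shown here, so the sketch below is purely illustrative of the idea the article describes: encoding lab steps as structured data submitted through an API instead of performing them by hand. The operations, field names, and values are all hypothetical.

```python
# Hypothetical run request for an API-driven lab robot; the real
# Transcriptic API differs. Shown only to illustrate encoding lab
# steps as data that a robotic work cell can execute.
import json

run_request = {
    "project": "plasmid-prep-demo",
    "instructions": [
        {"op": "incubate", "sample": "s1", "temp_c": 37, "hours": 16},
        {"op": "spin",     "sample": "s1", "rpm": 4000, "minutes": 10},
    ],
}

# Serialized payload, as it might be POSTed to the service.
payload = json.dumps(run_request)
```

Representing protocols as data like this is what makes the "cloud lab" analogy work: the same request can be queued, versioned, and replayed exactly, the way cloud jobs are.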

When the job is done, researchers get their results. That process can take anywhere from a day to weeks, Hodak explained, in part because the company’s operation is still pretty small and in part because “cells only grow and divide so quickly.”
