Simon DeDeo, a research fellow in applied mathematics and complex systems at the Santa Fe Institute, had a problem. He was collaborating on a new project analyzing 300 years’ worth of data from the archives of London’s Old Bailey, the central criminal court of England and Wales. Granted, there was clean data in the usual straightforward Excel spreadsheet format, including such variables as indictment, verdict, and sentence for each case. But there were also full court transcripts, containing some 10 million words recorded during just under 200,000 trials.
“How the hell do you analyze that data?” DeDeo wondered. It wasn’t the size of the data set that was daunting; by big data standards, the size was quite manageable. It was the sheer complexity and lack of formal structure that posed a problem. This “big data” looked nothing like the kinds of traditional data sets the former physicist would have encountered earlier in his career, when the research paradigm involved forming a hypothesis, deciding precisely what one wished to measure, then building an apparatus to make that measurement as accurately as possible.
“In physics, you typically have one kind of data and you know the system really well,” said DeDeo. “Now we have this new multimodal data [gleaned] from biological systems and human social systems, and the data is gathered before we even have a hypothesis.” The data is there in all its messy, multi-dimensional glory, waiting to be queried, but how does one know which questions to ask when the scientific method has been turned on its head?
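Exploratory analysis of a corpus like the Old Bailey transcripts often starts with nothing more than word counts, queried before any hypothesis is fixed. Below is a minimal sketch of that first step; the three-sentence corpus and the `word_frequencies` helper are invented for illustration (the real archive holds roughly 10 million words across just under 200,000 trials).

```python
from collections import Counter
import re

# Hypothetical mini-corpus standing in for the Old Bailey transcripts.
transcripts = [
    "The prisoner was indicted for stealing a silver watch.",
    "The jury found the prisoner guilty of theft.",
    "The prisoner was acquitted of the charge of burglary.",
]

def word_frequencies(docs):
    """Tokenize each document and count word occurrences across the corpus."""
    counts = Counter()
    for doc in docs:
        counts.update(re.findall(r"[a-z']+", doc.lower()))
    return counts

freq = word_frequencies(transcripts)
print(freq.most_common(5))
```

From counts like these one can move to richer queries (collocations, topic models) without ever having decided in advance what to measure.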
The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
Eyre-Walker A, Stoletzki N (2013) The Assessment of Science: The Relative Merits of Post-Publication Review, the Impact Factor, and the Number of Citations. PLoS Biol 11(10): e1001675. http://dx.doi.org/10.1371/journal.pbio.1001675
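The study's core measurements are correlations: between two assessors' scores for the same paper, and between scores and citation counts. A minimal sketch of that kind of calculation is below; the scores are invented for illustration and are not data from the study, and `pearson` is a plain implementation of the standard correlation coefficient rather than the authors' exact analysis (which also controlled for journal impact factor).

```python
from statistics import mean

# Illustrative (invented) scores: two assessors rating the same ten papers.
assessor_a = [7, 5, 8, 6, 9, 4, 7, 6, 8, 5]
assessor_b = [6, 6, 7, 5, 9, 5, 8, 5, 7, 6]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

print(round(pearson(assessor_a, assessor_b), 3))
```

A correlation near zero after controlling for journal is what the authors interpret as assessors having little independent ability to judge merit.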
CIO — BOSTON—The increasing digitization of healthcare data means that organizations often add terabytes' worth of patient records to data centers annually.
At the moment, much of that unstructured data sits unused, having been retained largely (if not solely) for regulatory purposes. However, as speakers at the inaugural Medical Informatics World conference suggest, a little bit of data analytics know-how can go a long way.
It isn't easy, mainly because the demand for healthcare IT skills far outpaces the supply of workers able to fill job openings. But a better grasp of that data means knowing more about individual patients as well as large groups of them, and knowing how to use that information to provide better, more efficient, and less expensive care.
Here are six real-world examples of how healthcare can use big data analytics.
1. Ditch the Cookbook, Move to Evidence-Based Medicine
Cookbook medicine refers to the practice of applying the same battery of tests to all patients who come into the emergency department with similar symptoms. This is efficient, but it's rarely effective. As Dr. Leana Wen, an ED physician and co-author of When Doctors Don't Listen, puts it, "Having our patient be 'ruled out' for a heart attack while he has gallstone pain doesn't help anyone."
Dr. John Halamka, CIO at Boston's Beth Israel Deaconess Medical Center, says access to patient data—even from competing institutions—helps caregivers take an evidence-based approach to medicine. To that end, Beth Israel is rolling out a smartphone app that uses a Web-based drag-and-drop UI to give caregivers self-service access to 200 million data points about 2 million patients.
Admittedly, the health information exchange process necessary for getting that patient data isn't easy, Halamka says. Even when data's in hand, analytics can be complicated; what one electronic health record (EHR) system calls "high blood pressure" a second may call "elevated blood pressure" and a third "hypertension." To combat this, Beth Israel is encoding physician notes using the SNOMED CT standard. In addition to the benefit of standardization, using SNOMED CT makes data more searchable, which aids the research query process.
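The gain from encoding notes in SNOMED CT is that divergent free-text labels collapse to one queryable concept. Below is a minimal sketch of that normalization step, assuming a hand-built lookup table; 38341003 is the SNOMED CT concept for hypertensive disorder, but the mapping shown here is illustrative, not a real terminology service.

```python
# Illustrative mapping of divergent EHR labels to one SNOMED CT concept ID.
SNOMED_MAP = {
    "high blood pressure": "38341003",
    "elevated blood pressure": "38341003",
    "hypertension": "38341003",
}

def normalize_term(raw_term):
    """Return the SNOMED CT concept ID for a free-text label, or None."""
    return SNOMED_MAP.get(raw_term.strip().lower())

notes = ["High blood pressure", "Hypertension", "elevated blood pressure"]
concepts = [normalize_term(n) for n in notes]
print(concepts)  # all three labels resolve to the same concept
```

Once every record carries the concept ID rather than (or alongside) the free text, a single query retrieves all three phrasings, which is what makes the encoded notes useful for research.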