As the world's data grows exponentially, organizations across all sectors, including government and not-for-profit, need to understand, manage, and use big, complex data sets, known as big data, a term spreading through forward-thinking business dialogue and building demand in the job market. Data science is the answer to optimizing (or, for starters, simply dealing with) big data. The iSchool's distinct perspective, however, approaches data science with a view of the full data life cycle, going beyond what most discuss as data analytics.
iSchool Offering Open Online Course in Applied Data Science
Earlier this year, the iSchool successfully ran its first introductory-level MOOC (the popular term for a free, massive, open, online course). Its next offering, called Applied Data Science: An Introduction, presents similar content in a new format: self-guided learning. Participants will be able to work their way through the course at their own pace, without the constraints of deadlines. The course is free of charge, and a select number of participants will be invited to try it on October 28. Successfully completed MOOC coursework results in a certificate of completion. For more information, please complete the form below. Questions? Contact iMOOC@syr.edu.
Karen du Toit's insight:
A MOOC on Big Data management! Course starts on 28 October. Apply now!
Slides of talk at DataWeek 2012 by Guillaume Decugis, Co-Founder & CEO of Scoop.it.
From introduction of presentation:
"We engineers love data and algorithms. They help create amazing things. But if and when we forget that people create data and that data can be improved by people, we will miss the promise of Big Data. It's time we all thought of this not as social vs algorithm but as humanrithm."
"Curation starts when Search stops working" - Clay Shirky
View full presentation here:
Via Giuseppe Mauriello
"Tim Wray explores the new frontiers of curated collections (from a museum perspective), and in doing so, he analyzes the concept of "landscapes", a possible emerging metaphor for how large sets of relevant information items could be better organized for viewing, even outside the specific museum setting.
His goal in doing this is one of finding out how to build effective interfaces that reveal and unravel narratives within collections. How can that be designed into the collection?
Tim Wray is particularly interested in this research because he is also the brain behind a new and upcoming app called A Place for Art, which likely has a lot to do with art exploration and discovery.
The key point he makes in this interesting article (part of a longer series) is the illustration of the two concepts of "containers" and "landscapes", and how they closely relate to the organization of, and access to, curated collections.
In Tim Wray's view, the future, especially when we look at large collections, lies in the increased adoption of "landscape" organizing approaches over the ever-present "container" approach we use for most collections today.
He writes: "I hint at the necessary shift from the former to the latter as a mechanism for providing context for objects, and how landscapes – combined with engaging interaction designs and the notion of pliability – can be used as a way of providing immersive experiences for museum collections."
I think that Tim's ideas reflect a growing critical issue for anyone who attempts to curate large collections of information items: having an organization and navigation system that helps the newcomer find and discover what may interest them most.
I myself feel quite frustrated by the absence of curation tools that truly allow me to organize and make accessible / discoverable large lists of information items in more effective ways than the typical list, table or grid.
But I am positive that the future of curation will inevitably revolve around those who will find, invent and design new and effective ways to do so.
P.S.: Tim Wray is a PhD student who looks at how computational methods and interaction design can be used to create beautiful, engaging experiences for museum collections."
Very interesting. A must-read for app designers. 9/10
Via Robin Good
The widening gap between the amount of data the world produces and the capacity to store it is giving a new lease of life to the humble cassette tape.
"Although consumers have abandoned the audio cassette in favour of the ubiquitous iPod, organisations with large amounts of data, from patient records to capacity-hungry video archives, have continued to use tape as a cheap and secure storage medium.
Researchers at IBM are trying to keep this 60-year-old technology relevant for at least the next decade, and they are getting help from rising energy costs, which are forcing companies to look for cheaper alternatives to stacks of power-hungry hard drives.
Evangelos Eleftheriou and his colleagues at IBM Research in Zurich, Switzerland, have developed a cassette just 10cm by 10cm by 2cm that can hold about 35 terabytes of data, the equivalent of a library with 400km of bookshelves.
"It is really the greenest storage technology," Eleftheriou told Reuters. "Tape at rest consumes literally zero power."
Curated by Beth Kanter
"The advice is from a 1962 study and has been updated for today's daily battle with digital overload. The techniques are still very much valid.
1. Omission – The concept is simple: you can’t consume everything, so just ignore some. This is a bit dangerous since some of the omitted information might be the most critical. Imagine that the email you ignored was the one where your most important client alerts you to a new opportunity.
2. Error – Respond to information without giving due consideration. While a seemingly poor strategy, this is more common than you might think; I mean, who hasn’t reacted to an email, report, or telephone call without thinking through all the consequences because of time constraints or lack of attention?
3. Queuing – Putting information aside until there is time to catch up later. An example is processing email early in the morning, before the business day begins, or reading important reports late at night.
4. Filtering – This is similar to omission except that filtering employs a priority scheme for processing some information while ignoring the rest. Automated tools are particularly well suited to help filter information. Recommendation engines, search tools, email inbox rule engines and TiVo are all good examples of tools that can help filter and prioritize information.
5. Employing multiple/parallel channels – Doling out information processing tasks; for example, assigning the tracking of Twitter feeds to one person and blog coverage to another person on your team.
6. Approximation – Processing information with limited precision. Skimming is an example of approximation. Like omission and error, you can process more information by approximating, but you run the risk of making critical mistakes.
7. Escaping from the task – Making this someone else’s problem. While it sounds irresponsible, admitting you can’t ‘do it all’ and giving an assignment to someone else is sometimes the best strategy of all."
Via Beth Kanter
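Strategy 4 in the list above is the one that lends itself most readily to automation. As a minimal sketch, with entirely hypothetical senders, keywords, and rules, a simple rule engine can route each incoming message to one of the strategies described (process now, queue, or omit):

```python
# A toy priority scheme for email triage, illustrating strategy 4 (filtering).
# The senders, keywords, and labels here are invented for illustration.
PRIORITY_SENDERS = {"important.client@example.com"}
IGNORE_KEYWORDS = ("newsletter", "promotion")

def triage(message):
    """Return 'now', 'queue', or 'omit' for an email-like dict."""
    if message["sender"] in PRIORITY_SENDERS:
        return "now"          # process immediately
    subject = message["subject"].lower()
    if any(k in subject for k in IGNORE_KEYWORDS):
        return "omit"         # strategy 1: omission, applied by rule
    return "queue"            # strategy 3: set aside for later

inbox = [
    {"sender": "important.client@example.com", "subject": "New opportunity"},
    {"sender": "list@example.org", "subject": "Weekly newsletter"},
    {"sender": "colleague@example.com", "subject": "Report draft"},
]
print([triage(m) for m in inbox])  # ['now', 'omit', 'queue']
```

Note that this deliberately combines three of the strategies: filtering decides which of omission, queuing, or immediate processing applies, which is exactly the danger flagged under omission — a rule can silently discard the one message that mattered.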
by Guillaume Decugis on Oct 03, 2012
Via Ana Cristina Pratas, michel verstrepen
The humanities are an area ripe for exploiting big data, enabling scholars to analyze topics more broadly and deeply than ever before – whether in the form of books, artworks, music, or any other digitizable format.
In this video, Amanda Rust, Assistant Head of Research & Instruction, Arts & Humanities at the Snell Library of Northeastern University, Boston, MA, tells us about her experience of and visions for the use of big data and digital humanities.
Karen du Toit's insight:
Video interview with Amanda Rust about her experience of and visions for the use of big data in the humanities.
Today, many scientific fields can be described as data-intensive disciplines, which turn raw data into information and then knowledge. If this sounds familiar it’s because this represents the late and influential computer scientist Jim Gray’s vision of the fourth research paradigm. Gray divided up the evolution of science into four periods or paradigms. One thousand years ago, science was experimental in nature; a few hundred years ago it became theoretical; a few decades ago it moved to a computational discipline; and today it’s data driven. Researchers are reliant on e-science tools to enable collaboration, federation, analysis, and exploration to address this data deluge, equal to about 1.2 zettabytes each year. If 11 ounces of coffee equaled one gigabyte, a zettabyte would be the same volume as the Great Wall of China. (...) - by Adrian Giordani, MyScienceWork blog, 27 November 2012
Via Julien Hering, PhD, Pavlinka Kovatcheva
This article was originally published in International Science Grid This Week as “Enabling knowledge creation in data-driven science”
"To answer this problem [of data deluge], some are creating infrastructures and software that are set to radically transform the way scientific publishing is done, which has changed little for centuries.
Research publishing 2.0
While a number of scientific institutes, European Commission-funded projects, and research communities work on establishing common data policies and open-access infrastructures to make research data more searchable, shareable, and citable, the life sciences are looking at data analysis and publishing approaches that move the computer to the data rather than moving the data to the computers"
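The coffee-cup analogy quoted above can be sanity-checked with a little unit arithmetic. A quick sketch, assuming decimal units (1 GB = 10^9 bytes, 1 ZB = 10^21 bytes) and US fluid ounces, shows the implied volume of coffee:

```python
# If 11 fluid ounces of coffee equal one gigabyte, how much coffee is a zettabyte?
CUP_ML = 11 * 29.5735            # 11 US fluid ounces in millilitres
cups = 1e21 / 1e9                # gigabytes per zettabyte (decimal units): 1e12
volume_m3 = cups * CUP_ML / 1e6  # millilitres -> cubic metres
print(f"{volume_m3:.2e} m^3")    # ~3.25e+08 cubic metres
```

Roughly 325 million cubic metres, i.e. on the order of hundreds of millions of cubic metres, which is the scale the Great Wall comparison is gesturing at.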
"Called "The Human Face of Big Data," (http://thehumanfaceofbigdata.com/) this is the latest project by longtime photojournalist Rick Smolan, the one-time National Geographic photographer who's best known for creating the "Day in the Life" series of books.
Via Nicolas Loubet, michel verstrepen
Sue McKemmish & Andrew Wilson:
"It’s estimated that in 2011 a truly staggering 1.8 zettabytes of digital information was created. Or to put it in more meaningful terms, that’s 57.5 billion 32-gigabyte iPads full.
Recent articles about this “digital deluge” warn of an approaching “digital dark age” if this vast amount of digital information isn’t preserved for posterity.
The old refrain that “storage is cheap, just keep everything” was never true. Recently the global market intelligence firm IDC estimated that the world’s demand for storage is increasing by 60% a year.
Given that market research firm IHS iSuppli estimates hard disk storage densities will improve by only 19% a year for the next five years, and that IT budgets are growing at an annual rate of between 0 and 2%, there is clearly a looming storage crisis.
The challenges involved in preserving the huge datasets created by governments, businesses and research institutions have prompted some dire predictions about the loss of digital history."
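The gap implied by the quoted figures can be made concrete by compounding the two growth rates. A quick sketch, assuming the rates stay constant over the five-year horizon; it also cross-checks the iPad comparison (the article's 57.5 billion figure presumably reflects slightly different unit conventions than the decimal ones used here):

```python
# Compounding the quoted figures: storage demand grows ~60%/year (IDC)
# while hard disk areal density improves ~19%/year (IHS iSuppli).
years = 5
demand_growth = 1.60 ** years    # total demand multiplier after 5 years
density_growth = 1.19 ** years   # capacity-per-drive multiplier after 5 years
gap = demand_growth / density_growth
print(f"demand x{demand_growth:.1f}, density x{density_growth:.1f}, gap x{gap:.1f}")
# -> demand grows ~10.5x while per-drive capacity only ~2.4x: a ~4.4x shortfall

# Rough check of the iPad comparison: 1.8 ZB divided into 32 GB devices
ipads = 1.8e21 / 32e9
print(f"{ipads / 1e9:.1f} billion 32-GB iPads")  # ~56 billion in decimal units
```

Even under these simple assumptions, demand outruns per-drive capacity by more than fourfold in five years, which is the arithmetic behind the "looming storage crisis".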