Guillaume Decugis: "This is a very interesting piece by Erin Griffith (again!) on the potential scalability issues of content curation. You can skim her first part, where she dismisses the usual complaints about the word "curation" being overhyped and overused.
She makes a really good point in her second part, building on the experience of Behance, the platform for publishing one's creative work: using a mix of algorithms and human curation is part of the answer to this scaling issue.
But another way to scale curation is to add a topic-centric layer. In the problem she describes (which is typically Behance's problem), scaling up is tough because curation is being applied to sort out the best content along a single dimension: a home page that's the same for everyone.
"Behance’s front page could no longer display what algorithms determined was the most popular art within [the] site’s community. Because of boobs. They are universally the most popular thing on the Web, and not even a tasteful, creative site like Behance is safe when the “wisdom of the crowd” is involved.
To be clear — boobs are welcome on Behance, but the site skews toward commercially viable work. A porn pit may entice creative directors but not in the way Behance wants to entice them." she writes wryly.
If you add topics to that, you can solve the problem by letting people follow whichever topics they want.
And I'm not talking about the usual 10-20 categories you find on any content site. I'm talking about long-tail, user-created topics that any user can opt in to follow or unfollow. Boobs fans can then follow dozens of boobs topics curated by fellow users without polluting the experience for everyone else.
By mixing a topic-centric model with curation, you apply curation to as many dimensions as your users decide to curate. That's the model we've been using at Scoop.it and so far, it scales pretty well, doesn't it?"
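The topic-centric model described above can be illustrated with a toy sketch (this is purely illustrative and does not reflect Scoop.it's or Behance's actual implementation): items are curated into user-created topics, and each user's feed is scoped to the topics they follow, so niche content never has to compete for a single global front page.

```python
from collections import defaultdict

class TopicFeeds:
    """Toy model of topic-centric curation: users only see items
    curated into topics they have opted to follow."""

    def __init__(self):
        self.topic_items = defaultdict(list)   # topic -> curated items
        self.follows = defaultdict(set)        # user -> followed topics

    def curate(self, topic, item):
        # Any user-created topic works; there is no fixed category list.
        self.topic_items[topic].append(item)

    def follow(self, user, topic):
        self.follows[user].add(topic)

    def unfollow(self, user, topic):
        self.follows[user].discard(topic)

    def feed(self, user):
        # The feed is built only from the topics this user follows.
        return [item
                for topic in self.follows[user]
                for item in self.topic_items[topic]]

feeds = TopicFeeds()
feeds.curate("typography", "A showcase of grotesque typefaces")
feeds.curate("pin-up-art", "Vintage pin-up illustration")
feeds.follow("alice", "typography")

print(feeds.feed("alice"))  # only typography items; pin-up art stays opt-in
```

The point of the sketch is the scoping: adding a thousand more long-tail topics changes nothing for users who haven't followed them.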
Robin Good: For the record, you may want to check this video of Gabe Rivera from Techmeme at LeWeb 2008, already discussing this issue and arriving at the same conclusions: http://www.youtube.com/watch?v=T4Zi_U6iZxU There's no way to build a perfect news or aggregation engine. The best solution is indeed a mix of aggregation and filtering tools matched with a topic-expert curator.
Robin Good: As you have probably already read somewhere else, this last weekend Twitter launched a first-of-its-kind page.
The page, which you can see here: https://twitter.com/hashtag/nascar revolves around the NASCAR race that took place last Sunday, and it aggregates interesting tweets and comments from a group of passionate NASCAR fans.
The interesting thing is that this page is in fact not an automatically aggregated page of tweets carrying a specific hashtag. Plenty of the tweets on it have no hashtag at all, or don't even mention NASCAR explicitly.
This is a human-curated page of tweets, selected from a curated list of relevant people for this topic.
This is the real news.
GigaOm writes about it: "The NASCAR page may not seem like anything to be concerned about, since it appears to be just a typical grouping of tweets collected by hashtag.
But there is editorial control behind it as well as algorithms, with an editor choosing which messages — including photos, videos and commentary from NASCAR insiders — were highlighted during the event, and which streamed by unacknowledged."
Mixing technology-powered identification of relevant people and tweets for a specific topic with an active layer of human curation allows Twitter to generate a page that's filled with value.
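The two-stage approach described in the GigaOm quote — algorithms surfacing candidates, an editor choosing what gets highlighted — can be sketched as follows. This is a hypothetical illustration, not Twitter's actual system; the source names, thresholds, and `approve` callback are all invented for the example.

```python
# Hypothetical two-stage pipeline: an algorithm shortlists candidate
# tweets from a curated source list, then a human editor decides which
# of them are actually highlighted on the event page.

CURATED_SOURCES = {"nascar_insider", "pit_reporter", "team_mechanic"}

def algorithmic_shortlist(tweets, min_retweets=10):
    """Stage 1 (machine): keep tweets from curated sources with enough
    engagement -- note that hashtags play no role in the filter."""
    return [t for t in tweets
            if t["author"] in CURATED_SOURCES
            and t["retweets"] >= min_retweets]

def editorial_pick(shortlist, approve):
    """Stage 2 (human): an editor approves which shortlisted tweets
    get highlighted; the rest stream by unacknowledged."""
    return [t for t in shortlist if approve(t)]

tweets = [
    {"author": "nascar_insider", "text": "Photo from the garage", "retweets": 40},
    {"author": "random_user",    "text": "#nascar is on!",        "retweets": 200},
    {"author": "pit_reporter",   "text": "Pit stop in 3...",      "retweets": 5},
]

shortlist = algorithmic_shortlist(tweets)
highlighted = editorial_pick(shortlist, approve=lambda t: "garage" in t["text"])
```

Notice that the popular hashtagged tweet from `random_user` never makes the shortlist: the source list, not the hashtag, is what the algorithm trusts — which matches what the NASCAR page actually did.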
Here's what Twitter itself wrote on its blog before launching it: "...throughout the weekend – but especially during the race – a combination of algorithms and curation will surface the most interesting Tweets to bring you closer to all of the action happening around the track, from the garage to the victory lane."
And while this is only a first experiment for Twitter, I would bet it will not be the last.
The value provided by adding a human curation layer, both to the selection of the sources as well as to the selection of the actual tweets, is huge.
As the avalanche of information coming through social networks and real-time tools like Twitter continues to grow, the need for filters to make sense of that tsunami of data also increases, and it seems as though everyone has a different way of trying to solve that problem.
Facebook threw its hat into the ring this week with what it says is an improved “newspaper-style” news feed that highlights important content, while Digg has just launched “newsrooms” aimed at doing the same thing, and online influence-ranking service Klout is rolling out topic pages based on what’s being shared by those with influence.
But will any of these be able to solve the filtering problem, or will they just add another source of noise?