First, before the Internet, we had human gatekeepers. Then, with the Internet, they were overwhelmed by the sheer volume of content out there. That's what some call information overload, or what Clay Shirky calls "Filter Failure" (see: http://bit.ly/ma2dSd).
Now, a lot of companies believe in "automated curation," which to me doesn't make sense, because there is more to curation than just filtering (http://bit.ly/k3T9nc). Algorithmic filtering can help and is certainly needed, but it doesn't replace human curation, an old concept now turning social on the Internet.
But there's another problem with filters. As summarized on the TED site: "As web companies strive to tailor their services (including news and search results) to our personal tastes, there's a dangerous unintended consequence: We get trapped in a 'filter bubble' and don't get exposed to information that could challenge or broaden our worldview. Eli Pariser argues powerfully that this will ultimately prove to be bad for us and bad for democracy."
Algorithms confine us to a pattern without being accountable or open to challenge. Who can you complain to in the above Google example?
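To see how that pattern forms, here is a minimal sketch of a personalization feedback loop. It is a hypothetical toy, not any real recommender: each "click" reinforces the topic that was shown, nothing ever boosts the topics you don't see, and after enough rounds one topic crowds out the rest.

```python
# Toy illustration (hypothetical, not any real system) of a
# personalization loop narrowing what a user sees over time.
import random

random.seed(42)

TOPICS = ["politics", "science", "sports", "arts", "technology"]

# Start with equal interest weights across all topics.
weights = {topic: 1.0 for topic in TOPICS}

def recommend():
    """Pick a topic with probability proportional to its weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for topic, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return topic
    return TOPICS[-1]

# Simulate 200 rounds: the user clicks what is shown, and the
# algorithm reinforces that topic. Unseen topics are never boosted.
for _ in range(200):
    shown = recommend()
    weights[shown] *= 1.1  # rich-get-richer reinforcement

total = sum(weights.values())
for topic, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{topic:12s} {w / total:.0%} of recommendations")
# One topic ends up dominating the feed: the "bubble".
```

Run it and the weights collapse onto a single topic, even though the simulated user started with no preference at all. That is the mechanism Pariser worries about, stripped to its bones.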
Surprising, challenging views are what we need. Subjectivity as well. Can that really be delivered by a robot?