Yesterday, the ever-churning machine that is the Internet pumped out more unfiltered digital data: 250 million photos uploaded to Facebook, 864,000 hours of video uploaded to YouTube, and 294 BILLION emails sent. And that's not counting all the check-ins, friend requests, Yelp reviews, Amazon posts, and pins on Pinterest.
The volume of information being created is growing faster than your software can sort it. As a result, you often can't tell the difference between a fake LinkedIn friend request and a photo of your best friend from college's new baby. Even with good metadata, it's still all "data"--whether raw and unfiltered or tagged and sourced, it's all treated like just another input to your digital inbox.
What's happened is the web has gotten better at making data. Way better, as it turns out. And while algorithms have gotten better at detecting spam, they aren't keeping up with the massive tide of real-time data.
While devices struggle to separate spam from friends, critical information from nonsense, and signal from noise, the amount of data coming at us is increasingly mind-boggling.
In 2010 we frolicked, Googled, waded, and drowned in 1.2 zettabytes of digital bits and bytes. A year later, volume was on an exponential growth curve toward 1.8 zettabytes. (A zettabyte is a trillion gigabytes; that's a 1 with 21 zeros trailing behind it.)
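If those unit conversions feel abstract, here's a quick back-of-the-envelope sketch (the figures are the ones cited above; the 50% growth rate simply follows from them):

```python
# Zettabyte arithmetic: sanity-checking the scale of the numbers above.
ZETTABYTE = 10**21  # bytes -- a 1 with 21 zeros
GIGABYTE = 10**9    # bytes

# A zettabyte really is a trillion gigabytes.
gigabytes_per_zettabyte = ZETTABYTE // GIGABYTE
print(gigabytes_per_zettabyte)  # 1000000000000, i.e. one trillion

# From 1.2 ZB in 2010 toward 1.8 ZB a year later:
vol_2010 = 1.2  # zettabytes
vol_2011 = 1.8  # zettabytes
growth = vol_2011 / vol_2010 - 1
print(f"{growth:.0%} more data in a single year")  # 50% more data in a single year
```

A 50% year-over-year jump is the kind of curve that software-only filtering struggles to keep pace with.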
Which means it's time to enlist the web's secret power--humans.