The search engine only indexes content under a public domain or Creative Commons license, so everything within it is available for reuse without any major need for copyright clearance or checking.
All of this is open source. The goal is to allow people from all over the world (the code is internationalised) to create their own repositories and curate their own open content. The code also has a modular structure, so it can be extended relatively quickly with different tools, new APIs and new features. With more time, the site could hopefully become almost WordPress-like in its ease of use.
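To illustrate what a modular, extensible structure like this might look like, here is a minimal sketch of a plugin registry. All names here (the registry, the decorator, the harvester functions) are hypothetical, not taken from the actual codebase: the idea is simply that new tools register themselves under a name and the core dispatches to them, so adding support for a new source means adding one function.

```python
# Hypothetical sketch of a plugin registry for an extensible harvester core.
HARVESTERS = {}

def register(name):
    """Decorator that records a harvester function under a short name."""
    def wrap(fn):
        HARVESTERS[name] = fn
        return fn
    return wrap

@register("rss")
def harvest_rss(url):
    # A real implementation would fetch and parse the feed here.
    return f"harvesting RSS feed at {url}"

@register("oai")
def harvest_oai(url):
    # A real implementation would walk the OAI-PMH ListRecords pages here.
    return f"harvesting OAI-PMH endpoint at {url}"

def harvest(kind, url):
    """Dispatch to whichever plugin claims this feed type."""
    return HARVESTERS[kind](url)

print(harvest("rss", "http://example.org/feed"))
```

Adding a new API then only requires writing one `@register(...)`-decorated function, which is roughly what "extended relatively quickly" implies.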
We've tried to create a topology around the repository, focused on promoting and encouraging reuse and repurposing. Tied into the repository is a series of APIs and tools that would allow (OAI-style) a set of repositories to talk to each other and share resources for harvesting. We also have a second Moodle plugin (https://github.com/solvonauts/moodle-url-reporting-plugin) which allows the repository to visit your Moodle (should you so wish) and check whether you've used any resources we have information on. In doing so, an idea of how popular a resource is (a sort of paradata) could be built up. Paradata could be used to influence search results, or could be displayed alongside the metadata.
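As a rough illustration of how paradata could influence search results, here is a sketch of popularity-weighted re-ranking. The function name, the tuple shape and the blending formula are all assumptions for the sake of the example, not the project's actual ranking code: each resource's relevance score gets a small boost proportional to how often the URL-reporting plugin has seen it in use.

```python
# Hypothetical sketch: blend search relevance with usage paradata.
def rank_with_paradata(results, usage_counts, weight=0.1):
    """Re-rank search results using usage counts as paradata.

    results      -- list of (resource_id, relevance_score) tuples
    usage_counts -- dict mapping resource_id -> times seen in Moodle sites
    weight       -- how strongly popularity nudges the final score
    """
    def score(item):
        resource_id, relevance = item
        uses = usage_counts.get(resource_id, 0)
        # Relevance still dominates; popularity only nudges the order.
        return relevance + weight * uses

    return sorted(results, key=score, reverse=True)

results = [("oer-42", 0.80), ("oer-7", 0.75)]
usage = {"oer-7": 12}  # reported by 12 Moodle installs
print(rank_with_paradata(results, usage))
```

With no usage data the relevance order is unchanged, which matches the "could be used to influence" framing: paradata is a signal layered on top of search, not a replacement for it.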
In terms of where we can harvest / index: at present the harvesting code supports RSS (all types), ATOM, OAI (DC), the Flickr API, the Tumblr API, the YouTube API and the Slideshare API. OER publishing seems to have moved to a publish-almost-everywhere model (which I think is a good thing), but that makes indexing these resources properly hard.
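Supporting several feed formats mostly comes down to detecting the format and normalising items into one shape. This is a minimal, standard-library-only sketch of that first step for RSS 2.0 and Atom, assuming the feed arrives as raw XML; the actual harvester handles more formats (OAI-PMH and the various site APIs) and far more edge cases.

```python
# Minimal sketch: detect RSS 2.0 vs Atom 1.0 and extract (title, link) pairs.
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def parse_feed(xml_text):
    """Return a list of (title, link) pairs from an RSS 2.0 or Atom feed."""
    root = ET.fromstring(xml_text)
    items = []
    if root.tag == "rss":  # RSS 2.0: <rss><channel><item>...
        for item in root.iter("item"):
            items.append((item.findtext("title", ""),
                          item.findtext("link", "")))
    elif root.tag == ATOM_NS + "feed":  # Atom 1.0: namespaced <entry>...
        for entry in root.iter(ATOM_NS + "entry"):
            link_el = entry.find(ATOM_NS + "link")
            link = link_el.get("href", "") if link_el is not None else ""
            items.append((entry.findtext(ATOM_NS + "title", ""), link))
    return items

rss = """<rss version="2.0"><channel>
  <item><title>An open textbook</title><link>http://example.org/book</link></item>
</channel></rss>"""
print(parse_feed(rss))
# -> [('An open textbook', 'http://example.org/book')]
```

Each site-specific API (Flickr, Tumblr, YouTube, Slideshare) would get its own adapter that maps responses into the same (title, link)-style records, which is exactly why "publish everywhere" makes thorough indexing hard: every new channel is another adapter to write and maintain.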
Semantic technology has been around for years and was supposed to save us from information overload. So far, it has failed. The Semantic Web, or Web 3.0, is still Tim Berners-Lee's dream, and good old Web 2.0 keeps drowning us in oceans of content. But while social media is certainly the cause of this deluge of information, it can also be the solution: first, because it provides a huge amount of data we can use to qualify this information through big data technology; second, because it educated millions of people and created in them the need to become human curators. By combining algorithms and humans, we reinvent media while bringing meaning back to the Web.
Wow! This slideshare is a must-read for all of us insomniac curators! Find out how our "humanrithms" help us to curate! You will feel better about spending the wee hours curating after this excellent slideshare on how Big Data + Human Curation = Clever Publishing and Sharing! Thank you for an excellent and funny slideshare, Guillaume Decugis, Co-Founder and CEO of Scoop.it!
Sharing your scoops to your social media accounts is a must for distributing your curated content. Not only will it drive traffic and leads through your content, it will also help demonstrate your expertise to your followers.
How do I integrate my topics' content into my website?
Integrating your curated content into your website or blog will allow you to increase your visitors' engagement, boost SEO and acquire new visitors. By redirecting your social media traffic to your website, Scoop.it will also help you generate more qualified traffic and leads from your curation work.
Distributing your curated content through a newsletter is a great way to nurture and engage your email subscribers while developing your traffic and visibility.
Creating engaging newsletters with your curated content is really easy.