When you have a question you need answered, do you ask a friend – or a robot? The answer to that question used to be easy.
This is a nice article, one that soundly brings people like, I guess, you and me back to earth to some extent. Its main points are that:
- niche knowledge is hard for all-purpose search engines to generalize over, which makes their answers far less relevant in such niches than in broader, more central domains;
- in those niche areas, specific, useful knowledge for specific situations is far better provided by humans who really know the field.
I think that, as often happens when one tries to make a point, the article overstates its case: it seems to advocate, as a matter of policy or principle, for powering question answering with actual humans answering the questions, such as Yahoo Answers or Quora.
Because ultimately, while Yahoo Answers or Quora do indeed often come up in my search results, I have never asked a question there. I have only found, through Google, answers that people had already provided to similar questions.
But this left me, of course, with the work of reading through the results and figuring out whether they actually answered my question. And that's really the key here: what tools does Google give me that spare me from having to go read websites to figure out my answer? Well, if I'm looking for info on films, books, celebrities, sports teams, or the weather (the list keeps growing stealthily), I get it all straight from Google's awesome user experience. But as soon as I want to know something outside that general knowledge and entertainment fare, essentially the rubrics of a general newspaper, I have to do the work myself.
That's really the core of the question that Michal Borkowski touches on here. As he points out, much of it is driven by the fact that the data Google mines from the web to answer questions is marked up, so that Google can extract it and have a reliable idea of what the data actually is.
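To make that concrete, here is a minimal sketch of the kind of markup meant, using schema.org JSON-LD of the sort Google's rich results consume; the film and its properties are placeholder values:

```python
import json

# A minimal schema.org description of a film, the kind of explicit
# data Google's rich results are built from. Embedded in a page
# inside a <script type="application/ld+json"> tag, it tells the
# crawler that this page is about a Movie, not just text that
# happens to mention one.
movie = {
    "@context": "https://schema.org",
    "@type": "Movie",
    "name": "Example Film",  # placeholder values throughout
    "director": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2013-05-01",
    "genre": "Documentary",
}

print(json.dumps(movie, indent=2))
```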
To innovate search experiences for their users, the work of professional publishers should be two-fold:
- Mark up our data so that the knowledge contained in the documents or databases is explicit knowledge that an algorithm can reason on for what it is (whether it's opened up to Google, and how, is a business strategy question not relevant here), and
- Devise user experiences à la Google's film or weather widgets: little applets that respond to the kinds of questions that the professionals we serve need answered (a rough sketch follows below).
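As a rough, hypothetical illustration of what such an applet could do under the hood: once the domain data is explicit and structured, a direct answer becomes a lookup plus formatting rather than a list of documents to read. The dataset and field names below are invented for the example:

```python
# Hypothetical sketch of the logic behind a direct-answer widget:
# structured domain records in, one formatted answer out.
# The dataset and field names are invented for illustration.

FILING_DEADLINES = {
    # (jurisdiction, filing_type) -> deadline description
    ("FR", "annual_accounts"): "within 6 months of the fiscal year end",
    ("DE", "annual_accounts"): "within 12 months of the fiscal year end",
}

def answer_deadline(jurisdiction: str, filing_type: str) -> str:
    """Return a direct answer instead of ten documents to read."""
    deadline = FILING_DEADLINES.get((jurisdiction, filing_type))
    if deadline is None:
        return "No answer in the knowledge base yet."
    return f"In {jurisdiction}, {filing_type.replace('_', ' ')} are due {deadline}."

print(answer_deadline("FR", "annual_accounts"))
```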
The first point above requires a pretty decent knowledge model representing the domains that are covered. That's hard work. It's also work that looks pretty unproductive at first glance. It's hard to make a business case for "develop a knowledge model", because by nature it covers all areas, both information and workflows, and in and of itself it doesn't deliver a product. On the other hand, if you want to build a product that could use such features, its business case can't support developing the knowledge model just for that product. It's the canonical catch-22.
The way to work around that problem, if the business owners don't really get it (which happens, er, sometimes...), is for the professionals who implement the stuff, developers, user experience designers, business analysts and, crucially, the subject matter organization ("editorial"), to be smart: don't just work off a single product's requirements. Identify the underlying logic of modern product requirements, imagine what the next ones will be, and architect your solutions in ways that open new capabilities and lower future products' business case thresholds. Build your knowledge model one product at a time, in a way that can grow. Use open data and other semantic standards so that your model can keep growing.
In other words: don't silo.
How did I get there from the article? Well, the thing is: if we professional publishers maintain, as we do, significant levels of human expertise running our ships, people who write analysis, questions and answers and so on, we should use that work of humans answering specific questions to grow the knowledge model and make those answers parseable by algorithms. When we answer a niche question, we should ask ourselves: what are the data points, what are their relationships, and how do they relate to other things I already have? How can I model this in my knowledge model if it isn't already covered?
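As a minimal sketch of what modeling one answered question could look like, assuming RDF and Python's rdflib as the tooling (the vocabulary and the example facts are invented for illustration):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical vocabulary for our own knowledge model; every name
# below is invented for the example.
EX = Namespace("http://example.org/kb/")

g = Graph()
g.bind("ex", EX)

# A niche question a human expert just answered, decomposed into
# explicit data points and relationships instead of staying locked
# in prose: "Which court handles appeals for commercial disputes
# in jurisdiction X?"
g.add((EX.CommercialDispute, RDF.type, EX.DisputeType))
g.add((EX.AppealsCourtX, RDF.type, EX.Court))
g.add((EX.AppealsCourtX, RDFS.label, Literal("Court of Appeal, Jurisdiction X")))
g.add((EX.AppealsCourtX, EX.handlesAppealsFor, EX.CommercialDispute))
g.add((EX.AppealsCourtX, EX.jurisdiction, EX.JurisdictionX))

# Serialized, these triples are data an algorithm can reason on, and
# the next product that needs court/dispute relations can reuse them.
print(g.serialize(format="turtle"))
```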
If we apply this type of practice to our knowledge production workflows, in tandem with writing the texts that answer questions, then we can really produce innovation and make sure that Google can't make us irrelevant once it decides to tackle our professional domains. This makes me think I should do something about Google Scholar at some point.
So yes, human niche knowledge is still the most useful thing around for answering hard questions. But we can model that meaning one question at a time, to make it even more useful to others once the question has been answered.