Microsoft Artificial Tech Competes in Reading Docs to Answer Questions
The team at Microsoft Research Asia reached the human parity milestone using the “Stanford Question Answering Dataset”, known among researchers as “SQuAD”. It is a machine reading comprehension dataset made up of questions about a set of Wikipedia articles. According to the SQuAD leaderboard, Microsoft submitted a model that achieved a score of 82.650 on the exact-match metric, while human performance on the same set of questions and answers is 82.304.
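For readers curious what the exact-match number measures, below is a minimal Python sketch of how SQuAD-style exact match is typically computed: each predicted answer is normalized (lowercased, with articles, punctuation, and extra whitespace stripped, mirroring the official SQuAD evaluation script) and counted as correct only if it matches one of the human-provided gold answers after normalization; the questions, answers, and numbers in the example are illustrative, not taken from the leaderboard.

```python
import re
import string


def normalize_answer(s):
    """Lowercase, drop articles and punctuation, and collapse whitespace,
    similar to the normalization in the official SQuAD evaluation script."""
    s = s.lower()
    s = re.sub(r"\b(a|an|the)\b", " ", s)  # drop English articles
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    return " ".join(s.split())  # collapse runs of whitespace


def exact_match(prediction, gold_answers):
    """Return 1 if the normalized prediction equals any gold answer, else 0."""
    return int(any(normalize_answer(prediction) == normalize_answer(g)
                   for g in gold_answers))


# Illustrative predictions and gold answers (hypothetical question IDs).
preds = {"q1": "Denver Broncos", "q2": "in the 1990s"}
golds = {"q1": ["Denver Broncos", "The Denver Broncos"], "q2": ["the 1990s"]}

# The reported exact-match score is the average over all questions,
# expressed on a 0-100 scale (like the 82.650 on the leaderboard).
em = 100.0 * sum(exact_match(preds[q], golds[q]) for q in preds) / len(preds)
print(f"Exact match: {em:.3f}")
```

In this toy example only the first prediction matches after normalization, so the printed score is 50.000; the leaderboard figure of 82.650 is the same kind of average taken over the full hidden test set.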