Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Original Paper
Development and Evaluation of Quality Control Methods in a Microtask Crowdsourcing Platform
Masayuki Ashikawa, Takahiro Kawamura, Akihiko Ohsuga

2014 Volume 29 Issue 6 Pages 503-515

Abstract
Open crowdsourcing platforms like Amazon Mechanical Turk provide an attractive solution for processing high-volume tasks at low cost. However, quality control remains a major concern. In this paper, we design a private crowdsourcing system in which we can devise quality control methods. For quality control, we introduce four worker selection methods, which we call preprocessing filtering, real-time filtering, postprocessing filtering, and guess processing filtering. These methods include a novel approach that utilizes a collaborative filtering technique in addition to the basic approach of initial training or gold-standard data. As a use case, we have built a very large dictionary, which is necessary for Large Vocabulary Continuous Speech Recognition and Text-to-Speech. We show how the system yields high-quality results for the difficult tasks of word extraction, part-of-speech tagging, and pronunciation prediction required to build a large dictionary.
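
To illustrate the kind of collaborative-filtering-based worker selection the abstract refers to, the following is a minimal sketch, not the paper's actual formulation: it estimates each worker's reliability from pairwise agreement with other workers on shared tasks and filters out low scorers. The answer matrix, the agreement measure, and the threshold are all hypothetical choices made for this example.

import numpy as np

def agreement_matrix(answers):
    """answers: (n_workers, n_tasks) integer labels; -1 marks an unanswered task.
    Returns the pairwise fraction of shared tasks on which two workers agree."""
    n = answers.shape[0]
    sim = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            shared = (answers[i] >= 0) & (answers[j] >= 0)
            if shared.any():
                sim[i, j] = sim[j, i] = float(
                    np.mean(answers[i, shared] == answers[j, shared]))
    return sim

def select_workers(answers, threshold=0.5):
    """Keep workers whose mean agreement with the other workers reaches the
    (hypothetical) threshold; the rest are filtered out as unreliable."""
    sim = agreement_matrix(answers)
    np.fill_diagonal(sim, np.nan)          # ignore self-agreement
    scores = np.nanmean(sim, axis=1)       # mean agreement with peers
    return np.flatnonzero(scores >= threshold), scores

# Toy usage: 4 workers, 5 tasks; worker 3 answers at odds with the rest
# and is filtered out.
answers = np.array([
    [1, 0, 1, 1, -1],
    [1, 0, 1, 1, 0],
    [1, 0, -1, 1, 0],
    [0, 1, 0, 0, 1],
])
kept, scores = select_workers(answers)
print(kept, np.round(scores, 2))  # [0 1 2] [0.67 0.67 0.67 0.]

In the paper's setting, such agreement-based scores would complement filtering based on initial training or gold-standard data, since they require no ground-truth answers.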
© The Japanese Society for Artificial Intelligence 2014