Abstract
Community question answering (CQA) sites such as Yahoo! Chiebukuro are valuable resources for automatic question answering (QA) systems. However, CQA users often post questions seeking not general truths but rather the opinions of other people. We believe that a QA system should behave differently depending on the question type. We therefore define two question types, based on whether the questioner expects subjective or objective answers, and report on an automatic question classification experiment. Using uni-gram and bi-gram features with a Naïve Bayes classifier with smoothing, we achieve over 80% weighted accuracy. We also discuss inter-annotator agreement and its impact on automatic classification accuracy, as well as which kinds of questions tend to be misclassified.
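To make the classification setup concrete, the following is a minimal sketch (not the authors' exact experimental pipeline) of a subjective-vs-objective question classifier with uni-gram and bi-gram features and a smoothed multinomial Naïve Bayes model, using scikit-learn; the example questions and labels are hypothetical illustrations.

```python
# A minimal sketch, assuming a bag-of-n-grams representation and
# Laplace (add-one) smoothing; not the paper's exact configuration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = subjective (opinion-seeking), 0 = objective.
questions = [
    "What do you think of this movie?",
    "Which restaurant do you recommend in Kyoto?",
    "What is the capital of Australia?",
    "How many bones are in the human body?",
]
labels = [1, 1, 0, 0]

# ngram_range=(1, 2) extracts uni-gram and bi-gram features;
# alpha=1.0 applies Laplace smoothing to the class-conditional counts.
clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    MultinomialNB(alpha=1.0),
)
clf.fit(questions, labels)

print(clf.predict(["What do you recommend?"]))  # expected: [1] (subjective)
```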