
Transactions of the Japanese Society for Artificial Intelligence
Vol. 32 (2017), No. 3, p. B-G81_1-13

http://doi.org/10.1527/tjsai.B-G81

Original Paper

References
  • [Ahn 08] Ahn, L. and Dabbish, L.: Designing games with a purpose, Communications of the ACM, Vol. 51, No. 8, pp. 58-67, (2008).
  • [Almond 09] Almond, R., et al.: Bayesian networks: A teacher's view, International Journal of Approximate Reasoning, Vol. 50, No. 3, pp. 450-460, (2009).
  • [Ashikawa 14] Ashikawa, M., Kawamura, T. and Ohsuga, A.: Speech synthesis data collection for visually impaired person, Second AAAI Conference on Human Computation and Crowdsourcing, (2014).
  • [Bachrach 12] Bachrach, Y., et al.: How to grade a test without knowing the answers---A Bayesian graphical model for adaptive crowdsourcing and aptitude testing, arXiv preprint arXiv:1206.6386, (2012).
  • [Bragg 13] Bragg, J. and Weld, D. S.: Crowdsourcing multi-label classification for taxonomy creation, First AAAI Conference on Human Computation and Crowdsourcing, (2013).
  • [Burnap 13] Burnap, A., et al.: A simulation based estimation of crowd ability and its influence on crowdsourced evaluation of design concepts, ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, American Society of Mechanical Engineers, Paper No. V03BT03A004, (2013).
  • [Butz 06] Butz, C., Hua, S. and Maguire, B.: A Web-based Bayesian Intelligent Tutoring System for Computer Programming, Web Intelligence and Agent Systems, Vol. 4, No. 1, pp. 77-97, IOS Press, (2006).
  • [Carpenter 11] Carpenter, B.: A hierarchical Bayesian model of crowdsourced relevance coding, TREC, (2011).
  • [Fernandez 11] Fernandez, A., et al.: A system for relevance analysis of performance indicators in higher education using Bayesian networks, Knowledge and Information Systems, Vol. 27, No. 3, pp. 327-344, (2011).
  • [Garcia 07] Garcia, P., et al.: Evaluating Bayesian networks' precision for detecting students' learning styles, Computers & Education, Vol. 49, No. 3, pp. 794-808, (2007).
  • [Hoogerheide 12] Hoogerheide, L., Block, J. H. and Thurik, R.: Family background variables as instruments for education in income regressions: A Bayesian analysis, Economics of Education Review, Vol. 31, No. 5, pp. 515-523, (2012).
  • [Hutton 12] Hutton, A., Liu, A. and Martin, C. E.: Crowdsourcing evaluations of classifier interpretability, AAAI Spring Symposium: Wisdom of the Crowd, (2012).
  • [Kamar 12] Kamar, E., Hacker, S. and Horvitz, E.: Combining human and machine intelligence in large-scale crowdsourcing, Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Vol. 1, pp. 467-474, (2012).
  • [Kittur 08] Kittur, A., Chi, E. and Suh, B.: Crowdsourcing user studies with Mechanical Turk, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 453-456, (2008).
  • [Lin 12] Lin, C. H. and Weld, D.: Crowdsourcing control: Moving beyond multiple choice, arXiv preprint arXiv:1210.4870, (2012).
  • [May 06] May, H.: A multilevel Bayesian item response theory method for scaling socioeconomic status in international studies of education, Journal of Educational and Behavioral Statistics, Vol. 31, No. 1, pp. 63-79, (2006).
  • [Miyagawa 04] Miyagawa, M.: Statistical Causal Inference (in Japanese), Asakura Shoten, (2004).
  • [Nushi 15] Nushi, B., et al.: Crowd Access Path Optimization: Diversity Matters, Third AAAI Conference on Human Computation and Crowdsourcing, (2015).
  • [Okamoto 08] Okamoto, T. and Kayama, M.: Artificial Intelligence and Educational Technology (in Japanese), Ohmsha, (2008).
  • [Pardos 10] Pardos, Z. A., et al.: Using fine-grained skill models to fit student performance with Bayesian networks, Handbook of Educational Data Mining, pp. 417-425, (2010).
  • [Raykar 14] Raykar, V. C. and Agrawal, P.: Sequential crowdsourced labeling as an epsilon-greedy exploration in a Markov decision process, AISTATS, pp. 832-840, (2014).
  • [Shaw 11] Shaw, A. D., Horton, J. J. and Chen, D. L.: Designing incentives for inexpert human raters, Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work (CSCW), pp. 275-284, (2011).
  • [Simpson 15] Simpson, E. and Roberts, S.: Bayesian methods for intelligent task assignment in crowdsourcing systems, Decision Making: Uncertainty, Imperfection, Deliberation and Scalability, pp. 1-32, Springer International Publishing, (2015).
  • [Sun 12] Sun, Y. and Dance, C.: When majority voting fails: Comparing quality assurance methods for noisy human computation environment, arXiv preprint arXiv:1204.3516, (2012).
  • [Tang 11] Tang, W. and Lease, M.: Semi-supervised consensus labeling for crowdsourcing, SIGIR 2011 Workshop on Crowdsourcing for Information Retrieval (CIR), pp. 1-6, (2011).
  • [Ueno 00] Ueno, M.: Intelligent tutoring system based on belief networks, International Workshop on Advanced Learning Technologies, pp. 141-142, (2000).
  • [Venanzi 15] Venanzi, M., et al.: The ActiveCrowdToolkit: An open-source tool for benchmarking active learning algorithms for crowdsourcing research, Third AAAI Conference on Human Computation and Crowdsourcing, (2015).
  • [Wais 11] Wais, P., et al.: Towards large-scale processing of simple tasks with Mechanical Turk, AAAI Workshop on Human Computation, (2011).
  • [Wauthier 11] Wauthier, F. L. and Jordan, M. I.: Bayesian bias mitigation for crowdsourcing, Advances in Neural Information Processing Systems, pp. 1800-1808, (2011).
  • [Xenos 04] Xenos, M.: Prediction and assessment of student behaviour in open and distance education in computers using Bayesian networks, Computers & Education, Vol. 43, No. 4, pp. 345-359, (2004).
  • [Xie 15] Xie, H., Lui, J. C. S. and Towsley, D.: Incentive and reputation mechanisms for online crowdsourcing systems, 2015 IEEE 23rd International Symposium on Quality of Service (IWQoS), pp. 207-212, (2015).
Copyright © The Japanese Society for Artificial Intelligence 2017
