Proceedings of the Symposium on Chemoinformatics
41st Symposium on Chemoinformatics, Kumamoto

Oral Presentation
"How can we trust QSPR models?": Ideas on building interpretable machine learning methods
*陳 嘉修, 田中 健一, 小寺 正明, 船津 公人

p. 2B10-

Abstract
In the chemical industry, designing novel compounds with desired characteristics is a bottleneck in chemical manufacturing development. Quantitative structure–property relationship (QSPR) modeling with machine learning techniques can make chemical design more efficient. A challenge of current QSPR modeling is the lack of interpretability when operating black-box models. Hence, interpretable machine learning methods will be essential for researchers to understand, trust, and effectively manage a QSPR model. Global interpretability and local interpretability are two typical ways to define the scope of model interpretation. Global interpretation provides information on structure–property relationships across a series of compounds, helping shed light on the mechanisms underlying compound properties. Local interpretation provides information about how different structural motifs of a single compound influence the property. In this presentation, we focus on the design of interpretable frameworks for typical machine learning models. Two approaches to interpretable models, based on ensemble learning and deep learning, will be presented to achieve global and local interpretation respectively, with performance equal to or better than typical trustworthy models. We believe that trust in QSPR models can be enhanced by interpretable machine learning methods that conform to human knowledge and expectations.
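As a minimal illustration of the global-interpretation idea described above, permutation importance asks how much a model's prediction error grows when one descriptor is scrambled across compounds. This is a generic sketch on a synthetic two-descriptor dataset with a stand-in model, not the ensemble- or deep-learning methods from the presentation; all names and values here are assumptions for illustration.

```python
import random

random.seed(0)

# Synthetic dataset (assumed for illustration): each "compound" has two
# descriptors. d0 strongly drives the property; d1 contributes weakly.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
y = [3.0 * d0 + 0.2 * d1 + 0.1 * random.gauss(0, 1) for d0, d1 in X]

def model(x):
    # Stand-in for a trained QSPR model (here, the known true relationship).
    return 3.0 * x[0] + 0.2 * x[1]

def mse(X, y):
    # Mean squared error of the model on a dataset.
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, col):
    # Global interpretation: shuffle one descriptor column across all
    # compounds and measure how much the prediction error increases.
    base = mse(X, y)
    shuffled = [x[col] for x in X]
    random.shuffle(shuffled)
    Xp = [x[:col] + [v] + x[col + 1:] for x, v in zip(X, shuffled)]
    return mse(Xp, y) - base

importances = [permutation_importance(X, y, c) for c in range(2)]
print(importances)  # d0 should matter far more than d1
```

A large error increase for a descriptor means the model relies on it across the whole compound series, which is exactly the series-level, structure–property view that global interpretation targets; local methods would instead attribute a single compound's prediction to its structural motifs.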