Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
37th (2023)
Session ID : 2L1-GS-11-04
Uncertainty Evaluation Methodology for Machine Learning based on Prediction Interval
*Hiroki SAITO
Abstract

Uncertainty, expressed as the variance of a prediction in machine learning, is an important indicator for the maintenance and operation of machine learning systems, for example when correcting predictions to the safe side or discarding predictions whose uncertainty is large. However, well-known uncertainty evaluation methods are restricted to particular machine learning models. Furthermore, since uncertainties are evaluated from the distances among data points or the distribution of model parameters, they may not correctly represent important properties such as confidence intervals and prediction intervals. The purpose of this work is to propose an uncertainty evaluation methodology that can be applied to any machine learning model and to verify its effectiveness. Applying the proposed methodology to toy data, we confirmed that the uncertainties obtained by the proposed method satisfy the properties of confidence intervals and prediction intervals, which indicates its validity.
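
The abstract does not describe the proposed algorithm itself, so the sketch below is only an illustrative point of reference, not the paper's method: split conformal prediction, one well-known model-agnostic way to wrap an arbitrary regressor and obtain prediction intervals with a target coverage. The toy data, the choice of regressor, and the coverage level 1 - alpha = 0.9 are all assumptions made for the example.

# Illustrative sketch (assumption): split conformal prediction as a
# model-agnostic way to build prediction intervals. This is NOT the
# method proposed in the paper; it only shows the general idea of
# evaluating uncertainty via prediction intervals around any model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical toy data: y = sin(x) + Gaussian noise.
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.standard_normal(2000)

# Split into a proper training set and a calibration set.
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Any point-prediction model can be plugged in here.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))

# Score quantile that yields (1 - alpha) marginal coverage.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for new points: [prediction - q, prediction + q].
X_new = np.linspace(-3, 3, 5).reshape(-1, 1)
pred = model.predict(X_new)
lower, upper = pred - q, pred + q
print(np.column_stack([X_new[:, 0], lower, upper]))

On such toy data one can then check empirically, as the abstract does for the proposed method, whether roughly 90% of held-out targets fall inside the reported intervals.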

© 2023 The Japanese Society for Artificial Intelligence