JSAI Technical Report, SIG-FPAI
Online ISSN : 2436-4584
117th (Sep, 2021)

Probing Gap between Learning and Evaluation Criteria
Han BAO

Abstract

While machine learning has recently achieved dramatic successes, numerous studies have raised concerns about its deficiencies, such as a lack of robustness and fairness. This tendency can be observed even in cutting-edge model architectures and training algorithms. Why do these unexpected failures occur? In this talk, we focus on the discrepancy between the objective functions that learning algorithms optimize and the evaluation criteria that ultimately define the goodness of learned models. Clearly distinguishing the two enables us to verify whether learning algorithms actually achieve our desired properties and to design suitable learning criteria. Specifically, I will introduce our recent work on adversarially robust classification and similarity learning.
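As a minimal illustration of the gap the abstract refers to (a sketch only, not the method presented in the talk), the following Python/NumPy snippet trains a toy linear classifier by minimizing a logistic surrogate loss, then evaluates it with two criteria the surrogate does not directly control: the 0-1 error and an adversarially robust 0-1 error under an l-infinity perturbation. All names, the data-generating process, and the perturbation radius eps are hypothetical choices for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binary classification data with labels in {-1, +1}.
    n, d = 200, 2
    X = rng.normal(size=(n, d))
    w_true = np.array([1.0, -1.0])
    y = np.sign(X @ w_true + 0.3 * rng.normal(size=n))

    def logistic_loss(w, X, y):
        # Surrogate objective the learning algorithm optimizes.
        return np.mean(np.log1p(np.exp(-y * (X @ w))))

    def zero_one_error(w, X, y):
        # Evaluation criterion: plain misclassification rate.
        return np.mean(np.sign(X @ w) != y)

    def robust_zero_one_error(w, X, y, eps=0.3):
        # Worst-case error under an l_inf perturbation of radius eps:
        # for a linear model the adversary shrinks each margin by eps * ||w||_1.
        margins = y * (X @ w) - eps * np.abs(w).sum()
        return np.mean(margins <= 0)

    # Plain gradient descent on the logistic surrogate.
    w = np.zeros(d)
    for _ in range(500):
        z = -y * (X @ w)
        grad = X.T @ (-y / (1.0 + np.exp(-z))) / n
        w -= 0.5 * grad

    print("surrogate (logistic) loss :", logistic_loss(w, X, y))
    print("0-1 error                 :", zero_one_error(w, X, y))
    print("robust 0-1 error (eps=0.3):", robust_zero_one_error(w, X, y))

Running this typically shows a small surrogate loss and 0-1 error alongside a noticeably larger robust 0-1 error, which is precisely the kind of discrepancy between the optimized objective and the evaluation criterion that the abstract highlights.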

© 2021 The Japanese Society for Artificial Intelligence