Organizer: Transdisciplinary Federation of Science and Technology (横断型基幹科学技術研究団体連合)
Conference: The 10th Transdisciplinary Federation of Science and Technology Conference (第10回横幹連合コンファレンス)
Edition: 10
Venue: Nagaoka University of Technology, Nagaoka, Niigata, Japan
Dates: 2019/11/30 - 2019/12/01
This paper proposes a new methodology for verifying the safety of black-box artificial intelligence (AI) systems such as deep neural networks (DNNs). The proposed methodology uses the following two key technologies:
FRAM (Functional Resonance Analysis Method)
SpecTRM (Specification Tools and Requirement Methodology)
FRAM is used to visualize the hidden internal logic of the DNN, which is assumed to be acquired during the machine learning (ML) process.
SpecTRM is used to analyze the ML test results and to tune and validate the FRAM model, demonstrating the completeness of the ML and the consistency of the learned DNN model.
This study yielded the following findings:
1) The hidden internal logic of a DNN system can be effectively visualized with FRAM, owing to its rich modeling capability for defining timing, preconditions, and postconditions.
2) Formal method technology (SpecTRM) can be used to check the consistency of the FRAM model and to validate that the learned AI logic is understandable by humans.
3) Formal method technology (SpecTRM) can be used to check the completeness of the ML by confirming that full-path testing is completed and by identifying all missing test cases.
4) The proposed methodology is applicable not only to black-box AI but also to white-box AI such as random forest systems.
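The completeness check in finding 3 can be illustrated with a minimal sketch: enumerate every combination of discretized input conditions (the full-path space) and flag the combinations not covered by any executed test case. This is only an illustration of the idea, not the SpecTRM tool itself; the condition names, values, and executed test set below are hypothetical.

```python
from itertools import product

# Hypothetical discretized input conditions for a DNN under test.
conditions = {
    "obstacle_ahead": [True, False],
    "speed_range": ["low", "mid", "high"],
    "weather": ["clear", "rain"],
}

# Hypothetical set of test cases already executed against the DNN.
executed = {
    (True, "low", "clear"),
    (True, "mid", "rain"),
    (False, "high", "clear"),
}

# Full-path space: every combination of condition values (2 * 3 * 2 = 12).
all_cases = set(product(*conditions.values()))

# Missing test cases: combinations no executed test has covered.
missing = sorted(all_cases - executed, key=str)

print(f"coverage: {len(executed)}/{len(all_cases)}")
for case in missing:
    print("missing:", dict(zip(conditions, case)))
```

A real SpecTRM-style analysis would derive the condition tables from a formal requirements specification rather than a hand-written dictionary, but the set-difference step that identifies missing cases is the same in principle.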