Proceedings of the Transdisciplinary Federation (Oukan) Conference
The 10th Transdisciplinary Federation (Oukan) Conference
Session ID: B-4

B-4 New Initiatives on Safety and Reliability in Manufacturing
Safety Verification of Black-Box Artificial Intelligence Systems Using FRAM/SpecTRM
*野本 秀樹, 道浦 康貴, 飯野 翔太
Abstract

This paper proposes a new methodology for verifying the safety of black-box artificial intelligence (AI) such as deep neural networks (DNN). The proposed methodology uses the following two key technologies:
- FRAM (Functional Resonance Analysis Method)
- SpecTRM (Specification Tools and Requirements Methodology)
FRAM is used to visualize the hidden internal logic of the DNN, which is assumed to have been acquired during the machine learning (ML) process.
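How such a visualization might be encoded is sketched below: a minimal Python representation of a FRAM function with its six aspects (input, output, precondition, resource, time, control). The class, field names, and the example function are illustrative assumptions, not artifacts from the paper.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Sketch of a FRAM function record. FRAM characterizes every function by
# six aspects: Input, Output, Precondition, Resource, Time and Control.
# All names below are illustrative; they are not taken from the paper.
@dataclass
class FramFunction:
    name: str
    inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    preconditions: list[str] = field(default_factory=list)
    resources: list[str] = field(default_factory=list)
    time: str | None = None            # timing constraint, if any
    control: list[str] = field(default_factory=list)

# Example: one learned DNN behavior expressed as a FRAM function.
detect_obstacle = FramFunction(
    name="Detect obstacle",
    inputs=["camera frame"],
    outputs=["obstacle flag"],
    preconditions=["illumination above threshold"],
    resources=["trained DNN weights"],
    time="within one control cycle",
)

def couplings(functions: list[FramFunction]) -> list[tuple[str, str, str]]:
    """Return (producer, signal, consumer) triples: outputs of one
    function that feed an input, precondition or control of another."""
    links = []
    for f in functions:
        for g in functions:
            if f is g:
                continue
            for out in f.outputs:
                if out in g.inputs or out in g.preconditions or out in g.control:
                    links.append((f.name, out, g.name))
    return links
```

The couplings between functions are what a FRAM analysis traces when looking for unexpected interactions between learned behaviors.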
SpecTRM is used to analyze the ML test results and to tune and validate the FRAM model, demonstrating the completeness of the ML and the consistency of the learned DNN model.
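To make the consistency and completeness checks concrete, the sketch below re-implements the AND/OR-table semantics used in SpecTRM-RL-style specifications in plain Python. This is not SpecTRM's own API, and the two-condition example table is invented for illustration.

```python
from itertools import product

# AND/OR-table in the style of SpecTRM-RL: each column is one scenario;
# each cell is True, False, or None ("don't care").
conditions = ["obstacle_detected", "speed_high"]
columns = [
    {"obstacle_detected": True,  "speed_high": None},   # -> output BRAKE
    {"obstacle_detected": False, "speed_high": None},   # -> output CRUISE
]

def matches(column, assignment):
    """A column applies when every non-don't-care cell equals the input."""
    return all(v is None or assignment[c] == v for c, v in column.items())

def check(conditions, columns):
    """Completeness: every input combination hits at least one column.
    Consistency: no input combination hits more than one column."""
    uncovered, ambiguous = [], []
    for values in product([True, False], repeat=len(conditions)):
        assignment = dict(zip(conditions, values))
        hits = [i for i, col in enumerate(columns) if matches(col, assignment)]
        if not hits:
            uncovered.append(assignment)          # completeness gap
        elif len(hits) > 1:
            ambiguous.append((assignment, hits))  # consistency violation
    return uncovered, ambiguous

gaps, clashes = check(conditions, columns)
print("uncovered input combinations:", gaps)    # [] for this table
print("overlapping columns:", clashes)          # [] for this table
```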
This study produced the following findings:
1) The hidden internal logic of a DNN system can be effectively visualized with FRAM, owing to its rich modeling capability for defining timing, preconditions, and postconditions.
2) Formal method technology (SpecTRM) can be used to check the consistency of the FRAM model and to validate that the learned AI logic is understandable by humans.
3) Formal method technology (SpecTRM) can be used to check the completeness of the ML by confirming that full-path testing has been completed and by identifying all missing test cases (a sketch of this check follows the list).
4) The proposed methodology applies not only to black-box AI but also to white-box AI such as random forest systems.
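As a minimal illustration of finding 3, the sketch below compares the condition combinations exercised by a hypothetical ML test set against the full input space and reports the untested combinations. The condition names and the tested set are invented, not taken from the paper's results.

```python
from itertools import product

# Compare the condition combinations exercised by the ML test results
# against the full input space and report what was never tested.
conditions = ["obstacle_detected", "speed_high"]

tested = {            # combinations actually seen in the ML test results
    (True, True),
    (False, False),
}

all_cases = set(product([True, False], repeat=len(conditions)))
for case in sorted(all_cases - tested):
    print("missing test case:", dict(zip(conditions, case)))
# missing test case: {'obstacle_detected': False, 'speed_high': True}
# missing test case: {'obstacle_detected': True, 'speed_high': False}
```

Enumerating the full input space is only feasible over the discretized conditions of the specification model, not over the raw DNN input space, which is why the check operates on the FRAM/SpecTRM model rather than on the network directly.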

© 2019 NPO Transdisciplinary Federation of Science and Technology (Oukan)