Proceedings of the Conference of Transdisciplinary Federation of Science and Technology
10th TRFST Conference
Session ID: B-4

Safety analysis of Black box type of AI by FRAM and SpecTRM
*H. Nomoto, Y. Michiura, S. Iino
Keywords: AI, Safety, FRAM

Abstract

This paper proposes a new methodology to verify the safety of black-box artificial intelligence (AI) such as deep neural networks (DNN). In the proposed methodology, the following two key technologies are used:
 FRAM (Functional Resonance Analysis Method)
 SpecTRM (Specification Tools and Requirement Methodology)
FRAM will be used to visualize the hidden internal logic inside the DNN, which is assumed to be acquired during the machine learning (ML) process.
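As an illustration of this idea, the Python sketch below (an assumed representation, not the authors' actual tool) models each DNN layer as a FRAM function with Input, Output, Precondition, and Time aspects, so that the couplings between layers can be listed and visualized. The layer names and aspect labels are hypothetical.

```python
# Minimal sketch (assumption, not the authors' tool): represent DNN layers
# as FRAM functions with selected FRAM aspects so couplings become visible.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FramFunction:
    name: str
    inputs: List[str] = field(default_factory=list)         # Input aspect
    outputs: List[str] = field(default_factory=list)        # Output aspect
    preconditions: List[str] = field(default_factory=list)  # Precondition aspect
    time: str = ""                                           # Time aspect

def dnn_to_fram(layer_names: List[str]) -> List[FramFunction]:
    """Map each DNN layer to a FRAM function; the output of layer i
    becomes the input (coupling) of layer i+1."""
    functions = []
    for i, name in enumerate(layer_names):
        functions.append(FramFunction(
            name=name,
            inputs=[f"activation_{i-1}"] if i > 0 else ["sensor_data"],
            outputs=[f"activation_{i}"],
            preconditions=["weights_loaded"],
            time=f"step_{i}",
        ))
    return functions

if __name__ == "__main__":
    for fn in dnn_to_fram(["input_layer", "hidden_layer", "output_layer"]):
        print(f"{fn.name}: in={fn.inputs} out={fn.outputs} "
              f"pre={fn.preconditions} time={fn.time}")
```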
SpecTRM will be used to analyze the ML test results and to tune and validate the FRAM model, showing the completeness of the ML and the consistency of the learned DNN model.
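The following Python sketch illustrates the kind of completeness check intended here under assumed inputs (it does not use SpecTRM itself): the discrete condition space implied by the learned model is enumerated, and every combination not exercised by a test case is reported as a missing test case. The condition names and the test log are hypothetical.

```python
# Minimal sketch (assumed workflow, not SpecTRM): enumerate the discrete
# condition space of the model and flag combinations never exercised by a test.
from itertools import product

# Hypothetical discrete conditions extracted from the learned model.
conditions = {
    "obstacle_detected": [True, False],
    "speed_range": ["low", "high"],
    "sensor_status": ["ok", "degraded"],
}

# Hypothetical test log: condition combinations actually exercised.
executed_tests = {
    (True, "low", "ok"),
    (False, "low", "ok"),
    (True, "high", "ok"),
}

def missing_test_cases(conditions, executed):
    """Return every condition combination not covered by a test case."""
    names = list(conditions)
    full_space = product(*(conditions[n] for n in names))
    return [dict(zip(names, combo)) for combo in full_space
            if combo not in executed]

if __name__ == "__main__":
    for case in missing_test_cases(conditions, executed_tests):
        print("missing:", case)
```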
Through this study, the following findings were obtained:
1) The hidden internal logic of a DNN system can be effectively visualized by FRAM, owing to its rich modeling capability to define timing, preconditions, and postconditions.
2) Formal method technology (SpecTRM) can be used to check the consistency of the FRAM model and to validate that the learned AI logic is understandable by humans (see the sketch after this list).
3) Formal method technology (SpecTRM) can be used to check the completeness of the ML by confirming that full-path testing has been completed and by identifying all missing test cases.
4) The proposed methodology is applicable not only to black-box AI but also to white-box AI such as random forest systems.
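Regarding finding 2, the Python sketch below (an assumed decision-table representation, not SpecTRM's actual specification language) shows one way such a consistency check can be stated: rows whose conditions are identical but whose outputs differ are flagged as inconsistent. The conditions and outputs are hypothetical.

```python
# Minimal sketch (assumption, not SpecTRM): detect inconsistent rows in an
# AND/OR-style decision table, i.e. identical conditions with different outputs.
from itertools import combinations

# Hypothetical table: each row maps condition values to a decided output.
rows = [
    ({"obstacle_detected": True,  "speed_range": "low"},  "brake"),
    ({"obstacle_detected": True,  "speed_range": "low"},  "steer"),   # conflict
    ({"obstacle_detected": False, "speed_range": "high"}, "cruise"),
]

def inconsistent_pairs(rows):
    """Return pairs of rows with identical conditions but different outputs."""
    return [(a, b) for a, b in combinations(rows, 2)
            if a[0] == b[0] and a[1] != b[1]]

if __name__ == "__main__":
    for a, b in inconsistent_pairs(rows):
        print("inconsistent:", a, "vs", b)
```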

© 2019 Transdisciplinary Federation of Science and Technology