Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
34th (2020)
セッションID: 2K1-ES-2-03

Is Neural Architecture Search A Way Forward to Develop Robust Neural Networks?
*Shashank KOTYAN, Danilo Vasconcellos VARGAS
Abstract

An imperceptibly altered image can mislead nearly any neural network into making an incorrect prediction. These modified images, known as adversarial examples, are generated by a special class of algorithms called adversarial attacks. Many defensive algorithms have been proposed to protect neural networks from such attacks, but none has achieved satisfying results. Recently, an innovative algorithm was proposed that claims to evolve intrinsically robust neural networks using neural architecture search. Previously, neural architecture search has been used to develop many accurate state-of-the-art neural networks. We examine this new algorithm to assess the feasibility of such architecture search in the domain of adversarial machine learning. We illustrate that more robust architectures exist, and thereby open up a new realm of possibilities for the advancement and exploration of neural networks using neural architecture search.
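For concreteness, the sketch below shows how a typical adversarial attack perturbs an input; it uses the fast gradient sign method (FGSM) as a stand-in, not the specific attack studied in the paper, and the model, labels, and epsilon budget are illustrative assumptions.

    # Minimal FGSM sketch (illustrative only; not the paper's attack).
    # Assumes a differentiable PyTorch classifier `model`, a batched input
    # `image` with pixel values in [0, 1], and its true `label`.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        """Return an adversarial example within an L-infinity budget `epsilon`."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that increases the loss, then clamp to valid pixels.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

The perturbation is bounded elementwise by epsilon, which is what keeps the altered image visually indistinguishable from the original while still changing the model's prediction.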

© 2020 The Japanese Society for Artificial Intelligence