人工知能 (Jinko Chino: Artificial Intelligence)
Online ISSN : 2435-8614
Print ISSN : 2188-2266
Journal of the Japanese Society for Artificial Intelligence (1986-2013, Print ISSN: 0912-8085)
Acquisition of Optimal Internal Representations of Neural Networks Using Information Criteria
Takashi Onoda (小野田 崇)
Commentary / general article, free access

1998, Vol. 13, No. 3, pp. 434-443

Abstract

Engineers are often confronted with the problem of extracting information about poorly known processes from data. Discerning the significant patterns in data, as a first step toward process understanding, can be greatly facilitated by reducing dimensionality. An artificial neural network can develop a compact representation of the input data, and such a network can be applied to discerning the significant patterns in data. The neural network, which can reduce dimensionality and produce a feature-space map, contains an internal "bottleneck" layer (containing fewer units than the input or output layers) and two additional hidden layers. In this type of neural network, the inputs are reproduced at the output layer. An important problem is determining the optimal network architecture for acquiring the optimal feature-space map. This paper proposes applying information criteria to determine the optimal neural network architecture, and shows, with a simple simulation, that the Neural Network Information Criterion (NNIC) is the most useful for acquiring the feature-space map. The neural network selected using NNIC can adequately reduce dimensionality and produce an optimal feature-space map resembling the actual distribution of the underlying system parameters.
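The abstract's recipe, selecting a bottleneck width by trading reconstruction error against model complexity, can be sketched in a few lines. This is not the paper's method: NNIC itself and the nonlinear five-layer autoencoder are not reproduced here. Instead, a linear bottleneck (PCA, the optimal linear auto-associative map) stands in for the autoencoder, and a generic BIC-style penalty stands in for NNIC; the function name `ic_score` and the synthetic data are purely illustrative.

```python
import numpy as np

def ic_score(X, width):
    """Score a candidate bottleneck width with an information-style
    criterion: n * log(MSE) + k * log(n).  A BIC-style stand-in for
    the paper's NNIC; a linear bottleneck (PCA) stands in for the
    nonlinear five-layer autoencoder described in the abstract."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    # PCA: encode by projecting onto the top `width` principal
    # directions, then decode back to the input space.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:width]                    # (width, d) "bottleneck" weights
    recon = Xc @ W.T @ W              # encode, then reconstruct inputs
    mse = np.mean((Xc - recon) ** 2)
    k = 2 * d * width                 # rough count of free parameters
    return n * np.log(mse + 1e-12) + k * np.log(n)

rng = np.random.default_rng(0)
# Synthetic data with intrinsic dimensionality 2, embedded in 10-D.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

scores = {m: ic_score(X, m) for m in range(1, 6)}
best_width = min(scores, key=scores.get)  # the criterion should favor 2
```

Too narrow a bottleneck leaves the error term large; too wide a one is punished by the `k * log(n)` penalty, so the minimum of the criterion recovers the intrinsic dimensionality, mirroring how NNIC selects the architecture in the paper's simulation.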

© 1998 The Japanese Society for Artificial Intelligence