IEEJ Transactions on Industry Applications
Online ISSN : 1348-8163
Print ISSN : 0913-6339
ISSN-L : 0913-6339
Input Layer Optimization of Neural Networks by Sensitivity Analysis and its Application to Recognition of Numerals
Hiroshi Takenaga, Shigeo Abe, Masao Takatoo, Masahiro Kayama, Tadaaki Kitamura, Yosiyuki Okuyama
1991 Volume 111 Issue 1 Pages 36-44

Abstract

A major problem in applying a multi-layered neural network to image processing is how to select input components from among the many features or pixels extracted from the object to be recognized. This selection is especially important for achieving both high-speed processing and a high recognition rate.
This paper proposes a method of optimizing the number of input components, i.e., the number of input neurons, without lowering the recognition rate. In general, neural networks learn a recognition algorithm from input patterns and their desired output patterns. If some input components are redundant, i.e., they can be expressed by other input components or they do not contribute to recognition, their effect on the recognition algorithm, namely their sensitivity to the outputs, is considered to be very small. Therefore, by analyzing the input-output sensitivity of the neural network, redundant input components can be deleted. By applying this method repeatedly, the input components useful for recognition can be selected. For an experimental neural network that recognizes numerals from 12 feature input components, the number of input components could be reduced to eight without lowering the recognition rate.
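The following is a minimal sketch (not the authors' code) of how such an input-output sensitivity analysis can be computed for a one-hidden-layer sigmoid network, assuming the weights W1, W2 have already been obtained by training; the network sizes, variable names, and threshold for deletion are illustrative assumptions only.

```python
# Hypothetical sketch: average |d output_k / d input_i| over a sample set,
# then flag the lowest-sensitivity input components as candidates for deletion.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_sensitivities(W1, b1, W2, b2, X):
    """Return one sensitivity value per input component.

    Larger values mean the input contributes more to the network outputs.
    """
    h = sigmoid(X @ W1 + b1)              # hidden activations, shape (N, H)
    y = sigmoid(h @ W2 + b2)              # output activations, shape (N, K)
    sens = np.zeros(X.shape[1])
    for n in range(X.shape[0]):
        # Chain rule for sigmoid units:
        # dy_k/dx_i = y_k(1-y_k) * sum_j W2[j,k] * h_j(1-h_j) * W1[i,j]
        dh = (h[n] * (1 - h[n]))[:, None] * W2        # (H, K)
        dy = W1 @ (dh * (y[n] * (1 - y[n])))          # (I, K)
        sens += np.abs(dy).sum(axis=1)
    return sens / X.shape[0]

# Hypothetical usage: 12 feature inputs, 20 hidden units, 10 numeral classes.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(12, 20)), np.zeros(20)      # stand-ins for trained weights
W2, b2 = rng.normal(size=(20, 10)), np.zeros(10)
X = rng.random((100, 12))                             # stand-in feature samples
s = input_sensitivities(W1, b1, W2, b2, X)
print("candidate inputs to delete:", np.argsort(s)[:4])   # lowest-sensitivity inputs
```

In this reading of the abstract, the procedure would be iterated: delete the lowest-sensitivity inputs, retrain the smaller network, and repeat while the recognition rate is maintained.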

© The Institute of Electrical Engineers of Japan