In this paper, we propose two learning methods of neural networks for a two-group discriminant problem. One method corresponds to possibility analysis and the other to necessity analysis. The proposed methods are the same as the back-propagation algorithm except for the cost function: the cost function minimized in our methods is not the plain sum of squared errors but a weighted sum of squared errors. The weight attached to each data point varies during learning and depends on whether the point belongs to group 1 or group 2. For example, when we consider the possibility of group 1, the weights of the data points in group 1 are held constant while the weights of the group 2 points are gradually decreased during learning. Conversely, when we consider the necessity of group 1, the weights of the group 1 points are gradually decreased while the weights of the group 2 points are held constant.
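The group-dependent weighting scheme above can be sketched as follows. This is an illustrative assumption, not the paper's setup: a single sigmoid unit stands in for the multi-layer network, and the decay factor, learning rate, and toy data are all invented for the demonstration.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_possibility(data, labels, epochs=200, lr=0.5, decay=0.98):
    # Weighted-SSE gradient descent for the possibility of group 1
    # (illustrative single sigmoid unit; the paper uses back-propagation
    # on a multi-layer network). The weight of each group-1 sample
    # (label 1) stays 1.0; the weight of each group-2 sample (label 0)
    # is multiplied by `decay` after every epoch.
    random.seed(0)
    w = [random.uniform(-0.1, 0.1) for _ in range(len(data[0]))]
    b = 0.0
    g2 = 1.0                                    # current group-2 weight
    for _ in range(epochs):
        for x, t in zip(data, labels):
            c = 1.0 if t == 1 else g2           # group-dependent weight
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            grad = c * (y - t) * y * (1.0 - y)  # d((c/2)(y-t)^2)/dz
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
        g2 *= decay                # group 2 gradually fades from the cost
    return w, b

# toy one-dimensional data: group 1 near x = 2, group 2 near x = 0
data = [[2.0], [2.2], [1.8], [0.0], [0.2], [-0.1]]
labels = [1, 1, 1, 0, 0, 0]
w, b = train_possibility(data, labels)
```

As the group-2 weights decay, the fit is increasingly dominated by the group-1 points, which is what yields a possibility-style boundary; swapping which group decays gives the necessity variant.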
In this paper, we propose a neural network method for classifying remote sensing data. Initially, we select training patterns based on geographical knowledge, and the neural network is trained until the desired classification is attained. After training, the selected training patterns are applied to the network again; any sample that is not classified satisfactorily is deleted from the original training set, and a new training set is formed. Once training on the refined set is complete, the remote sensing data are applied to the trained network for classification. Experiments on LANDSAT TM data show that this approach produces excellent classification results that are more realistic and less noisy than those of the usual Bayesian approach. Finally, the present neural network approach is also effective for removing cloud shadows.
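One round of the training-set refinement described above can be sketched generically. As a stand-in for the trained neural network (which the sketch does not reproduce), a hypothetical nearest-centroid classifier is used; the helper names and the toy data are assumptions for illustration only.

```python
def refine_training_set(train, predict, patterns, labels):
    # Fit the classifier, re-apply the training patterns, and drop the
    # samples the trained classifier still gets wrong, forming a
    # cleaner training set (one refinement round).
    model = train(patterns, labels)
    kept = [(x, t) for x, t in zip(patterns, labels)
            if predict(model, x) == t]
    return [x for x, _ in kept], [t for _, t in kept]

def train_centroid(xs, ys):
    # Stand-in classifier: per-class mean (the paper trains a network).
    sums, counts = {}, {}
    for x, y in zip(xs, ys):
        sums[y] = [s + xi for s, xi in zip(sums.get(y, [0.0] * len(x)), x)]
        counts[y] = counts.get(y, 0) + 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict_centroid(model, x):
    # Assign x to the class with the nearest centroid.
    return min(model,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(model[y], x)))

# toy data with one mislabeled sample: [5.2] is tagged class 0
xs = [[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]]
ys = [0, 0, 0, 1, 1, 0]
xs2, ys2 = refine_training_set(train_centroid, predict_centroid, xs, ys)
```

After refinement, the mislabeled sample is removed, mirroring the deletion of unsatisfactory training patterns before the final classification pass.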
In this paper, a neural network model is proposed that can recognize spatiotemporal patterns. The model is composed of a feature detection module and a feature integration module. The former extracts time-specific features such as onsets and offsets of input signals. The latter consists of two types of neurons, P-neurons and T-neurons. A P-neuron integrates features, while a T-neuron has a memory mechanism with an input-dependent decay time. Pairs of these two types of neural layers are connected in cascade in the model. Combining the two layer types in a hierarchical structure enables duration-independent recognition of spatiotemporal patterns. Computer simulations show that the model tolerates a wide range of distortions of input patterns, such as elongation and shrinkage along the temporal axis, shifts along the spatial axis, and deformations in shape.
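The T-neuron's memory mechanism can be sketched as a leaky integrator whose decay factor depends on the current input. The functional form below (stronger input lengthens the memory) is an invented assumption, not the paper's equation; it only illustrates the idea of an input-dependent decay time.

```python
def t_neuron_trace(inputs, base_decay=0.5):
    # Leaky memory with input-dependent decay (hypothetical form):
    # the effective decay factor base_decay ** (1 / (1 + x)) grows
    # toward 1 for strong inputs, so strong features persist longer.
    state, trace = 0.0, []
    for x in inputs:
        decay = base_decay ** (1.0 / (1.0 + x))
        state = decay * state + x
        trace.append(state)
    return trace

# a single unit pulse leaves a decaying trace
trace = t_neuron_trace([1.0, 0.0, 0.0])
```

A cascade of such memory layers interleaved with integrating P-neuron layers is what lets later stages see features independently of their exact durations.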
A connectionist classifier with a new network architecture, Chebychev networks, is proposed for classifying continuous patterns in non-convex decision regions. The conventional connectionist classifier, with monotonic sigmoid functions as its units, is shown to face problems of scaling and generalization. The new multi-layer networks, Chebychev networks, use nonmonotonic, nonlinear Chebychev functions as their input units and monotonic sigmoid functions as their hidden and output units. The proposed connectionist classifier consists of two modules, a Chebychev networks module and a classification module. The Chebychev networks module transforms a complex pattern space into a high-order but simple space, and the classification module works on this transformed space. It is shown that the problems of scaling and generalization can be overcome with the new classifier with Chebychev networks, and that its classification performance is significantly higher than that of the conventional connectionist classifier.
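The input-unit transform can be sketched with the standard Chebyshev recurrence. This is a minimal sketch of the feature-expansion idea, assuming scalar inputs scaled to [-1, 1]; the paper's actual network layout is not reproduced here.

```python
def chebyshev_features(x, order=4):
    # Map a scalar x in [-1, 1] to the Chebyshev polynomial values
    # T_0(x) .. T_order(x) using T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x).
    feats = [1.0, x]
    for _ in range(order - 1):
        feats.append(2.0 * x * feats[-1] - feats[-2])
    return feats[: order + 1]

feats = chebyshev_features(0.5)
```

Because the features are nonmonotonic (e.g. T_2(x) = 2x^2 - 1), a non-convex region such as {x : |x| > 0.7} becomes linearly separable in the transformed space, which is why the sigmoid-based classification module can then work on a simple space.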
In this paper, we propose a new learning scheme using feedback-error-learning for a neural network model applied as an adaptive nonlinear feedback controller (NNFC). The system uses a conventional feedback controller (CFC) both as an ordinary feedback controller, to guarantee global asymptotic stability, and as a reference model of the response. The output of the conventional feedback controller serves as the error signal for the neural network model. After the learning period, the response of the controlled object follows the response of the reference model. The convergence properties of this learning scheme are established using the averaged equation and the Liapunov method. The scheme was successfully applied to the control of an inverted pendulum in computer simulation. We also point out the relationship of this learning scheme to the cerebellum's adaptive control of posture and locomotion in animals.
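The core of feedback-error-learning, using the CFC output as the network's training error, can be sketched on a toy linear plant. Everything below (the plant x[k+1] = a x[k] + b u[k], the linear two-weight "network", the gains) is an illustrative assumption; the paper treats nonlinear plants such as an inverted pendulum.

```python
import math

def feedback_error_learning(steps=3000, lr=0.02, kp=2.0):
    # A linear feedforward "network" u_nn = w . phi learns an inverse
    # model of the plant, with the conventional feedback controller's
    # output u_fb serving as its error signal. As learning proceeds,
    # u_fb shrinks and the feedforward term takes over.
    a, b = 0.9, 0.5                      # toy stable plant parameters
    w = [0.0, 0.0]                       # feedforward network weights
    x = 0.0
    errs = []
    for k in range(steps):
        r = math.sin(0.05 * k)           # reference trajectory
        r_next = math.sin(0.05 * (k + 1))
        phi = [r_next, r]                # features fed to the network
        e = r - x
        errs.append(abs(e))
        u_fb = kp * e                    # conventional feedback controller
        u_nn = w[0] * phi[0] + w[1] * phi[1]
        x = a * x + b * (u_nn + u_fb)    # plant response
        # feedback-error-learning update: u_fb is the training error
        w = [wi + lr * u_fb * pi for wi, pi in zip(w, phi)]
    return w, errs

w, errs = feedback_error_learning()
```

The CFC keeps the loop stable throughout, while the adaptation drives the residual feedback signal down, so the tracking error late in the run is smaller than early on.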
Combinatorial optimization by means of the analog neural network proposed by J. J. Hopfield and D. W. Tank is one of the major subjects in neural computing. Although many reexaminations and various applications of the network have been reported so far, the effectiveness of the approach has not been clarified sufficiently. In the present paper, the effectiveness of the Hopfield model is examined through its application to a circuit block placement problem. Computer simulations show that, although the Hopfield model is not effective enough when used without careful preliminary analysis of the combinatorial problem, it can yield quite satisfactory solutions when endowed with an appropriate form and parameters of the energy function. The meaning of the appropriate parameter values that yield good solutions is also investigated theoretically.
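The role of the energy function can be illustrated with a discrete toy version of such a placement formulation. The penalty weight A, the cost matrix, and the greedy descent (in place of the analog network dynamics) are all assumptions made for this sketch.

```python
def placement_energy(v, cost, A=2.0):
    # Hopfield-style energy for a toy placement: v[i][j] = 1 means
    # block i occupies slot j. Penalty terms enforce one slot per
    # block and one block per slot; the last term is an assumed
    # assignment (wiring) cost.
    n = len(v)
    e = 0.0
    for i in range(n):
        e += A * (sum(v[i]) - 1) ** 2                        # one slot per block
    for j in range(n):
        e += A * (sum(v[i][j] for i in range(n)) - 1) ** 2   # one block per slot
    e += sum(cost[i][j] * v[i][j] for i in range(n) for j in range(n))
    return e

def greedy_descent(v, cost):
    # Asynchronous updates: flip any unit whose flip lowers the energy,
    # repeating until no single flip helps (a crude stand-in for the
    # analog network's continuous descent).
    improved = True
    while improved:
        improved = False
        for i in range(len(v)):
            for j in range(len(v)):
                e0 = placement_energy(v, cost)
                v[i][j] = 1 - v[i][j]
                if placement_energy(v, cost) < e0:
                    improved = True
                else:
                    v[i][j] = 1 - v[i][j]    # revert the flip
    return v

# two blocks, two slots; the cheap assignment is the identity
v = greedy_descent([[0, 0], [0, 0]], [[0, 5], [5, 0]])
```

The point the paper makes shows up even here: if A is too small relative to the cost term, the descent can settle on infeasible states, so the form and parameters of the energy function decide whether the minima correspond to valid, low-cost placements.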