Abstract
We propose a limited general regression neural network (LGRNN) for embedded systems.
The LGRNN is a kernel-based neuro-fuzzy network
that continues incremental learning under a fixed upper bound on the number of kernel functions.
Initially, the LGRNN learns new samples incrementally by allocating new kernels.
Once the number of kernel functions reaches the upper bound, the LGRNN must remove an existing kernel to learn a new sample.
However, there are cases in which the adverse effects of removing a useless kernel are greater than the positive effects of learning the new sample.
In this case, the LGRNN should refrain from learning the new sample.
To achieve this, the LGRNN predicts the effect of each learning option (e.g., ignoring the new sample or replacing an existing kernel with it) before the learning process begins, and executes the option with the best predicted outcome.
Experimental results on various datasets show that the method successfully reduces error even when the number of hidden units is limited to a fixed upper bound.
This article also presents an adaptive maximum power point tracking (MPPT) converter for a photovoltaic system as an example application of the LGRNN.
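The learning strategy described above can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's actual algorithm: it assumes a standard GRNN-style Nadaraya-Watson predictor with Gaussian kernels, and it evaluates the candidate learning options (ignore vs. replace a kernel) by their squared error on a small reference set; the paper's real selection criterion and kernel bookkeeping may differ.

```python
import numpy as np

class LimitedGRNN:
    """Toy GRNN with a fixed kernel budget (illustrative sketch only)."""

    def __init__(self, max_kernels, sigma=0.5):
        self.max_kernels = max_kernels  # upper bound on kernel count
        self.sigma = sigma              # shared Gaussian kernel width
        self.centers = []               # kernel centers c_i
        self.targets = []               # kernel output values y_i

    def predict(self, x):
        """Nadaraya-Watson estimate: kernel-weighted average of targets."""
        if not self.centers:
            return 0.0
        c = np.asarray(self.centers)
        y = np.asarray(self.targets)
        w = np.exp(-np.sum((c - np.asarray(x)) ** 2, axis=1)
                   / (2.0 * self.sigma ** 2))
        return float(w @ y / (w.sum() + 1e-12))

    def _error(self, X, Y):
        """Mean squared error on a reference set (assumed selection criterion)."""
        return float(np.mean([(self.predict(x) - t) ** 2 for x, t in zip(X, Y)]))

    def learn(self, x, y, ref_X, ref_Y):
        """Learn (x, y); once the budget is full, pick the best learning option."""
        x = np.asarray(x, dtype=float)
        if len(self.centers) < self.max_kernels:
            self.centers.append(x)          # budget not exhausted: allocate
            self.targets.append(float(y))
            return "allocated"
        # Option A: ignore the new sample (keep the current kernels).
        best_err, best_action = self._error(ref_X, ref_Y), ("ignore", None)
        # Option B: replace one existing kernel with the new sample;
        # try each candidate, keeping the replacement with the lowest error.
        for i in range(len(self.centers)):
            old = (self.centers[i], self.targets[i])
            self.centers[i], self.targets[i] = x, float(y)
            err = self._error(ref_X, ref_Y)
            self.centers[i], self.targets[i] = old   # undo trial replacement
            if err < best_err:
                best_err, best_action = err, ("replace", i)
        action, i = best_action
        if action == "replace":
            self.centers[i], self.targets[i] = x, float(y)
        return action
```

Note that the kernel count never exceeds `max_kernels`: a new sample is either ignored or swapped in for an existing kernel, mirroring the option-prediction step described in the abstract.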