Artificial neural networks (ANNs) have been used to construct empirical nonlinear models of process data. Because such networks are not based on physical theory and contain nonlinearities, their predictions are suspect when extrapolating beyond the range of the original training data. Standard networks give no indication of possible errors due to extrapolation. This paper describes a sequential supervised learning scheme for the recently formalized Growing Multi-Experts Network (GMN). It is shown that the GMN can generate a Certainty Factor that serves as an extrapolation detector for the network. The on-line GMN identification algorithm is presented and its performance is evaluated. The capability of the GMN to extrapolate is also demonstrated. Four benchmark experiments demonstrate the effectiveness and utility of the GMN as a universal function approximator.
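The abstract does not give the Certainty Factor's exact form. As a minimal, hypothetical sketch, one plausible realization takes the maximum Gaussian activation over the experts' domains, so that values near zero flag inputs outside the training region; the names `centers` and `widths` are assumed stand-ins for quantities learned during training.

```python
import numpy as np

def certainty_factor(x, centers, widths):
    """Illustrative certainty factor: the maximum Gaussian activation
    over all expertise domains. Values near 1 mean x falls inside some
    expert's domain (interpolation); values near 0 signal extrapolation."""
    d2 = np.sum((centers - x) ** 2, axis=1)          # squared distance to each center
    activations = np.exp(-d2 / (2.0 * widths ** 2))  # Gaussian expertise domains
    return float(np.max(activations))

# Flag predictions whose certainty falls below a chosen threshold.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.array([0.5, 0.5])
print(certainty_factor(np.array([0.1, 0.1]), centers, widths))  # high: interpolating
print(certainty_factor(np.array([5.0, 5.0]), centers, widths))  # near 0: extrapolating
```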
This paper describes a constructive modular neural network called the Growing Multi-Experts Network (GMN), which can approximate an unknown nonlinear function from observed input-output training data. In the GMN, the problem space is decomposed into overlapping regions, called expertise domains, and the local expert models are graded according to their expertise level. The network output is computed by a smooth combination of local linear models. To avoid over-fitting, the GMN deploys a Redundant Experts Removal Algorithm that removes redundant local experts from the network. In addition, a Growing Neural Gas algorithm is used to generate an induced Delaunay triangulation, which is highly desirable for optimal function approximation. The GMN is tested on four benchmark problems to compare its performance with other modeling approaches, and it compares favorably with existing techniques. The approach thus appears promising for determining an optimal network structure, with considerable potential still to be exploited.
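As a rough illustration of the smooth combination of local linear models, the following sketch blends each expert's linear prediction with normalized Gaussian gating weights. The parameter names (`centers`, `widths`, `W`, `b`) are illustrative assumptions, not the paper's notation, and the actual gating in the GMN may differ.

```python
import numpy as np

def gmn_output(x, centers, widths, W, b):
    """Sketch of a GMN-style forward pass: expert i contributes a local
    linear model W[i] @ x + b[i], and the outputs are blended by
    normalized Gaussian weights so the prediction varies smoothly
    across overlapping expertise domains."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    g = np.exp(-d2 / (2.0 * widths ** 2))
    g = g / g.sum()        # normalized gating: a convex combination
    local = W @ x + b      # one linear prediction per expert
    return float(g @ local)
```

Because the gating weights are normalized, the output is a convex combination of the local models, which is what keeps the blend smooth across domain boundaries.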
A simple and novel method is proposed to estimate the confidence interval of any neural network. The recently introduced Growing Multi-Experts Network (GMN) is embedded with a confidence interval estimator whose output directly indicates the defined measure. One-step hybrid learning is employed, in which the unsupervised Growing Neural Gas (GNG) method and supervised learning are implemented simultaneously. Illustrative examples, together with application examples, clearly place the utility of the defined measure in sharper focus.
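The abstract leaves the estimator's form unspecified. One hedged sketch, assuming each expert accumulates a residual variance `local_var[i]` during supervised learning, blends those variances with the same Gaussian weights used for prediction to form an interval half-width:

```python
import numpy as np

def confidence_half_width(x, centers, widths, local_var, z=1.96):
    """Hypothetical confidence interval estimator: blend per-expert
    residual variances with normalized Gaussian weights, then scale by
    a normal quantile (z = 1.96 for a 95% interval)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    g = np.exp(-d2 / (2.0 * widths ** 2))
    g = g / g.sum()
    return z * np.sqrt(float(g @ local_var))  # prediction +/- this value
```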
In this paper, sonar signals are processed using the Minimal Resource Allocation Network (MRAN) and the Probabilistic Neural Network (PNN) to differentiate features commonly encountered in indoor environments. The stability-plasticity behavior of both networks is investigated. The experimental results show that the MRAN has lower network complexity but exhibits higher plasticity than the PNN. The study also shows that the MRAN's on-line learning performance is superior to that of the PNN.
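To make the complexity comparison concrete, here is a minimal Probabilistic Neural Network classifier in its standard textbook form (the smoothing parameter `sigma` is an illustrative choice): the PNN stores one Gaussian kernel per training pattern, so its network complexity grows with the training set, whereas the MRAN allocates hidden units only when needed.

```python
import numpy as np

def pnn_classify(x, patterns, labels, sigma=0.1):
    """Minimal PNN: one Gaussian kernel per stored training pattern,
    summed per class; the class with the largest summed activation wins.
    The per-pattern storage is the source of the PNN's higher network
    complexity relative to the MRAN."""
    d2 = np.sum((patterns - x) ** 2, axis=1)
    k = np.exp(-d2 / (2.0 * sigma ** 2))   # kernel activations
    classes = np.unique(labels)
    scores = [k[labels == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]
```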