Three different learning methods for RBF networks and their combinations are presented: standard gradient learning, a three-step algorithm with an unsupervised part, and an evolutionary algorithm. Their performance is compared on two benchmark problems: Two spirals and Iris plants. The results show that the three-step learning is usually the fastest, while the gradient learning achieves better precision. The combination of these two approaches gives the best results.
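To make the three-step scheme concrete, the following is a minimal sketch of a Gaussian RBF network and a simplified three-step fit: an unsupervised choice of centers, a width heuristic based on center spacing, and a supervised least-squares solve for the output weights. The function names, the random-sampling stand-in for the unsupervised step, and the width heuristic are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def rbf_forward(X, centers, widths, weights):
    """Evaluate a Gaussian RBF network: y = sum_j w_j * exp(-||x - c_j||^2 / (2 s_j^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # squared distances to centers
    phi = np.exp(-d2 / (2.0 * widths[None, :] ** 2))               # hidden-unit activations
    return phi @ weights                                           # linear output layer

def three_step_fit(X, y, n_units, rng):
    """Illustrative three-step scheme:
    (1) pick centers without using labels (here: random subsample, a stand-in
        for clustering), (2) set widths from center spacing (assumed heuristic),
    (3) solve the output weights by linear least squares."""
    centers = X[rng.choice(len(X), n_units, replace=False)]        # step 1: unsupervised part
    dists = np.sqrt(((centers[:, None] - centers[None, :]) ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)
    widths = np.full(n_units, 2.0 * dists.min(axis=1).mean())      # step 2: spacing-based widths
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * widths[None, :] ** 2))
    weights, *_ = np.linalg.lstsq(phi, y, rcond=None)              # step 3: supervised linear solve
    return centers, widths, weights
```

Only the final step touches the labels, which is why this style of learning tends to be fast: the expensive nonlinear parameters (centers, widths) are fixed cheaply, leaving a single linear problem.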
Functional equivalence of feed-forward networks has been proposed as a way to reduce the search space of learning algorithms. A novel genetic learning algorithm for RBF networks and perceptrons with one hidden layer that makes use of this theoretical property is proposed. Experimental results show that our procedure outperforms standard genetic learning.
A thorough analysis of the theoretical and computational properties of the Kolmogorov learning algorithm for feedforward neural networks led us to propose an efficient sequential and parallel implementation. A novel approach to parallelization is presented which combines our previous results in order to achieve higher parallel speed-up.