This work concentrates on a novel method for empirical estimation
of the generalization ability of neural networks. Given a set of training (and testing) data, one can choose a network architecture (number of layers, number of neurons in each layer, etc.), an initialization method, and a learning algorithm to obtain a network. One measure of the performance of a trained network is how closely its actual output approximates the desired output for an input it has never seen before. Current methods provide a single “number” that estimates the generalization ability of the network. However, this number carries no further information for understanding the contributing factors when the generalization ability is poor. The proposed method instead uses a set of parameters to characterize the generalization ability: taken together, their values provide an estimate of the generalization ability, while the value of each individual parameter indicates the contribution of a factor such as the network architecture, the initialization method, or the training data set. Furthermore, a method has been developed to verify the validity of the estimated values of these parameters.
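To make the contrast concrete, the conventional single-number estimate criticized above can be sketched as held-out test accuracy. This is an illustrative toy (not the paper's method): the data generator, the one-dimensional threshold classifier, and all names are hypothetical.

```python
# Toy illustration of the conventional single-number estimate of
# generalization: accuracy on held-out data. The classifier and data
# are hypothetical stand-ins for a trained network and a test set.
import random

random.seed(0)

def make_data(n):
    # 1-D points labeled by sign of x, with 10% label noise
    data = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)
        y = 1 if x > 0 else 0
        if random.random() < 0.1:
            y = 1 - y
        data.append((x, y))
    return data

def fit_threshold(train):
    # choose the threshold that maximizes training accuracy
    best_t, best_acc = 0.0, -1.0
    for t, _ in train:
        acc = sum((x > t) == (y == 1) for x, y in train) / len(train)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(t, data):
    return sum((x > t) == (y == 1) for x, y in data) / len(data)

train, test = make_data(200), make_data(100)
t = fit_threshold(train)
test_acc = accuracy(t, test)
# The single "number": it tells us nothing about WHY generalization
# is good or bad -- the gap the multi-parameter method aims to fill.
print(f"held-out accuracy: {test_acc:.2f}")
```

The printed accuracy is exactly the kind of opaque scalar the abstract argues against: it cannot separate the effects of architecture, initialization, or training data on generalization.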