It is generally accepted that most benchmark problems known today
can be solved by artificial neural networks with a single hidden layer. Networks with more than one hidden layer normally slow down learning dramatically. Furthermore, generalisation to new input patterns is generally better in small networks [1], [2]. However, most benchmark problems involve only a small training data set whose values are typically discrete (such as the binary values 0 and 1). The ability of single hidden layer supervised networks to solve problems with large, continuous-valued data sets (e.g. most engineering problems) is virtually unknown. A fast learning method for continuous-valued problems has been proposed by Evans et al. [3]; however, that method is based on the Kohonen competitive and ART unsupervised network models. In addition, in almost every benchmark problem the training set contains all possible input patterns, so the generalisation behaviour of the network is not studied [4]. This study attempts to show that single hidden layer supervised networks can be used to solve large, continuous-valued problems within measurable algorithmic complexities.
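To make the class of network under discussion concrete, the sketch below trains a single hidden layer supervised network by batch gradient descent on a continuous-valued regression task. It is an illustration only, not the method of this study or of the cited works; the target function, hidden-layer width and learning rate are arbitrary assumptions.

```python
# Minimal sketch: one hidden layer, supervised training on continuous data.
# All hyperparameters (hidden width, learning rate, epochs) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Continuous training data: 200 samples of a smooth target function.
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

n_hidden = 20                        # assumed hidden-layer width
W1 = rng.normal(0, 0.5, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1))
b2 = np.zeros(1)
lr = 0.05

for epoch in range(5000):
    # Forward pass: tanh hidden units, linear output.
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y                  # error term for the squared loss

    # Backward pass (batch gradient descent).
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    d_h = (err @ W2.T) * (1 - h ** 2)
    grad_W1 = X.T @ d_h / len(X)
    grad_b1 = d_h.mean(axis=0)

    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

# Generalisation check on unseen continuous inputs.
X_test = rng.uniform(-np.pi, np.pi, size=(50, 1))
mse = np.mean((np.tanh(X_test @ W1 + b1) @ W2 + b2 - np.sin(X_test)) ** 2)
print(f"test MSE: {mse:.4f}")
```

Note that, unlike the binary-valued benchmark sets discussed above, the test inputs here are drawn from a continuous range and do not appear in the training set, so the final error is a (toy) measure of generalisation rather than memorisation.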