The unsupervised learning of feature extraction from high-dimensional patterns is a central problem for the neural network approach. Feature extraction is a procedure which maps original patterns into a feature (or factor) space of reduced dimension. In this paper we demonstrate that Hebbian learning in a Hopfield-like neural network is a natural procedure for the unsupervised learning of feature extraction. As a result of this learning, the factors become attractors of the network dynamics, and hence they can be revealed by random search. The neurodynamics is analysed by the Single-Step approximation, which is known [8] to be rather accurate for sparsely encoded Hopfield networks. The analysis is therefore restricted to the case of sparsely encoded factors. The accuracy of the Single-Step approximation is confirmed by computer simulations.
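The mechanism can be illustrated by a minimal NumPy sketch; the network size, loading, and the covariance form of the Hebbian rule are illustrative assumptions, not the paper's exact model. Sparse binary factors are stored by a Hebbian rule, and the dynamics with a fixed activity level relaxes to a stored factor:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200      # number of neurons (illustrative size)
M = 10       # number of stored sparse patterns ("factors")
k = 10       # active neurons per pattern; sparseness p = k / N
p = k / N

# Sparse binary factors with exactly k active neurons each.
patterns = np.zeros((M, N))
for xi in patterns:
    xi[rng.choice(N, k, replace=False)] = 1.0

# Hebbian (covariance) learning: the factors become fixed points
# of the network dynamics.
J = np.zeros((N, N))
for xi in patterns:
    J += np.outer(xi - p, xi - p)
np.fill_diagonal(J, 0.0)

def step(x):
    """Synchronous update keeping exactly k neurons active
    (a k-winners-take-all form of global inhibition)."""
    h = J @ x
    x_new = np.zeros(N)
    x_new[np.argsort(h)[-k:]] = 1.0
    return x_new

# Start from a corrupted factor; the dynamics should relax to the
# stored factor (in the paper, factors are revealed by running the
# same dynamics from random initial states).
x0 = patterns[0].copy()
x0[np.flatnonzero(x0)[:2]] = 0.0                          # delete 2 correct units
x0[rng.choice(np.flatnonzero(patterns[0] == 0), 2,
              replace=False)] = 1.0                        # add 2 wrong units

x = x0
for _ in range(20):
    x_new = step(x)
    if np.array_equal(x_new, x):   # fixed point (attractor) reached
        break
    x = x_new

print("overlap with stored factor:", int(patterns[0] @ x), "of", k)
```

At this light loading the corrupted state falls back into the attractor of the stored factor within a step or two; the k-winners-take-all update stands in for the paper's inhibition mechanism.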
A sparsely encoded Willshaw-like attractor neural network based on binary Hebbian synapses is investigated analytically and by computer simulations. A special inhibition mechanism which supports a constant number of active neurons at each time step is used. The information capacity and the size of the attraction basins are evaluated within the Single-Step and Gibson-Robinson approximations and compared with experimental results.
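The binary Hebbian synapses and the constant-activity inhibition can be sketched as follows; the Willshaw clipping rule is standard, while the specific sizes and the top-k implementation of the inhibition are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 256   # neurons (illustrative size)
M = 20    # stored sparse patterns
k = 12    # active neurons per pattern (constant activity level)

# Sparse binary patterns with exactly k active neurons each.
patterns = np.zeros((M, N), dtype=np.uint8)
for xi in patterns:
    xi[rng.choice(N, k, replace=False)] = 1

# Binary (clipped) Hebbian synapses, as in the Willshaw model:
# J_ij = 1 if neurons i and j are co-active in at least one pattern.
J = np.zeros((N, N), dtype=np.uint8)
for xi in patterns:
    J |= np.outer(xi, xi)
np.fill_diagonal(J, 0)

def step(x):
    """Synchronous update; the inhibition mechanism keeps exactly
    the k neurons with the largest synaptic input active."""
    h = J.astype(int) @ x
    x_new = np.zeros(N, dtype=np.uint8)
    x_new[np.argsort(h)[-k:]] = 1
    return x_new

# Recall pattern 0 from a cue missing a third of its active neurons;
# the size of the cue deficit it can tolerate probes the attraction basin.
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[: k // 3]] = 0   # delete 4 of the 12 active units

x = cue.astype(np.uint8)
for _ in range(10):
    x_new = step(x)
    if np.array_equal(x_new, x):
        break
    x = x_new

print("recovered bits:", int(patterns[0] @ x), "of", k)
```

Repeating such recall trials over cues of increasing deficit, and over growing numbers of stored patterns, is how the attraction-basin size and information capacity are measured experimentally.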