Autoencoder networks have been demonstrated to be efficient for unsupervised learning of representations of images, documents and time series. A sparse representation can improve the interpretability of the input data and the generalization of a model by eliminating redundant features and extracting the latent structure of the data. In this paper, we use the L1/2 regularization method to enforce sparsity on the hidden representation of an autoencoder and thereby obtain a sparse representation of the data. The performance of our approach in terms of unsupervised feature learning and supervised classification is assessed on the MNIST digit data set, the ORL face database and the Reuters-21578 text corpus. The results demonstrate that the proposed autoencoder produces sparser representations and better reconstruction performance than the Sparse Autoencoder and the L1 regularization Autoencoder. The new representation is also shown to be useful for improving the classification performance of a deep network.
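To make the idea concrete, the following is a minimal sketch of the kind of objective described above: a reconstruction loss plus an L1/2 penalty on the hidden activations. It assumes a single-hidden-layer autoencoder with a sigmoid encoder, a linear decoder and squared-error reconstruction; the function names, the weight shapes and the small smoothing constant `eps` are illustrative choices, not details taken from the paper.

```python
import numpy as np

def l_half_penalty(h, eps=1e-8):
    """L1/2 penalty on the hidden code: sum_i sqrt(|h_i|).
    eps is a small smoothing constant (an assumption here) that keeps the
    gradient finite when an activation is exactly zero."""
    return np.sum(np.sqrt(np.abs(h) + eps))

def autoencoder_loss(x, W_enc, b_enc, W_dec, b_dec, lam=0.1):
    """Reconstruction error plus weighted L1/2 sparsity penalty.
    Illustrative shapes: x (d,), W_enc (k, d), b_enc (k,), W_dec (d, k), b_dec (d,)."""
    h = 1.0 / (1.0 + np.exp(-(W_enc @ x + b_enc)))  # sigmoid hidden code
    x_hat = W_dec @ h + b_dec                        # linear reconstruction
    recon = 0.5 * np.sum((x - x_hat) ** 2)           # squared-error term
    return recon + lam * l_half_penalty(h), h, x_hat
```

Because |h|^(1/2) grows more slowly than |h| away from zero while remaining sharply peaked at zero, the L1/2 term pushes more hidden units toward exactly zero than an L1 penalty of comparable weight, which is the intuition behind the sparser codes reported in the abstract.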