This paper concentrates on Kanerva's Sparse Distributed Memory (SDM) as a kind of artificial neural net and associative memory. SDM captures some basic properties of human long-term memory. SDM may be regarded as a three-layered feed-forward neural net: input layer neurons only copy input vectors, hidden layer neurons have radial basis functions, and output layer neurons have linear basis functions. In the basic SDM algorithm, the hidden layer is initialized randomly. The aim of the paper is to study the behaviour of Kanerva's model on real input data (large input vectors, correlated data). A modification of the basic model is introduced and tested.
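For orientation, a minimal sketch of a basic SDM in this three-layer view (the vector length, number of hard locations, activation radius, and binary coding are illustrative assumptions, not the paper's settings):

    import numpy as np

    # Minimal basic-SDM sketch: random hidden layer ("hard locations"),
    # Hamming-radius activation, counter-based linear output layer.
    rng = np.random.default_rng(0)

    N = 256        # address/data vector length (assumed)
    M = 1000       # number of hard locations, i.e. hidden neurons (assumed)
    RADIUS = 115   # Hamming activation radius (assumed)

    hard_locations = rng.integers(0, 2, size=(M, N))   # random hidden layer
    counters = np.zeros((M, N), dtype=int)             # output-layer weights

    def activated(address):
        # Hidden layer: radial basis on Hamming distance to each hard location.
        dist = np.sum(hard_locations != address, axis=1)
        return dist <= RADIUS

    def write(address, data):
        # Add +1/-1 increments to the counters of every activated location.
        counters[activated(address)] += 2 * data - 1

    def read(address):
        # Output layer: linear sum over activated counters, then threshold.
        s = counters[activated(address)].sum(axis=0)
        return (s > 0).astype(int)

    v = rng.integers(0, 2, size=N)
    write(v, v)                              # autoassociative store
    print(np.array_equal(read(v), v))        # expected True: pattern recovered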
Recently, a new clustering method called maximum margin clustering (MMC) was proposed. It extends support vector machine (SVM) ideas to unsupervised scenarios and has shown promising performance. Traditionally, it was formulated as a non-convex integer optimization problem which is difficult to solve. To alleviate the computational burden, the efficient cutting-plane MMC (CPMMC) [wang2010mmc] was proposed, which solves the MMC problem in its primal. However, CPMMC is restricted to the linear kernel. In this paper, we extend the CPMMC algorithm to nonlinear kernel scenarios, yielding the proposed sparse kernel MMC (SKMMC). Specifically, we propose to solve an adaptive-threshold version of CPMMC in its dual and to alleviate its computational complexity by employing the cutting plane subspace pursuit (CPSP) algorithm [joachims2009sparse]. Eventually, the SKMMC algorithm works with nonlinear kernels at linear computational and storage complexity. Our experimental results on several real-world data sets show that SKMMC achieves higher accuracy than existing MMC methods, and requires less time and storage than existing kernel MMC methods.
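For intuition only, a sketch of the naive alternating heuristic for MMC (train an SVM on guessed labels, relabel by the decision function under a class-balance constraint, repeat). This is not the cutting-plane CPMMC/SKMMC machinery; the data set, kernel, and all parameters are illustrative:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.datasets import make_blobs

    # Naive alternating MMC heuristic (illustrative only).
    X, _ = make_blobs(n_samples=200, centers=2, random_state=0)
    y = np.where(X[:, 0] > np.median(X[:, 0]), 1, -1)  # crude initial labels

    for _ in range(10):
        svm = SVC(kernel="rbf", C=1.0).fit(X, y)
        scores = svm.decision_function(X)
        # Relabel at the median score so neither cluster becomes empty,
        # a crude stand-in for a class-balance constraint.
        y_new = np.where(scores > np.median(scores), 1, -1)
        if np.array_equal(y_new, y):
            break
        y = y_new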
Autoencoder networks have been demonstrated to be efficient for unsupervised learning of representations of images, documents and time series. Sparse representation can improve the interpretability of the input data and the generalization of a model by eliminating redundant features and extracting the latent structure of the data. In this paper, we use the L1/2 regularization method to enforce sparsity on the hidden representation of an autoencoder, thereby achieving a sparse representation of the data. The performance of our approach in terms of unsupervised feature learning and supervised classification is assessed on the MNIST digit data set, the ORL face database and the Reuters-21578 text corpus. The results demonstrate that the proposed autoencoder produces sparser representations and better reconstruction performance than the Sparse Autoencoder and the L1 regularization Autoencoder. The new representation is also shown to be useful for improving the classification performance of a deep network.
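A minimal sketch of the idea, assuming a one-hidden-layer autoencoder with the L1/2 penalty (the sum of square roots of the absolute hidden activations) added to the reconstruction loss; the small epsilon that keeps the penalty's gradient finite near zero, as well as the sizes and learning rate, are our assumptions:

    import torch
    import torch.nn as nn

    # Autoencoder with an L1/2 penalty on the hidden representation.
    class L12Autoencoder(nn.Module):
        def __init__(self, n_in=784, n_hidden=196):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
            self.dec = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

        def forward(self, x):
            h = self.enc(x)
            return self.dec(h), h

    model = L12Autoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    lam, eps = 1e-3, 1e-8            # penalty weight and smoothing (assumed)

    x = torch.rand(64, 784)          # stand-in for an MNIST batch
    for _ in range(100):
        opt.zero_grad()
        x_hat, h = model(x)
        # Reconstruction error plus L1/2 sparsity penalty on activations.
        loss = ((x_hat - x) ** 2).mean() + lam * (h.abs() + eps).sqrt().sum()
        loss.backward()
        opt.step()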
Tapeworms of the order Spathebothriidea Wardle et McLeod, 1952 (Cestoda) are reviewed. Molecular data made it possible to assess, for the first time, the phylogenetic relationships of all genera and to confirm the validity of Bothrimonus Duvernoy, 1842, Diplocotyle Krabbe, 1874 and Didymobothrium Nybelin, 1922. A survey of all species considered to be valid is provided together with new data on egg and scolex morphology and surface ultrastructure (i.e. microtriches). The peculiar morphology of the members of this group, which is today represented by five effectively monotypic genera whose host associations and geographical distribution show little commonality, indicates that it is a relictual group that was once diverse and widespread. The order potentially represents the earliest branch of true tapeworms (i.e. Eucestoda) among extant forms.
We discuss the spatial prediction of one variable of a multivariate mark composed of both dependent and explanatory variables. The marks are location-dependent and attached to a point process. We assume that the marks are assigned independently, conditionally on an unknown underlying parametric field. We compare (i) the classical non-parametric Nadaraya-Watson kernel estimator based on the dependent variable, (ii) estimators obtained under the assumption of a local parametric model, where the explanatory variables of the local model are estimated through kernel estimation, and (iii) a kernel estimator of the output of the parametric model, assumed here to be a uniformly minimum variance unbiased estimator derived under the local parametric model when complete and sufficient statistics are available. The comparison is done asymptotically and by simulations in special cases. The procedure for selecting the better estimator is then illustrated on a real-life data set.
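For reference, a minimal sketch of the Nadaraya-Watson estimator of item (i), for marks observed at planar locations; the Gaussian kernel, bandwidth, and synthetic data are illustrative choices:

    import numpy as np

    # Nadaraya-Watson kernel regression over planar locations (item (i)).
    def nadaraya_watson(x0, locations, marks, h=0.1):
        d2 = np.sum((locations - x0) ** 2, axis=1)   # squared distances to x0
        w = np.exp(-d2 / (2 * h ** 2))               # Gaussian kernel weights
        return np.sum(w * marks) / np.sum(w)         # weighted mean of marks

    rng = np.random.default_rng(1)
    pts = rng.uniform(size=(500, 2))                             # point locations
    y = np.sin(4 * pts[:, 0]) + 0.1 * rng.standard_normal(500)   # dependent marks
    print(nadaraya_watson(np.array([0.5, 0.5]), pts, y))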
The paper deals with Cox point processes in time and space with a Lévy-based driving intensity. Using the generating functional, formulas for theoretical characteristics are obtained. Because of potential applications in biology, a Cox process sampled by a curve is discussed in detail. The filtering of the driving intensity, based on observed point process events, is developed in space and time for a parametric model with a background driving compound Poisson field delimited by special test sets. A hierarchical Bayesian model with point process densities yields the posterior. A Markov chain Monte Carlo "Metropolis within Gibbs" algorithm enables simultaneous filtering and parameter estimation. Posterior predictive distributions are used for model selection, and a numerical example is presented. The new approach to filtering is related to the residual analysis of spatio-temporal point processes.
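To fix ideas, a sketch of simulating a planar Cox process whose driving intensity is a compound Poisson (shot-noise) field, using thinning; this illustrates the model class only, not the paper's filtering algorithm, and all constants are assumed:

    import numpy as np

    # Cox process on [0,1]^2 driven by a compound Poisson (shot-noise) field:
    # Gaussian kernels with random weights around Poisson centres.
    rng = np.random.default_rng(2)

    def intensity(p, centres, weights, sigma=0.05):
        d2 = np.sum((centres - p) ** 2, axis=1)
        return np.sum(weights * np.exp(-d2 / (2 * sigma ** 2)))

    # Driving field: Poisson number of centres with exponential weights.
    centres = rng.uniform(size=(rng.poisson(20), 2))
    weights = rng.exponential(scale=50.0, size=len(centres))

    # Simulate the Cox process by thinning a dominating Poisson process
    # whose rate lam_max bounds the intensity from above.
    lam_max = weights.sum()                      # crude upper bound
    cand = rng.uniform(size=(rng.poisson(lam_max), 2))
    keep = [intensity(p, centres, weights) / lam_max > rng.uniform()
            for p in cand]
    points = cand[np.array(keep, dtype=bool)]    # retained Cox process events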
Speaker identification is becoming an increasingly popular technology in today's society. Besides being cost-effective and producing a strong return on investment in all the defined business cases, speaker identification lends itself well to a variety of uses and implementations, ranging from corridor security to safer driving to increased productivity. By focusing on the technology and companies that drive today's voice recognition and identification systems, we can survey current implementations and predict future trends.
In this paper, the one-dimensional discrete cosine transform (DCT) is used as a feature extractor to reduce signal information redundancy and to transfer the sampled human speech signal from the time domain to the frequency domain. Only a subset of these coefficients, those with large magnitude, is selected. These coefficients preserve the most important information of the speech signal and are enough to recognize the original speech signal; they are then normalized globally. The normalized coefficients are fed to a multilayer momentum backpropagation neural network for classification. A very high recognition rate can be achieved with a very small number of coefficients, which are enough to reflect the characteristics of the speaker's voice.
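A minimal sketch of the described front end, assuming scipy's DCT; the number of kept coefficients and the exact form of the global normalization are illustrative assumptions:

    import numpy as np
    from scipy.fft import dct

    # 1-D DCT, keep the k largest-magnitude coefficients, normalize globally.
    def dct_features(signal, k=64):
        coeffs = dct(signal, norm="ortho")
        idx = np.sort(np.argsort(np.abs(coeffs))[-k:])   # top-|c| indices, in order
        feats = coeffs[idx]
        return feats / np.max(np.abs(feats))             # global normalization (assumed form)

    rng = np.random.default_rng(3)
    speech = rng.standard_normal(8000)        # stand-in for a sampled utterance
    print(dct_features(speech).shape)         # (64,)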
An artificial neural network (ANN) is trained to classify the voices of eight speakers; five voice samples per speaker are used in the learning phase. The network is tested using five other samples from the same speakers. During the learning phase, several parameters are varied: the number of selected coefficients, the number of hidden nodes, and the value of the momentum parameter. In the testing phase, the identification performance is computed for each value of these parameters.
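A hedged sketch of such an experiment with scikit-learn's SGD-based MLP (momentum backpropagation); the feature vectors here are random stand-ins for the real DCT features, and the network size and momentum value are illustrative:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # 8 speakers x 5 training samples of 64 DCT features each (stand-ins).
    rng = np.random.default_rng(4)
    X_train = rng.standard_normal((40, 64))
    y_train = np.repeat(np.arange(8), 5)

    clf = MLPClassifier(hidden_layer_sizes=(32,), solver="sgd",
                        momentum=0.9, max_iter=2000)
    clf.fit(X_train, y_train)
    print(clf.score(X_train, y_train))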
On the ring $R=F[x_1,\dots ,x_n]$ of polynomials in $n$ variables over a field $F$, special isomorphisms $A$ of $R$ into $R$ are defined which preserve the greatest common divisor of two polynomials. The ring $R$ is extended to the ring $S=F[[x_1,\dots ,x_n]]^+$ and to the ring $T=F[[x_1,\dots ,x_n]]$ of generalized polynomials, in which the exponents of the variables are non-negative rational numbers and rational numbers, respectively. The isomorphisms $A$ are extended to automorphisms $B$ of the ring $S$. Using the property that the isomorphisms $A$ preserve GCD, it is shown that any pair of generalized polynomials from $S$ has a greatest common divisor and that the automorphisms $B$ preserve GCD. On the basis of this theorem, it is proved that any pair of generalized polynomials from the ring $T=F[[x_1,\dots ,x_n]]$ has a greatest common divisor.
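As a small illustration of GCDs among generalized polynomials (our example, not taken from the paper): in the one-variable case of $S$, the monomial $x^{1/3}$ divides $x^{1/2}$ because $x^{1/2}=x^{1/3}\cdot x^{1/6}$ and $x^{1/6}\in S$, so $\gcd (x^{1/2},x^{1/3})=x^{1/3}$; more generally, $\gcd (x^{a},x^{b})=x^{\min (a,b)}$ for non-negative rationals $a,b$. The theorem extends the existence of GCDs from such monomial pairs to arbitrary pairs of generalized polynomials.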