Let $G$ be a group and $\omega(G)$ be the set of element orders of $G$. For $k \in \omega(G)$, let $m_k(G)$ be the number of elements of order $k$ in $G$, and let $\mathrm{nse}(G) = \{m_k(G) : k \in \omega(G)\}$. Assume $r$ is a prime number and let $G$ be a group such that $\mathrm{nse}(G) = \mathrm{nse}(S_r)$, where $S_r$ is the symmetric group of degree $r$. In this paper we prove that $G = S_r$ if $r$ divides the order of $G$ and $r^2$ does not divide it. To reach this conclusion we make use of some well-known results on the prime graphs of finite simple groups and their components. (Azam Babai, Zeinab Akhlaghi)
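As a small worked illustration of this notation (added here; the concrete counts are standard): in the symmetric group $S_4$ the element orders are $\omega(S_4) = \{1,2,3,4\}$, with $m_1(S_4) = 1$, $m_2(S_4) = 9$ (six transpositions and three double transpositions), $m_3(S_4) = 8$ and $m_4(S_4) = 6$, hence $\mathrm{nse}(S_4) = \{1, 9, 8, 6\}$.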
In this paper a new class of self-mappings on metric spaces, which satisfy the nonexpansive type condition (3) below, is introduced and investigated. The main result is that such mappings have a unique fixed point. Also, a remetrization theorem, which is a converse to the Banach contraction principle, is given.
We present a new Generalized Learning Vector Quantization classifier called Optimally Generalized Learning Vector Quantization, based on a novel weight-update rule for learning labeled samples. The algorithm attains stable prototype/weight-vector dynamics in terms of the estimated current and previous weights and their updates. The resulting weight-update term is then related to the proximity measure used by Generalized Learning Vector Quantization classifiers. The new algorithm and some major counterparts are tested and compared on synthetic and publicly available datasets. For both kinds of datasets studied, the new classifier outperforms its counterparts in training and testing accuracy (above 80%) and in robustness against model parameter variation.
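The paper's OGLVQ update rule is not reproduced in this abstract; purely as a point of reference, the sketch below shows a standard GLVQ prototype update (Sato-Yamada style) in Python, with the sigmoid loss, squared Euclidean distance and learning rate chosen here only for illustration.

```python
import numpy as np

def glvq_update(x, y, prototypes, labels, lr=0.05):
    """One standard GLVQ update step (baseline sketch, not the paper's OGLVQ rule).

    x          : feature vector of one labeled sample
    y          : its class label
    prototypes : array of shape (n_prototypes, n_features)
    labels     : array with the class label of each prototype
    """
    d = np.sum((prototypes - x) ** 2, axis=1)        # squared Euclidean distances
    same, diff = labels == y, labels != y
    j = np.flatnonzero(same)[np.argmin(d[same])]     # closest correct prototype
    k = np.flatnonzero(diff)[np.argmin(d[diff])]     # closest incorrect prototype
    dj, dk = d[j], d[k]

    mu = (dj - dk) / (dj + dk)                       # relative distance measure
    s = 1.0 / (1.0 + np.exp(-mu))                    # sigmoid loss f(mu)
    f_prime = s * (1.0 - s)                          # its derivative

    # Pull the correct prototype toward x, push the incorrect one away.
    prototypes[j] += lr * f_prime * (4 * dk / (dj + dk) ** 2) * (x - prototypes[j])
    prototypes[k] -= lr * f_prime * (4 * dj / (dj + dk) ** 2) * (x - prototypes[k])
    return prototypes
```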
We present a hitherto unknown cometary reflection nebula (α = 20^h18^m.3, δ = +37°00′) associated with a dense dust cloud. A bright, compact Herbig-Haro object is embedded in its brightest part. The highly reddened illuminating star of about 3-5 M☉, located near the apex of the nebula, emits a collimated bipolar flow at high velocity, whose blueshifted stream feeds the HH object. The redshifted stream can be traced toward the interior of the dark cloud, where the density exceeds 10^5 cm^-3.
We consider a large class of impulsive retarded functional differential equations (impulsive RFDEs) and prove a result concerning uniqueness of solutions of impulsive RFDEs. Also, we present a new result on continuous dependence of solutions on parameters for this class of equations. More precisely, we consider a sequence of initial value problems for impulsive RFDEs in the above setting, with convergent right-hand sides, convergent impulse operators and uniformly convergent initial data. We assume that the limiting equation is an impulsive RFDE whose initial condition is the uniform limit of the sequence of initial data and whose solution exists and is unique. Then, for sufficiently large indices, the elements of the sequence of impulsive retarded initial value problems admit a unique solution, and this sequence of solutions converges to the solution of the limiting Cauchy problem. (Márcia Federson, Jaqueline Godoy Mesquita)
Most algorithms for Recommender Systems (RSs) are based on a Collaborative Filtering (CF) approach, in particular on the Probabilistic Matrix Factorization (PMF) method, which is known to be quite successful for rating prediction. In this study, we consider the problem of rating prediction in RSs. We propose a new algorithm which is also in the CF framework; however, it is completely different from the PMF-based algorithms. There are studies in the literature that increase the accuracy of rating prediction by using additional information; here, however, we ask how the accuracy of rating prediction can be increased when the input data contain no additional information. In the proposed algorithm, we construct a curve (a low-degree polynomial) for each user from the sparse input data and use this curve to predict the unknown ratings of items. The proposed algorithm is easy to implement. Its main advantage is that the running time is polynomial, namely Θ(n^2) for sparse matrices. Moreover, in our experiments we obtain slightly more accurate results than the known rating prediction algorithms.
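The abstract does not spell out how the per-user curve is constructed; the sketch below shows one plausible reading, fitting a low-degree polynomial to each user's observed (item index, rating) pairs with NumPy and evaluating it at the missing items. It is an illustrative assumption, not the paper's exact method.

```python
import numpy as np

def predict_ratings(R, degree=2):
    """Fill missing entries of a user-item rating matrix R (np.nan = unknown)
    by fitting a low-degree polynomial per user to that user's observed ratings.
    Illustrative reading of the abstract only."""
    R_filled = R.copy()
    n_users, n_items = R.shape
    items = np.arange(n_items)
    for u in range(n_users):
        observed = ~np.isnan(R[u])
        if observed.sum() <= degree:          # too few ratings to fit the curve
            R_filled[u, ~observed] = np.nanmean(R[u]) if observed.any() else 0.0
            continue
        coeffs = np.polyfit(items[observed], R[u, observed], deg=degree)
        R_filled[u, ~observed] = np.polyval(coeffs, items[~observed])
    return R_filled
```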
In this paper, a new adjustment to the damping parameter of the Levenberg-Marquardt algorithm is proposed to save training time and to reduce error oscillations. The damping parameter of the Levenberg-Marquardt algorithm switches the method between a gradient descent step and the Gauss-Newton step. It also affects training speed and induces error oscillations when the decay rate is fixed. Therefore, our damping strategy decreases the damping parameter using the inner product between weight vectors, to make the Levenberg-Marquardt algorithm behave more like the Gauss-Newton method, and it increases the damping parameter using a diagonally dominant matrix, to make the Levenberg-Marquardt algorithm act more like a gradient descent method. We tested our method on two simple classification tasks and on handwritten digit recognition. Simulations showed that our method improved training speed and produced fewer error oscillations than other algorithms.
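For context, the sketch below shows a conventional Levenberg-Marquardt step in NumPy with the usual fixed multiplicative rule for raising and lowering the damping parameter λ. The paper's contribution replaces this fixed decay rate with an adjustment driven by the weight-vector inner product and a diagonally dominant matrix, which is not reproduced here.

```python
import numpy as np

def lm_step(w, residual_fn, jacobian_fn, lam, factor=10.0):
    """One conventional Levenberg-Marquardt step with fixed-rate damping control.
    residual_fn(w) -> error vector e, jacobian_fn(w) -> Jacobian J of e w.r.t. w.
    (Baseline sketch only; the paper adjusts lam differently.)"""
    e = residual_fn(w)
    J = jacobian_fn(w)
    A = J.T @ J + lam * np.eye(w.size)        # damped Gauss-Newton normal matrix
    step = np.linalg.solve(A, J.T @ e)
    w_new = w - step
    if np.sum(residual_fn(w_new) ** 2) < np.sum(e ** 2):
        return w_new, lam / factor            # success: behave more like Gauss-Newton
    return w, lam * factor                    # failure: behave more like gradient descent
```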
For an improved neuro-spike representation of auditory signals within cochlea models, a new digital ARMA-type low-pass filter structure is proposed. It is compared to a more conventional AR-type counterpart on the classification of biosonar echoes, in which echoes from various tree species insonified with a bat-like chirp call are converted into biologically plausible feature vectors. Next, parametric and non-parametric models of the class-conditional densities are built from the echo feature vectors. The models are deployed in single-shot and sequential-decision classification algorithms. The results indicate that the proposed ARMA filter structure offers improved single-echo classification performance, which leads to faster sequential decision making than with its AR-type counterpart.
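The proposed filter's coefficients are not given in the abstract; the snippet below only illustrates the structural difference it exploits, applying a purely autoregressive (all-pole) low-pass and an ARMA (pole-zero) low-pass with SciPy's lfilter, using first-order coefficients chosen here for illustration.

```python
import numpy as np
from scipy.signal import lfilter

pole = 0.9  # example pole location (illustrative only, not the paper's design)

# AR-type (all-pole) low-pass: y[n] = (1 - pole) * x[n] + pole * y[n-1]
b_ar, a_ar = [1.0 - pole], [1.0, -pole]

# ARMA-type (pole-zero) low-pass: an extra moving-average zero smooths the input.
b_arma, a_arma = [(1.0 - pole) / 2.0, (1.0 - pole) / 2.0], [1.0, -pole]

x = np.random.randn(1000)                 # stand-in for a band-passed cochlear channel
y_ar = lfilter(b_ar, a_ar, x)             # AR-filtered output
y_arma = lfilter(b_arma, a_arma, x)       # ARMA-filtered output
```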
$G(3,m,n)$ is the group presented by $\langle a,b\mid a^5=(ab)^2=b^{m+3}a^{-n}b^ma^{-n}=1\rangle $. In this paper, we study the structure of $G(3,m,n)$. We also give a new efficient presentation for the projective special linear group $PSL(2,5)$, and in particular we prove that $PSL(2,5)$ is isomorphic to $G(3,m,n)$ under certain conditions.
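The conditions on $m$ and $n$ are established in the paper itself. As an independent sanity check that can be run in Python, SymPy's coset enumeration confirms the order of $PSL(2,5)$ (equivalently $A_5$) from one of its classical presentations, and the same machinery can be pointed at $G(3,m,n)$ for concrete small $m,n$; the values below are chosen only to illustrate the construction.

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, a, b = free_group("a, b")

# Classical (2,3,5) presentation; this group is A_5, isomorphic to PSL(2,5), of order 60.
psl25 = FpGroup(F, [a**2, b**3, (a * b)**5])
print(psl25.order())  # 60

# G(3,m,n) for concrete exponents, e.g. m = 1, n = 1 (chosen only to show the
# construction; which (m, n) give PSL(2,5) is the paper's result, not assumed here).
m, n = 1, 1
G = FpGroup(F, [a**5, (a * b)**2, b**(m + 3) * a**(-n) * b**m * a**(-n)])
# G.order() runs coset enumeration; it may be slow or fail to terminate if the
# chosen (m, n) do not yield a finite group.
```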