A mixture of support vector machines (SVMs) is proposed for time series forecasting. The SVM mixture has a two-stage architecture. In the first stage, a self-organizing feature map (SOM) is used as a clustering algorithm to partition the whole input space into several disjoint regions. A tree-structured architecture is adopted in the partitioning to avoid having to predetermine the number of regions. Then, in the second stage, multiple SVMs, also called the SVM mixture, that best fit the partitioned regions are constructed by finding the most appropriate kernel function and the optimal free parameters of the SVMs. Experiments show that the SVM mixture achieves a significant improvement in generalization performance compared with a single SVM model. In addition, the SVM mixture also converges faster and uses fewer support vectors.
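A minimal sketch of the two-stage idea, under stand-in assumptions: k-means replaces the SOM-based tree partitioning, scikit-learn's SVR with a fixed RBF kernel replaces the per-region kernel and parameter search, and the data are a toy sine series rather than the paper's benchmarks:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# toy time series: sliding-window inputs predict the next value
t = np.arange(300, dtype=float)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(300)
window = 4
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

# Stage 1: partition the input space into disjoint regions
n_regions = 3
km = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit(X)

# Stage 2: fit one SVM expert per region
experts = {}
for r in range(n_regions):
    mask = km.labels_ == r
    experts[r] = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X[mask], y[mask])

def predict(x):
    # route the query to the expert of its region, then predict
    r = km.predict(x.reshape(1, -1))[0]
    return experts[r].predict(x.reshape(1, -1))[0]

pred = predict(X[-1])
```

Each expert sees only the data of its own region, which is what lets the per-region models stay small (fewer support vectors) and specialize.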
The functional structure of our new network is not preset; instead, it emerges in a stochastic manner.
The anatomical structure of our model consists of two input “neurons”, from hundreds up to five thousand hidden-layer “neurons”, and one output “neuron”.
The process itself is based on iteration, i.e., a mathematical operation governed by a set of rules in which repetition progressively approximates the desired result.
Each iteration begins with data being introduced into the input layer and processed in accordance with a particular algorithm in the hidden layer; it then continues with the computation of certain, as yet very crude, configurations of images regulated by a genetic code, and ends with the selection of the 10% most accomplished “offspring”. The next iteration begins with the application of these new, most successful variants of the results, i.e., descendants, in the continued process of image perfection. The ever-new variants (descendants) of the genetic algorithm are always generated randomly. The deterministic rule then only requires the choice of 10% of all the variants available (in our case, 20 optimal variants out of 200).
The stochastic model is marked by a number of characteristics: the initial conditions are determined by different data-dispersion variances, and the evolution of the network organisation is controlled by genetic rules of a purely stochastic nature; Gaussian noise proved to be the best “organiser”.
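The generation-and-selection cycle described above (200 randomly generated Gaussian variants per iteration, of which the deterministic rule keeps the best 20) can be sketched roughly as follows; the fitness function here is a hypothetical stand-in for the image-perfection criterion:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(genome):
    # hypothetical objective: how closely the genome matches a target pattern
    target = np.linspace(-1.0, 1.0, genome.size)
    return -np.sum((genome - target) ** 2)

# 200 random variants per generation; the deterministic rule keeps the top 10%
pop = rng.normal(0.0, 1.0, size=(200, 8))
for generation in range(50):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-20:]]          # 20 optimal variants of 200
    # offspring: purely stochastic Gaussian perturbations of the survivors
    pop = np.repeat(parents, 10, axis=0) + rng.normal(0.0, 0.1, size=(200, 8))

best = pop[np.argmax([fitness(g) for g in pop])]
```

Variation is entirely random (Gaussian), and the only deterministic ingredient is the 10% selection rule, mirroring the division of roles described in the text.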
Another analogy between artificial networks and neuronal structures lies in the use of time in network algorithms.
For that reason, we gave our network's organisation a kind of temporal development: rather than being instantaneous, the connection between the artificial elements and neurons consumes a certain number of time units per synapse or, more precisely, per contact between the preceding and subsequent neurons.
The latency of neurons, natural and artificial alike, is very important as it enables feedback action.
Our network becomes organised under the effect of considerable noise, after which the amount of noise must subside. However, if the network evolution gets stuck in a local minimum, the amount of noise has to be increased again. While this will make the network organisation waver, it will also increase the likelihood that the crisis in the local minimum will abate and that the state of the network's self-organisation will improve substantially.
Our system allows for constant state-of-the-network readings by means of establishing the network energy level, i.e., basically ascertaining the progression of the network's rate of success in self-organisation. This is the principal parameter for detecting any jam in a local minimum. It serves as input for the formator algorithm, which regulates the level of noise in the system.
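The noise regulation described here resembles an annealing schedule driven by the energy reading: noise subsides while the energy keeps improving, and is increased again when the reading stalls. A rough sketch, with a hypothetical one-dimensional energy landscape standing in for the network energy:

```python
import numpy as np

rng = np.random.default_rng(2)

def energy(x):
    # hypothetical rugged landscape with several local minima
    return x ** 2 + 2.0 * np.sin(5.0 * x)

# formator-style noise control: cool down while progress is made,
# reheat when the energy reading stalls (a jam in a local minimum)
x, noise = 3.0, 1.0
best, stall = energy(x), 0
for step in range(2000):
    cand = x + rng.normal(0.0, noise)
    if energy(cand) < energy(x):
        x = cand                          # accept only improvements
    if energy(x) < best - 1e-6:
        best, stall = energy(x), 0
        noise = max(noise * 0.99, 0.01)   # noise subsides during progress
    else:
        stall += 1
        if stall > 50:                    # jam detected: increase noise again
            noise, stall = min(noise * 2.0, 1.0), 0
```

The energy reading plays exactly the role described in the text: it is the one parameter from which the jam is detected and the noise level is steered.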
I review our series of works on galactic evolution, putting emphasis on some fundamentals of the models rather than on their detailed structure. I show, with some examples, how the mathematical formulation is produced from the physical input, how it can be modified as the physics is enriched, and what kind of results can be provided.
We consider the construction of approximate confidence intervals on the variance component $\sigma_1^2$ in mixed linear models with two variance components and non-zero degrees of freedom for error. An approximate interval that seems to perform well in such a case, except that it is rather conservative for large $\sigma_1^2/\sigma^2$, was considered by Hartung and Knapp in \cite{hk}. The expression for its asymptotic coverage as $\sigma_1^2/\sigma^2\to\infty$ suggests a modification of this interval that preserves some nice properties of the original and that is, in addition, exact when $\sigma_1^2/\sigma^2\to\infty$. It turns out that this modification is an interval suggested by El-Bassiouni in \cite{eb}. We comment on its properties that were not emphasized in the original paper \cite{eb}, but which support the use of the procedure. A small simulation study is also provided.
This paper proposes an offline gradient method with smoothing L1/2 regularization for learning and pruning of the pi-sigma neural networks (PSNNs). The original L1/2 regularization term is not smooth at the origin, since it involves the absolute value function. This causes oscillation in the computation and difficulty in the convergence analysis. In this paper, we propose to use a smooth function to replace and approximate the absolute value function, ending up with a smoothing L1/2 regularization method for PSNN. Numerical simulations show that the smoothing L1/2 regularization method eliminates the oscillation in computation and achieves better learning accuracy. We are also able to prove a convergence theorem for the proposed learning method.
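A possible form of the smoothing, assuming the absolute value is replaced by the common differentiable surrogate sqrt(w^2 + eps) (the paper's exact smoothing function may differ), applied to a toy sparse least-squares problem:

```python
import numpy as np

def smooth_abs(w, eps=1e-4):
    # smooth surrogate for |w|; differentiable at the origin
    return np.sqrt(w ** 2 + eps)

def l_half_penalty(w, eps=1e-4):
    # smoothing L1/2 regularizer: sum of smoothed |w_i|^(1/2)
    return np.sum(smooth_abs(w, eps) ** 0.5)

def l_half_grad(w, eps=1e-4):
    # gradient of the smoothed penalty; finite everywhere,
    # unlike the gradient of |w|^(1/2), which blows up at 0
    s = smooth_abs(w, eps)
    return 0.5 * s ** (-0.5) * (w / s)

# offline (batch) gradient descent on a least-squares loss + smoothed penalty
rng = np.random.default_rng(3)
A = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]            # sparse target: 7 prunable weights
b = A @ w_true
lam, lr = 0.1, 0.01
w = rng.standard_normal(10)
for _ in range(3000):
    grad = A.T @ (A @ w - b) / len(b) + lam * l_half_grad(w)
    w -= lr * grad
```

Because the surrogate is smooth at the origin, the gradient stays bounded there, which is the mechanism the paper credits for removing the oscillation near zero weights.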
In this paper, Runge-Kutta methods are discussed for the numerical solution of conservative systems. To keep the energy of a conservative system as close to the initial energy as possible, a modified version of explicit Runge-Kutta methods is presented. The order of the modified Runge-Kutta method is the same as that of the standard Runge-Kutta method, but it is superior to the standard one in energy preservation. Numerical experiments comparing the modified Runge-Kutta method with the standard one are provided to illustrate its effectiveness.
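One simple way to realize such a modification (an assumption on our part, not necessarily the authors' scheme) is to follow each explicit RK4 step with a projection back onto the initial energy level set; for the harmonic oscillator this projection is a rescaling:

```python
import numpy as np

def f(y):
    # harmonic oscillator, y = (q, p): q' = p, p' = -q
    return np.array([y[1], -y[0]])

def energy(y):
    return 0.5 * (y[0] ** 2 + y[1] ** 2)

def rk4_step(y, h):
    # standard classical 4th-order Runge-Kutta step
    k1 = f(y)
    k2 = f(y + h / 2 * k1)
    k3 = f(y + h / 2 * k2)
    k4 = f(y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4_projected_step(y, h, e0):
    # modified step: rescale the RK4 result onto the initial energy surface;
    # the O(h^5) projection does not change the order of the method
    y_new = rk4_step(y, h)
    return y_new * np.sqrt(e0 / energy(y_new))

y_std = np.array([1.0, 0.0])
y_mod = y_std.copy()
e0, h = energy(y_std), 0.1
for _ in range(10000):
    y_std = rk4_step(y_std, h)
    y_mod = rk4_projected_step(y_mod, h, e0)

drift_std = abs(energy(y_std) - e0)
drift_mod = abs(energy(y_mod) - e0)
```

Over long integrations the standard RK4 energy drifts slowly (the method is slightly dissipative for this problem), while the projected variant holds the energy at its initial value by construction.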
We consider the quotient categories of two categories of modules relative to the Serre classes of modules which are bounded as abelian groups and we prove a Morita type theorem for some equivalences between these quotient categories.
A combined study of morphology, stem anatomy and isozyme patterns was used to reveal the identity of sterile plants from two rivers on the Germany/France border. A detailed morphological examination proved that the putative hybrid is clearly intermediate between Potamogeton natans and P. nodosus. The stem anatomy had characteristics of both species. The most compelling evidence came from the isozyme analysis. The additive “hybrid” banding patterns of the six enzyme systems studied indicate inheritance from P. natans and P. nodosus. In contrast, other morphologically similar hybrids were excluded: P. ×gessnacensis (= P. natans × P. polygonifolius) by all the enzyme systems, P. ×fluitans (= P. lucens × P. natans) by AAT, EST and 6PGDH, and P. ×sparganiifolius (= P. gramineus × P. natans) by AAT and EST. All samples of P. ×schreberi are of a single multi-enzyme phenotype, suggesting that they resulted from a single hybridization event and that the present-day distribution of P. ×schreberi along the Saarland/Moselle border was achieved by means of vegetative propagation and long-distance dispersal. Neither of its parental species occurs with P. ×schreberi or is present upstream, which suggests that this hybrid has persisted vegetatively for a long time in the absence of its parents. The total distribution of this hybrid is reviewed and a detailed account of the records from Germany is given. P. ×schreberi appears to be a rare hybrid. The risk of incorrect determination resulting from the identification of insufficiently developed or inadequately preserved plant material is discussed.
Modern organizations tend to form communities of practice to offset the side effects of the standardization and centralization of knowledge. The distributed nature of knowledge across groups, teams and other departments of an organization, and the complexity of this tacit knowledge, lead us to use communities of practice as an environment for sharing knowledge. In this paper we propose an agent-mediated community-of-practice system designed with the MAS-CommonKADS methodology. We support the principle of autonomy, since every single agent, even those in the same community, needs its own autonomy in order to model an organization and its individuals correctly; using this approach, a natural model for an agent-based knowledge-sharing system is obtained. We present all models of the MAS-CommonKADS methodology required for developing the multi-agent system. We found MAS-CommonKADS useful for designing knowledge management applications: because the agents are described in detail, the resulting design model can be implemented straightforwardly. We modeled our system using Rebeca and verified it to show that, with our system, knowledge sharing can be achieved.