A method for identifying the parameters of a non-linear dynamic system, such as an induction motor with the saturation effect taken into account, is presented in this paper. Identification is performed by an adaptive identifier whose structure is similar to the model of the system. This identifier can be regarded as a special neural network, and its adaptation is therefore based on the gradient descent method and on back-propagation, well known in neural network theory. The parameters of the electromagnetic subsystems were derived from the synaptic weights of the estimator after its adaptation. Testing was performed in simulations that took noise in the measured quantities into account. The identified electrical parameters deviated by up to 1% from the real values; the parameters of the non-linear magnetizing curve were identified with deviations of up to 6%. The identifier was able to follow sudden changes in rotor resistance, load torque and moment of inertia.
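The gradient-descent adaptation the abstract describes can be illustrated on a far simpler plant than an induction motor. The sketch below is only a toy, assuming a first-order discrete model with invented parameters and excitation: an identifier with the same structure as the plant adapts its two "synaptic weights" by descending the squared one-step prediction error.

```python
# Toy illustration of gradient-descent parameter identification (not the
# paper's motor model): estimate a and b in x[k+1] = a*x[k] + b*u[k] by
# minimising the squared one-step prediction error. All values are invented.
import random

random.seed(0)
a_true, b_true = 0.9, 0.5                         # "real" plant parameters
u = [random.uniform(-1, 1) for _ in range(2000)]  # excitation signal
x = [0.0]
for k in range(len(u) - 1):
    x.append(a_true * x[k] + b_true * u[k])       # simulated measurements

a_hat, b_hat, lr = 0.0, 0.0, 0.05                 # identifier weights + step size
for k in range(len(u) - 1):
    e = (a_hat * x[k] + b_hat * u[k]) - x[k + 1]  # prediction error
    a_hat -= lr * e * x[k]                        # gradient of 0.5*e**2 w.r.t. a_hat
    b_hat -= lr * e * u[k]                        # gradient w.r.t. b_hat

print(round(a_hat, 3), round(b_hat, 3))           # close to 0.9 and 0.5
```

With persistent excitation and no noise the weights converge to the true parameters; the abstract's 1% and 6% deviations arise once measurement noise and the non-linear magnetizing curve are included.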
Most neural-network-based intrusion detection systems (IDS) examine all data features to detect intrusion or misuse patterns. Some of the features may be redundant or contribute little (if anything) to the detection process. The purpose of this study is therefore to identify the important KDD features to be used to train a neural network (NN), in order to best classify and detect attacks. Four NNs were studied: Modular, Recurrent, Principal Component Analysis (PCA), and Time-Lag Recurrent (TLR) NNs. We investigated the performance of combining Fisher's filter, used as a feature selection technique, with each of these NNs. Our simulations show that using Fisher's filter substantially improves the performance of the four NNs considered in terms of detection rate, attack classification, and computational time.
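As a hedged illustration of the filter step (not the study's code, and on invented toy data rather than KDD records), Fisher's criterion for a binary labelling scores each feature by the ratio of between-class to within-class spread; features are then ranked and only the top scorers are fed to the NN.

```python
# Sketch of Fisher's filter for feature ranking. Two-class Fisher score:
# (mu0 - mu1)**2 / (var0 + var1). Toy data is invented: feature 0 is
# discriminative, feature 1 is pure noise.
import random

def fisher_score(samples, labels, j):
    """Fisher discriminant ratio of feature j for a binary labelling."""
    g0 = [s[j] for s, y in zip(samples, labels) if y == 0]
    g1 = [s[j] for s, y in zip(samples, labels) if y == 1]
    mean = lambda v: sum(v) / len(v)
    var = lambda v: sum((x - mean(v)) ** 2 for x in v) / len(v)
    return (mean(g0) - mean(g1)) ** 2 / (var(g0) + var(g1) + 1e-12)

random.seed(1)
X, y = [], []
for _ in range(200):
    label = random.randint(0, 1)
    X.append([random.gauss(3.0 * label, 1.0),   # informative feature
              random.gauss(0.0, 1.0)])          # irrelevant feature
    y.append(label)

scores = [fisher_score(X, y, j) for j in range(2)]
ranked = sorted(range(2), key=lambda j: -scores[j])  # keep top-ranked features
print(ranked)  # informative feature ranked first
```

Training on only the top-ranked features is what yields the reported gains in detection rate and computational time.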
Software measurements provide developers and software managers with information on various aspects of software systems, such as effectiveness, functionality, maintainability, or the effort and cost needed to develop a software system. Based on collected data, models capturing some aspects of the software development process can be constructed. A good model should allow software professionals not only to evaluate current or completed projects but also to predict future projects with an acceptable degree of accuracy.
Artificial neural networks employ a parallel distributed processing paradigm for learning system and data behavior. Some network models, such as multilayer perceptrons, can be used to build models with universal approximation capabilities. This paper describes an application in which neural networks are used to capture the behavior of several sets of software-development data. The goal of the experiment is to gain insight into the modeling of software data and to evaluate the quality of the available data sets and of some existing conventional models.
With increasing opportunities for analyzing large data sources, we have noticed a lack of effective processing in data-mining tasks that work with large, high-dimensional sparse datasets. This work focuses on this issue and on effective clustering using models of artificial intelligence.
The authors of this article propose an effective clustering algorithm that exploits the features of neural networks, especially Self-Organizing Maps (SOM), for the reduction of data dimensionality. The issue of computational complexity is resolved by parallelizing the standard SOM algorithm. The authors have focused on accelerating the algorithm with a version suited to data collections with a certain level of sparsity. Effective acceleration is achieved by improving the winning-neuron search phase and the weight-update phase. The results presented here demonstrate sufficient acceleration of the standard SOM algorithm while preserving the appropriate accuracy.
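One standard way to speed up the winner search on sparse data (shown here as an illustrative sketch, not the authors' implementation) is to cache the squared norms of the neuron weight vectors: since ||w - x||^2 = ||w||^2 - 2*w.x + ||x||^2, each candidate neuron then costs only a dot product over the nonzero entries of the input.

```python
# Sparse-friendly SOM winner search: with ||w_i||**2 precomputed, the
# distance to a sparse input touches only the input's nonzero components.
# The tiny 3-neuron map below is invented for illustration.
def find_winner(weights, sq_norms, sparse_x):
    """sparse_x: dict {feature_index: value}; returns index of the closest neuron."""
    x_norm = sum(v * v for v in sparse_x.values())
    best, best_dist = -1, float("inf")
    for i, w in enumerate(weights):
        dot = sum(w[j] * v for j, v in sparse_x.items())  # nnz terms only
        dist = sq_norms[i] - 2.0 * dot + x_norm
        if dist < best_dist:
            best, best_dist = i, dist
    return best

weights = [[0.0, 0.0, 1.0, 0.0],
           [0.0, 2.0, 0.0, 0.0],
           [0.9, 0.0, 0.0, 0.1]]
sq_norms = [sum(w * w for w in row) for row in weights]  # cached once per epoch
print(find_winner(weights, sq_norms, {0: 1.0}))  # nearest to e_0 -> neuron 2
```

The same sparsity observation applies to the weight-update phase, where only the coordinates present in the input need to move the full step.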
This paper focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or a separate adaptive learning rate for each weight. The learning-rate adaptation is based on descent techniques and on estimates of the local constants that are obtained without additional error-function and gradient evaluations. The paper proposes three algorithms that improve the different versions of backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. The new modification consists of a simple change in the error-signal function. Experiments are conducted to compare and evaluate the convergence behavior of these gradient-based training algorithms on three popular training problems: XOR, the encoding problem, and character recognition.
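A minimal sketch of the per-weight adaptive-rate idea, assuming the generic sign-agreement heuristic (grow the rate while the gradient keeps its sign, shrink it on a sign flip); this is in the spirit of such schemes, not the paper's exact descent-based rule. The badly conditioned quadratic below is an invented test function.

```python
# Generic per-weight learning-rate adaptation (sign heuristic, Rprop-like):
# each weight carries its own rate, grown on sign agreement of successive
# gradients and shrunk on a sign flip.
def train(grad, w, steps=100, lr0=0.1, up=1.2, down=0.5):
    lrs = [lr0] * len(w)
    prev = [0.0] * len(w)
    for _ in range(steps):
        g = grad(w)
        for i in range(len(w)):
            if g[i] * prev[i] > 0:
                lrs[i] *= up        # same direction: accelerate
            elif g[i] * prev[i] < 0:
                lrs[i] *= down      # overshoot detected: back off
            step = 1 if g[i] > 0 else -1 if g[i] < 0 else 0
            w[i] -= lrs[i] * step   # sign-based update
        prev = g
    return w

# Minimise f(w) = 0.5*(w0**2 + 100*w1**2); gradient = (w0, 100*w1).
w = train(lambda w: [w[0], 100.0 * w[1]], [5.0, 5.0])
print([round(v, 2) for v in w])     # both coordinates near 0
```

Because each weight adapts independently, the steep and shallow directions of the quadratic converge at comparable speed, which a single global learning rate cannot achieve.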
In his paper, Aleksander proposed five axioms for the presence of minimal consciousness in an agent, the last two of which are planning and the affective evaluation of plans. The first three axioms - depiction, imagination and attention - have already been implemented in neural models in the laboratory at Imperial College. The present paper describes efforts to model the last two axioms in a similar manner, i.e., through digital neuromodelling.
In this paper a sieve bootstrap scheme for nonlinear time series, the Neural Network Sieve bootstrap, is proposed. The approach, nonparametric in spirit, does not suffer from the problems of other nonparametric bootstrap techniques such as the blockwise schemes. The procedure performs similarly to the AR-Sieve bootstrap for linear processes, while it outperforms both the AR-Sieve and the moving-block bootstrap for nonlinear processes, in terms of bias as well as variability.
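The sieve-bootstrap recipe can be sketched with the simplest possible sieve, a least-squares AR(1) fit; the paper replaces this autoregression with a neural network, and the series below is invented. The steps are: fit the sieve model, centre its residuals, then regenerate bootstrap series by feeding resampled residuals back through the fitted model.

```python
# AR(1)-sieve bootstrap sketch (the NN Sieve bootstrap swaps the AR fit
# for a neural network). Data is a simulated AR(1) series, phi_true = 0.7.
import random

random.seed(2)
x = [0.0]
for _ in range(499):
    x.append(0.7 * x[-1] + random.gauss(0, 1))

# 1) fit the sieve model by least squares: x[t] ~ phi * x[t-1]
phi = sum(a * b for a, b in zip(x[1:], x[:-1])) / sum(v * v for v in x[:-1])

# 2) compute and centre the residuals
res = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
m = sum(res) / len(res)
res = [r - m for r in res]

# 3) rebuild bootstrap replicates from resampled residuals
def bootstrap_series(n):
    b = [random.choice(x)]                     # draw a starting value
    for _ in range(n - 1):
        b.append(phi * b[-1] + random.choice(res))
    return b

boot = bootstrap_series(500)
print(round(phi, 2))                           # close to the true 0.7
```

Unlike blockwise schemes, the resampled series is generated by a fitted model, so it has no artificial joins between blocks.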
In this paper we attempt to form a neural network that codes a nonlinear iterated function system (NLIFS). Our approach consists of finding an error function that is minimized when the attractor coded by the network is equal to the desired attractor. First, we start with a given iterated function system attractor and a random set of network weights. Second, we compare the images generated by this neural network with the original image. On the basis of this comparison, we update the weights of the network and the code of the NLIFS. A common metric, or error function, used to compare two fractal attractors is the Hausdorff distance, which gives us a good means of measuring the difference between the two images.
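The Hausdorff distance between two attractors, viewed as point sets, is the larger of the two directed distances (the farthest any point of one set lies from the other set). A minimal sketch on tiny invented 2-D point sets:

```python
# Hausdorff distance between two point sets A and B:
# H(A, B) = max( max_{a in A} min_{b in B} d(a, b),
#                max_{b in B} min_{a in A} d(a, b) ).
def hausdorff(A, B):
    d = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    h = lambda P, Q: max(min(d(p, q) for q in Q) for p in P)  # directed distance
    return max(h(A, B), h(B, A))                              # symmetrise

A = [(0, 0), (1, 0), (0, 1)]
B = [(0, 0), (1, 0), (0, 2)]
print(hausdorff(A, B))  # -> 1.0
```

In practice the attractors would be rendered to point clouds first; the distance then serves directly as the error signal driving the weight updates.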
This paper addresses the problem of stock market data prediction. It discusses the ability of neural networks to learn and to forecast price quotations, and proposes a neural approach to future stock price prediction and to the detection of large increases or decreases in stock prices. To validate the approach, a large number of experiments were performed on real-life data from the Warsaw Stock Exchange.
Each national language is described by specific grammatical rules, but rule-based knowledge representations alone cannot capture the natural flow of speech.
In this paper, optimisation of the naturalness of speech, i.e. the optimal choice of phonetic and phonological parameters for prosody modelling, is sought. We try to find the relevant features (speech parameters) that have the basic influence on the fundamental frequency and the duration of speech units. If the prosody of the synthesizer is controlled by an artificial neural network (ANN), optimisation of the ANN topology is necessary.
The topology of the ANN also depends on the number of input neurons, which represent the most important speech parameters. Pruning the ANN, based on several approaches (the GUHA method, sensitivities of the synaptic weights, etc.), is a suitable tool for reducing the ANN structure.
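As a hedged sketch of sensitivity-based pruning (one simple magnitude criterion; the GUHA-based approach is not reproduced here, and the feature names are invented), synapses whose saliency falls below a rank cutoff are removed, which in turn removes input neurons whose connections all disappear:

```python
# Weight-sensitivity pruning sketch: rank synapses by |weight| and keep
# only the strongest fraction, shrinking the ANN input structure.
def prune(weights, keep_ratio=0.5):
    """weights: {(src, dst): value}; keep the highest-|w| fraction of synapses."""
    ranked = sorted(weights, key=lambda k: -abs(weights[k]))
    kept = ranked[: max(1, int(len(ranked) * keep_ratio))]
    return {k: weights[k] for k in kept}

w = {("f0", "h0"): 1.4,    # strong input feature
     ("f1", "h0"): -0.02,  # near-dead synapse
     ("f2", "h0"): 0.7,
     ("f3", "h0"): 0.05}
pruned = prune(w, keep_ratio=0.5)
print(sorted(k[0] for k in pruned))  # the two strongest input features survive
```

After pruning, the network would normally be retrained briefly so the remaining weights compensate for the removed synapses.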