This paper describes an ongoing project aiming to develop a low-cost application that replaces a computer mouse for people with physical impairments. The application is based on an eye-tracking algorithm and assumes that the camera and head positions are fixed. Color tracking and template matching methods are used for pupil detection. Calibration is provided by neural networks as well as by parametric interpolation methods. The neural networks use back-propagation for learning, and the bipolar sigmoid function is chosen as the activation function. The user's eye is scanned with a simple web camera with backlight compensation, attached to a head fixation device. Neural networks significantly outperform parametric interpolation techniques: 1) the calibration procedure is faster, as they require fewer calibration marks, and 2) cursor control is more precise. The system in its current stage of development is able to distinguish regions at least at the level of desktop icons. The main limitations of the proposed method are the lack of head-pose invariance and its relative sensitivity to illumination (especially to incidental pupil reflections). E. Demjén, V. Aboši, Z. Tomori.
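The calibration stage above maps detected pupil coordinates to screen coordinates with a back-propagation network using the bipolar sigmoid (tanh) activation. A minimal sketch of such a calibration network follows; the network size, learning rate, and synthetic affine calibration data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data (assumption): pupil coordinates in [-1, 1]^2
# mapped to screen coordinates by an unknown affine transform.
P = rng.uniform(-1, 1, size=(50, 2))                  # pupil positions
S = P @ np.array([[0.8, 0.1], [-0.2, 0.9]]) + 0.1     # screen positions

# 2-8-2 network: bipolar sigmoid (tanh) hidden layer, linear output,
# trained by plain back-propagation (gradient descent on squared error).
W1 = 0.5 * rng.standard_normal((2, 8))
W2 = 0.5 * rng.standard_normal((8, 2))
lr = 0.05

def forward(X, W1, W2):
    H = np.tanh(X @ W1)          # hidden activations (bipolar sigmoid)
    return H, H @ W2             # network output

_, out = forward(P, W1, W2)
loss0 = np.mean((out - S) ** 2)  # error before training
for _ in range(500):
    H, out = forward(P, W1, W2)
    err = out - S                                        # output-layer error
    gW2 = H.T @ err / len(P)
    gW1 = P.T @ ((err @ W2.T) * (1 - H ** 2)) / len(P)   # tanh' = 1 - tanh^2
    W1 -= lr * gW1
    W2 -= lr * gW2
loss1 = np.mean((forward(P, W1, W2)[1] - S) ** 2)        # error after training
```

After training, the network can be queried with a fresh pupil position to obtain the estimated cursor location.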
Radial basis function networks provide a flexible model and give very good performance over a wide range of applications. However, unless care is taken in the modeling process to choose the number of basis functions, the positions of the centres, the regularization parameter, and the smoothing parameter appropriately for the model complexity, they often give poor generalization performance.
In this paper, we develop a new model-building procedure based on radial basis function networks: the centres are positioned by k-means clustering for the conditional distribution Pr(x|y), and the weights are estimated by maximum penalized likelihood with a Lasso penalty. We present an information criterion for choosing the regularization and smoothing parameters in the models. The proposed procedure determines the proper number and location of the centres automatically. Simulation results show that the proposed method performs very well.
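The two ingredients of the procedure, k-means positioning of the centres and L1-penalized weight estimation, can be sketched as follows. This is only an illustration on synthetic one-dimensional data: it clusters the inputs directly rather than the conditional distribution Pr(x|y), and the basis width, centre count, and penalty strength are arbitrary assumptions rather than the values the information criterion would select:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(200)

# Step 1: position the centres with k-means (here on the inputs alone).
k = 10
centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_

def rbf_design(X, centres, width=0.5):
    # Gaussian basis functions evaluated at every centre.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Step 2: estimate the weights with an L1 (Lasso) penalty; the penalty
# shrinks weights of redundant centres toward zero, pruning the model.
Phi = rbf_design(X, centres)
model = Lasso(alpha=1e-3).fit(Phi, y)
pred = model.predict(rbf_design(np.linspace(0, 2 * np.pi, 50)[:, None], centres))
```

Centres whose fitted weight is exactly zero can be dropped, which is how the procedure arrives at the number and location of centres automatically.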
Feed-forward artificial neural networks (ANNs) have been applied to the diagnosis of mixed-mode electronic circuits. In order to tackle the circuit complexity and to reduce the number of test points, a hierarchical approach to diagnosis generation was implemented with two levels of decision: the system level and the circuit level. For every level, using the simulation-before-test (SBT) approach, a fault dictionary was created first, containing data relating each fault code to the circuit response for a given input signal. ANNs were used to model the fault dictionaries. During the learning phase, the ANNs were treated as an approximation algorithm capturing the mapping enclosed within the fault dictionary. Later on, in the diagnostic phase, the ANNs were used as an algorithm for mapping the measured data into a fault code, which is equivalent to the fault-dictionary search performed by some other diagnostic procedures. At the topmost level, the fault dictionary was split into parts, simplifying the implementation of the concept. A voting system was created at the topmost level to decide which ANN's output is to be accepted as the final diagnostic statement. The approach was tested on the example of an analog-to-digital converter, and only one test point was used, i.e. the digital output. A full diversity of faults was considered in both the digital (stuck-at and delay faults) and the analog (parametric and catastrophic faults) parts of the diagnosed system. Special attention was paid to faults related to the A/D and D/A interfaces within the circuit.
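The core idea, a fault dictionary mapping measured responses to fault codes, can be illustrated with a toy nearest-signature lookup, which is the search that the trained ANN replaces. The fault codes and response values below are invented for illustration and do not come from the paper's converter example:

```python
import numpy as np

# Toy fault dictionary (illustrative values, not real circuit data):
# fault code -> simulated response signature for a fixed input stimulus.
dictionary = {
    "OK":         np.array([1.0, 2.0, 3.0]),
    "stuck-at-0": np.array([0.0, 0.0, 0.0]),
    "R1-open":    np.array([1.5, 2.9, 4.1]),
}

def diagnose(measured):
    # Map the measured response to the fault code with the nearest
    # signature -- the dictionary search that the ANN's learned
    # mapping performs in the diagnostic phase.
    return min(dictionary, key=lambda c: np.linalg.norm(dictionary[c] - measured))

fault = diagnose(np.array([1.4, 2.8, 4.0]))   # a noisy "R1-open" response
```

The ANN generalizes this lookup: it interpolates between stored signatures, so responses perturbed by noise or parameter drift are still mapped to a plausible fault code.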
Three different learning methods for RBF networks and their combinations are presented: standard gradient learning, a three-step algorithm with an unsupervised part, and an evolutionary algorithm. Their performance is compared on two benchmark problems: Two Spirals and Iris Plants. The results show that the three-step learning is usually the fastest, while the gradient learning achieves better precision. The combination of these two approaches gives the best results.
Associative neural network models are a commonly used methodology when investigating the theory of associative memory in the brain. Comparisons between the mammalian hippocampus and associative memory models of neural networks have been investigated [12]. Biologically based networks are systems built of complex biologically realistic cells with a variety of properties. Here we compare and contrast associative memory function in a network of biologically-based spiking neurons [22] with previously published results for a simple artificial neural network model [11]. We shall focus primarily on the recall process from a memory where patterns have previously been stored by Hebbian learning. We investigate biologically plausible implementations of methods for improving recall under biologically realistic conditions, such as a sparsely connected network. Network dynamics under recall conditions are further tested using network configurations including complex multi-compartment inhibitory interneurons, known as basket cells.
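The recall process from Hebbian-stored patterns that the comparison centres on can be sketched, in its simplest artificial form, as a binary associative (Hopfield-style) network; the network size, number of patterns, and noise level below are illustrative choices, not parameters from the biological model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 100, 2
patterns = rng.choice([-1, 1], size=(M, N))   # patterns to store

# Hebbian learning: sum of outer products, no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(cue, steps=2):
    # Synchronous threshold updates: each unit takes the sign of its
    # total weighted input, driving the state toward a stored pattern.
    x = cue.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

# Cue: the first stored pattern with 10 of its 100 bits flipped.
cue = patterns[0].copy()
cue[:10] *= -1
out = recall(cue)
```

The biologically based network studied in the paper replaces these abstract threshold units with multi-compartment spiking cells and explicit inhibitory interneurons, but the recall computation being tested is the same.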
A new formal model of parallel computations, the Kirdin kinetic machine, was suggested in [1]. It is expected that this model will play a role for parallel computations similar to that of Markov normal algorithms, Kolmogorov and Turing machines, or Post schemes for sequential computations. The basic ways in which computations are realized are described, and the basic properties of elementary programs for the Kirdin kinetic machine are investigated. It has been proved that the deterministic Kirdin kinetic machine is an effective computer. A simple application of the Kirdin kinetic machine, heap encoding, is suggested. Subprograms, similar to those of usual programming, enlarge the Kirdin kinetic machine.
Statistics, Operations Research and Artificial Intelligence are often used to identify the factors that influence certain phenomena of real life based on samples of data. Among the variety of data analysis techniques suggested by these disciplines, the classification capability of Logical Analysis of Data, proposed by Hammer in [12], is compared to that of neural networks on the well-known financial classification problem of insolvency.
Stochastic interdependence of a probability distribution on a product space is measured by its Kullback-Leibler distance from the exponential family of product distributions (called multi-information). Here we investigate low-dimensional exponential families that contain the maximizers of stochastic interdependence in their closure. Based on a detailed description of the structure of probability distributions with globally maximal multi-information we obtain our main result: The exponential family of pure pair-interactions contains all global maximizers of the multi-information in its closure.
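For small discrete distributions the multi-information can be computed directly as the Kullback-Leibler distance from the product of marginals. As a sketch (using natural logarithms, so the unit is nats), two perfectly correlated binary variables, a global maximizer for n = 2, yield multi-information log 2:

```python
import numpy as np

def multi_information(p):
    # KL divergence of the joint distribution p(x1, x2) from the
    # product of its marginals -- the multi-information for n = 2.
    p = np.asarray(p, dtype=float)
    px = p.sum(axis=1, keepdims=True)   # marginal of x1
    py = p.sum(axis=0, keepdims=True)   # marginal of x2
    mask = p > 0                        # 0 * log 0 is taken as 0
    return float((p[mask] * np.log(p[mask] / (px * py)[mask])).sum())

# Two perfectly correlated bits: uniform on {(0,0), (1,1)}.
mi = multi_information([[0.5, 0.0], [0.0, 0.5]])   # log 2 nats
```

Note that this maximizing distribution lies on the boundary of the simplex, which is why the paper speaks of maximizers contained in the closure of the exponential family of pure pair-interactions.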
In the paper, we focus on reasoning with IF-THEN rules in a propositional fragment of predicate calculus and on its modeling with neural networks. First, IF-THEN deduction from facts is defined. Then it is proved that for any non-contradictory set of IF-THEN rules and literals (representing facts) there exists a layered recurrent network with 2 hidden layers that can specify all IF-THEN deducible literals. If we denote the set of all literal IF-THEN consequences as D_0 and the set of all literal logical consequences as D, then obviously D_0 \subset D. Thus, D_0 can be considered to be an approximation of D. Using the designed network to simulate a proof by contradiction, the approximation D_0 may easily be refined. Furthermore, the network may also be used for the determination of D. However, the algorithm that realizes the necessary network computations has exponential complexity.
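The set D_0 of IF-THEN deducible literals corresponds to ordinary forward chaining, which the layered recurrent network emulates; the rules and facts in this sketch are invented examples, not taken from the paper:

```python
def deduce(rules, facts):
    # Forward chaining: repeatedly fire every IF-THEN rule whose
    # antecedent literals are all already deduced, until a fixed
    # point is reached. The result is the closure D_0.
    deduced = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if consequent not in deduced and set(antecedent) <= deduced:
                deduced.add(consequent)
                changed = True
    return deduced

# Invented example: a -> b, (b and c) -> d.
rules = [(("a",), "b"), (("b", "c"), "d")]
d0 = deduce(rules, {"a", "c"})
```

Each pass of the while-loop corresponds to one recurrent step of the network; since every step either adds a literal or terminates, the closure is reached in at most as many steps as there are literals.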
This paper investigates the use of Higher Order Neural Networks using a number of architectures to forecast the Gasoline Crack spread. The architectures used are Recurrent Neural Network and Higher Order Neural Networks; these are benchmarked against the standard MLP model. The final models are judged in terms of out-of-sample annualised return and drawdown, with and without a number of trading filters.
The results show that the best model of the spread is the recurrent network, with the largest out-of-sample returns before transaction costs, indicating a superior ability to forecast the change in the spread. Further, the best trading model of the spread is the Higher Order Neural Network with the threshold filter, due to a superior in- and out-of-sample risk/return profile.