During the last decade we have introduced probabilistic mixture models into the image modelling area, which presents highly atypical and extremely demanding applications for these models. The difficulty arises from the necessity to model tens of thousands of correlated data values simultaneously and to reliably learn such unusually complex mixture models. The presented paper surveys these novel generative colour image models based on multivariate discrete, Gaussian, or Bernoulli mixtures, respectively, and demonstrates their major advantages and drawbacks on texture modelling applications. Our mixture models are restricted to representing two-dimensional visual information. Thus a measured 3D multi-spectral texture is spectrally factorized, and the corresponding multivariate mixture models are learned from the individual orthogonal mono-spectral components and used to synthesise and enlarge these mono-spectral factor components. Texture synthesis is based on the easy computation of arbitrary conditional distributions from the model. Finally, the individual synthesised mono-spectral texture planes are transformed into the required synthetic multi-spectral texture. Such models can easily serve not only for texture enlargement but also for segmentation, restoration, and retrieval, or to model individual factors in unusually complex seven-dimensional Bidirectional Texture Function (BTF) space models. The strengths and weaknesses of the presented discrete, Gaussian, and Bernoulli mixture based approaches are demonstrated on several colour texture examples.
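The "easy computation of arbitrary conditional distributions" mentioned above is a standard property of Gaussian mixtures. As a minimal sketch (not the authors' implementation; the function name and toy parameters are ours), the following conditions a two-dimensional Gaussian mixture on one observed coordinate, which is the basic step behind mixture-based texture synthesis:

```python
import numpy as np
from scipy.stats import norm

def conditional_gm(weights, means, covs, x_obs):
    """Condition a 2-D Gaussian mixture on its first coordinate.

    Returns weights, means and variances of the resulting 1-D
    conditional mixture p(y | x = x_obs).
    """
    new_w, new_mu, new_var = [], [], []
    for w, mu, S in zip(weights, means, covs):
        # Marginal likelihood of the observed coordinate under this component
        lik = norm.pdf(x_obs, loc=mu[0], scale=np.sqrt(S[0, 0]))
        # Standard Gaussian conditioning formulas
        new_w.append(w * lik)
        new_mu.append(mu[1] + S[1, 0] / S[0, 0] * (x_obs - mu[0]))
        new_var.append(S[1, 1] - S[1, 0] ** 2 / S[0, 0])
    new_w = np.array(new_w)
    return new_w / new_w.sum(), np.array(new_mu), np.array(new_var)

# Toy two-component mixture (parameters are illustrative only)
weights = [0.6, 0.4]
means = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
covs = [np.array([[1.0, 0.5], [0.5, 1.0]]),
        np.array([[1.0, -0.3], [-0.3, 0.5]])]
w, mu, var = conditional_gm(weights, means, covs, x_obs=1.0)
print(w, mu, var)
```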
In the Czech Republic, numerous existing structures are made of different types of masonry. Decisions concerning upgrades of these structures should preferably be based on a reliability assessment taking into account actual material properties. Due to the inherent variability of masonry, information on its mechanical properties has to be obtained from tests. Estimation of masonry strength from measurements may be one of the key issues in the assessment of existing structures. The standard technique provided in the Eurocode EN 1996-1-1 is used to develop a probabilistic model of masonry strength taking into account uncertainties in basic variables. In a numerical example, the characteristic and design values of the masonry strength derived using the principles of the Eurocode are compared with the corresponding fractiles of the proposed probabilistic model. It appears that the characteristic value based on the probabilistic model is lower than that obtained by the standard technique. On the contrary, the partial factor for masonry recommended in EN 1996-1-1 seems to be rather conservative.
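As a rough illustration of the comparison described above (a sketch only: the lognormal input models, coefficients of variation, and the constant K are assumptions, not the paper's data), one can simulate the EN 1996-1-1 strength relation $f = K f_b^{0.7} f_m^{0.3}$ and compare its 5% fractile with the deterministic characteristic value:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative lognormal models for unit strength f_b and mortar
# strength f_m (means in MPa, coefficients of variation assumed).
def lognormal(mean, cov, size):
    sigma = np.sqrt(np.log(1 + cov**2))
    mu = np.log(mean) - 0.5 * sigma**2
    return rng.lognormal(mu, sigma, size)

f_b = lognormal(mean=15.0, cov=0.15, size=n)
f_m = lognormal(mean=5.0, cov=0.25, size=n)

# EN 1996-1-1 masonry strength relation f = K * f_b^0.7 * f_m^0.3
K = 0.55                      # illustrative value for a unit/mortar group
f = K * f_b**0.7 * f_m**0.3

# 5% fractile of the simulated strength vs. the deterministic value
# computed from the mean input strengths.
f_k_prob = np.quantile(f, 0.05)
f_k_det = K * 15.0**0.7 * 5.0**0.3
print(f"probabilistic 5% fractile: {f_k_prob:.2f} MPa")
print(f"deterministic characteristic value: {f_k_det:.2f} MPa")
```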
A model of vortex filaments based on stochastic processes is presented. In contrast to previous models based on semimartingales, here processes with a fractal (Hurst-type) index between $1/2$ and $1$ are used, which include fractional Brownian motion and similar non-Gaussian examples. Stochastic integration for these processes is employed to give a meaning to the kinetic energy.
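Fractional Brownian motion with index $H \in (1/2, 1)$, as used above, can be sampled by factorising its covariance $R(s,t) = \frac{1}{2}(s^{2H} + t^{2H} - |t-s|^{2H})$. A minimal sketch (a standard construction for illustration, not the paper's model) follows:

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, seed=0):
    """Sample a fractional Brownian motion path on (0, T] at n points
    via the Cholesky factor of its covariance (O(n^3), fine for small n).
    """
    t = np.linspace(T / n, T, n)           # start past 0 to keep cov nonsingular
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)
    z = np.random.default_rng(seed).standard_normal(n)
    return t, L @ z

t, path = fbm_cholesky(n=500, H=0.75)      # H in (1/2, 1), as in the model
```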
In this paper, we propose an extension of the periodic GARCH (PGARCH) model to a Markov-switching periodic GARCH (MS-PGARCH) model and provide some probabilistic properties of this class of models. In particular, we address the question of strictly periodically stationary and weakly periodically stationary solutions. We establish necessary and sufficient conditions ensuring the existence of higher-order moments. We further provide closed-form expressions for calculating the even-order moments as well as the autocovariances of the powers of an MS-PGARCH process. We thus show how these moments and autocovariances can be used for estimating model parameters by the GMM method.
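To fix ideas, a minimal simulation sketch of an MS-PGARCH(1,1) process follows; the two-regime transition matrix, period $S = 4$, and all coefficient values are illustrative assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative MS-PGARCH(1,1) with 2 regimes and period S = 4:
# sigma_t^2 = omega[k,v] + alpha[k,v]*eps_{t-1}^2 + beta[k,v]*sigma_{t-1}^2,
# where k is the Markov regime and v = t mod S the season.
S = 4
P = np.array([[0.95, 0.05],                # regime transition matrix
              [0.10, 0.90]])
omega = np.array([[0.1, 0.2, 0.1, 0.3],
                  [0.5, 0.4, 0.6, 0.5]])
alpha = np.array([[0.05, 0.10, 0.05, 0.08],
                  [0.15, 0.20, 0.10, 0.15]])
beta = np.array([[0.80, 0.75, 0.82, 0.78],
                 [0.60, 0.55, 0.65, 0.60]])

T = 2000
eps = np.zeros(T)
sig2 = np.ones(T)
k = 0
for t in range(1, T):
    k = rng.choice(2, p=P[k])              # Markov regime switch
    v = t % S                              # periodic season index
    sig2[t] = omega[k, v] + alpha[k, v] * eps[t-1]**2 + beta[k, v] * sig2[t-1]
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()
```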
In this paper we formulate a general model of the continuous double auction. We (recursively) describe the distribution of the model. As a useful by-product, we give a (recursive) analytic description of the distribution of the process of the best quotes (bid and ask).
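For readers unfamiliar with the mechanism, the sketch below shows a minimal continuous double auction book that tracks the process of the best quotes; it only illustrates the trading mechanism, not the paper's probabilistic model, and the class and order values are ours:

```python
import heapq

class DoubleAuctionBook:
    """Minimal continuous double auction tracking the best quotes.

    Bids sit in a max-heap (negated prices), asks in a min-heap; an
    incoming order trades immediately when it crosses the spread.
    """
    def __init__(self):
        self.bids, self.asks = [], []

    def submit(self, side, price):
        if side == "buy":
            if self.asks and price >= self.asks[0]:
                return heapq.heappop(self.asks)      # trade at best ask
            heapq.heappush(self.bids, -price)        # rest in the book
        else:
            if self.bids and price <= -self.bids[0]:
                return -heapq.heappop(self.bids)     # trade at best bid
            heapq.heappush(self.asks, price)
        return None

    def best_quotes(self):
        bid = -self.bids[0] if self.bids else None
        ask = self.asks[0] if self.asks else None
        return bid, ask

book = DoubleAuctionBook()
for side, price in [("buy", 99.0), ("sell", 101.0), ("buy", 101.5)]:
    book.submit(side, price)
print(book.best_quotes())   # -> (99.0, None): the crossing buy consumed the ask
```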
Bayesian networks have become a popular framework for reasoning with uncertainty. Efficient methods have been developed for probabilistic reasoning with new evidence. However, when new evidence is uncertain or imprecise, different methods have been proposed. The original contributions of this paper are guidelines for the treatment of different types of uncertain evidence, rules for combining evidence from different sources, and a method for model revision with uncertain evidence.
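Two standard treatments of uncertain evidence that such guidelines typically distinguish are Jeffrey's rule (soft evidence) and Pearl's virtual evidence; a minimal sketch on a single binary variable follows, with illustrative numbers (this shows the two standard updates, not necessarily the paper's own rules):

```python
import numpy as np

# Prior over a binary variable X; values are illustrative.
prior = np.array([0.7, 0.3])            # p(X = 0), p(X = 1)

# Jeffrey's rule ("soft evidence"): the evidence specifies the new
# marginal q(X) directly, and the rest of the network is updated by
# p'(Y) = sum_x q(x) p(Y | x).
q = np.array([0.4, 0.6])

# Pearl's virtual evidence: the evidence enters as a likelihood ratio
# L(x) from an unmodelled observation, and Bayes' rule applies.
L = np.array([1.0, 4.0])
posterior_virtual = prior * L
posterior_virtual /= posterior_virtual.sum()

print("soft evidence marginal:", q)
print("virtual evidence posterior:", posterior_virtual)
```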
This paper describes the probability analysis of the reinforced concrete containment structure of an NPP with the VVER V-230 reactor under high internal overpressure. A summary of the calculation models and calculation methods for the probability analysis of the structural integrity in the case of a loss-of-coolant accident (LOCA) is presented. The probabilistic structural analysis (PSA) level 2 aims at an assessment of the probability of failure of the concrete structure under excessive overpressure. In the non-linear analysis of the concrete structures, a layered approximation of the shell elements with various material properties has been included. The uncertainties of long-term temperature and dead loads, material properties (concrete cracking and crushing, reinforcement, and liner) and model uncertainties were taken into account in the $10^6$ direct Monte Carlo simulations. The results of the probability analysis of the containment failure under excessive overpressure show that in the case of a LOCA accident at an overpressure of 122.7 kPa the failure probability is smaller than the required $10^{-4}$ for design resistance.
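The estimation step behind such an analysis reduces to sampling a limit state $g = R - E$ and counting failures. A hedged sketch follows; the distributions, parameters, and limit state are illustrative stand-ins, not the paper's structural model:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000                     # 10^6 direct Monte Carlo runs, as in the paper

# Illustrative limit state g = R - E: resistance R (lognormal) minus
# load effect E (normal). All parameters are assumptions.
R = rng.lognormal(mean=np.log(230.0), sigma=0.10, size=n)   # kPa
E = rng.normal(loc=122.7, scale=15.0, size=n)               # kPa

p_f = np.mean(R - E < 0.0)
# Standard error of the estimator, to judge whether n suffices near 1e-4
se = np.sqrt(p_f * (1.0 - p_f) / n)
print(f"estimated failure probability: {p_f:.2e} +/- {se:.1e}")
```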
The purpose of this paper is twofold. Firstly, to investigate the merit of estimating probability density functions rather than level or classification estimations in a one-day-ahead forecasting task for the silver time series.
This is done by benchmarking the Gaussian mixture neural network model (as a probability distribution predictor) against two other neural network designs representing a level estimator (the multi-layer perceptron network [MLP]) and a classification model (the softmax cross-entropy network model [SCE]). In addition, we also benchmark the results against standard forecasting models, namely a naive model, an autoregressive moving average model (ARMA) and a logistic regression model (LOGIT).
The second purpose of this paper is to examine the possibilities of improving the trading performance of those models by applying confirmation filters and leverage.
As it turns out, the three neural network models perform equally well, generating a recognisable gain, while the ARMA benchmark model, on the other hand, seems to have picked up the right rhythm of mean reversion in the silver time series, leading to very good results. Only when more sophisticated trading strategies and leverage are used do the neural network models show an ability to successfully identify trades with a high Sharpe ratio and outperform the ARMA model.
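To make the filtering-and-leverage step concrete, here is a hedged sketch of turning one-day-ahead forecasts into a trading rule scored by the annualised Sharpe ratio; the threshold, leverage cap, and the synthetic `forecasts`/`returns` series are placeholders, not the paper's models or data:

```python
import numpy as np

def strategy_sharpe(forecasts, returns, threshold=0.001, max_leverage=2.0):
    # Confirmation filter: trade only when the forecast is confidently
    # away from zero; otherwise stay flat.
    signal = np.where(forecasts > threshold, 1.0,
                      np.where(forecasts < -threshold, -1.0, 0.0))
    # Simple leverage rule: scale exposure with forecast magnitude.
    leverage = np.clip(np.abs(forecasts) / threshold, 0.0, max_leverage)
    pnl = signal * leverage * returns
    return np.sqrt(252) * pnl.mean() / pnl.std(ddof=1)

rng = np.random.default_rng(7)
returns = rng.normal(0.0, 0.01, size=1000)                 # stand-in daily returns
forecasts = 0.3 * returns + rng.normal(0.0, 0.01, 1000)    # stand-in "skilled" model
print(f"annualised Sharpe: {strategy_sharpe(forecasts, returns):.2f}")
```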
Studies were conducted to investigate the distribution of larvae of the European vine moth, Lobesia botrana (Denis & Schiffermüller) (Lepidoptera: Tortricidae), a key vineyard pest of grape cultivars. The data collected were larval densities of the second and third generations of L. botrana on half-vines and entire plants of wine and table cultivars in 2003-2004. No insecticide treatments were applied to the plants during the 2-year study. The distribution of L. botrana larvae can be described by a negative binomial, which reveals that the insect aggregates. A common value for the k parameter of the negative binomial distribution, $k_c = 0.6042$, was obtained using maximum likelihood estimation, and the advantages and use cases of a common k are discussed. The transformations $k^{-1}\sinh^{-1}(k\sqrt{x+1/2})$ and $k^{-1}\sinh^{-1}(k\sqrt{x+3/8})$ proved to be the best transformations for L. botrana larval counts. An entire vine is recommended as the sampling unit for research purposes, whereas a half-vine, which is suitable for grape vine cultivation in northern Greece, is recommended for practical purposes. We used these findings to develop a fixed-precision sequential sampling plan and a sequential sampling program for classifying the pest status of L. botrana larvae.
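A minimal sketch of the maximum-likelihood step for k follows, assuming SciPy's (n, p) parameterisation of the negative binomial with the mean profiled out at the sample mean; the simulated counts stand in for the field data:

```python
import numpy as np
from scipy import optimize, stats

def fit_common_k(counts):
    """Maximum-likelihood estimate of the negative binomial k for larval
    counts, with the mean mu fixed at the sample mean.
    """
    counts = np.asarray(counts)
    mu = counts.mean()
    def nll(log_k):
        k = np.exp(log_k)                 # keep k positive
        p = k / (k + mu)                  # scipy's (n, p) parameterisation
        return -stats.nbinom.logpmf(counts, k, p).sum()
    res = optimize.minimize_scalar(nll, bounds=(-5.0, 5.0), method="bounded")
    return np.exp(res.x)

# Simulated aggregated counts (illustrative, not the field data)
rng = np.random.default_rng(3)
true_k, mu = 0.6, 2.0
sample = rng.negative_binomial(true_k, true_k / (true_k + mu), size=500)
print(f"estimated k: {fit_common_k(sample):.3f}")
```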
This paper addresses the problem of probability estimation in multiclass classification tasks by combining two well-known data mining techniques: support vector machines and neural networks. We present an algorithm which uses both techniques in a two-step procedure. The first step employs support vector machines within a one-vs-all reduction from multiclass to binary classification to obtain the distances between each observation and the support vectors representing the classes. The second step uses these distances as inputs for a neural network built with an entropy cost function and a softmax transfer function for the output layer, where class membership is used for training. Consequently, this network estimates probabilities of class membership for new observations. A benchmark using different databases demonstrates that the proposed algorithm is highly competitive with the most recent techniques for multiclass probability estimation.
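A sketch of the two-step procedure using scikit-learn follows; the dataset, network size, and hyperparameters are our assumptions, and scikit-learn's MLPClassifier (cross-entropy training, softmax multiclass outputs) stands in for the paper's network:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: one-vs-all SVMs; decision_function yields one signed distance
# per class for every observation.
svm = OneVsRestClassifier(LinearSVC()).fit(X_tr, y_tr)
D_tr, D_te = svm.decision_function(X_tr), svm.decision_function(X_te)

# Step 2: a small neural network on those distances, trained with
# cross-entropy and softmax outputs, matching the abstract's setup.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(D_tr, y_tr)
probs = net.predict_proba(D_te)
print(probs[:3].round(3))     # class-membership probabilities
```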