Probabilistic mixtures provide flexible "universal" approximation of probability density functions. Their wide use is enabled by the availability of a range of efficient estimation algorithms. Among them, quasi-Bayesian estimation plays a prominent role, as it runs "naturally" in one-pass mode. This is important in on-line applications and for extensive databases. It even copes with the dynamic nature of the components forming the mixture. However, quasi-Bayesian estimation relies on mixing via constant component weights. Thus, mixtures with dynamic components and dynamic transitions between them are not supported. The present paper fills this gap. For the sake of simplicity, and to give better insight into the task, the paper considers mixtures with known components. The general case with unknown components will be presented soon.
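The one-pass character of quasi-Bayesian mixture estimation can be illustrated on the simplest case considered here: a mixture of known, static components with unknown constant weights. The sketch below is illustrative only (Gaussian components and all names are assumptions, not taken from the paper); it keeps Dirichlet pseudo-counts for the weights and updates them recursively with each observation's component responsibilities:

```python
import numpy as np

def gauss_pdf(x, mean, sd):
    """Density of N(mean, sd^2) at x."""
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def quasi_bayes_weights(data, components):
    """One-pass quasi-Bayesian estimation of the weights of a mixture
    with known components, given as a list of (mean, sd) pairs.
    Dirichlet pseudo-counts nu are kept; nu / nu.sum() is the current
    point estimate of the weights."""
    nu = np.ones(len(components))            # flat Dirichlet prior
    for x in data:                           # single pass over the data
        f = np.array([gauss_pdf(x, m, s) for m, s in components])
        w = (nu / nu.sum()) * f
        w /= w.sum()                         # responsibilities of the components
        nu += w                              # quasi-Bayes pseudo-count update
    return nu / nu.sum()

# Illustration: two well-separated components with true weights 0.7 and 0.3.
rng = np.random.default_rng(0)
z = rng.random(2000) < 0.7
data = np.where(z, rng.normal(-2.0, 1.0, 2000), rng.normal(3.0, 1.0, 2000))
weights = quasi_bayes_weights(data, [(-2.0, 1.0), (3.0, 1.0)])
```

Because the components are well separated, the responsibilities are nearly hard assignments and the estimated weights settle close to the true (0.7, 0.3).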
This text describes a method of estimating the hazard rate of survival data following the monotone Aalen regression model. The proposed approach is based on techniques introduced by Arjas and Gasbarra \cite{gasbarra}. The unknown functional parameters are assumed a priori to be piecewise constant on intervals of varying count and size. The estimates are obtained with the aid of the Gibbs sampler and its variants. The performance of the method is explored by simulations. The results indicate that the method is applicable to datasets with small sample sizes.
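The piecewise-constant form of the hazard rate admits a direct computation of the cumulative hazard and the survival function. The minimal sketch below is illustrative only (the breakpoint and level names are assumptions, not from the paper, and the actual estimates come from the Gibbs sampler); it evaluates both quantities for one configuration of intervals:

```python
import numpy as np

def cumulative_hazard(t, breakpoints, levels):
    """Cumulative hazard Lambda(t) of a piecewise-constant hazard rate.
    breakpoints are endpoints 0 = b_0 < b_1 < ... < b_m, and levels[i]
    is the constant hazard on [b_i, b_{i+1}).  Assumes t <= b_m."""
    b = np.asarray(breakpoints, dtype=float)
    lam = np.asarray(levels, dtype=float)
    # Length of each interval's portion lying below t, clipped at zero.
    widths = np.clip(np.minimum(b[1:], t) - b[:-1], 0.0, None)
    return float(np.sum(lam * widths))

def survival(t, breakpoints, levels):
    """Survival function S(t) = exp(-Lambda(t))."""
    return float(np.exp(-cumulative_hazard(t, breakpoints, levels)))

# One configuration: three intervals with hazards 0.5, 1.0 and 0.2.
bp, lv = [0.0, 1.0, 2.0, 4.0], [0.5, 1.0, 0.2]
Lam = cumulative_hazard(1.5, bp, lv)   # 0.5 * 1 + 1.0 * 0.5 = 1.0
S = survival(1.5, bp, lv)
```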
The paper presents a stopping rule for random search in Bayesian model-structure estimation by maximising the likelihood function. The inspected maximisation uses random restarts to cope with local maxima in a discrete space. The stopping rule, suitable for any maximisation of this type, exploits the probability of finding the global maximum implied by the number of local maxima already found. It stops the search when this probability crosses a given threshold. The inspected case represents an important example of search in a huge space of hypotheses, so common in artificial intelligence, machine learning and computer science.
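One way to make such a rule concrete is sketched below, under a uniform basins-of-attraction assumption and using the Boender–Rinnooy Kan estimator of the number of local maxima; this is an illustrative variant and need not coincide with the paper's exact rule. The search stops once the estimated probability that the global maximum is among those already found crosses the threshold:

```python
import random

def estimated_num_maxima(n, w):
    """Boender-Rinnooy Kan estimate of the total number of local maxima,
    given w distinct maxima found in n restarts (needs n > w + 2)."""
    return w * (n - 1) / (n - w - 2)

def search_with_stopping_rule(local_search, draw_start, threshold=0.95,
                              max_restarts=10_000):
    """Random-restart maximisation that stops when the estimated probability
    that the global maximum has already been found crosses `threshold`."""
    seen, best, n = set(), None, 0
    for n in range(1, max_restarts + 1):
        value = local_search(draw_start())   # one local maximisation
        seen.add(value)
        best = value if best is None else max(best, value)
        w = len(seen)                        # distinct local maxima so far
        if n > w + 2:
            p_found = min(1.0, w / estimated_num_maxima(n, w))
            if p_found >= threshold:
                break
    return best, n

# Toy illustration: three basins of attraction with maxima 5, 7 and 10.
random.seed(1)
best, n_used = search_with_stopping_rule(
    local_search=lambda s: (5, 7, 10)[s],    # start s climbs to its basin's maximum
    draw_start=lambda: random.randrange(3))
```

With three equally likely basins the rule keeps restarting until the implied probability reaches 0.95, then returns the best maximum found.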
Binary Factor Analysis (BFA) aims to discover latent binary structures in high dimensional data. Parameter learning in BFA faces exponential computational complexity and a large number of local optima. Model selection to determine the latent binary dimension is therefore difficult. Traditionally, it is implemented in two separate stages with two different objectives. First, parameter learning is performed for each candidate model scale to maximise the likelihood; then the optimal scale is selected to minimise a model selection criterion. Such a two-phase implementation suffers from huge computational cost and deteriorated learning performance on large-scale structures. In contrast, the Bayesian Ying-Yang (BYY) harmony learning starts from a high dimensional model and automatically reduces the dimension during learning. This paper investigates model selection on a subclass of BFA called Orthogonal Binary Factor Analysis (OBFA). The Bayesian inference of the latent binary code is analytically solved, based on which a BYY machine is constructed. The harmony measure that serves as the objective function in BYY learning is more accurately estimated by recovering a regularisation term. Experimental comparison with the two-phase implementations shows superior performance of the proposed approach.
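The two-phase implementation that the paper compares against can be sketched generically: fit each candidate scale by maximum likelihood, then pick the scale minimising a model selection criterion. The toy below uses polynomial degree selection with BIC as an illustrative stand-in, not the BFA setting of the paper; it only shows the two separate objectives at work:

```python
import numpy as np

def two_phase_select(x, y, max_degree=5):
    """Phase 1: maximum-likelihood (least-squares) fit at each candidate
    scale (polynomial degree).  Phase 2: pick the scale minimising BIC.
    BIC is an illustrative choice of criterion."""
    n = len(x)
    best_deg, best_bic = None, np.inf
    for deg in range(max_degree + 1):
        coeffs = np.polyfit(x, y, deg)               # ML fit at this scale
        rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
        k = deg + 2                                  # coefficients + noise variance
        bic = n * np.log(rss / n) + k * np.log(n)    # selection criterion
        if bic < best_bic:
            best_deg, best_bic = deg, bic
    return best_deg

# Data generated from a quadratic; the procedure typically recovers degree 2.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 500)
y = 1.0 + 2.0 * x - x ** 2 + rng.normal(0.0, 0.1, 500)
degree = two_phase_select(x, y)
```

The sketch makes the criticism in the abstract tangible: every candidate scale requires a full fit, so the cost grows with the number of candidates, which is exactly what BYY harmony learning avoids by shrinking the scale within a single run.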