We study the differences between two approaches aimed at revealing latent variables in binary data. Both approaches assume that the observed high-dimensional data are driven by a small number of hidden binary sources combined by Boolean superposition. The first approach is Boolean matrix factorization (BMF) and the second is Boolean factor analysis (BFA). Two BMF methods are used for comparison: the M8 method from the BMDP statistical software package and the method suggested by Belohlavek \& Vychodil. These two are compared to BFA, in particular to the expectation-maximization Boolean factor analysis we developed earlier, which has been extended here with a binarization step. The well-known bars problem and the mushroom dataset are used to reveal the methods' peculiarities. In particular, the reconstruction ability of the computed factors and the information gain as a measure of dimension reduction were examined. It was shown that BFA slightly loses to BMF in performance when noise-free signals are analyzed. Conversely, BMF loses considerably to BFA when input signals are noisy.
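To make the generative assumption concrete, the following is a minimal sketch of the bars problem with Boolean superposition. The grid size, bar probability, and sample count are illustrative choices of ours, not the settings used in the cited work, and `boolean_error` is a hypothetical helper, not a function from either method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bars problem on a 4x4 grid: each of the 8 factors is one horizontal
# or vertical bar, flattened to a 16-dimensional binary vector.
n = 4
bars = []
for i in range(n):
    h = np.zeros((n, n), dtype=int); h[i, :] = 1   # horizontal bar
    v = np.zeros((n, n), dtype=int); v[:, i] = 1   # vertical bar
    bars.append(h.ravel())
    bars.append(v.ravel())
A = np.array(bars)                                  # factor loadings, 8 x 16

# Each observation switches every bar on independently (here with
# probability 0.25) and combines the active bars by Boolean (OR)
# superposition: X[i, j] = OR_k (S[i, k] AND A[k, j]).
S = (rng.random((100, A.shape[0])) < 0.25).astype(int)   # factor scores
X = (S @ A > 0).astype(int)                              # Boolean product

def boolean_error(X, S_hat, A_hat):
    """Hamming distance between X and its Boolean reconstruction."""
    X_hat = (S_hat @ A_hat > 0).astype(int)
    return np.abs(X - X_hat).sum()

print(boolean_error(X, S, A))   # 0 for the generating factors, noise-free
```

A BMF or BFA method succeeds on this benchmark to the extent that it recovers factors with low `boolean_error`; adding bit-flip noise to `X` is what separates the two approaches in the comparison above.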
We investigate solutions provided by the finite-context predictive model called the neural prediction machine (NPM), built on the recurrent layer of two types of recurrent neural networks (RNNs). One type is the first-order Elman simple recurrent network (SRN) trained for next-symbol prediction by the extended Kalman filter (EKF) technique. The other type is an interesting unsupervised counterpart to the “classical” SRN, namely a recurrent version of the Bienenstock, Cooper, and Munro (BCM) network, which performs a kind of time-conditional projection pursuit. As experimental data we chose a complex symbolic sequence with both long- and short-memory structures. We compared the solutions achieved by both types of RNN with Markov models to find out whether training can improve the initial solutions reached by random network dynamics, which can be interpreted as an iterated function system (IFS). The results of our simulations indicate that the SRN trained by EKF achieves better next-symbol prediction than its unsupervised counterpart. The recurrent BCM network can provide only a Markovian solution, which cannot capture long-memory structures in the sequence and thus cannot beat the SRN.
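To illustrate how an NPM is read off IFS-like state dynamics, here is a minimal sketch under simplifications of our own: a binary toy alphabet, a one-dimensional state, and uniform binning in place of the clustering applied to real recurrent activations. The contraction rate `k`, the codewords `c`, the bin count, and the random stand-in sequence are all illustrative assumptions.

```python
import numpy as np
from collections import Counter, defaultdict

rng = np.random.default_rng(1)

# Toy symbolic sequence over alphabet {0, 1}; a real experiment would use
# a sequence with both long- and short-memory structure.
seq = rng.integers(0, 2, size=2000)

# Untrained "IFS" dynamics: each symbol s applies a fixed contraction
# x <- k*x + (1-k)*c[s], so the state encodes the recent symbol history
# (similar suffixes map to nearby points, i.e. a Markovian organization).
k, c = 0.5, np.array([0.0, 1.0])
states = np.empty(len(seq))
x = 0.5
for t, s in enumerate(seq):
    states[t] = x            # state before consuming symbol s
    x = k * x + (1 - k) * c[s]

# Neural prediction machine: quantize states into bins (clusters) and
# estimate next-symbol probabilities from the counts inside each bin.
n_bins = 16
bins = np.minimum((states * n_bins).astype(int), n_bins - 1)
counts = defaultdict(Counter)
for b, s in zip(bins, seq):
    counts[b][s] += 1        # symbol s followed a state in bin b

def npm_predict(b):
    """Laplace-smoothed next-symbol distribution for state bin b."""
    total = sum(counts[b].values()) + 2
    return [(counts[b][a] + 1) / total for a in (0, 1)]

print(npm_predict(bins[-1]))
```

With this fixed contraction, each bin corresponds to a set of recent-history suffixes, so the untrained NPM is essentially a Markov model; the question studied above is whether EKF training of the recurrent weights reshapes the state space enough for the NPM to also exploit long-memory structure.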