For $n=2m\geqslant 4$, let $\Omega\subset \mathbb{R}^{n}$ be a bounded smooth domain and $N\subset \mathbb{R}^{L}$ a compact smooth Riemannian manifold without boundary. Suppose that $\{u_k\}\subset W^{m,2}(\Omega, N)$ is a sequence of weak solutions in the critical dimension to the perturbed m-polyharmonic map equation $\frac{\mathrm{d}}{\mathrm{d}t}\big|_{t=0} E_m\bigl(\Pi(u + t\xi)\bigr) = 0$ with $\Omega_k \rightarrow 0$ in $W^{m,2}(\Omega, N)^{*}$ and $u_k \rightharpoonup u$ weakly in $W^{m,2}(\Omega, N)$. Then $u$ is an m-polyharmonic map. In particular, the space of m-polyharmonic maps is sequentially compact for the weak $W^{m,2}$ topology.
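For orientation, one common extrinsic convention for the objects appearing above is the following (the normalization used by the author may differ): the $m$-energy of a map $u \in W^{m,2}(\Omega, N)$ is \[ E_m(u) = \int_{\Omega} |\nabla^{m} u|^{2} \,\mathrm{d}x, \] and $\Pi$ denotes the smooth nearest-point projection onto $N$, defined on a tubular neighbourhood of $N$ in $\mathbb{R}^{L}$; a map $u$ is then $m$-polyharmonic precisely when $\frac{\mathrm{d}}{\mathrm{d}t}\big|_{t=0} E_m(\Pi(u + t\xi)) = 0$ for all $\xi \in C_c^{\infty}(\Omega, \mathbb{R}^{L})$.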
From a theoretical point of view, Hidden Markov Models (HMMs) and Dynamic Bayesian Networks (DBNs) are similar, yet in practice they pose different challenges and perform differently. In this study we present a comparative analysis of two spatio-temporal classification methods, HMMs and DBNs, applied to the Facial Action Unit (AU) recognition problem. The Facial Action Coding System (FACS) developed by Ekman and Friesen decomposes the face into 46 AUs, each AU being related to the contraction of one or more specific facial muscles. FACS has proved its applicability to facial behavior modeling, enabling the recognition of an extensive palette of facial expressions. Even though a lot has been published on this theme, it is still difficult to draw a conclusion regarding the best methodology to follow, as there is no common basis for comparison and sometimes no argument is given for why a certain classification method was chosen. Our main contributions therefore reside in discussing and comparing the relative performance of the two classifiers (HMMs vs. DBNs), of the different Region of Interest (ROI) selections we propose, and of different optical flow estimation methods. Our automatic AU classification system is an important step in the facial expression recognition process, given that even a single emotion can be expressed in different ways, which reflects the complexity of the analyzed problem. The experiments were performed on the Cohn-Kanade database and showed that, under the same conditions regarding initialization, labeling, and sampling, both classification methods produced similar results, achieving the same recognition rate of 89% for the classification of facial AUs. Still, by enabling non-fixed sampling and using HTK, the HMMs reached a better performance of 93%, suggesting that they are better suited to the task of AU recognition.
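As an illustrative sketch, with notation of our own rather than the paper's: if each AU $j$ is modeled by a separate HMM $\lambda_j$ trained on optical-flow feature sequences extracted from the selected ROI, then an observation sequence $O = (o_1, \dots, o_T)$ is typically assigned to the action unit whose model explains it best, \[ \hat{\jmath} = \mathop{\mathrm{arg\,max}}_{j} P(O \mid \lambda_j), \] where $P(O \mid \lambda_j)$ is evaluated by the forward algorithm; an analogous maximum-likelihood decision can be made with the DBN unrolled over the frames of the sequence.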
We study the differences between two approaches designed to reveal latent variables in binary data. Both approaches assume that the observed high-dimensional data are driven by a small number of hidden binary sources combined by Boolean superposition. The first approach is Boolean matrix factorization (BMF) and the second is Boolean factor analysis (BFA). Two BMF methods are used for comparison: the M8 method from the BMDP statistical software package and the method suggested by Belohlavek & Vychodil. These two are compared to BFA, in particular to the Expectation-Maximization Boolean Factor Analysis we developed earlier, which has been extended here with a binarization step. The well-known bars problem and the mushroom dataset are used to reveal the methods' peculiarities. In particular, the reconstruction ability of the computed factors and the information gain as a measure of dimension reduction were under scrutiny. It is shown that BFA slightly loses to BMF in performance when noise-free signals are analyzed. Conversely, BMF loses considerably to BFA when the input signals are noisy.
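For clarity, the Boolean superposition underlying both models can be written in the standard way (notation not taken verbatim from the paper): an observed binary matrix $X \in \lbrace 0,1 \rbrace^{n \times m}$ is approximated by the Boolean product of binary factor matrices, \[ X \approx A \circ B, \qquad (A \circ B)_{ij} = \bigvee_{k=1}^{r} a_{ik} \wedge b_{kj}, \] where $r$ is the number of hidden sources (factors); BMF and BFA differ in how $A$ and $B$ are estimated and in how noise in $X$ is treated.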
Several counterparts of Bayesian networks based on different paradigms have been proposed in evidence theory. Nevertheless, none of them is completely satisfactory. In this paper we present a new one, based on a recently introduced concept of conditional independence. We define a conditioning rule for variables and study the relationship between conditional independence and irrelevance with the aim of constructing a Bayesian-network-like model. Then, through a simple example, we show a problem that appears in this model and is caused by the use of the conditioning rule. We also show that this problem can be avoided if undirected or compositional models are used instead.
The prediction of traffic accident duration is of great significance for the rapid disposal of traffic accidents, especially for fast rescue and the removal of traffic safety hazards. In this paper, two methods, based on an artificial neural network (ANN) and a support vector machine (SVM), are adopted for accident duration prediction. The proposed methods are demonstrated by a case study using data on approximately 235 accidents that occurred on freeways between Dalian and Shenyang from 2012 to 2014. The mean absolute error (MAE), the root mean square error (RMSE), and the mean absolute percentage error (MAPE) are used to evaluate the performance of the two models. The conclusions are as follows: both the ANN and SVM models are able to predict traffic accident duration within acceptable limits; the ANN model gives better results for long-duration incidents; and the overall performance of the SVM model is better than that of the ANN model for traffic accident duration prediction.
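For reference, assuming the standard definitions of these criteria and writing $y_i$ for the observed and $\hat{y}_i$ for the predicted duration of the $i$-th of $n$ accidents, \[ \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} |y_i - \hat{y}_i|, \qquad \mathrm{RMSE} = \Bigl( \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^{2} \Bigr)^{1/2}, \qquad \mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n} \Bigl| \frac{y_i - \hat{y}_i}{y_i} \Bigr|. \]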
We compare a recent selection theorem given by Chistyakov using the notion of modulus of variation, with a selection theorem of Schrader based on bounded oscillation and with a selection theorem of Di Piazza and Maniscalco based on bounded $A,\Lambda$-oscillation.
The regulator equation is the fundamental equation whose solution must be found in order to solve the output regulation problem. It is a system of first-order partial differential equations (PDEs) combined with an algebraic equation. The classical approach to its solution is to use a Taylor series with undetermined coefficients. In this contribution, another path is followed: the equation is solved using the finite element method, which is, nevertheless, suitable for solving the PDE part only. This paper presents two methods to handle the algebraic condition: the first is based on the iterative minimization of a cost functional defined as the integral of the square of the algebraic expression that is required to vanish; the second converts the algebraic-differential equation into a singularly perturbed system of partial differential equations only. Both methods are compared, and simulation results are presented, including an on-line control implementation for some practically motivated laboratory models.
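For reference, in the standard error-feedback setting (with generic notation, not taken from the paper), for a plant $\dot{x} = f(x, u, v)$ driven by an exosystem $\dot{v} = s(v)$ with regulated error $e = h(x, v)$, the regulator equation asks for mappings $x = \pi(v)$ and $u = c(v)$ satisfying \[ \frac{\partial \pi}{\partial v}(v)\, s(v) = f\bigl(\pi(v), c(v), v\bigr), \qquad 0 = h\bigl(\pi(v), v\bigr), \] the first relation being the PDE part and the second the algebraic constraint mentioned above.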
Let $\tilde{f}$, $\tilde{g}$ be ultradistributions in $\mathcal Z^{\prime }$ and let $\tilde{f}_n = \tilde{f} * \delta _n$ and $\tilde{g}_n = \tilde{g} * \sigma _n$, where $\lbrace \delta _n \rbrace $ is a sequence in $\mathcal Z$ which converges to the Dirac delta function $\delta $. Then the neutrix product $\tilde{f} \diamond \tilde{g}$ is defined on the space of ultradistributions $\mathcal Z^{\prime }$ as the neutrix limit of the sequence $\lbrace {1 \over 2}(\tilde{f}_n \tilde{g} + \tilde{f} \tilde{g}_n)\rbrace $, provided the limit $\tilde{h}$ exists in the sense that \[ \mathop {\mathrm N\text{-}lim}_{n\rightarrow \infty }{1 \over 2} \langle \tilde{f}_n \tilde{g} +\tilde{f} \tilde{g}_n, \psi \rangle = \langle \tilde{h}, \psi \rangle \] for all $\psi $ in $\mathcal Z$. We also prove that the neutrix convolution product $f \mathbin {\diamondsuit \!\!\!\!*\,}g$ exists in $\mathcal D^{\prime }$ if and only if the neutrix product $\tilde{f} \diamond \tilde{g}$ exists in $\mathcal Z^{\prime }$, and that the exchange formula \[ F(f \mathbin {\diamondsuit \!\!\!\!*\,}g) = \tilde{f} \diamond \tilde{g} \] is then satisfied.