Search Results
2. Optimally approximating exponential families
- Creator:
- Rauh, Johannes
- Format:
- unmediated and volume
- Type:
- model:article and TEXT
- Subject:
- exponential family and information divergence
- Language:
- English
- Description:
- This article studies exponential families E on finite sets such that the information divergence D(P∥E) of an arbitrary probability distribution P from E is bounded by some constant D>0. A particular class of low-dimensional exponential families with low values of D can be obtained from partitions of the state space. The main results concern optimality properties of these partition exponential families. The case D=log(2) is studied in detail; it is special because if D<log(2), then E contains all probability measures with full support. (A computational sketch of the partition-model divergence follows this record.)
- Rights:
- http://creativecommons.org/publicdomain/mark/1.0/ and policy:public
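A small illustration of the divergence bound in this abstract: for the partition exponential family E induced by a partition of the state space, the minimizer of D(P∥Q) over Q in E spreads each block's total mass uniformly within the block, which gives the closed form D(P∥E) = Σ_x P(x) log(P(x)·|B(x)|/P(B(x))). The Python sketch below (function name and example are ours, not taken from the paper) evaluates this formula; a point mass on a block of size 2 attains divergence exactly log(2), matching the special case discussed above.

```python
import numpy as np

def divergence_from_partition_model(P, blocks):
    """D(P || E) for the partition model E given by `blocks`.

    The optimal Q in E spreads each block's mass P(B) uniformly over
    the block, so D(P || E) = sum_x P(x) * log(P(x) * |B(x)| / P(B(x))).
    """
    P = np.asarray(P, dtype=float)
    D = 0.0
    for B in blocks:
        pB = P[list(B)].sum()
        for x in B:
            if P[x] > 0:
                D += P[x] * np.log(P[x] * len(B) / pB)
    return D

# Pair up a 4-element state space; a point mass then has divergence
# exactly log(2) from the partition model, so D = log(2) for this E.
blocks = [(0, 1), (2, 3)]
delta = [1.0, 0.0, 0.0, 0.0]
print(divergence_from_partition_model(delta, blocks))  # ≈ 0.6931 = log(2)
```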
3. Properties of unique information
- Creator:
- Rauh, Johannes, Schünemann, Maik, and Jost, Jürgen
- Format:
- unmediated and volume
- Type:
- model:article and TEXT
- Subject:
- unique information and information decomposition
- Language:
- English
- Description:
- We study the unique information function UI(T:X∖Y) defined by Bertschinger et al. [4] within the framework of information decompositions. In particular, we study uniqueness and support of the solutions to the convex optimization problem underlying the definition of UI. We identify sufficient conditions for non-uniqueness of solutions with full support, both in terms of conditional independence constraints and in terms of the cardinalities of T, X, and Y. Our results are based on a reformulation of the first-order conditions on the objective function as rank constraints on a matrix of conditional probabilities. These results help to speed up the computation of UI(T:X∖Y), most notably when T is binary: optima in the relative interior of the optimization domain are then solutions of linear equations. In the all-binary case, we obtain a complete picture of where the optimizing probability distributions lie. (A numerical sketch of the underlying optimization follows this record.)
- Rights:
- http://creativecommons.org/licenses/by-nc-sa/4.0/ and policy:public
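For experimentation, here is a minimal numerical sketch of the convex optimization problem behind UI in the all-binary case: UI(T:X∖Y) is the minimum of I_Q(T;X|Y) over all joint distributions Q that share the (T,X) and (T,Y) marginals of P. It uses a generic SciPy solver rather than the rank-constraint reformulation developed in the paper, and all function names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def cond_mi(q):
    """I(T;X|Y) for a joint distribution q over binary (T, X, Y)."""
    q = q.reshape(2, 2, 2)
    q_y = q.sum(axis=(0, 1))   # Q(y)
    q_ty = q.sum(axis=1)       # Q(t,y)
    q_xy = q.sum(axis=0)       # Q(x,y)
    val = 0.0
    for t in range(2):
        for x in range(2):
            for y in range(2):
                if q[t, x, y] > 1e-12:
                    val += q[t, x, y] * np.log(
                        q[t, x, y] * q_y[y] / (q_ty[t, y] * q_xy[x, y]))
    return val

def unique_information(p):
    """UI(T:X∖Y): minimize I_Q(T;X|Y) s.t. Q(T,X)=P(T,X), Q(T,Y)=P(T,Y)."""
    p = np.asarray(p, dtype=float).reshape(2, 2, 2)
    p_tx, p_ty = p.sum(axis=2), p.sum(axis=1)
    cons = [
        {'type': 'eq',
         'fun': lambda q: q.reshape(2, 2, 2).sum(axis=2).ravel() - p_tx.ravel()},
        {'type': 'eq',
         'fun': lambda q: q.reshape(2, 2, 2).sum(axis=1).ravel() - p_ty.ravel()},
    ]
    res = minimize(cond_mi, p.ravel(), bounds=[(0, 1)] * 8,
                   constraints=cons, method='SLSQP')
    return res.fun

# Fully redundant example T = X = Y: the unique information vanishes.
p = np.zeros((2, 2, 2))
p[0, 0, 0] = p[1, 1, 1] = 0.5
print(unique_information(p))  # ≈ 0.0
```

Per the abstract, the practical speed-ups come from exploiting the first-order conditions as rank constraints, especially for binary T; the generic solver above is only adequate for small examples.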
4. Scaling of model approximation errors and expected entropy distances
- Creator:
- Montúfar, Guido F. and Rauh, Johannes
- Format:
- unmediated and volume
- Type:
- model:article and TEXT
- Subject:
- exponential families, KL divergence, MLE, and Dirichlet prior
- Language:
- English
- Description:
- We compute the expected value of the Kullback-Leibler divergence of various fundamental statistical models with respect to Dirichlet priors. For the uniform prior, the expected divergence of any model containing the uniform distribution is bounded by the constant 1−γ, where γ is the Euler-Mascheroni constant. For the models that we consider, this bound is approached as the cardinality of the sample space tends to infinity, provided the model dimension remains relatively small. For Dirichlet priors with reasonable concentration parameters, the expected values of the divergence behave in a similar way. These results serve as a reference for ranking the approximation capabilities of other statistical models. (A Monte Carlo sketch of the uniform-prior limit follows this record.)
- Rights:
- http://creativecommons.org/publicdomain/mark/1.0/ and policy:public
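The uniform-prior statement can be checked numerically. For P drawn from the uniform Dirichlet prior on n states, D(P∥uniform) = log n − H(P), and the expected entropy of such a sample has the standard closed form ψ(n+1) − ψ(2), so the expected divergence equals log n − ψ(n+1) + (1−γ) and approaches 1−γ from below as n grows. A Monte Carlo sketch (variable and function names ours):

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)

def expected_divergence_mc(n, samples=20000):
    """Monte Carlo estimate of E[D(P || uniform)] for P ~ Dirichlet(1,...,1)."""
    P = rng.dirichlet(np.ones(n), size=samples)
    H = -np.sum(P * np.log(P), axis=1)   # entropies (samples are a.s. positive)
    return np.log(n) - H.mean()          # D(P || uniform) = log(n) - H(P)

for n in (4, 16, 256):
    exact = np.log(n) - (digamma(n + 1) - digamma(2))  # closed-form expectation
    print(f"n={n}: MC={expected_divergence_mc(n):.4f}, "
          f"exact={exact:.4f}, limit 1-gamma={1 - np.euler_gamma:.4f}")
```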