Hypertriglyceridemia is an important marker of increased levels of highly atherogenic remnant-like particles. The importance of lowering plasma levels of triglycerides (TG) has been called into question many times, but it is currently considered an integral part of residual cardiovascular risk reduction strategies. Lifestyle changes (improved diet and increased physical activity) are effective TG-lowering measures. Pharmacological treatment usually starts with statins, although the associated TG reductions are typically modest. Fibrates are currently the drugs of choice for hypertriglyceridemia, frequently in combination with statins. Niacin and omega-3 fatty acids improve control of triglyceride levels when the above measures are not sufficiently effective. Some novel therapies, including antisense oligonucleotides and inhibitors of microsomal triglyceride transfer protein, have shown significant TG-lowering efficacy. The current approach to the management of hypertriglyceridemia is based on lifestyle changes and, usually, drug combinations (statin with a fibrate and/or omega-3 fatty acids or niacin).
In probability theory, Bayesian statistics, artificial intelligence and database theory, the minimum cross-entropy principle is often used to estimate a distribution with a given set P of marginal distributions under the proportionality assumption with respect to a given "prior" distribution q. Such an estimation problem admits a solution if and only if there exists an extension of P that is dominated by q. In this paper we consider the case where q is not given explicitly but is specified as the maximum-entropy extension of an auxiliary set Q of distributions. Three problems naturally arise: (1) the existence of an extension of a distribution set (such as P and Q), (2) the existence of an extension of P that is dominated by the maximum-entropy extension of Q, and (3) the numeric computation of the minimum cross-entropy extension of P with respect to the maximum-entropy extension of Q. In the spirit of a divide-and-conquer approach, we prove that, for each of the three above-mentioned problems, the global solution can be easily obtained by combining the solutions to subproblems defined at the node level of a suitable tree.
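The proportionality assumption mentioned above is the basis of iterative proportional fitting, one standard way to compute a minimum cross-entropy (I-projection) extension numerically. The Python sketch below is an illustration under that assumption, not the tree-based divide-and-conquer procedure of the paper; the function name and the small two-variable example are illustrative only.

```python
# A minimal sketch of iterative proportional fitting: compute the extension of a
# set of prescribed marginals that is closest in KL divergence to a prior q.
import numpy as np

def min_cross_entropy_extension(q, marginals, n_iter=200):
    """q: prior over a finite product space, shape (k1, ..., kd).
    marginals: dict mapping an axis index to its required 1-D marginal.
    Returns the I-projection of q onto the distributions with those marginals."""
    p = q.astype(float).copy()
    p /= p.sum()
    for _ in range(n_iter):
        for axis, target in marginals.items():
            other_axes = tuple(a for a in range(p.ndim) if a != axis)
            current = p.sum(axis=other_axes)          # current marginal along `axis`
            with np.errstate(divide="ignore", invalid="ignore"):
                scale = np.where(current > 0, target / current, 0.0)
            shape = [1] * p.ndim
            shape[axis] = -1
            p *= scale.reshape(shape)                 # proportional scaling step
    return p

# Two binary variables, uniform prior, prescribed one-dimensional marginals.
q = np.full((2, 2), 0.25)
p = min_cross_entropy_extension(q, {0: np.array([0.7, 0.3]),
                                    1: np.array([0.4, 0.6])})
print(p)                              # joint distribution dominated by q
print(p.sum(axis=1), p.sum(axis=0))   # recovers the prescribed marginals
```

With a uniform prior and one-dimensional marginals the iteration converges after a single sweep; in general, repeated sweeps are needed, and no solution exists unless the marginals admit an extension dominated by q.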
A new kind of deterministic pushdown automaton, called a Tree Compression Automaton, is presented. The tree compression automaton represents a complete compressed index of a set of trees for subtrees and accepts all subtrees of the given trees. The algorithm for constructing our pushdown automaton is incremental. For a single tree with n nodes, the automaton has at most n+1 states, its transition function has cardinality at most 4n, and there are 2n+1 pushdown store symbols. If hashing is used to store the automaton's transitions, thus removing a log n factor, the construction of the automaton takes linear time and space with respect to the size n of the input tree(s). Our pushdown automaton construction can also be used for finding all subtree repeats without increasing the overall complexity.
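The sketch below is not the tree compression automaton itself; it only illustrates the underlying idea of indexing every subtree of a ranked tree, here by hashing prefix (preorder) serializations in an ordinary dictionary, which also exposes subtree repeats. Unlike the automaton, this naive version re-serializes each subtree and is therefore quadratic rather than linear; the class and function names are illustrative assumptions.

```python
# A minimal sketch, assuming trees given as labelled nodes: index all subtrees of
# a ranked tree by their prefix notation, answer subtree-membership queries and
# report subtree repeats.
from collections import defaultdict

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def serialize(node):
    """Prefix notation of a ranked tree, e.g. a2(a2(a0,a0),a0)."""
    if not node.children:
        return node.label
    return node.label + "(" + ",".join(serialize(c) for c in node.children) + ")"

def build_subtree_index(root):
    """Map each distinct subtree serialization to the list of its root nodes."""
    index = defaultdict(list)
    def walk(node):
        index[serialize(node)].append(node)
        for c in node.children:
            walk(c)
    walk(root)
    return index

# Example: the subtree a2(a0,a0) occurs twice in t.
leaf = lambda: Node("a0")
t = Node("a2", [Node("a2", [leaf(), leaf()]), Node("a2", [leaf(), leaf()])])
index = build_subtree_index(t)
print("a2(a0,a0)" in index)                          # subtree membership query
print([k for k, v in index.items() if len(v) > 1])   # subtree repeats
```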
In this work we deal with tree pattern matching over ranked trees, where the set of patterns to be matched against is defined by a regular tree expression. We present a new method that uses a tree automaton constructed inductively from a regular tree expression. First we construct a special tree automaton for the regular tree expression of the pattern E, which is, in a sense, a generalization of Thompson's automaton for strings. Then we run the constructed automaton on the subject tree t. The pattern matching algorithm runs in O(|t||E|) time, where |t| is the number of nodes of t and |E| is the size of the regular tree expression E. The novelty of this contribution, besides the low time complexity, is that the set of patterns can be infinite, since we use regular tree expressions to represent patterns.
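For intuition only, the sketch below matches a single fixed pattern tree with a wildcard at every node of a subject tree; it does not build the tree automaton from a regular tree expression described above, and it cannot represent infinite pattern sets. The tree encoding and the wildcard symbol are assumptions made for the example.

```python
# A minimal sketch of plain tree pattern matching: one fixed pattern tree, with a
# wildcard "S" standing for any subtree, tested at every node of the subject tree.
WILDCARD = "S"

def matches_at(pattern, subject):
    """Does `pattern` match the subject subtree rooted at this node?"""
    if pattern[0] == WILDCARD:
        return True
    if pattern[0] != subject[0] or len(pattern[1]) != len(subject[1]):
        return False
    return all(matches_at(p, s) for p, s in zip(pattern[1], subject[1]))

def all_matches(pattern, subject):
    """Return the roots of all subject subtrees matched by the pattern."""
    hits = [subject] if matches_at(pattern, subject) else []
    for child in subject[1]:
        hits.extend(all_matches(pattern, child))
    return hits

# Trees encoded as (label, [children]) pairs.
subject = ("f", [("f", [("a", []), ("b", [])]), ("b", [])])
pattern = ("f", [(WILDCARD, []), ("b", [])])
print(len(all_matches(pattern, subject)))   # 2: the root and its left child
```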
First, this paper discusses tree-controlled grammars with root-to-leaf derivation-tree paths restricted by control languages. It demonstrates that if the control languages are regular, these grammars generate the family of context-free languages. Then, in a similar way, the paper introduces tree-controlled grammars with derivation-tree cuts restricted by control languages. It proves that if the cuts are restricted by regular languages, these grammars generate the family of recursively enumerable languages. In addition, it places a binary-relation-based restriction upon these grammars and demonstrates that this additional restriction does not affect their generative power.
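The path restriction can be pictured concretely: given a derivation tree and a deterministic finite automaton for a regular control language, every root-to-leaf label sequence must be accepted. The Python sketch below checks exactly this condition; the DFA, the example tree and the helper names are illustrative assumptions, not taken from the paper, and the cut-based restriction is not covered.

```python
# A minimal sketch of the path-control idea: verify that the label string along
# every root-to-leaf path of a derivation tree belongs to a regular language R,
# given as a DFA with partial transition function `delta`.
def dfa_accepts(delta, start, finals, word):
    state = start
    for symbol in word:
        state = delta.get((state, symbol))
        if state is None:
            return False
    return state in finals

def all_paths_controlled(tree, delta, start, finals, prefix=()):
    """tree = (label, [children]); every root-to-leaf label sequence must lie in R."""
    label, children = tree
    path = prefix + (label,)
    if not children:
        return dfa_accepts(delta, start, finals, path)
    return all(all_paths_controlled(c, delta, start, finals, path) for c in children)

# Illustrative control language R = S A* a over the path alphabet {S, A, a}.
delta = {(0, "S"): 1, (1, "A"): 1, (1, "a"): 2}
tree = ("S", [("A", [("a", [])]), ("a", [])])
print(all_paths_controlled(tree, delta, 0, {2}))   # True: paths SAa and Sa lie in R
```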
Assemblages of animal bones and teeth dated to the 8th–14th century AD were collected during archaeological excavations at several Prague locations (Prague Castle, Lesser Town and Old Town). The acquired osteological material is waste resulting mainly from the butchering and consumption of meat. A detailed evaluation of this material, with a focus on the taxonomic representation and the slaughter age and sex of the animals, provides more detailed information not only on the composition of the diet and the quality of the meat, but also on the use of other animal products. By comparing multiple assemblages at the spatial and temporal level, we attempted to gain a better understanding of trends in animal husbandry and the consumption of animal products in medieval Prague.