Steroid profiling enables various pathologies to be diagnosed rapidly. Results from analyses investigating steroidogenic pathways may be used as a tool for uncovering the causes of pathologies and for proposing new therapeutic approaches. The purpose of this study was to address the still underutilized application of the advanced GC-MS/MS platform to the multicomponent quantification of endogenous steroids. We developed and validated a GC-MS/MS method for the quantification of 58 unconjugated steroids and 42 polar conjugates of steroids (after hydrolysis) in human blood. The present method was validated not only for the blood of men and non-pregnant women but also for the blood of pregnant women and for mixed umbilical cord blood. The spectrum of analytes includes common hormones operating via nuclear receptors as well as other bioactive substances such as immunomodulatory and neuroactive steroids. Our present results are comparable with those from our previously published GC-MS method as well as with the results of others. The present method was extended to corticoids and 17α-hydroxylated 5α/β-reduced pregnanes, which are useful for the investigation of the alternative “backdoor” pathway. When comparing the analytical characteristics of the present and previous methods, the former exhibits by far higher selectivity as well as generally higher sensitivity and better precision, particularly for 17α-hydroxysteroids.
This paper describes a newly developed method for direct measurement of coalbed methane content in situ. In contrast to known procedures, this method does not require placing rock or drill cuttings into an airtight canister and does not involve sealing the hole. Moreover, the new method monitors methane content in situ, continuously and synchronously with drilling of the hole, without losing any portion of the gas. These advantages follow from a new approach based on the injection of a known portion of neutral gas into the hole. Methane content is determined from the concentration of the methane-neutral gas mixture at the hole's mouth. The new method is applicable to commercial recovery of coalbed methane and to the forecasting of dangerous gas and coal outbursts. FLAC3D computer simulation was used to investigate the dynamics of methane outflow from the hole and to account for the effect of drilling speed on the rate of gas emanation.
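As a rough illustration of the tracer-dilution idea behind this approach (a minimal sketch; the function names, injection rate, sampling interval, and concentration values are hypothetical and not taken from the study), the methane outflow can be recovered from the known neutral-gas injection rate and the methane fraction measured in the mixture at the hole's mouth:

```python
# Sketch: estimate methane outflow from a borehole using tracer (neutral gas)
# dilution. Assumes the gas sampled at the hole's mouth is a mixture of
# methane and the injected neutral gas only; names and numbers are illustrative.

def methane_flow(neutral_flow_lpm, methane_fraction):
    """Methane outflow (L/min) from the neutral-gas injection rate (L/min)
    and the methane volume fraction measured in the mixture (0..1)."""
    if not 0.0 <= methane_fraction < 1.0:
        raise ValueError("methane fraction must lie in [0, 1)")
    return neutral_flow_lpm * methane_fraction / (1.0 - methane_fraction)

def methane_content(fractions, neutral_flow_lpm, dt_min, coal_mass_kg):
    """Integrate methane outflow over the drilling time (one fraction sampled
    every dt_min minutes) and normalise by the mass of coal drilled (L/kg)."""
    total_l = sum(methane_flow(neutral_flow_lpm, c) * dt_min for c in fractions)
    return total_l / coal_mass_kg

# Example: 10 L/min of neutral gas, methane fractions sampled each minute.
print(methane_content([0.05, 0.12, 0.20, 0.15], neutral_flow_lpm=10.0,
                      dt_min=1.0, coal_mass_kg=2.5))
```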
Text sentiment analysis plays an important role in social network information mining. It is also the theoretical foundation and basis of personalized recommendation, interest-circle classification, and public opinion analysis. Examining existing algorithms for feature extraction and weight calculation, we find that they fail to fully take into account the influence of sentiment words. Therefore, this paper proposes a fine-grained short-text sentiment analysis method based on machine learning. It improves the calculation of feature selection and weighting by proposing a feature extraction algorithm named N-CHI and a weight calculation algorithm named W-TF-IDF, which are better suited to sentiment analysis and increase the proportion and weight of sentiment words among the feature words. Experimental analysis and comparison show that the classification accuracy of this method is clearly improved compared with other methods.
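The general idea can be sketched as follows (this is not the authors' exact N-CHI or W-TF-IDF formulation, which is not reproduced here; the lexicon, boost factor, and corpus are illustrative assumptions): after standard TF-IDF weighting, the weights of terms found in a sentiment lexicon are increased.

```python
# Sketch: sentiment-aware TF-IDF weighting. Lexicon, boost factor, and corpus
# are illustrative; the paper's exact N-CHI / W-TF-IDF definitions are not
# reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer

SENTIMENT_LEXICON = {"good", "great", "bad", "awful", "love", "hate"}
BOOST = 2.0  # assumed factor by which sentiment-word weights are increased

docs = ["great phone, love the screen", "awful battery, bad support"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs).toarray()

# Multiply the columns corresponding to sentiment words by the boost factor.
for term, col in vectorizer.vocabulary_.items():
    if term in SENTIMENT_LEXICON:
        X[:, col] *= BOOST

print(vectorizer.get_feature_names_out())
print(X)
```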
This article reports a method for forecasting earthquakes from synchronous anomalies of optical astronomic time-latitude residuals. The so-called optical astronomic time-latitude residuals for a given astrometric instrument are what remains after the effects of the Earth's motion as a whole have been subtracted from the astronomical time and latitude observations determined by that instrument. Forecasting practice for four earthquakes around the Yunnan Observatory occurring after 2010 shows that the method neither generates false forecasts nor misses forecasts of major earthquakes. This forecasting practice shows that synchronous anomalies of astronomical time-latitude residuals can provide an effective warning sign of earthquake occurrence around the observatory station and therefore deserve attention and further study.
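One computational reading of "synchronous anomalies" is sketched below (an assumption for illustration only: an anomaly is flagged when both residual series deviate beyond k standard deviations at the same epoch; the threshold and the data are hypothetical):

```python
# Sketch: flag epochs where the time residual and the latitude residual are
# simultaneously anomalous. Threshold and data are illustrative only.
import numpy as np

def synchronous_anomalies(time_resid, lat_resid, k=2.5):
    """Return indices where both residual series exceed k sigma at once."""
    t = np.asarray(time_resid, dtype=float)
    l = np.asarray(lat_resid, dtype=float)
    t_anom = np.abs(t - t.mean()) > k * t.std()
    l_anom = np.abs(l - l.mean()) > k * l.std()
    return np.flatnonzero(t_anom & l_anom)

rng = np.random.default_rng(0)
time_resid = rng.normal(0, 1, 200)
lat_resid = rng.normal(0, 1, 200)
time_resid[150] += 6.0   # inject a synchronous anomaly at epoch 150
lat_resid[150] += 6.0
print(synchronous_anomalies(time_resid, lat_resid))   # expected to include 150
```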
Thought experiments are frequently vague and obscure hypothetical scenarios that are difficult to assess. The paper proposes a simple model of thought experiments. In the first part, I introduce two contemporary frameworks for thought experiment analysis: an experimentalist approach that relies on similarities between real and thought experiments, and a reasonist approach focusing on the answers provided by thought experimenting. Further, I articulate a minimalist approach in which a thought experiment is considered strictly as a doxastic mechanism based on imagination. I introduce a basic analytical tool that allows us to differentiate an experimental core from the attached argumentation. The last section is reserved for discussion, where I address several possible questions concerning the adequacy of the minimalist definition and analysis.
A mixture of support vector machines (SVMs) is proposed for time series forecasting. The SVM mixture has a two-stage architecture. In the first stage, a self-organizing feature map (SOM) is used as a clustering algorithm to partition the whole input space into several disjoint regions; a tree-structured architecture is adopted in the partitioning to avoid the problem of predetermining the number of partitioned regions. In the second stage, multiple SVMs, which together form the SVM mixture, are constructed to best fit the partitioned regions by finding the most appropriate kernel function and the optimal free parameters of the SVMs. Experiments show that the SVM mixture achieves a significant improvement in generalization performance compared with a single SVM model. In addition, the SVM mixture converges faster and uses fewer support vectors.
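A minimal sketch of the two-stage idea is given below (KMeans is used here as a stand-in for the SOM clustering stage, and the SVR hyperparameters are fixed rather than searched per region; the data and settings are illustrative):

```python
# Sketch: two-stage SVM mixture for time-series forecasting.
# Stage 1 partitions the input space (KMeans stands in for the SOM here);
# stage 2 fits one SVR per partition. Data and parameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

# Build (lagged inputs -> next value) pairs from a toy series.
series = np.sin(np.linspace(0, 20, 500))
lags = 4
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

# Stage 1: partition the input space into disjoint regions.
clusterer = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
labels = clusterer.labels_

# Stage 2: fit one SVR expert per region.
experts = {}
for region in np.unique(labels):
    mask = labels == region
    experts[region] = SVR(kernel="rbf", C=10.0).fit(X[mask], y[mask])

def predict(x):
    """Route a query to the expert of its region and predict the next value."""
    region = clusterer.predict(x.reshape(1, -1))[0]
    return experts[region].predict(x.reshape(1, -1))[0]

print(predict(X[-1]))
```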
The functional structure of our new network is not preset; instead, it
comes into existence in a random, stochastic manner.
The anatomical structure of our model consists of two input “neurons”, several hundred to five thousand hidden-layer “neurons”, and one output “neuron”.
The process proper is based on iteration, i.e., a mathematical operation governed by a set of rules in which repetition helps to approximate the desired result.
Each iteration begins with data being introduced into the input layer to be processed in accordance with a particular algorithm in the hidden layer; it then continues with the computation of certain, as yet very crude, configurations of images regulated by a genetic code, and ends with the selection of the 10% most accomplished “offspring”. The next iteration begins with the application of these new, most successful variants of the results, i.e., descendants, in the continued process of image refinement. Ever new variants (descendants) of the genetic algorithm are always generated randomly. The deterministic rule then requires only the choice of the best 10% of all the variants available (in our case, 20 optimal variants out of 200).
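This select-the-best-10% step might be sketched as follows (the fitness function, mutation scale, and population representation are illustrative placeholders, not the authors' settings):

```python
# Sketch: one generation of the described evolution - generate random variants,
# keep the best 10% (e.g. 20 of 200). The fitness function and mutation scale
# are placeholders.
import random

POPULATION = 200
KEEP = POPULATION // 10          # the best 10% survive, here 20 of 200

def fitness(variant):
    """Placeholder score; in the model this would measure image quality."""
    return -sum((x - 0.5) ** 2 for x in variant)

def mutate(parent, noise=0.1):
    """Gaussian perturbation of a parent variant."""
    return [x + random.gauss(0.0, noise) for x in parent]

def next_generation(parents):
    offspring = [mutate(random.choice(parents)) for _ in range(POPULATION)]
    offspring.sort(key=fitness, reverse=True)
    return offspring[:KEEP]      # deterministic rule: keep the 20 best

generation = [[random.random() for _ in range(8)] for _ in range(KEEP)]
for _ in range(50):
    generation = next_generation(generation)
print(round(fitness(generation[0]), 4))
```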
The stochastic model is marked by a number of characteristics: e.g., the initial conditions are determined by different degrees of data dispersion (variance), and the evolution of the network organisation is controlled by genetic rules of a purely stochastic nature; Gaussian noise proved to be the best “organiser”.
Another analogy between artificial networks and neuronal structures lies in the use of time in network algorithms.
For that reason, we gave our network's organisation a kind of temporal development: rather than being instantaneous, the connection between the artificial elements, or neurons, consumes a certain number of time units per synapse or, more precisely, per contact between the preceding and subsequent neurons.
The latency of neurons, natural and artificial alike, is very important as it enables feedback action.
Our network becomes organised under the effect of considerable noise. Then, however, the amount of noise must subside. If the network evolution gets stuck in a local minimum, however, the amount of noise has to be increased again. While this will make the network organisation waver, it will also increase the likelihood that the crisis in the local minimum will abate and that the state of the network's self-organisation will improve substantially.
Our system allows for constant state-of-the-network reading by means of establishing the network energy level, i.e., basically ascertaining the progression of the network's rate of success in self-organisation. This is the principal parameter for the detection of any jam in a local minimum, and it serves as input information for the formator algorithm, which regulates the level of noise in the system.
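The noise-control rule described above could be read, in sketch form, as follows (the "energy" readings, stagnation window, and scaling factors are hypothetical stand-ins for the formator algorithm):

```python
# Sketch: adapt the noise level from the network "energy" reading.
# Noise normally decays; if the energy stagnates (a jam in a local minimum),
# the noise is increased again. Factors and window size are illustrative.
def update_noise(noise, energy_history, window=10,
                 decay=0.95, boost=1.5, tolerance=1e-4):
    """Return the new noise level given recent energy readings."""
    if len(energy_history) >= window:
        recent = energy_history[-window:]
        if max(recent) - min(recent) < tolerance:   # no progress: local minimum
            return noise * boost                    # raise noise to escape
    return noise * decay                            # otherwise let noise subside

noise = 1.0
energies = [5.0] * 12                  # stagnating energy readings
print(update_noise(noise, energies))   # noise grows: 1.5
```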
I review our series of works on galactic evolution, putting emphasis on some of the fundamentals of the models rather than on their detailed structure. I show, with some examples, how the mathematical formulation is produced from the physical input, how it can be modified as the physics is enriched, and what kind of results can be provided.
We consider the construction of approximate confidence intervals for the variance component σ₁² in mixed linear models with two variance components and non-zero degrees of freedom for error. An approximate interval that seems to perform well in such a case, except that it is rather conservative for large σ₁²/σ², was considered by Hartung and Knapp in \cite{hk}. The expression for its asymptotic coverage as σ₁²/σ² → ∞ suggests a modification of this interval that preserves some nice properties of the original and that is, in addition, exact as σ₁²/σ² → ∞. It turns out that this modification is an interval suggested by El-Bassiouni in \cite{eb}. We comment on its properties that were not emphasized in the original paper \cite{eb} but which support the use of the procedure. A small simulation study is also provided.