Thought experiments are frequently vague and obscure hypothetical scenarios that are difficult to assess. The paper proposes a simple model of thought experiments. In the first part, I introduce two contemporary frameworks for thought experiment analysis: an experimentalist approach that relies on similarities between real and thought experiments, and a reasonist approach focusing on the answers provided by thought experimenting. I then articulate a minimalist approach in which a thought experiment is considered strictly as a doxastic mechanism based on imagination, and I introduce the basic analytical tool that allows us to differentiate an experimental core from an attached argumentation. The last section is reserved for discussion, where I address several possible questions concerning the adequacy of the minimalist definition and analysis.
A mixture of support vector machines (SVMs) is proposed for time series forecasting. The SVM mixture has a two-stage architecture. In the first stage, a self-organizing feature map (SOM) is used as a clustering algorithm to partition the whole input space into several disjoint regions. A tree-structured architecture is adopted in the partitioning to avoid the problem of predetermining the number of partitioned regions. In the second stage, multiple SVMs, also called the SVM mixture, that best fit the partitioned regions are constructed by finding the most appropriate kernel function and the optimal free parameters of the SVMs. Experiments show that the SVM mixture achieves a significant improvement in generalization performance in comparison with the single SVM model. In addition, the SVM mixture also converges faster and uses fewer support vectors.
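A minimal sketch of the two-stage idea, with scikit-learn's KMeans standing in for the SOM clustering stage and a small grid search choosing each region's kernel and free parameters (all names, the toy series, and the parameter grids are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Build lagged input/target pairs from a univariate series
def make_lagged(series, lags=4):
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    return X, y

rng = np.random.default_rng(0)
t = np.arange(500)
series = np.sin(0.1 * t) + 0.1 * rng.standard_normal(500)
X, y = make_lagged(series)

# Stage 1: partition the input space (KMeans standing in for the SOM)
k = 4
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
labels = clusters.labels_

# Stage 2: fit one SVM per region, tuning kernel and free parameters
experts = []
for region in range(k):
    Xi, yi = X[labels == region], y[labels == region]
    grid = GridSearchCV(SVR(), {"kernel": ["rbf", "poly"],
                                "C": [1, 10, 100],
                                "gamma": ["scale", 0.1]}, cv=3)
    grid.fit(Xi, yi)
    experts.append(grid.best_estimator_)

# Forecast: route each new input to its region's expert SVM
def predict(x):
    region = clusters.predict(x.reshape(1, -1))[0]
    return experts[region].predict(x.reshape(1, -1))[0]
```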
The functional structure of our new network is not preset; instead, it comes into existence in a random, stochastic manner.
The anatomical structure of our model consists of two input “neurons”, hundreds to five thousand hidden-layer “neurons”, and one output “neuron”.
The process proper is based on iteration, i.e., a mathematical operation governed by a set of rules, in which repetition helps to approximate the desired result.
Each iteration begins with data being introduced into the input layer and processed in accordance with a particular algorithm in the hidden layer; it then continues with the computation of certain, as yet very crude, configurations of images regulated by a genetic code, and ends with the selection of the 10% most accomplished “offspring”. The next iteration begins with the application of these new, most successful variants of the results, i.e., descendants, in the continued process of image perfection. The ever new variants (descendants) of the genetic algorithm are always generated randomly. The deterministic rule then only requires the choice of the best 10% of all the variants available (in our case 20 optimal variants out of 200).
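A minimal sketch of one such iteration, assuming a toy fitness function and Gaussian mutation (the 20-out-of-200 selection rule follows the text above; everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(genome):
    # Hypothetical fitness: how close the genome is to a target pattern
    target = np.linspace(-1, 1, genome.size)
    return -np.sum((genome - target) ** 2)

POP, KEEP = 200, 20          # 10% of the 200 variants survive each iteration
population = rng.normal(size=(POP, 32))

for generation in range(100):
    # Deterministic rule: keep the 10% best-scoring variants
    scores = np.array([fitness(g) for g in population])
    parents = population[np.argsort(scores)[-KEEP:]]
    # New variants (descendants) are always generated randomly:
    # each parent spawns mutated offspring perturbed by Gaussian noise
    offspring = np.repeat(parents, POP // KEEP, axis=0)
    population = offspring + rng.normal(scale=0.1, size=offspring.shape)
```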
The stochastic model is marked by a number of characteristics: e.g., the initial conditions are determined by varying the dispersion (variance) of the data, and the evolution of the network organisation is controlled by genetic rules of a purely stochastic nature; Gaussian-distributed noise proved to be the best “organiser”.
Another analogy between artificial networks and neuronal structures lies in the use of time in network algorithms.
For that reason, we gave our network organisation a kind of temporal development: rather than being instantaneous, the connection between the artificial elements, the neurons, consumes a certain number of time units per synapse or, more precisely, per contact between the preceding and subsequent neurons.
The latency of neurons, natural and artificial alike, is very important as it enables feedback action.
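A hedged sketch of the one-time-unit-per-synapse idea: each connection is modeled as a short delay line, so a signal emitted by the preceding neuron reaches the subsequent one only on a later tick (all names here are illustrative, not from the original model):

```python
from collections import deque

class DelayedSynapse:
    """A connection that consumes `delay` time units per contact."""
    def __init__(self, weight, delay=1):
        self.weight = weight
        self.buffer = deque([0.0] * delay)   # pre-filled delay line

    def transmit(self, signal):
        self.buffer.append(signal * self.weight)
        return self.buffer.popleft()         # output lags input by `delay` ticks
```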
Our network becomes organised under the effect of considerable noise. Then, however, the amount of noise must subside. However, if the network evolution gets stuck in a local minimum, the amount of noise has to be increased again. While this will make the network organisation waver, it will also increase the likelihood that the crisis in the local minimum will abate and that the state of the network's self-organisation will improve substantially.
Our system allows for constant state-of-the-network readings by means of establishing the network energy level, i.e., basically ascertaining the progression of the network's rate of success in self-organisation. This is the principal parameter for detecting any jam in a local minimum, and it is a piece of input information for the formator algorithm, which regulates the level of noise in the system.
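A minimal sketch of the formator's noise regulation, assuming that the energy readings stall when the network is jammed in a local minimum (all names and thresholds are hypothetical):

```python
def formator_step(noise, energy_history, min_noise=0.01,
                  decay=0.95, boost=2.0, patience=10):
    """Decay the noise as organisation proceeds; boost it again when the
    energy readings stall, i.e. the network is jammed in a local minimum."""
    recent = energy_history[-patience:]
    stalled = len(recent) == patience and max(recent) - min(recent) < 1e-6
    if stalled:
        return noise * boost                 # shake the network out of the minimum
    return max(noise * decay, min_noise)     # otherwise let the noise subside
```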
I review our series of works on galactic evolution, putting emphasis on some fundamentals of the models rather than on their detailed structure. I show, with some examples, how the mathematical formulation is produced from the physical input, how it can be modified as the physics is enriched, and what kind of results can be provided.
In Czech and Polish underground hard coal mines of the Upper Silesian Coal Basin, high-energy seismic phenomena are periodically recorded, the sources of which are located ahead of the longwall. Generally, these types of tremors are rooted in very strong, thick layers of sandstone that are stressed to the limit of deformation. The consequences are discontinuities and cracks with a range depending on the mechanical properties of the destroyed rocks, i.e., the mechanical parameters of the layers. Forecasting methods developed at the Central Mining Institute for stress concentration, seismic energy, and fault-zone range, together with methods of rock fracturing using liquid or explosives, allow precise identification of suitable locations for controlled fracturing of the rock mass with a pre-established direction. The size and range of the discontinuities have an impact on mining parameters, dependent on basic exploitation intensity and expressed by the average daily progress of the longwall face. The rock mass is locally weakened by exploitation or by technically induced discontinuities in the roof rock at the longwall face. To prevent rockbursts, measures are needed to reduce the amount of energy accumulating in the rock mass in the area of the longwall face. Knowledge of where stress is concentrated is extremely important for the development and implementation of effective preventive methods. For many years, several research centres have been working on defining the range of these areas. In this paper, basic information is presented on methods developed by the Central Mining Institute and used in Polish hard coal mines for forecasting energy concentration and assessing how it can be reduced.
We consider a construction of approximate confidence intervals on the variance component $\sigma_1^2$ in mixed linear models with two variance components with non-zero degrees of freedom for error. An approximate interval that seems to perform well in such a case, except that it is rather conservative for large $\sigma_1^2/\sigma^2$, was considered by Hartung and Knapp in \cite{hk}. The expression for its asymptotic coverage when $\sigma_1^2/\sigma^2 \to \infty$ suggests a modification of this interval that preserves some nice properties of the original and that is, in addition, exact when $\sigma_1^2/\sigma^2 \to \infty$. It turns out that this modification is an interval suggested by El-Bassiouni in \cite{eb}. We comment on its properties that were not emphasized in the original paper \cite{eb}, but which support the use of the procedure. A small simulation study is also provided.
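As a hedged illustration of the model behind such intervals, the sketch below simulates a balanced one-way random-effects model with two variance components and computes the classical ANOVA point estimate of $\sigma_1^2$ (the interval constructions of \cite{hk} and \cite{eb} themselves are omitted; the balanced design and all names are assumptions for illustration):

```python
import numpy as np

# Two-variance-component model: y_ij = mu + a_i + e_ij,
# a_i ~ N(0, sigma1^2), e_ij ~ N(0, sigma^2)
rng = np.random.default_rng(1)
mu, sigma1, sigma, k, n = 0.0, 2.0, 1.0, 10, 5   # k groups, n obs per group

a = rng.normal(0, sigma1, size=k)
y = mu + a[:, None] + rng.normal(0, sigma, size=(k, n))

group_means, grand_mean = y.mean(axis=1), y.mean()
ms_between = n * np.sum((group_means - grand_mean) ** 2) / (k - 1)
ms_within = np.sum((y - group_means[:, None]) ** 2) / (k * (n - 1))

# ANOVA estimator: E[MS_between] = sigma^2 + n*sigma1^2, E[MS_within] = sigma^2
sigma1_sq_hat = (ms_between - ms_within) / n
```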
This paper proposes an offline gradient method with smoothing L1/2 regularization for learning and pruning of pi-sigma neural networks (PSNNs). The original L1/2 regularization term is not smooth at the origin, since it involves the absolute value function. This causes oscillation in the computation and difficulty in the convergence analysis. In this paper, we propose to use a smooth function to replace and approximate the absolute value function, ending up with a smoothing L1/2 regularization method for PSNNs. Numerical simulations show that the smoothing L1/2 regularization method eliminates the oscillation in computation and achieves better learning accuracy. We are also able to prove a convergence theorem for the proposed learning method.
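A minimal sketch of the smoothing idea, assuming $\sqrt{w^2 + \varepsilon}$ as the smooth surrogate for the absolute value (the paper's exact smoothing function may differ; EPS and LAM are illustrative constants):

```python
import numpy as np

EPS, LAM = 1e-4, 1e-3   # smoothing parameter and regularization strength

def smooth_abs(w):
    return np.sqrt(w ** 2 + EPS)          # smooth at the origin, ~|w| elsewhere

def l_half_penalty(w):
    # Smoothed L1/2 regularizer: sum of smooth_abs(w)^(1/2)
    return LAM * np.sum(smooth_abs(w) ** 0.5)

def l_half_grad(w):
    # d/dw [(w^2 + eps)^(1/4)] = 0.5 * w * (w^2 + eps)^(-3/4)
    return LAM * 0.5 * w * (w ** 2 + EPS) ** (-0.75)

def update(w, loss_grad, lr=0.01):
    # Offline gradient step on loss + penalty (loss_grad supplied by the PSNN)
    return w - lr * (loss_grad + l_half_grad(w))
```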
The chlorophyll index and leaf nitrogen status (SPAD value) were incorporated into the nonrectangular hyperbola (NRH) equation for the photosynthetic light-response (PLR) curve to establish a modified NRH equation that overcomes parameter variation. Ten PLR curves measured on rice leaves with different SPAD values were collected from pot experiments with different nitrogen (N) dosages. The coefficients for the initial slope of the PLR curve and the maximum net photosynthetic rate in the NRH equation increased linearly with leaf SPAD. The modified NRH equation was established by multiplying the NRH equation by a linear SPAD-based adjustment factor. It was sufficient for describing the PLR curves with unified coefficients for rice leaves with different SPAD values. The SPAD value, as an indicator of leaf N status, could thus be used to modify the NRH equation and overcome the shortcoming of large coefficient variation between individual leaves with different N status. The performance of the SPAD-modified NRH equation should be further validated with data collected from different kinds of plants growing under different environments.
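For reference, the standard NRH light-response function is commonly written as follows (conventional notation, not taken verbatim from the paper; the linear adjustment factor $f(\mathrm{SPAD}) = a\,\mathrm{SPAD} + b$, with coefficients $a$ and $b$ fitted to data, is only a hypothetical rendering of the modification described above):

\[
P_N(I) = \frac{\varphi I + P_{\max} - \sqrt{(\varphi I + P_{\max})^{2} - 4\,\theta\,\varphi I\,P_{\max}}}{2\theta} - R_d,
\qquad
P_N^{\mathrm{mod}}(I) = f(\mathrm{SPAD})\,P_N(I),
\]

where $I$ is the irradiance, $\varphi$ the initial slope, $P_{\max}$ the maximum net photosynthetic rate, $\theta$ the curvature parameter, and $R_d$ the dark respiration rate.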
In this paper, Runge-Kutta methods are discussed for the numerical solution of conservative systems. To keep the energy of a conservative system as close to the initial energy as possible, a modified version of explicit Runge-Kutta methods is presented. The order of the modified Runge-Kutta method is the same as that of the standard Runge-Kutta method, but it is superior in energy preservation. Numerical experiments comparing the modified Runge-Kutta method with the standard one are provided to illustrate the effectiveness of the modified method.
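As a hedged sketch of the energy-preservation idea (not necessarily the paper's modification), one simple device is to project each explicit RK4 step back onto the initial energy level set, shown here for the harmonic oscillator with $H = (p^2 + q^2)/2$:

```python
import numpy as np

def f(y):                        # Hamiltonian vector field: q' = p, p' = -q
    q, p = y
    return np.array([p, -q])

def H(y):                        # energy of the harmonic oscillator
    return 0.5 * np.dot(y, y)

def rk4_step(y, h):              # one standard explicit RK4 step
    k1 = f(y)
    k2 = f(y + h / 2 * k1)
    k3 = f(y + h / 2 * k2)
    k4 = f(y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def projected_rk4_step(y, h, H0):
    # Rescale the RK4 result onto the initial energy surface H(y) = H0
    ynew = rk4_step(y, h)
    return ynew * np.sqrt(H0 / H(ynew))

y, h = np.array([1.0, 0.0]), 0.1
H0 = H(y)
for _ in range(1000):
    y = projected_rk4_step(y, h, H0)
print(abs(H(y) - H0))            # energy error stays at machine precision
```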