Transients of chlorophyll fluorescence in photosynthetic objects are often measured using short pulses of exciting radiation, an approach recently employed to capture kinetic images of fluorescence at the macroscopic level. Here we describe an instrument introducing this principle to the recording of two-dimensional fluorescence transients in microscopic objects. A modified fluorescence microscope is equipped with a CCD camera intensified by a micro-channel-plate image amplifier. The microscopic field is irradiated simultaneously by three types of radiation: actinic radiation, saturating flashes, and pulsed measuring radiation. The measuring pulses are generated by a light-emitting diode, and their duration is between 10 and 250 µs. The detection of fluorescence images (300×400 pixels, 8 bit) has a maximum time resolution of 40 ms and is gated in synchrony with the exciting pulses. This allows measurement against a background of continuous actinic radiation at irradiances up to those eliciting the maximal fluorescence yield (FM). At the same time, the integral irradiance of the objects by the measuring radiation is very low, e.g., 0.08 µmol m-2 s-1 at 0.05 µm spatial resolution and 0.006 µmol m-2 s-1 at 4 µm spatial resolution. This allows reliable recording of F0 even in very short time intervals (e.g., 5×80 ms). The software yields fluorescence kinetic curves for objects in user-selected areas as well as complete false-colour maps of the essential fluorescence kinetic parameters (FM, F0, FV, FV/FM, etc.) showing the two-dimensional distribution of their values. Several examples demonstrate that records of fluorescence kinetics can be obtained with a reasonable signal-to-noise ratio with all standard microscope objectives and with object sizes ranging from segments of leaf tissue to individual algal cells or chloroplasts. (H. Küpper et al.)
In the present study, a high percentage of Japanese anglerfish, Lophius litulon (Jordan, 1902), contained a microsporidian infection of the nervous tissues. Xenomas were removed and prepared for standard wax histology and transmission electron microscopy (TEM). DNA extractions were performed on parasite spores and used in PCR and sequencing reactions. Fresh spores measured 3.4 × 1.8 µm and were uniform in size, with no dimorphism observed. TEM confirmed that only a single developmental cycle and a single spore form were present. Small subunit (SSU) rDNA sequences were >99.5% similar to those of Spraguea lophii (Doflein, 1898) and Glugea americanus (Takvorian et Cali, 1986) from the European and American Lophius spp., respectively. The microsporidian from the nervous tissue of L. litulon undoubtedly belongs in the genus Spraguea Sprague et Vávra, 1976, and the authors suggest a revision of the generic description of Spraguea to include monomorphic forms, together with the transfer of Glugea americanus to Spraguea americana comb. n. Since no major differences in ultrastructure or SSU rDNA sequence data exist between Spraguea americana and the microsporidian from the Japanese anglerfish, they evidently belong to the same species. This is the first report of a Spraguea species from L. litulon and, indeed, from the Pacific water mass.
A new Microsporidium sp. infects Rhizophagus grandis Gyllenhal, a beetle which preys on the bark beetle Dendroctonus micans Kugelann in Turkey. Mature spores are single, uninucleate, and oval in shape (3.75 ± 0.27 µm in length by 2.47 ± 0.13 µm in width), with a subapically fixed polar filament. The polar filament is anisofilar, coiled in 7-8 normal and 3-4 reduced coils. Other characteristic features of the microsporidium are four or five nuclear divisions forming 16 or 32 (commonly 16) spores, and subpersistent sporophorous vesicles (pansporoblasts) that remain until formation of the endospore and then dissolve, releasing free mature spores. The polaroplast is divided into three zones: an amorphous zone, dense layers, and a lamellar-tubular area extending to the central part of the spore.
Thought experiments are frequently vague and obscure hypothetical scenarios that are difficult to assess. This paper proposes a simple model of thought experiments. In the first part, I introduce two contemporary frameworks for thought-experiment analysis: an experimentalist approach that relies on similarities between real and thought experiments, and a reasonist approach focusing on the answers provided by thought experimenting. I then articulate a minimalist approach in which a thought experiment is considered strictly as a doxastic mechanism based on imagination. I introduce a basic analytical tool that allows us to differentiate an experimental core from the attached argumentation. The last section is reserved for discussion, where I address several possible questions concerning the adequacy of the minimalist definition and analysis. (Marek Picha)
A mixture of support vector machines (SVMs) is proposed for time series forecasting. The SVM mixture has a two-stage architecture. In the first stage, a self-organizing feature map (SOM) is used as a clustering algorithm to partition the whole input space into several disjoint regions; a tree-structured architecture is adopted in the partitioning to avoid having to predetermine the number of regions. In the second stage, multiple SVMs (the SVM mixture) that best fit the partitioned regions are constructed by finding the most appropriate kernel function and the optimal free parameters for each SVM. Experiments show that the SVM mixture achieves a significant improvement in generalization performance in comparison with a single SVM model, converges faster, and uses fewer support vectors.
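As a rough illustration of this two-stage architecture, the following Python sketch clusters lag vectors with a flat SOM (via the minisom package) and fits one kernel-tuned SVR per region. The synthetic series, the 2×2 grid, and all parameter values are illustrative assumptions, not the authors' setup (which uses a tree-structured SOM).

```python
# Minimal sketch of the two-stage SVM mixture described above (assumed details).
import numpy as np
from minisom import MiniSom          # third-party SOM implementation
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

def make_lagged(series, n_lags=4):
    """Turn a 1-D series into (lag-vector, next-value) training pairs."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)
X, y = make_lagged(series)

# Stage 1: the SOM partitions the input space into disjoint regions
# (a flat 2x2 grid here, in place of the paper's tree-structured SOM).
som = MiniSom(2, 2, X.shape[1], sigma=0.8, learning_rate=0.5, random_seed=0)
som.train_random(X, 1000)
labels = np.array([som.winner(x) for x in X])   # one SOM node per sample

# Stage 2: one SVM per region, kernel and free parameters tuned per expert.
experts = {}
for node in set(map(tuple, labels)):
    mask = np.all(labels == node, axis=1)
    grid = GridSearchCV(
        SVR(),
        {"kernel": ["rbf", "poly"], "C": [1, 10, 100], "gamma": ["scale", 0.1]},
        cv=3,
    )
    grid.fit(X[mask], y[mask])
    experts[node] = grid.best_estimator_

def predict(x):
    """Route a lag vector to the expert SVM of its SOM region."""
    return experts[som.winner(x)].predict(x.reshape(1, -1))[0]

print(predict(X[-1]), y[-1])
```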
The functional structure of our new network is not preset; instead, it comes into existence in a random, stochastic manner.
The anatomical structure of our model consists of two input “neurons”, hundreds to five thousand hidden-layer “neurons”, and one output “neuron”.
The process proper is based on iteration, i.e., a mathematical operation governed by a set of rules in which repetition progressively approximates the desired result.
Each iteration begins with data being introduced into the input layer and processed in accordance with a particular algorithm in the hidden layer; it then continues with the computation of certain, as yet very crude, configurations of images regulated by a genetic code, and ends with the selection of the 10% most accomplished “offspring”. The next iteration begins with the application of these new, most successful variants of the results, i.e., the descendants, in the continued process of image perfection. The new variants (descendants) of the genetic algorithm are always generated randomly; the deterministic rule then only requires the choice of the best 10% of all the variants available (in our case 20 optimal variants out of 200).
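The selection scheme just described can be shown in a minimal Python sketch. The genome representation, the mutation scale, and the toy fitness function are assumptions made for illustration, not the image-generation code of the original system.

```python
# One iteration of the described scheme: 200 randomly generated variants,
# of which the best 10% (20 "offspring") seed the next generation.
import numpy as np

rng = np.random.default_rng(0)

def fitness(genome):
    """Toy stand-in for the image-quality score used in the text."""
    return -np.sum((genome - 0.5) ** 2)

parents = [rng.random(16) for _ in range(20)]   # 20 current best variants

def iterate(parents, n_variants=200, keep=20):
    # The new variants (descendants) are always generated randomly:
    # each one is a randomly mutated copy of a random parent.
    variants = [parents[rng.integers(len(parents))]
                + rng.normal(scale=0.1, size=16)
                for _ in range(n_variants)]
    # The deterministic rule: keep only the best 10% (here 20 of 200).
    variants.sort(key=fitness, reverse=True)
    return variants[:keep]

for generation in range(50):
    parents = iterate(parents)
```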
The stochastic model is marked by a number of characteristics: e.g., the initial conditions are determined by different variances of the data dispersion, and the evolution of the network organisation is controlled by genetic rules of a purely stochastic nature; Gaussian noise proved to be the best “organiser”.
Another analogy between artificial networks and neuronal structures lies in the use of time in network algorithms.
For that reason, we gave our network's organisation a kind of temporal development: rather than being instantaneous, the connection between the artificial elements, like that between neurons, consumes a certain number of time units per synapse or, better said, per contact between the preceding and the subsequent neuron.
The latency of neurons, natural and artificial alike, is very important, as it enables feedback action.
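A minimal sketch of this idea, assuming a one-time-unit delay per synapse: each signal emitted through a recurrent connection is delivered one step later, so the feedback acts on the network's past state. The weights and the toy dynamics are illustrative, not the authors' model.

```python
# Per-synapse transmission delay enabling feedback (assumed toy dynamics).
import numpy as np

rng = np.random.default_rng(0)
n = 5
w = rng.normal(scale=0.5, size=(n, n))   # recurrent ("feedback") weights
in_transit = [np.zeros(n)]               # signals travelling through synapses

for t in range(10):
    incoming = in_transit.pop(0)         # arrives one time unit after emission
    pulse = 1.0 if t == 0 else 0.0       # brief external input
    state = np.tanh(incoming + pulse)
    in_transit.append(w @ state)         # emitted now, delivered at t + 1
    print(t, np.round(state, 2))
```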
Our network becomes organised under the effect of considerable noise. Then, however, the amount of noise must subside. If the network evolution gets stuck in a local minimum, the amount of noise has to be increased again. While this will make the network organisation waver, it will also increase the likelihood that the crisis in the local minimum will abate and the state of the network's self-organisation will improve substantially.
Our system allows for constant state-of-the-network reading by means of establishing the network energy level, i.e., basically ascertaining the progression of the network's rate of success in self-organisation. This is the principal parameter for the detection of any jam in a local minimum, and it is a piece of input information for the formator algorithm, which regulates the level of noise in the system.
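A hedged sketch of such a noise-regulating ("formator") loop follows: the noise subsides while the energy keeps improving and is raised again when the energy stagnates in a local minimum. The toy energy function, thresholds, and schedule are assumptions for illustration only, not the authors' algorithm.

```python
# Noise control driven by the network energy reading (assumed details).
import numpy as np

rng = np.random.default_rng(1)
energy = lambda s: np.sum(s**2) + np.sin(5 * s).sum()  # toy energy landscape
state = rng.normal(size=8)
noise, best, stall = 1.0, energy(state), 0

for step in range(500):
    candidate = state + rng.normal(scale=noise, size=state.shape)  # Gaussian noise
    if energy(candidate) < energy(state):
        state = candidate
    e = energy(state)
    if e < best - 1e-6:
        best, stall = e, 0
        noise *= 0.99        # progress: let the amount of noise subside
    else:
        stall += 1
        if stall > 25:       # jammed in a local minimum:
            noise *= 2.0     # raise the noise to shake the network loose
            stall = 0
```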
I review our series of works on galactic evolution, putting emphasis on some fundamentals of the models rather than on their detailed structure. I show, with some examples, how the mathematical formulation is produced from the physical input, how it can be modified as the physics is enriched, and what kind of results can be provided.
In Czech and Polish underground hard coal mines of the Upper Silesian Coal Basin, high-energy seismic phenomena are periodically recorded whose sources are located ahead of the longwall. Generally, these tremors are rooted in very strong, thick layers of sandstone that are strained to their deformation limit. The consequences are discontinuities and cracks with a range depending on the mechanical properties of the destroyed rocks, i.e., the mechanical parameters of the layers. Forecasting methods developed in the Central Mining Institute for stress concentration, seismic energy, and fault zones and their extent, together with methods of rock fracturing using liquid or explosives, allow precise identification of suitable locations for controlled fracturing of the rock mass in a pre-established direction. The size and range of the discontinuities have an impact on mining parameters, which depend on the basic exploitation intensity expressed by the average daily progress of the longwall face. The rock mass is locally weakened by discontinuities in the roof rock at the longwall face, whether caused by exploitation or induced by technical measures. To prevent rockbursts, measures are needed to reduce the amount of energy accumulating in the rock mass in the area of the longwall face. Knowledge of where stress is concentrated is extremely important for the development and implementation of effective preventative methods, and for many years several research centres have been working on defining the extent of these areas. In this paper, basic information is presented on methods developed by the Central Mining Institute and used in Polish hard coal mines for forecasting energy concentration and assessing how it can be reduced. (Jan Drzewiecki and Janusz Makówka)