The main focus of this paper is to identify a suitable distribution for the hydrological series of six catchments in Pakistan. Among others, the Gumbel and Generalized Extreme Value (GEV) distributions were applied in the frequency analysis, with parameters estimated by the probability weighted moments (PWM) and maximum likelihood (ML) methods. Based on goodness-of-fit tests, it was found that the GEV distribution fits most closely and that the PWM method is best suited for estimating the parameters. A peaks-over-threshold (POT) series model was also tried, and it likewise favored the GEV distribution. The quantile estimates based on the aforementioned distributions also showed that the GEV distribution is close to the observed values of the annual maximum peak (AMP) flows. Power comparisons of various goodness-of-fit tests for the Gumbel and GEV distributions, using the Log-normal and Weibull as alternative distributions at different significance levels and for sample sizes n = 10, 30, 50, showed that the Anderson-Darling (AD) test is the most powerful, followed by the Modified Anderson-Darling (MAD), Cramér-von Mises (CVM) and Kolmogorov-Smirnov (KS) tests.
Autogenous deformation is a phenomenon that originates in chemical shrinkage during cement hydration. The physical mechanism is the consumption of capillary water during hydration and the refinement of capillary porosity. The microscopic underpressure arising from thermodynamic equilibrium in a pore exerts a negative pressure on the solid skeleton of the paste. This behavior is simulated by means of FEM, where the microstructure of the cement paste is loaded directly by the underpressure. Validation shows that creep of the cement paste must also be taken into account if a good quantitative prediction is expected.
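The capillary underpressure invoked above is conventionally related to the meniscus radius and the ambient relative humidity; the standard relations (not taken from the paper itself) are the Laplace and Kelvin equations:

```latex
% Capillary (Laplace) underpressure in a pore of radius r,
% surface tension \gamma, contact angle \theta:
p_c = \frac{2\gamma \cos\theta}{r}
% Kelvin equation linking the capillary pressure to relative humidity RH
% (R gas constant, T temperature, V_m molar volume of water):
p_c = -\frac{RT}{V_m}\,\ln(\mathrm{RH})
```

As hydration refines the capillary porosity, r decreases and RH drops, so p_c grows, which is the load applied to the solid skeleton in the FEM simulation.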
Updating probabilities with information from only one hypothesis, thereby ignoring alternative hypotheses, is not only biased but leads to progressively imprecise conclusions. In psychology this phenomenon was studied in experiments with the "pseudodiagnosticity task". In probability logic the phenomenon that additional premises increase the imprecision of a conclusion is known as "degradation". The present contribution investigates degradation in the context of second-order probability distributions. It uses beta distributions as marginals, and copulae together with C-vines to represent dependence structures. It demonstrates that in Bayes' theorem the posterior distributions of the lower and upper probabilities approach 0 and 1 as more and more likelihoods belonging to only one hypothesis are included in the analysis.
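The interval-widening effect can be demonstrated with a simple first-order sketch (the paper works with full second-order beta/copula distributions; here the unspecified alternative likelihoods are only bounded to an interval, and all numbers are illustrative):

```python
# Sketch of "degradation": Bayes' rule applied with likelihoods given for
# only one hypothesis H, while each P(E_i | not-H) is left unconstrained in
# [eps, 1]. The posterior interval [lower, upper] widens toward [0, 1] as
# more likelihoods are multiplied in.
import math

def posterior_bounds(prior, likelihoods, eps=0.5):
    """Lower/upper bounds on P(H | E_1..E_n) when each P(E_i | not-H)
    is only known to lie in [eps, 1]."""
    num = prior * math.prod(likelihoods)
    lower = num / (num + (1 - prior) * 1.0)                # alternatives maximally likely
    upper = num / (num + (1 - prior) * eps ** len(likelihoods))  # ... minimally likely
    return lower, upper

prior = 0.5
liks = [0.8] * 10                      # ten likelihoods, all for H alone
lo1, up1 = posterior_bounds(prior, liks[:1])
lo10, up10 = posterior_bounds(prior, liks)
print(f"n=1:  [{lo1:.3f}, {up1:.3f}]   n=10: [{lo10:.3f}, {up10:.3f}]")
```

With one likelihood the interval is moderately tight; with ten, the lower bound drifts toward 0 and the upper toward 1, mirroring the degradation result stated above.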
In this paper, we focus on reasoning with IF-THEN rules in the propositional fragment of predicate calculus and on its modeling with neural networks. First, IF-THEN deduction from facts is defined. Then it is proved that for any non-contradictory set of IF-THEN rules and literals (representing facts) there exists a layered recurrent network with 2 hidden layers that can specify all IF-THEN deducible literals. If we denote the set of all literal IF-THEN consequences as D_0 and the set of all literal logical consequences as D, then obviously D_0 ⊂ D. Thus, D_0 can be considered an approximation of D. Using the designed network to simulate proof by contradiction, the approximation D_0 can easily be refined. Furthermore, the network may also be used to determine D; however, the algorithm that realizes the necessary network computations has exponential complexity.
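The set D_0 of IF-THEN deducible literals is a forward-chaining fixed point, which can be sketched directly (the rules and facts below are toy examples, not from the paper, and this is plain iteration rather than the recurrent network construction):

```python
# Sketch: compute D_0, the set of literals deducible from facts by
# repeatedly firing IF-THEN rules until a fixed point is reached.
def if_then_closure(rules, facts):
    """rules: list of (premises, conclusion) with premises a set of
    literals; facts: iterable of literals."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [({'a', 'b'}, 'c'), ({'c'}, 'd'), ({'e'}, 'f')]
facts = {'a', 'b'}
d0 = if_then_closure(rules, facts)
print(d0)   # {'a', 'b', 'c', 'd'} -- 'f' is not IF-THEN deducible
```

Literals such as 'f' that are logical consequences only via case analysis or contradiction would belong to D but not D_0, which is why D_0 is an approximation that the contradiction-proof simulation can refine.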
Accurate and nondestructive methods to determine the individual leaf area of plants are a useful tool in physiological and agronomic research. Determining the individual leaf area (LA) of rose (Rosa hybrida L.) involves measurements of leaf parameters such as length (L) and width (W), or some combination of these parameters. A two-year investigation was carried out under greenhouse conditions during 2007 (on thirteen cultivars) and 2008 (on one cultivar) to test whether a model could be developed to estimate the LA of rose across cultivars. Regression analysis of LA vs. L and W revealed several models that could be used for estimating the area of individual rose leaves. A linear model having L×W as the independent variable provided the most accurate estimate (highest r², smallest MSE, and smallest PRESS) of LA in rose. Validation of the L×W model on leaves measured in the 2008 experiment, coming from other cultivars of rose, showed that the correlation between calculated and measured rose LA was very high. Therefore, this model can estimate the LA of rose plants accurately and in large quantities in many experimental comparisons without the use of any expensive instruments.
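A model of the form LA = a + b·(L×W) is an ordinary least-squares fit on the product term, which can be sketched as below. The leaf measurements are synthetic and the coefficients illustrative, not the values reported for rose in the paper:

```python
# Sketch: fit LA = a + b*(L*W) by least squares and report r².
import numpy as np

rng = np.random.default_rng(0)
L = rng.uniform(3.0, 9.0, 40)               # leaf length, cm
W = rng.uniform(2.0, 6.0, 40)               # leaf width, cm
LA = 0.7 * L * W + rng.normal(0, 0.5, 40)   # "measured" leaf area, cm²

# Design matrix with intercept column and the L*W product term
X = np.column_stack([np.ones_like(L), L * W])
(a, b), *_ = np.linalg.lstsq(X, LA, rcond=None)

pred = a + b * L * W
r2 = 1 - np.sum((LA - pred) ** 2) / np.sum((LA - LA.mean()) ** 2)
print(f"a={a:.3f}  b={b:.3f}  r2={r2:.3f}")
```

Model selection in the study compared such candidates by r², MSE and PRESS; the single-product model wins because L×W already tracks leaf area almost linearly.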
This paper is concerned with simulations of cavitating flow around the NACA 0015 hydrofoil. The problem is solved with both multi-phase and single-phase flow models, for two different angles of attack and for two different densities of the computational mesh. Attention is focused on the comparison of the single-phase and multi-phase results.
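Cavitating-hydrofoil cases are conventionally characterized by the cavitation number, the nondimensional margin between reference and vapor pressure; a minimal sketch with illustrative water properties (these are not the flow conditions from the paper):

```python
# Sketch: cavitation number sigma = (p_ref - p_v) / (0.5 * rho * U^2);
# cavitation is expected where the local pressure coefficient satisfies
# -Cp > sigma, i.e. local pressure falls below the vapor pressure.
def cavitation_number(p_ref, p_vapor, rho, u):
    return (p_ref - p_vapor) / (0.5 * rho * u ** 2)

sigma = cavitation_number(p_ref=101325.0,   # Pa, ambient pressure
                          p_vapor=2340.0,   # Pa, water at ~20 °C
                          rho=998.0,        # kg/m³, water density
                          u=10.0)           # m/s, freestream velocity
print(f"sigma = {sigma:.2f}")
```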
Sinkholes, which often form over shallow mine workings, constitute a significant threat to buildings, infrastructure and, most of all, residents. These deformations often occur a long time after the completion of mining works. A considerable number of sinkholes form above shallow headings due to the loss of bearing capacity of old wooden supports. For these reasons, the problem of predicting the formation of sinkholes gains significance. This paper presents the assumptions of Strzałkowski's method of predicting discontinuous surface deformations (sinkholes). The deterministic model reflects the essence of the mechanism of destruction of the rock mass surrounding the void. The presented case study, another example of using the method to provide an ex post prognosis, indicates its practical usability. The computer programme used in this paper is helpful in carrying out the laborious calculations.
The paper presents a three-dimensional CFD analysis of two-phase (sand-water) slurry flows through a 263 mm diameter horizontal pipe for a mixture velocity range of 3.5-4.7 m/s and an efflux concentration range of 9.95-34%, with three particle sizes, viz. 0.165 mm, 0.29 mm and 0.55 mm, of density 2650 kg/m³. RNG k-ε turbulence closure equations with an Eulerian multi-phase model are used to simulate the various slurry flows. The simulated values of local solids concentration are compared with the experimental data and are found to be in good agreement for all particle sizes. The effects of particle size on various slurry flow parameters such as pressure drop, solid-phase velocity distribution, friction factor, granular pressure, turbulent viscosity, turbulent kinetic energy and its dissipation are analyzed.
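Two of the compared parameters, pressure drop and friction factor, are linked by the standard Darcy relation; a minimal sketch with an illustrative pressure gradient and mixture density (not values from the paper):

```python
# Sketch: Darcy friction factor recovered from a pressure gradient,
# f = (dp/dx) * D / (0.5 * rho * V^2).
def darcy_friction_factor(dp_per_m, diameter, rho_mix, velocity):
    return dp_per_m * diameter / (0.5 * rho_mix * velocity ** 2)

f = darcy_friction_factor(dp_per_m=1500.0,   # Pa/m, assumed pressure gradient
                          diameter=0.263,    # m (the 263 mm pipe)
                          rho_mix=1250.0,    # kg/m³, assumed mixture density
                          velocity=4.0)      # m/s, within the 3.5-4.7 m/s range
print(f"f = {f:.4f}")
```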
Technology has undergone rapid development in the past several decades, and we are now at a point where many technologies are available to help create smart cities. Many technology companies and research institutions, as well as political organizations, currently treat this field as a high priority. One can say that the biggest challenge for smart cities is not the technologies themselves, but the merging of all available technologies into one symbiotic unit that fulfills the expected objectives. Smart cities are about connecting subsystems, sharing and evaluating data, and providing quality of life and satisfaction to citizens. We have various models of transportation systems, optimization of energy usage, street lighting systems, building management systems and urban transport optimization; currently, however, such models are dealt with separately. In this paper, we provide an overview of the smart city concept and discuss why multi-agent systems are the right tool for modeling smart cities. The biggest challenge lies in connecting and linking the particular subsystems within a smart city. A model of smart city building blocks is provided and demonstrated with one particular example: a smart street lighting system. The focus is on the decomposition of the system into subsystems as well as a description of the particular modules. Since each individual entity can be modeled as an agent with its beliefs, desires and intentions, we suggest using multi-agent systems as a tool for modeling the connections between systems within the smart city and assessing how best to use the data from those systems.
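The belief-desire-intention view of the street lighting example can be sketched as a minimal agent model. The class, attribute names and decision rules below are illustrative assumptions, not an architecture taken from the paper:

```python
# Sketch: each street lamp as a BDI-style agent with beliefs (local sensor
# readings), a fixed desire (keep dark streets lit while saving energy),
# and a derived intention (off / dim / full brightness).
class LampAgent:
    def __init__(self, lamp_id):
        self.lamp_id = lamp_id
        self.beliefs = {'ambient_light': 1.0, 'motion': False}
        self.intention = 'off'

    def perceive(self, ambient_light, motion):
        """Update beliefs from local sensors (0.0 = dark, 1.0 = daylight)."""
        self.beliefs['ambient_light'] = ambient_light
        self.beliefs['motion'] = motion

    def deliberate(self):
        """Derive an intention that serves the energy-vs-illumination desire."""
        if self.beliefs['ambient_light'] > 0.5:
            self.intention = 'off'       # daylight: no lighting needed
        elif self.beliefs['motion']:
            self.intention = 'full'      # dark and someone nearby
        else:
            self.intention = 'dim'       # dark but empty street
        return self.intention

street = [LampAgent(i) for i in range(3)]
street[0].perceive(0.1, True)    # dark, pedestrian nearby
street[1].perceive(0.1, False)   # dark, empty street
street[2].perceive(0.9, False)   # daylight
print([lamp.deliberate() for lamp in street])  # ['full', 'dim', 'off']
```

Connecting such agents to other subsystems (traffic models, energy optimization) would then amount to letting agents exchange beliefs, which is where the multi-agent framing pays off.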
The topic of the presented paper is a discussion of possible approaches to the homogenization of synaptic information functions from the system-engineering point of view. Homogenization is a significant step toward the construction of effective models that should enable understanding of synaptic information functions. An attempt at a pragmatic language translation within the multilingual environment is proposed and briefly discussed.