This paper presents an observation on the adaptation of Hopfield neural network dynamics configured as a relaxation-based search algorithm for static optimization. More specifically, two adaptation rules for updating the constraint weighting coefficients of the Hopfield dynamics are discussed: one heuristically formulated and the other based on gradient descent. Applying the two adaptation rules is shown to lead to update equations of identical form. This finding suggests that the heuristically formulated rule and the gradient-descent-based rule are analogues of each other. Accordingly, in the current context, common-sense reasoning by a domain expert appears to possess a corresponding mathematical framework.
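As a rough illustration of why such rules can coincide, consider a penalized Hopfield-style energy E(x) = f(x) + sum_i c_i P_i(x), where P_i(x) >= 0 measures the violation of constraint i. The sketch below is a minimal, assumption-laden example (the energy form, learning rate, and both update rules are illustrative choices, not the paper's formulation): the heuristic "raise the weight of violated constraints" and gradient ascent on the relaxed objective with respect to c_i both yield an update proportional to the current violation.

```python
import numpy as np

# Penalized Hopfield-style energy (an assumption of this sketch, not the paper's
# exact formulation): E(x) = f(x) + sum_i c_i * P_i(x), with P_i(x) >= 0 the
# violation of constraint i and c_i its weighting coefficient.

def heuristic_update(c, penalties, eta=0.1):
    """Heuristic rule: raise each constraint weight in proportion to how
    strongly that constraint is currently violated."""
    return c + eta * penalties

def gradient_update(c, penalties, eta=0.1):
    """Gradient-style rule: ascend L(c) = f(x*) + sum_i c_i * P_i(x*) in c;
    since dL/dc_i = P_i(x*), the update has the same form as the heuristic."""
    return c + eta * penalties

c = np.array([1.0, 1.0, 1.0])          # constraint weighting coefficients
penalties = np.array([0.0, 0.4, 1.2])  # current violations P_i(x*)
print(heuristic_update(c, penalties))  # [1.   1.04 1.12]
print(gradient_update(c, penalties))   # identical: [1.   1.04 1.12]
```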
This paper presents a neural-network-based approach for forecasting annual transport energy demand from several socio-economic indicators. To analyze the influence of economic and social indicators on transport energy demand, gross domestic product (GDP), population, and the total number of vehicles are selected. The approach is structured as a hierarchical artificial neural network (ANN) model based on the supervised multi-layer perceptron (MLP), trained with the back-propagation (BP) algorithm. The input variables are the previous year's transport energy demand, GDP, population, and the total number of vehicles; the output variable is the energy demand of the transportation sector in Million Barrels of Oil Equivalent (MBOE). In the proposed hierarchical architecture, the inputs to the final level are obtained as the outputs of the preceding levels. Actual data for Iran from 1968 to 2007 are used to train the hierarchical ANN and to illustrate the capability of the approach. A comparison of the model's predictions with those of a conventional regression model shows the superiority of the proposed approach. Furthermore, the transport energy demand of Iran for the period 2008 to 2020 is estimated.
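To make the hierarchical structure concrete, the following is a minimal sketch under stated assumptions: synthetic data, an arbitrary split of the indicators between first-level networks, and scikit-learn's MLPRegressor standing in for the BP-trained MLPs described above. It illustrates only the idea that the final level consumes the outputs of the earlier levels.

```python
# Hedged sketch of the hierarchical-MLP idea (layout and data are illustrative
# assumptions, not the paper's exact configuration): first-level MLPs each map a
# subset of the indicators to an intermediate estimate, and a final-level MLP
# takes those estimates as its inputs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 40
X = rng.normal(size=(n, 4))   # columns: last year's demand, GDP, population, vehicles
y = X @ np.array([0.5, 0.3, 0.1, 0.2]) + 0.05 * rng.normal(size=n)  # synthetic demand (MBOE)

# First level: one MLP per pair of indicators.
level1 = [MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
          .fit(X[:, i:i + 2], y) for i in (0, 2)]

# Final level: trained on the first-level outputs.
Z = np.column_stack([m.predict(X[:, i:i + 2]) for m, i in zip(level1, (0, 2))])
final = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0).fit(Z, y)

z_new = np.column_stack([m.predict(X[:5, i:i + 2]) for m, i in zip(level1, (0, 2))])
print(final.predict(z_new))   # forecasts for the first five rows
```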
High-resolution (0.05 Å) spectra are used for the determination of the profiles of the diffuse interstellar bands 5780 and 5797 originating in single clouds. It is shown that some of the clouds may contain agents of certain diffuse bands whereas others may not. The profiles of diffuse bands observed in distant stars contain contributions from several clouds, differing in optical properties, which share the Doppler displacements observed in interstellar sodium lines. The Doppler splitting inside the diffuse bands' profiles is, however, efficiently veiled because of the large intrinsic widths of the bands under consideration.
Artificial neural networks (ANNs) are among the most widely preferred artificial intelligence techniques for brain image segmentation. The most commonly used ANN is the supervised Back Propagation Neural Network (BPN). Even though BPNs guarantee high efficiency, they are computationally impractical because of their long convergence time. In this work, this computational complexity is tackled with the proposed high-speed BPN algorithm (HSBPN), a modified approach in which the weight vectors are calculated without any training methodology. Magnetic resonance (MR) brain tumor images of three stages, namely severe, moderate and mild, are used in this work. An extensive feature set is extracted from these images and used as input to the neural network. A comparative analysis of the conventional BPN and the HSBPN is performed in terms of convergence time and segmentation efficiency. Experimental results show the superiority of the HSBPN on these performance measures.
In an IaaS (Infrastructure as a Service) cloud environment, users are provisioned with virtual machines (VMs). However, the initialization and resource allocation of virtual machines are not instantaneous and usually take minutes. Therefore, to realize efficient resource provisioning, it is necessary to know in advance the exact amount of resources that needs to be allocated. For this purpose, this paper proposes a high-accuracy self-adaptive prediction method using an optimized neural network. The characteristics of users' demands and preferences are analyzed first. To deal with the specific circumstances, a dynamic self-adaptive prediction model is adopted. Several basic predictors are used to predict resource requirements in simple circumstances. A BP neural network with a self-adjusting learning rate and momentum is used to optimize the prediction results. High-accuracy self-adaptive prediction is achieved by using the weighted outputs of the basic predictors, in addition to the historical data, as training data. Feedback control is introduced to improve overall operational performance. The method is statistically validated using multiple evaluation criteria. The experimental results show that the method is promising for effectively predicting resource requirements in the cloud environment.
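The abstract does not spell out the self-adjusting scheme; as one plausible illustration, the sketch below combines momentum with a "bold driver" style learning-rate adjustment. All names, parameter values, and the toy error surface are assumptions of this sketch, not the paper's method.

```python
import numpy as np

def train_step(w, grad, velocity, lr, prev_err, err, up=1.05, down=0.5, mu=0.5):
    """One weight update with momentum and a self-adjusting learning rate."""
    if err <= prev_err:
        lr *= up          # error fell: cautiously increase the learning rate
    else:
        lr *= down        # error rose: cut the learning rate back sharply
    velocity = mu * velocity - lr * grad   # momentum-smoothed step
    return w + velocity, velocity, lr

# Toy quadratic error surface err(w) = ||w||^2 with gradient 2w (an assumption
# of this sketch; in the paper the errors would come from the BP network itself).
w, v, lr, prev = np.array([2.0, -3.0]), np.zeros(2), 0.05, np.inf
for _ in range(50):
    err = float(w @ w)
    w, v, lr = train_step(w, 2 * w, v, lr, prev, err)
    prev = err
print(w, lr)   # w has moved close to the minimum at the origin
```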
A test statistic for the homogeneity of two or more covariance matrices is presented for the case in which the distributions may be non-normal and the dimension may exceed the sample size. Using the Frobenius norm of the difference between the covariance matrices under the null and alternative hypotheses, the statistic is constructed as a linear combination of consistent, location-invariant estimators of the trace functions that constitute the norm. These estimators are defined as U-statistics, and the corresponding theory is exploited to derive the normal limit of the statistic under a few mild assumptions as both the sample size and the dimension grow large. Simulations are used to assess the accuracy of the statistic.
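For the two-sample case, the standard identity that motivates building such a statistic from trace estimators can be written out explicitly (stated here for illustration; the paper's general statistic covers two or more groups):

```latex
% Two-sample decomposition of the Frobenius norm into trace functions:
\[
  \|\Sigma_1 - \Sigma_2\|_F^{2}
  = \operatorname{tr}\!\bigl[(\Sigma_1 - \Sigma_2)^{2}\bigr]
  = \operatorname{tr}(\Sigma_1^{2}) - 2\,\operatorname{tr}(\Sigma_1\Sigma_2)
    + \operatorname{tr}(\Sigma_2^{2}),
\]
% so the test statistic can be assembled from consistent, location-invariant
% U-statistic estimators of tr(\Sigma_1^2), tr(\Sigma_1\Sigma_2) and tr(\Sigma_2^2),
% with H_0 : \Sigma_1 = \Sigma_2 corresponding to a zero value of the norm.
```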
The laws of gravity and mass interactions inspire the gravitational search algorithm (GSA), which finds optimal regions of complex search spaces through the interaction of individuals in a population of particles. Although GSA has proven effective in both science and engineering, it still tends to suffer from premature convergence, especially when facing complex problems. In this paper, we propose a new hybrid algorithm, GA-GSA, that integrates the genetic algorithm (GA) and GSA to avoid premature convergence and to improve the search ability of GSA. In GA-GSA, crossover and mutation operators from GA are introduced into GSA to help the search jump out of local optima. To demonstrate the search ability of the proposed GA-GSA, 23 complex benchmark test functions were employed, including unimodal and multimodal high-dimensional test functions as well as multimodal test functions with fixed dimensions. Wilcoxon signed-rank tests were also used for statistical analysis of the results obtained by PSO, GSA, and GA-GSA. Experimental results demonstrate that the proposed algorithm is both efficient and effective.
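A minimal sketch of the hybrid idea is given below, under stated assumptions (population size, decay schedule, arithmetic crossover, and Gaussian mutation are illustrative choices, and the Kbest restriction of standard GSA is omitted for brevity). It shows a GSA velocity/position update followed by GA-style crossover and mutation.

```python
# Hedged sketch of the GA-GSA idea, not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                       # unimodal benchmark f(x) = sum x_i^2
    return float(np.sum(x ** 2))

def ga_gsa(f, dim=10, pop=30, iters=200, G0=100.0, alpha=20.0, pc=0.7, pm=0.05):
    X = rng.uniform(-5, 5, (pop, dim))
    V = np.zeros((pop, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        best, worst = fit.min(), fit.max()
        m = (worst - fit) / (worst - best + 1e-12)       # masses (minimization)
        M = m / (m.sum() + 1e-12)
        G = G0 * np.exp(-alpha * t / iters)              # decaying gravitational constant
        # gravitational acceleration on agent i from every other agent
        A = np.zeros_like(X)
        for i in range(pop):
            diff = X - X[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            A[i] = np.sum((rng.random(pop) * G * M / dist)[:, None] * diff, axis=0)
        V = rng.random((pop, dim)) * V + A               # GSA velocity update
        X = X + V
        # GA operators: arithmetic crossover between consecutive pairs, Gaussian mutation
        for i in range(0, pop - 1, 2):
            if rng.random() < pc:
                a = rng.random()
                X[i], X[i + 1] = a * X[i] + (1 - a) * X[i + 1], a * X[i + 1] + (1 - a) * X[i]
        X += (rng.random(X.shape) < pm) * rng.normal(0, 0.1, X.shape)
    return X[np.argmin([f(x) for x in X])]

print(sphere(ga_gsa(sphere)))        # best fitness found, should be close to 0
```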
Time series often contain complex nonlinear and chaotic patterns that are difficult to forecast. This paper proposes a novel hybrid forecasting model, known as GLSSVM, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM). The GMDH is used to determine the useful input variables for the LSSVM model, and the LSSVM then performs the time series forecasting. Three well-known time series data sets, drawn from real-life applications, are used in this study to demonstrate the effectiveness of the forecasting model. The results of the proposed model were compared with those of the individual GMDH and LSSVM models. The experimental results indicate that the hybrid model is a powerful tool for modeling time series data and provides a promising technique for time series forecasting.
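The sketch below illustrates only the LSSVM forecasting component under stated assumptions (an RBF kernel, a toy sine series, and fixed lags 1 and 2 standing in for the GMDH-selected inputs); it solves the standard LSSVM regression problem in the dual.

```python
# Hedged sketch of the LSSVM stage only; the GMDH input-selection stage is
# assumed to have already been run and is represented by a fixed choice of lags.
# Standard LSSVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y],
# then predict with sum_i alpha_i K(x, x_i) + b.
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                     # bias b, dual weights alpha

def lssvm_predict(Xtrain, alpha, b, Xnew, sigma=1.0):
    return rbf(Xnew, Xtrain, sigma) @ alpha + b

# toy series embedded with lags 1 and 2 (standing in for GMDH-selected inputs)
s = np.sin(0.3 * np.arange(120))
X = np.column_stack([s[1:-1], s[:-2]])         # [y_{t-1}, y_{t-2}]
y = s[2:]
b, alpha = lssvm_fit(X[:100], y[:100])
pred = lssvm_predict(X[:100], alpha, b, X[100:])
print(np.mean((pred - y[100:]) ** 2))          # small one-step-ahead test error
```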
Let $q \ge 3$ be a positive integer. For any integers $m$ and $n$, the two-term exponential sum $C(m,n,k;q)$ is defined by $C(m,n,k;q) = \sum _{a=1}^q e ({(ma^k +na)}/{q})$, where $e(y)={\rm e}^{2\pi {\rm i} y}$. In this paper, we use the properties of Gauss sums and the estimate for Dirichlet character of polynomials to study the mean value problem involving two-term exponential sums and Dirichlet character of polynomials, and give an interesting asymptotic formula for it.
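For concreteness, the sum can be evaluated directly for small parameters (purely illustrative; the function name and parameter choices below are not from the paper):

```python
# Hedged illustration: direct numerical evaluation of the two-term exponential
# sum C(m, n, k; q) = sum_{a=1}^{q} e((m*a^k + n*a)/q), with e(y) = exp(2*pi*i*y).
import cmath

def C(m, n, k, q):
    return sum(cmath.exp(2j * cmath.pi * (m * pow(a, k, q) + n * a) / q)
               for a in range(1, q + 1))

print(abs(C(1, 1, 2, 7)))   # |C| for a small example; for k = 2 and prime q not
                            # dividing m, such sums have modulus sqrt(q)
```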