The social foraging behavior of Escherichia coli bacteria has recently been studied by several researchers to develop a new algorithm for distributed optimization and control. The Bacterial Foraging Optimization Algorithm (BFOA), as it is now called, has many features analogous to classical Evolutionary Algorithms (EA). Passino [1] pointed out that foraging algorithms can be integrated into the framework of evolutionary algorithms. In this way, BFOA can be used to model some key survival activities of an evolving population. This article proposes a hybridization of BFOA with another very popular optimization technique of current interest, Differential Evolution (DE). The computational chemotaxis of BFOA, which may also be viewed as a stochastic gradient search, is coupled with DE-type mutation and crossover of the optimization agents. This leads to a new hybrid algorithm that has been shown to overcome the problems of slow and premature convergence of both classical DE and BFOA over several benchmark functions as well as real-world optimization problems.
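As a rough illustration of how DE operators can be interleaved with chemotaxis, the Python sketch below applies a tumble-and-move step followed by DE/rand/1 mutation and binomial crossover to each agent. This is a minimal sketch under assumed parameter names (F, CR, step_size) and is not the authors' exact hybrid scheme.

    import numpy as np

    def de_chemotaxis_step(population, fitness_fn, F=0.8, CR=0.9, step_size=0.1):
        """One hybrid step: a chemotactic tumble-and-move followed by DE/rand/1
        mutation and binomial crossover on each agent (illustrative sketch only)."""
        n, dim = population.shape
        new_pop = population.copy()
        for i in range(n):
            # Chemotaxis: move along a random unit direction, keep the move if it improves fitness
            direction = np.random.uniform(-1, 1, dim)
            direction /= np.linalg.norm(direction)
            candidate = population[i] + step_size * direction
            if fitness_fn(candidate) < fitness_fn(population[i]):
                new_pop[i] = candidate
            # DE/rand/1 mutation: combine three distinct agents
            idx = np.random.choice([j for j in range(n) if j != i], 3, replace=False)
            a, b, c = population[idx]
            mutant = a + F * (b - c)
            # Binomial crossover between the current agent and the mutant
            mask = np.random.rand(dim) < CR
            trial = np.where(mask, mutant, new_pop[i])
            # Greedy selection keeps the better of trial and current agent
            if fitness_fn(trial) < fitness_fn(new_pop[i]):
                new_pop[i] = trial
        return new_pop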
Optimization of sensor positions is a challenging problem in wireless sensor networks, since the deployment process significantly affects energy consumption, surveillance ability, and network lifetime. The vector-based algorithm (VEC) and the Voronoi-based algorithm (VOR) are two existing approaches. However, VEC is sensitive to the initial deployment, while VOR always moves nodes toward coverage holes. Moreover, the nodes in a network may oscillate for a long time before they reach the equilibrium state. This paper presents an initially central deployment model that is cost effective and easy to implement. Within this model, we present a new distributed deployment algorithm based on boundary expansion and virtual force (BEVF). The proposed scheme enables nodes to move to the boundary rapidly and ultimately reach equilibrium quickly. A node needs only the locations of its nearby nodes and the boundary information, thereby avoiding the communication cost of transmitting global information. A distance threshold is adopted to limit node movement and to avoid node oscillation. Finally, we compare BEVF with existing algorithms. Results show that the proposed algorithm achieves much larger coverage and consumes less energy.
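The virtual-force idea can be sketched as follows: neighbors closer than a distance threshold repel a node while more distant ones attract it, and the resulting move is clipped to damp oscillation. The constants and the exact force law below are illustrative assumptions, not the BEVF algorithm itself.

    import numpy as np

    def virtual_force_move(node, neighbors, d_th, k_rep=1.0, k_att=1.0, max_step=0.5):
        """Compute one displacement of a sensor node under a simple virtual-force rule
        (illustrative sketch; force law and constants are assumptions)."""
        force = np.zeros(2)
        for nb in neighbors:
            diff = node - nb
            dist = np.linalg.norm(diff)
            if dist == 0:
                continue
            unit = diff / dist
            if dist < d_th:
                force += k_rep * (d_th - dist) * unit   # repulsion pushes crowded nodes apart
            else:
                force -= k_att * (dist - d_th) * unit   # attraction pulls distant nodes closer
        step = np.linalg.norm(force)
        if step > max_step:                             # clip the move to limit oscillation
            force = force / step * max_step
        return node + force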
Web caching is a technology for improving network traffic on the Internet. It is the temporary storage of Web objects for later retrieval. Three significant advantages of Web caching are reductions in bandwidth consumption, server load, and latency. These advantages make the Web less expensive while providing better performance. This research aims to introduce advanced machine learning methods for a classification problem in Web caching that requires a decision to cache or not to cache Web objects in a proxy cache server. The challenges in this classification problem include ranking the attributes and significantly improving classification accuracy. This research applies four methods, namely Classification and Regression Trees (CART), Multivariate Adaptive Regression Splines (MARS), Random Forest (RF), and TreeNet (TN), to classification in Web caching. The experimental results reveal that CART performed extremely well in classifying Web objects from the existing log data, with the size of Web objects as a significant attribute for Web cache performance enhancement.
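A minimal sketch of the cache/no-cache decision as a classification task is shown below, using a CART-style decision tree from scikit-learn on hypothetical proxy-log features (object size, retrieval time, access frequency); the data and feature set are assumptions for illustration only.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Hypothetical proxy-log features per Web object: size (bytes), retrieval time (ms),
    # access frequency; label 1 = worth caching, 0 = not.
    X = np.array([[2048, 120, 15], [512000, 900, 2], [1024, 80, 40], [250000, 600, 1]])
    y = np.array([1, 0, 1, 0])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)   # CART-style tree
    clf.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    print("feature importances:", clf.feature_importances_)     # e.g., weight of object size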
Since their appearance in 1993 as the first codes to approach the Shannon limit, turbo codes have given a new direction to the channel coding field, especially since they have been adopted in multiple telecommunication standards as well as in deep-space communication. A robust interleaver can significantly contribute to the overall performance of a turbo code system, and the search for a good interleaver is a complex combinatorial optimization problem. In this paper, we present genetic algorithms and differential evolution, two bio-inspired approaches that have proven able to solve non-trivial combinatorial optimization tasks, as promising optimization methods for finding a well-performing interleaver for large frame sizes.
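Since an interleaver is simply a permutation of the frame indices, an evolutionary search over permutations can be sketched as below. The fitness function (adjacent-index spread) and the mutation-only evolutionary loop are deliberate simplifications; a real turbo-code study would score BER or the distance spectrum and would typically include a permutation-preserving crossover.

    import random

    def random_interleaver(n):
        perm = list(range(n))
        random.shuffle(perm)
        return perm

    def spread_fitness(perm):
        # Toy fitness: minimum spread between adjacent positions after interleaving (larger is better)
        return min(abs(perm[i] - perm[i - 1]) for i in range(1, len(perm)))

    def evolve_interleaver(n, pop_size=30, generations=200, mut_rate=0.2):
        population = [random_interleaver(n) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=spread_fitness, reverse=True)
            survivors = population[: pop_size // 2]        # truncation selection
            children = []
            while len(survivors) + len(children) < pop_size:
                child = random.choice(survivors)[:]
                if random.random() < mut_rate:
                    i, j = random.sample(range(n), 2)      # swap mutation keeps it a permutation
                    child[i], child[j] = child[j], child[i]
                children.append(child)
            population = survivors + children
        return max(population, key=spread_fitness)

    print(evolve_interleaver(64)[:10])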
Descriptive analysis of the magnitude and situation of road safety in general, and of road accidents in particular, is important, but understanding data quality, the factors associated with dangerous situations, and other interesting patterns in the data is of even greater importance. Under the umbrella of information architecture research for road safety in developing countries, the objective of this machine learning experimental research is to explore data quality issues, analyze trends, and predict the role of road users in possible injury risks. The research employed TreeNet, Classification and Regression Trees (CART), Random Forest (RF), and a hybrid ensemble approach. To identify relevant patterns and illustrate the performance of these techniques for the road safety domain, road accident data collected from the Addis Ababa Traffic Office is subjected to several analyses. Empirical results illustrate that data quality is a major problem that needs an architectural guideline, and that the prototype models can classify accidents with promising accuracy. In addition, an ensemble technique proves to be better in terms of predictive accuracy in the domain under study.
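For illustration, a hybrid ensemble over accident records could be sketched as a soft-voting combination of a CART-style tree and a random forest, as below; the features and labels are hypothetical placeholders and not the Addis Ababa data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical accident records: [road user age, vehicle speed (km/h), hour of day];
    # label 1 = injury accident, 0 = property damage only.
    X = np.array([[25, 80, 22], [40, 30, 9], [18, 110, 1], [55, 45, 14], [30, 95, 23], [60, 40, 11]])
    y = np.array([1, 0, 1, 0, 1, 0])

    ensemble = VotingClassifier(
        estimators=[("cart", DecisionTreeClassifier(max_depth=3)),
                    ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
        voting="soft")                       # average predicted probabilities of both models
    ensemble.fit(X, y)
    print(ensemble.predict([[22, 100, 2]]))  # predicted class for a new record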
Scheduling is one of the core steps in efficiently exploiting the capabilities of heterogeneous distributed computing systems and is an NP-complete problem; therefore, using meta-heuristic algorithms is a suitable approach to cope with its difficulty. In many meta-heuristic algorithms, generating individuals in the initialization step has an important effect on the convergence behavior of the algorithm and on the final solutions. Using pure heuristics to generate one or more near-optimal individuals in the initialization step can improve the final solutions obtained by meta-heuristic algorithms. Pure heuristics may also be used on their own to generate schedules in many real-world situations where applying meta-heuristic methods is too difficult or inappropriate. Different criteria can be used to evaluate the efficiency of scheduling algorithms, the most important of which are makespan and flowtime. In this paper, we propose an efficient pure heuristic method and compare its performance with five popular heuristics for minimizing makespan and flowtime in heterogeneous distributed computing systems. We also investigate the effect of these pure heuristics when used to initialize a simulated annealing meta-heuristic approach for scheduling tasks in heterogeneous environments.
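The two criteria and a representative pure heuristic can be sketched as follows: given an expected-time-to-compute (ETC) matrix, makespan is the latest machine finish time and flowtime is the sum of task completion times, while the classic min-min heuristic greedily assigns the task with the smallest earliest completion time. The min-min rule here stands in for a generic pure heuristic and is not the specific method proposed in the paper.

    def evaluate(schedule, etc):
        """Given a list of (task, machine) assignments and an ETC matrix etc[task][machine],
        return (makespan, flowtime)."""
        machine_ready = [0.0] * len(etc[0])
        completion = {}
        for task, machine in schedule:
            machine_ready[machine] += etc[task][machine]
            completion[task] = machine_ready[machine]
        return max(machine_ready), sum(completion.values())

    def min_min(etc):
        """Classic min-min heuristic: repeatedly pick the unscheduled task whose earliest
        completion time is smallest and assign it to that machine."""
        n_tasks, n_machines = len(etc), len(etc[0])
        ready = [0.0] * n_machines
        unscheduled = set(range(n_tasks))
        schedule = []
        while unscheduled:
            task, machine, _ = min(
                ((t, m, ready[m] + etc[t][m]) for t in unscheduled for m in range(n_machines)),
                key=lambda x: x[2])
            schedule.append((task, machine))
            ready[machine] += etc[task][machine]
            unscheduled.remove(task)
        return schedule

    etc = [[3, 6], [5, 2], [4, 4], [7, 3]]   # hypothetical ETC matrix: 4 tasks, 2 machines
    sched = min_min(etc)
    print(sched, evaluate(sched, etc))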