A novel bee swarm optimization algorithm for numerical function optimization


  • Reza Akbari,
  • Alireza Mohammadi,
  • Koorush Ziarati
http://dx.doi.org/10.1016/j.cnsns.2009.11.003

Abstract

Optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population based techniques. In this paper, a novel algorithm called bee swarm optimization, or BSO, and two extensions for improving its performance are presented. BSO is a population based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which the bees use to adjust their flying trajectories. As the first extension, the BSO algorithm introduces approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO–RP and BSO–RPTVW) are compared, on a set of well known numerical test functions, with existing algorithms based on the intelligent behavior of honey bees. The experimental results show that the BSO algorithms are effective and robust: they produce excellent results and outperform the other algorithms investigated in this study.

Keywords

  • Bee swarm optimization;
  • Numerical function optimization;
  • Time-varying weights;
  • Repulsion factor

1. Introduction

Optimization has been an active area of research for several decades. As many real-world optimization problems become increasingly complex, better optimization algorithms are always needed. In optimization problems, the objective is to find the minimum or maximum of the function under consideration. So, unconstrained optimization problems can be formulated as a D-dimensional minimization or maximization problem:

equation (1)
$$\min\ (\text{or } \max)\ f(x), \qquad x = (x_1, x_2, \ldots, x_D)$$
where D is the number of parameters to be optimized. There are many population based optimization techniques available for unconstrained numerical optimization. Genetic algorithms (GA), Particle Swarm Optimization (PSO), and Bee Algorithms (BA) are among the most popular optimization algorithms that employ a population of individuals to solve the problem at hand. The success of a population based method depends on its ability to establish a proper balance between exploration and exploitation. A poor balance between exploration and exploitation may result in a weak optimization method that suffers from premature convergence, trapping in local optima, and stagnation.
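As a minimal sketch of such a D-dimensional minimization problem, consider the well known Sphere function f(x) = Σ_d x_d², whose global minimum is f(0, …, 0) = 0. (The Sphere function is our illustrative choice here; the paper's full benchmark set appears in Section 4.)

```python
# Illustrative D-dimensional minimization problem: the Sphere function
# f(x) = sum_d x_d^2, with global minimum f(0, ..., 0) = 0.
def sphere(x):
    return sum(xd * xd for xd in x)

D = 5
print(sphere([0.0] * D))            # the known global optimum: 0.0
assert sphere([1.0, 2.0]) == 5.0    # 1^2 + 2^2
```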

GA is one of the most popular evolutionary algorithms, in which a population of individuals evolves according to a set of rules such as selection, crossover, and mutation [1]. In such algorithms, exploitation is obtained through selection, where individuals move toward regions with higher fitness, while exploration is provided by perturbing individuals using the crossover and mutation operators [2].

PSO is a swarm intelligence technique developed by Eberhart and Kennedy [3], inspired by the social behavior of bird flocking and fish schooling. PSO has been shown to successfully optimize a wide range of continuous functions. The search for an optimum in PSO is an iterative process based on random decisions. In PSO, exploitation is obtained by selecting the particle with the best fitness as gbest and moving toward that particle. Each particle also memorizes its previous best position, represented as pbest, to explore the space between its previous best position and the global best position. Although PSO provides powerful methods for optimization problems, it suffers from problems such as premature convergence and stagnation. Approaches such as the inertia weight and time-varying coefficients were proposed to alleviate these problems [4], [5], [6] and [7].

The algorithms inspired by the intelligent behaviors of honey bees constitute a third class of population based methods. These algorithms have been developed and applied to different engineering fields [8], [9], [10], [11], [12], [13] and [14]. However, only a few algorithms based on this idea have been presented in the literature for numerical optimization. A numerical optimization algorithm based on the foraging behavior of honey bees, called Artificial Bee Colony (ABC), was proposed by Karaboga in [15]. In ABC, the employed bees try to find food sources and advertise them; the onlooker bees follow their interesting employed bees, and the scout bees fly spontaneously to find better food sources. A virtual bee algorithm (VBA) was developed by Yang in [16]. The VBA optimizes a numerical function in two dimensions using a swarm of virtual bees which move randomly in the phase space and interact by finding food sources corresponding to the encoded values of the function; the intensity of the interactions between these bees yields the solution of the optimization problem. Sundareswaran et al. proposed a different approach based on the natural behavior of honey bees in nectar collection, in which randomly generated worker bees are forced to move in the direction of the elite bee, which represents the best possible solution [17]. The bees move based on a probabilistic approach, and the step distance of the bees' flight is a variable parameter of the algorithm. Experiments showed that algorithms based on the intelligent behavior of honey bees are successful in solving numerical optimization problems and outperform other population based algorithms such as PSO, GA, and ACO [15], [16], [17], [18] and [19].

As in other swarm based optimization algorithms, it is important to establish a proper balance between exploration and exploitation in bee swarm optimization approaches. In a bee swarm, the different behaviors of the bees make it possible to establish a powerful balancing mechanism between exploration and exploitation. This property provides the opportunity to design algorithms that are more efficient than other population based algorithms such as PSO and GA.

Exploration of the search space is achieved in two different ways. The first is provided by the scout bees: their spontaneous searching leads them to explore new regions beyond those defined by the employed or onlooker bees. The patterns by which the foragers and onlookers control their movements may also provide good exploration. For example, experienced foragers can use historical information about the location and quality of food sources to adjust their movement patterns in the search space. This alleviates the hard constriction on the search trajectory of a forager bee and extends the search trajectory to a search area. Usually, in bee swarms, the exploitation mechanism is based on two different behaviors. In the first, a honey bee advertises its food source by dancing in the dance area; this behavior encourages other bees to select a dancer bee and follow her. In the second, a forager tries to exploit its food source without advertising it.

This work presents a novel method based on the foraging behaviors of honey bees. The proposed approach uses different types of bees to optimize numerical functions, and each type employs a distinct moving pattern. The scout bees fly randomly over their nearby regions. An onlooker bee selects an experienced forager bee as its interesting elite and moves toward that bee. An experienced forager bee memorizes the information of the best food source it has found so far; it selects the best experienced forager bee as the elite bee and adjusts its position based on cognitive and social knowledge. The BSO algorithm utilizes a set of approaches to mitigate the stagnation and premature convergence problems, including a repulsion factor, penalizing fitness, random walking, time-varying weights, and alleviating the hard constriction on the moving direction of the bees.

The paper is organized as follows. Section 2 introduces the intelligent behaviors of honey bees. A description of the proposed BSO algorithm, along with the approaches for mitigating stagnation and premature convergence, is presented in Section 3. Section 4 reports the numerical test functions, the experimental settings of the algorithms, and an experimental analysis of the proposed approach in comparison with other algorithms. Finally, Section 5 concludes this work.

2. Intelligent behaviors of honey bees

A swarm of honey bees is capable of performing complex tasks using relatively simple rules of individual bees' behavior. Collecting, processing, and advertising nectar are examples of the intelligent behaviors of honey bees [12]. One of the main differences of a bee swarm in comparison with other population based methods is that it contains different types of bees, such as scouts, onlookers, and foragers. A bee swarm can therefore provide different types of patterns which the bees use to adjust their flying trajectories, and we expect this capability to result in effective and robust algorithms for solving highly complex problems.

A bee may have one of the following types: (employed/unemployed) forager, scout, onlooker, recruit, and experienced forager [10], [11], [12] and [13]. The type of a bee depends on the action it performs and the level of information it may use. A potential forager starts as an unemployed forager. This type of bee has no knowledge about the environment or the position of food sources in it. There are two types of unemployed foragers with different flying patterns, the so-called scout and onlooker bees. Scout bees fly spontaneously around the hive and search for new food sources without any knowledge about the environment. Usually, a small part of the swarm is selected as scout bees, although it is possible to adjust the number of scouts adaptively according to the information gathered from the environment. The onlooker bees wait in the nest, process the information shared by the employed foragers, and select their interesting dancers. After that, an unemployed forager can become a recruit and start searching using the knowledge from the selected dancer bee.

The employed forager bees provide information about the environment and the currently discovered food sources for the onlooker bees and advertise them on the dance floor. The information provided by the employed foragers is shared with a probability proportional to the profitability of the food source. Thus, an onlooker bee can employ a probabilistic approach to choose an employed forager among numerous dancers and adjust its search trajectory toward the most profitable source. The onlooker bees choose more profitable food sources with greater probability, since more profitable sources provide more valuable information; hence, more profitable food sources attract more recruit bees.

After choosing a food source, the recruit bee flies to find it. After finding the food source, the recruit bee switches its type to employed forager. The employed forager bee memorizes the location of the food source and then starts exploiting it. After the foraging bee takes part of the nectar from the food source, it returns to the hive and stores the nectar in a food area in the hive. After storing the food, the bee enters the decision making process and selects one of the following three options (the type of the bee at the next time depends on its selection) [10] and [11]:

If the nectar amount has decreased to a low level or is exhausted, she abandons the food source (the forager bee becomes an unemployed forager).

If there is still a sufficient amount of nectar in the food source, she can continue to forage without recruiting nestmates (the bee keeps its type as forager).

She can perform a waggle dance to inform the nestmates about the food source, recruiting them before she returns to it (the bee keeps its type as forager).

A bee selects one of these options based on information such as the quality of the food source, its distance from the hive, its direction, and the ease of extracting the food. For numerical optimization, the food source is considered a point in the search space, and its quality corresponds to the fitness of that point [15].

The forager bees can memorize historical information about the location and quality of food sources and use this information to make intelligent decisions later. Bees with this capability are called experienced foragers [10]. Using historical information results in a bee with more intelligence than ordinary forager bees. An experienced forager bee may obtain this intelligence using information from the waggle dance and use it to select its next operation. For example, the bee may try to exploit a food source if other bees confirm its quality; she may adjust her flying trajectory based on information about the previously found food source and the new food sources; or she can become a scout bee and search new areas if the fitness of the food source decreases rapidly.

3. Bee swarm optimization algorithm

In this section we present the proposed bee swarm optimization algorithm (BSO). The BSO algorithm contains three types of bees: experienced forager, onlooker, and scout bees, which fly in a D-dimensional search space $S \subseteq \mathbb{R}^D$ to find the optimum solution. Assume that the set of bees in a swarm is represented as β. As shown in Fig. 1, these bees are partitioned as β = σ ∪ κ ∪ ξ based on their fitness, where ξ, κ, and σ, respectively, represent the sets of experienced forager, onlooker, and scout bees. In the BSO algorithm, the fitness of a bee represents the quality of the food source found by that bee so far. In this work, the percentages of scout, onlooker, and experienced forager bees are determined manually. We use a small part of the bees as scouts; the remaining bees are divided equally into onlookers and experienced foragers (i.e. n(κ) = n(ξ)). Each bee i is associated with a position vector $x(\beta,i) = (x(\beta,i_1), x(\beta,i_2), \ldots, x(\beta,i_D))$, which represents a feasible solution of the optimization problem in the D-dimensional search space S. In the BSO algorithm, each position vector $x(\beta,i)$ represents a food source with an associated quality, represented as $fit(x(\beta,i))$.

Fig. 1. 

The structure of the bee swarm and the flying patterns of the bees: the scout bees perform a random walk around their current positions; an onlooker bee probabilistically selects an experienced forager as its interesting elite bee and follows it; the experienced forager bees memorize their historical information, select the global best bee as the elite bee, and update their positions according to social and cognitive knowledge.


The pseudocode of the BSO algorithm is presented in Fig. 2. BSO employs a stochastic process to find the optimal solution. Initially, the number of bees, n(β), the percentages of experienced forager, onlooker, and scout bees, and the maximum number of iterations, Iter_max, are determined, and the other parameters are initialized. At the start of the algorithm, all the bees are positioned randomly in the search space:

equation (2)
$$x_0(\beta,i) = \mathrm{Init}(i, S) \qquad \forall i \in \beta$$
where Init(i, S) is the initialization function, which assigns a random position in the search space S to bee i. After initialization, the bees of the swarm adjust their positions over the iterations until the termination condition is met. At each iteration of the algorithm, the fitness of each bee is calculated and the bees are sorted by fitness using the Sort() function. Thereafter, each bee is classified as an experienced forager, onlooker, or scout. A predefined percentage of the bees with the worst fitness are selected as scouts, while the remaining bees are divided equally between experienced foragers and onlookers: the half with better fitness is selected as experienced foragers and the other half as onlookers. Classifying the bees into these three categories provides a swarm with highly dynamic behavior that can use different flying patterns, presented in Fig. 1. The experienced forager, onlooker, and scout bees use these flying patterns to probabilistically adjust their trajectories in the search space and find new food sources with better nectar.
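The sort-and-partition step described above can be sketched as follows. This is a hedged illustration, not the paper's pseudocode: the function and variable names are ours, and we assume a minimization setting in which a lower objective value means better fitness.

```python
def classify(positions, objective, scout_count):
    """Sort bees by objective value and split them into experienced
    foragers (best half of the non-scouts), onlookers (other half),
    and scouts (the worst bees, which will re-explore)."""
    ranked = sorted(positions, key=objective)
    n_rest = len(ranked) - scout_count
    n_exp = n_rest // 2
    return ranked[:n_exp], ranked[n_exp:n_rest], ranked[n_rest:]

pos = [[3.0], [0.5], [2.0], [1.0], [4.0]]
foragers, onlookers, scouts = classify(pos, lambda x: x[0] ** 2, scout_count=1)
# foragers: [[0.5], [1.0]]; onlookers: [[2.0], [3.0]]; scouts: [[4.0]]
```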

Fig. 2. 

Pseudocode of bee swarm optimization algorithm


In the BSO algorithm, food sources with poor quality are abandoned and replaced by new food sources found by the scout bees. The scout bees employ a random flying pattern to discover a new food source and replace the abandoned one. A scout bee walks randomly in the search space and updates the position of the food source if the new food source has better quality than the one previously found by that bee. The scout bee walks randomly in a region of radius τ, centered at the scout bee's current position. The next position of a scout bee is thus updated using the following equation:

equation (3)
$$x_{new}(\sigma,i) = x_{old}(\sigma,i) + R_w(\tau, x_{old}(\sigma,i))$$
where $x_{old}(\sigma,i)$ represents the position of the abandoned food source, which is replaced by the new food source positioned at $x_{new}(\sigma,i)$, and $R_w$ is a random walk function that depends on the current position of the scout bee and the search radius τ. The initial value of the radius τ is a percentage of |X_max − X_min|, where X_max and X_min, respectively, represent the maximum and minimum values of the search space along a dimension. The value of τ is decreased linearly from τ_max to τ_min over the iterations, and the scout bees adjust their walking accordingly. A large value of τ in the first iterations enables the scout bees to walk with large step sizes and explore wide regions of the search space, while small values of τ in the last iterations encourage the scout bees to walk more precisely within small regions.

The success of an optimization algorithm highly depends on the balancing mechanism between exploration and exploitation; as mentioned in Section 2, a poor balance between them results in a weak algorithm. In previous bee algorithms such as [15], [16], [17], [18] and [19], there is a hard constriction on the trajectories of the bees (e.g. the forager bees gravitate only toward the elite bee), which may result in premature convergence. To cope with this problem, BSO employs experienced foragers that use historical information about the food sources and their qualities. The information available to an experienced forager bee consists of its own experience (cognitive knowledge) and the knowledge of the other experienced forager bees in the swarm. The cognitive knowledge is provided by the experienced foragers themselves: these bees memorize the decisions they have made so far and the success of those decisions. An experienced forager i remembers the position of the best food source it has found at previous times, denoted as

$b(\xi,i) = (b(\xi,i_1), b(\xi,i_2), \ldots, b(\xi,i_D))$, together with its quality. The position of the best food source is replaced by the position of the new food source if the latter has better fitness (i.e. $b(\xi,i) = x(\xi,i)$ if $fit(x(\xi,i)) > fit(b(\xi,i))$). The social knowledge is provided by sharing the nectar information of the food sources found by the experienced forager bees. All the experienced forager bees select the best food source, found by the elite bee, as their interesting area in the search space. The position of this best food source is represented by the vector $e(\xi,\cdot) = (e(\xi,\cdot_1), e(\xi,\cdot_2), \ldots, e(\xi,\cdot_D))$. The elite bee is the experienced forager with the highest fitness (i.e. $fit(e(\xi,\cdot)) > fit(b(\xi,i))\ \forall i \in \xi$). Since the relative importance of cognitive and social knowledge can vary from one cycle to another, random parameters are associated with each component of the position update equation. The position of an experienced forager bee i ∈ ξ is thus updated using the following equation:

equation (4)
$$x_{new}(\xi,i) = x_{old}(\xi,i) + \omega_b r_b \left(b(\xi,i) - x_{old}(\xi,i)\right) + \omega_e r_e \left(e(\xi,\cdot) - x_{old}(\xi,i)\right)$$
where $r_b$ and $r_e$ are uniformly distributed random variables in the range [0, 1] which model the stochasticity of the flying pattern; the parameters $\omega_b$ and $\omega_e$, respectively, control the importance of the best food source ever found by the i-th bee and of the best food source found by the elite bee; and $x_{new}(\xi,i)$ and $x_{old}(\xi,i)$, respectively, represent the positions of the new and old food sources found by the experienced forager i ∈ ξ. The second term on the right side of the position update equation is the cognitive knowledge, which attracts an experienced forager toward the best position it has found so far. The third term is the social knowledge, which attracts an experienced forager toward the best position $e(\xi,\cdot)$ found by its interesting elite bee. This flying pattern extends the movement trajectory of an experienced forager bee from a line to an area, as shown in Fig. 1, and improves its searching ability.

An onlooker bee uses the social knowledge provided by the experienced forager bees to adjust its moving trajectory at the next time. At each cycle of the algorithm, the nectar information about the food sources and their positions (the social knowledge) provided by the experienced forager bees is shared in the dance area. An onlooker bee then evaluates the provided nectar information, employs a probabilistic approach to choose one of the food sources, and follows the experienced forager bee which found the selected food source. In other words, an onlooker bee i selects an experienced forager j from the set ξ as its own interesting elite bee, denoted as $e(\xi,i) = (e(\xi,i_1), e(\xi,i_2), \ldots, e(\xi,i_D))$, with probability $p_j$. The probability $p_j$ is defined as the relative fitness of the selected experienced forager j in the set ξ:

equation (5)
$$p_j = \frac{fit(x(\xi,j))}{\sum_{c=1}^{n(\xi)} fit(x(\xi,c))}$$
where $fit(x(\xi,j))$ is the fitness value of the food source found by the experienced forager bee j, which is proportional to the quality of the food source, and n(ξ) is the number of experienced forager bees. The roulette wheel approach is used by the onlooker bees to select their interesting elite bees: as the quality of a food source increases, so does the probability of its selection. The flying trajectory of an onlooker bee i is controlled using the following equation:
equation (6)
$$x_{new}(\kappa,i) = x_{old}(\kappa,i) + \omega_e r_e \left(e(\xi,i) - x_{old}(\kappa,i)\right)$$
where $e(\xi,i)$ is the position vector of the interesting elite bee of onlooker bee i ∈ κ, selected using (5); $x_{old}(\kappa,i)$ and $x_{new}(\kappa,i)$, respectively, represent the positions of the old and new food sources selected by the onlooker bee i; and $\omega_e r_e$ probabilistically controls the attraction of the onlooker bee toward its interesting food source area.

The BSO algorithm provides a swarm of bees with different flying patterns, and this heterogeneity results in an effective population based algorithm for optimizing numerical functions. The random pattern is used by the scout bees. The BSO algorithm employs the scout bees to control the diversity and stochasticity of the behaviors of the bees in the swarm. Increasing the diversity of the population is a common way to mitigate the stagnation problem, and in the BSO algorithm the diversity of the bee swarm is controlled effectively by adjusting the percentage of scout bees. Employing the scout bees efficiently may produce valuable results. Re-initialization of scout bees is one way to maintain the diversity of the population [19]; it is a good choice in the first iterations of the algorithm, but in the last iterations, when the algorithm tends to converge to the optimum position, stagnation may occur and re-initialization is no longer useful. Instead, the BSO algorithm encourages the scout bees to fly randomly over local areas to reach better positions. The proposed approach permanently encourages the population to exit the stagnation state by exploring the nearby regions using scout bees, and it provides excellent results in cases where the gradient does not point toward the optimum solution or multiple local optima lie in the direction of the global optimum. The probabilistic pattern is used by the onlooker bees, which utilize the social knowledge about the food sources and their positions and probabilistically select one of them. This pattern enables the swarm to control exploration and convergence toward the global optimum efficiently. Premature convergence and stagnation usually occur due to a hard constriction on the movement trajectories of the individuals of a population; the third flying pattern, used by the experienced forager bees, alleviates these problems. In our method, experienced foragers memorize the information of previously found food sources and utilize this information to improve the diversity of the algorithm.

To illustrate the convergence behavior of the BSO algorithm, the convergence of BSO for a group of n = 11 bees (5 experienced foragers, 5 onlookers, and 1 scout) on the two dimensional Sphere function is shown in Fig. 3. Fig. 3(a)-(d) presents the distribution of bees with different roles at the 5th, 10th, 15th, and 20th iterations. The circles, squares, and triangles, respectively, represent experienced forager, onlooker, and scout bees. As described before, and as shown in this figure, the distribution of bees in the first iterations shows that the BSO algorithm prefers exploration, while the shrinking of the swarm in the last iterations shows that it tends to exploit around local optima. This process occurs due to the stochastic behaviors of the bees, which constantly modify the topology of the swarm. At each cycle of the algorithm, the worst bee is selected as the scout bee, and the experienced foragers and onlookers are selected based on their fitness. Thus, the onlooker and scout bees are placed at the border of the swarm while the experienced forager bees are placed inside it (the dashed circle partitions the swarm into experienced forager and onlooker bees). The experienced forager bees select the best bee as their interesting elite bee, and the onlooker bees stochastically select their interesting elites from the experienced forager bees placed in the interior region of the swarm. This process provides a powerful way to maintain the balance between exploration and exploitation: the onlooker and scout bees control the global search while the experienced forager bees maintain the convergence toward the global optimum.

Fig. 3. 

Distribution of the different types of bees in the two dimensional search space. Triangles, circles, and squares, respectively, represent scout, experienced forager, and onlooker bees.


3.1. Approaches to mitigate stagnation

Stagnation and premature convergence are the major weaknesses of population based algorithms. As mentioned before, the flying patterns of the BSO algorithm control the diversity of the swarm and provide an effective way to balance exploration and exploitation, which alleviates the premature convergence problem. In this section, an extension of the BSO algorithm is suggested which aims to alleviate the stagnation problem and improve its performance. Stagnation occurs when the algorithm reaches convergence, the stagnant individuals of the population settle on the same position, and the population is trapped in a local optimum. We propose the following two approaches (a repulsion factor and penalizing fitness), aimed at alleviating stagnation, to improve the performance of the BSO algorithm in minimizing numerical functions. We call the BSO algorithm with these extensions BSO–RP.

Repulsion factor: The concept of the repulsion technique was first introduced in PSO methods [20], [21] and [22]. As stated by Eberhart [23], "the repulsion technique adds the ability to guarantee that all individuals of a population will not move toward the best solution which is found so far. Therefore, an algorithm with repulsion technique can have the ability to avoid already found solutions and, therefore, to have more chance to find new position of search space with better fitness". The repulsion technique provides this ability by increasing the diversity of the population: it encourages a part of the individuals to move in the direction opposite to their elites.

In bee colony algorithms, the onlooker bees move according to the past experience of the forager bees. The new population thus inherits the experience of the past population, and the individuals move toward the food sources found by the forager bees according to their fitness. As a consequence, the population ignores the part of the search space that lies behind the forager bees and may contain better solutions, and it may need more time to explore that part of the search space.

To alleviate this problem and enhance the searching ability of the bees, BSO–RP introduces a parameter called the repulsion factor. The repulsion factor, represented as sign(r), encourages a bee to modify its search direction and move toward a region which may otherwise be ignored by the population. Based on these considerations, the position update equations for the onlookers and experienced foragers are modified as follows:

equation (7)
$$x_{new}(\kappa,i) = x_{old}(\kappa,i) + \mathrm{sign}(r)\left\{\omega_e r_e \left(e(\xi,i) - x_{old}(\kappa,i)\right)\right\}$$

equation (8)
$$x_{new}(\xi,i) = x_{old}(\xi,i) + \mathrm{sign}(r)\left\{\omega_b r_b \left(b(\xi,i) - x_{old}(\xi,i)\right) + \omega_e r_e \left(e(\xi,\cdot) - x_{old}(\xi,i)\right)\right\}$$
where sign(r) is defined as:
equation (9)
$$\mathrm{sign}(r) = \begin{cases} 1 & \text{if } r \le P_{rf} \\ -1 & \text{if } r > P_{rf} \end{cases}$$
where r is a random parameter chosen uniformly from the interval [0, 1], and $P_{rf}$ is a predefined probability which controls the rate of repulsion in the swarm. The repulsion factor maintains the diversity of the population by means of the $P_{rf}$ parameter: as the value of $P_{rf}$ increases, the level of diversity in the population decreases. As a consequence, this approach successfully extends the search range of the population.
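The repulsion factor of Eq. (9) can be sketched as follows: given a uniform random draw r, the bee keeps its attractive direction (+1) when r ≤ P_rf and reverses it (−1) otherwise. Passing r explicitly keeps the rule deterministic for checking.

```python
def sign_r(r, p_rf):
    """Repulsion factor of Eq. (9): +1 (attraction) if r <= P_rf,
    -1 (repulsion) otherwise."""
    return 1 if r <= p_rf else -1

assert sign_r(0.3, p_rf=0.5) == 1     # attraction kept
assert sign_r(0.7, p_rf=0.5) == -1    # direction reversed
```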

Penalizing fitness: To further mitigate the stagnation problem we use a parameter called the penalty. As stated by Parsopoulos [24], the penalty parameter is used to escape from local optima. The penalty is applied to the fitness of a solution. In the BSO algorithm, the best solutions are maintained by the experienced forager bees. At stagnation time, usually, a part of the experienced forager bees is trapped in local optima and the other bees stop moving once they catch up with their interesting experienced foragers. Therefore, we can apply the penalty parameter to the stagnant experienced forager bees to help them escape from local optima. The penalty is a function of the solution age, defined as the number of iterations during which the solution has not changed. The fitness of a stagnant bee i is decreased using the following equation:

equation (10)
$$fit(x(\beta,i)) = \rho(a_i) \times fit(x(\beta,i))$$
where $\rho(a_i)$ represents the penalty function and $a_i$ is the age of bee i. The penalty of a bee increases as its age increases. The penalty decreases the fitness of a bee trapped in a local optimum and encourages that bee to escape from its position and gravitate toward other bees.
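As a hedged sketch of Eq. (10): the text only requires that the penalty grow with the solution age a, so the decay form ρ(a) = 1 / (1 + a) below is our illustrative choice, not the paper's definition.

```python
def penalized_fitness(fit_value, age):
    """Eq. (10) with the illustrative penalty rho(a) = 1 / (1 + a):
    the longer a solution stays unchanged, the more its fitness shrinks."""
    rho = 1.0 / (1.0 + age)
    return rho * fit_value

assert penalized_fitness(10.0, age=0) == 10.0   # fresh solution: unchanged
assert penalized_fitness(10.0, age=4) == 2.0    # stagnant solution demoted
```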

3.2. Time-varying weights

Another concept, “time-varying weights” (TVW), is introduced here to further improve the performance of the BSO algorithm. During the first stages of a swarm optimization method we prefer that individuals of the swarm wander through the entire search space without converging toward local optima, while during the last stages it is desirable to adjust the moving patterns of the individuals to enhance convergence toward the optimum solution. Optimization algorithms based on bee colonies provide more flexibility than other population based algorithms to control the wandering and converging mechanisms in the first and the last iterations, respectively. On the one hand, the bee algorithms use scout bees which wander through the entire search space throughout the iterations. On the other hand, the onlooker bees stochastically select the foragers to follow. This process provides relatively good exploration of the search space. However, a drawback of the bee algorithm is that the onlooker bees gravitate rapidly toward the selected foragers. This may encourage the individuals to cluster around local optima, and premature convergence occurs.

The aforementioned concerns encourage us to propose dynamic weights for the BSO algorithm (BSO–RPTVW). The dynamic weights aim to enhance the global search in the first iterations and to encourage the bees to converge toward the optimum solution in the last iterations. This approach has been widely used in PSO methods [25] and [26]. To achieve this goal, the dynamic weights are developed for Eqs. (3) and (5). Eqs. (3) and (5) show that, for experienced foragers, the search toward the optimum solution is guided by two weight parameters (the cognitive component and the social component), while for onlookers the search is guided only by the social component. Proper control of these two components is very important for finding the optimum solution accurately and efficiently.

A relatively high value of ω_b, in comparison with the weight ω_e, encourages the individuals to wander through the search space, while a relatively high value of ω_e may lead the bees to converge prematurely toward a local optimum. Considering this property, we gradually reduce the importance of the cognitive part and increase the importance of the social part. For this purpose, the forager bees start with a large cognitive part and a small social part in the first iterations and end with a small cognitive part and a large social part. This allows the bees to move around the search space in the first iterations without converging toward local optima, and to converge to the best solution in the last iterations. In our model the weights are adjusted dynamically through the iterations. Normally, the value of ω_b is set to a high value and decreased, while the value of ω_e is set to a low value and increased linearly through the iterations, to balance individual knowledge against that of the others. The dynamic weights ω_b and ω_e are formulated as follows:

Equation (11)
ω_b = ω_b,max − ((ω_b,max − ω_b,min) / iter_max) × iter
Equation (12)
ω_e = ω_e,min + ((ω_e,max − ω_e,min) / iter_max) × iter
where ω_b,max and ω_e,max respectively represent the maximum values of the cognitive and social parts, ω_b,min and ω_e,min respectively represent their minimum values, iter is the current iteration number, and iter_max is the maximum number of allowable iterations. In this paper, the weight ω_b decreases linearly from 1.5 to 0.75 while the weight ω_e increases linearly from 0.75 to 1.5.
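The linear schedules of Eqs. (11) and (12) can be sketched directly; the 1.5/0.75 endpoints used as defaults are those stated above, while the function name is an assumption:

```python
def time_varying_weights(iteration, iter_max,
                         wb_max=1.5, wb_min=0.75,
                         we_max=1.5, we_min=0.75):
    """Eqs. (11)-(12): the cognitive weight w_b decays linearly from
    wb_max to wb_min while the social weight w_e grows linearly from
    we_min to we_max as the iteration counter advances."""
    w_b = wb_max - (wb_max - wb_min) / iter_max * iteration
    w_e = we_min + (we_max - we_min) / iter_max * iteration
    return w_b, w_e
```

At iteration 0 the cognitive part dominates (exploration); at iter_max the social part dominates (exploitation).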

4. Experiments

In this section, the experiments conducted to evaluate the performance of the proposed BSO algorithm and its two extensions on a number of analytical benchmark functions are described. Previous studies on optimization algorithms based on the foraging behavior of honey bees show that these algorithms perform better than other population based algorithms such as genetic algorithms and PSO [15], [16], [17], [18] and [19]. We therefore focus only on optimization algorithms inspired by the foraging behavior of honey bees in this study. There exist few numerical function optimization algorithms based on bee colony concepts; the performance of the BSO algorithm is evaluated in comparison with two other algorithms based on artificial bees. The comparisons are carried out on numerical continuous functions. The continuous test functions in this experiment have been used extensively to compare population based optimization algorithms.

4.1. Benchmarks

To test the performance of BSO, six well known benchmark functions are used here for comparison, both in terms of the optimum solution found after a predefined number of iterations and the rate of convergence to the optimum solution. These benchmarks are widely used in evaluating the performance of population based methods [4], [5], [6] and [7]. The first two functions are unimodal, while the others are multimodal. A function is called unimodal if it has only one optimum position; multimodal functions have two or more local optima.

Table 1 gives the test functions: their mathematical expressions, optimum values, optimum positions, and ranges. To test the robustness of the algorithms, the most common initialization ranges used in the literature for these benchmarks are considered in this paper. This commonly accepted approach, called asymmetric initialization, is used here. In asymmetric initialization the individuals of a population are initialized only in the right half of the search space, while symmetric initialization employs the entire search space (the works presented in [9] and [13] used symmetric initialization). Following this approach, each bee is initialized with a position randomly chosen from the range [X_max/2, X_max]. To control the explosion of each bee, excessive searching outside the predefined search space is limited: during a trial of each algorithm, the position values of a bee are confined to the interval [X_min, X_max].
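The asymmetric initialization and position clamping described above can be sketched as follows (function names are assumptions; the ranges are the ones stated in the text):

```python
import random

def asymmetric_init(dim, x_max):
    """Initialize a bee only in the right half of the search space:
    each component is drawn uniformly from [x_max/2, x_max]."""
    return [random.uniform(x_max / 2.0, x_max) for _ in range(dim)]

def clamp(position, x_min, x_max):
    """Confine every component of a bee to [x_min, x_max]."""
    return [min(max(xi, x_min), x_max) for xi in position]
```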

Table 1.

The benchmark functions: formula, type, range, optimum position, and optimum value.

Function | Formula | Type | Range | Optimum position | Optimum value
f_Sph | f_Sph(x) = Σ_{i=1..n} x_i² | Unimodal | [−100, 100] | (0, 0, …, 0) | 0
f_Ros | f_Ros(x) = Σ_{i=1..n−1} (100(x_{i+1} − x_i²)² + (x_i − 1)²) | Unimodal | [−30, 30] | (1, 1, …, 1) | 0
f_Ras | f_Ras(x) = Σ_{i=1..n} (x_i² − 10 cos(2πx_i) + 10) | Multimodal | [−5.12, 5.12] | (0, 0, …, 0) | 0
f_Shf6 | f_Shf6(x) = 0.5 + (sin²(√(x₁² + x₂²)) − 0.5) / (1 + 0.001(x₁² + x₂²))² | Multimodal | [−100, 100] | (0, 0) | 0
f_Gri | f_Gri(x) = (1/4000) Σ_{i=1..n} x_i² − Π_{i=1..n} cos(x_i/√i) + 1 | Multimodal | [−600, 600] | (0, 0, …, 0) | 0
f_Ack | f_Ack(x) = −20 exp(−0.2 √((1/n) Σ_{i=1..n} x_i²)) − exp((1/n) Σ_{i=1..n} cos(2πx_i)) + 20 + e | Multimodal | [−30, 30] | (0, 0, …, 0) | 0
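The analytic forms in Table 1 translate directly into code. The following is a minimal reference implementation (function names are mine, not the paper's):

```python
import math

def sphere(x):
    """f_Sph: unimodal, minimum 0 at the origin."""
    return sum(xi ** 2 for xi in x)

def rosenbrock(x):
    """f_Ros: narrow parabolic valley, minimum 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    """f_Ras: highly multimodal, minimum 0 at the origin."""
    return sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0
               for xi in x)

def griewank(x):
    """f_Gri: many regularly distributed local optima, minimum 0 at origin."""
    s = sum(xi ** 2 for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0

def ackley(x):
    """f_Ack: one narrow global basin, shallow local optima, minimum 0."""
    n = len(x)
    a = -20.0 * math.exp(-0.2 * math.sqrt(sum(xi ** 2 for xi in x) / n))
    b = -math.exp(sum(math.cos(2.0 * math.pi * xi) for xi in x) / n)
    return a + b + 20.0 + math.e
```

Note that `math.prod` requires Python 3.8 or later.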

We selected these test functions because each of them is representative of a different class of real-world problems. They have different characteristics (e.g. they are unimodal or multimodal, and have dependent or independent variables). A robust optimization algorithm must maintain a balance between exploration and exploitation, control diversity, and mitigate premature convergence and stagnation in order to cope with problems of these different types.

The Sphere function has independent variables, contains no local optima, and has a smooth gradient toward the global optimum. It represents an easy problem which is successfully solved by many population based optimization algorithms. The Rosenbrock function has a smooth slope around its global optimum position; its global optimum lies inside a long, narrow, parabolic shaped flat valley; its variables are strongly dependent; and the gradient does not point toward its optimum position. All of this makes convergence toward the global optimum of Rosenbrock relatively difficult. This function has frequently been used to test optimization algorithms, and algorithms with hard constrictions on the movement trajectories of their individuals may easily encounter stagnation on it. Schaffer's f6 is a multimodal function with dependent variables. It has a smooth slope around the global optimum, so methods with poor flying patterns encounter problems in the regions near the global optimum. An optimization algorithm should increase the diversity of the population to cope with problems of this type.

Ackley's is a widely used multimodal test function. It has one narrow global optimum basin and numerous local optima. In comparison with other multimodal functions, it represents a relatively easy problem since its local optima are shallow. There is no dependency between the variables of the Rastrigin test function; its cosine modulation produces frequent local optima, so the function is highly multimodal, which makes it a complex problem. An optimization algorithm should provide an efficient balance between exploration and exploitation and maintain good diversity to overcome problems of this type. The Griewank function is based on the Sphere function. Similar to the Rastrigin function, it has many widespread local optima which are regularly distributed. Its second component introduces linkage between the variables, which makes it a difficult multimodal problem. The local optima are located in the direction of the gradient, so an optimization algorithm should provide an efficient balance between global and local search to solve problems of this type. At high dimensionality the Griewank function appears almost unimodal.

4.2. Settings of the algorithms

Simulations were performed to observe the performance of the proposed algorithm in finding optimum solutions. There are few algorithms in the literature for numerical optimization based on the foraging behavior of honey bees. The performance of the new method (BSO) and its two extensions (BSO–RP and BSO–RPTVW) is compared with that of the Artificial Bee Colony (ABC) [15] and the Bee and Foraging Algorithm (BFA) [17]. These algorithms have a few parameters, some of which are common while others are specific to each algorithm.

The common parameters are the number of dimensions of the search space, the maximum number of generations, the population size, and the total number of trials. For all test functions, with the exception of the 2D function Schaffer's f6, three different dimension sizes, 10, 20 and 30, are tested. The corresponding maximum numbers of generations are 5000, 7500 and 10,000, respectively. For Schaffer's f6, the maximum number of generations is set to 2000. For all the test functions, the population size is set to 200, and a total of 100 trials are conducted for each experimental setting.

In BFA, the parameter λ, the distance of flight, plays a key role. We set λ = 11 for all test functions. The parameter p_t, which controls the selection of the elite bee or other forager bees, is set to 0.8. The percentage of onlooker and scout bees is controlled by the parameter p_t throughout the iterations. The onlooker bees gravitate toward the elite bee.

The same settings as presented in [15] and [19] are used in this paper for the ABC algorithm. The percentages of onlooker and employed bees are both set to 50% of the colony, and the number of scout bees is set to one.

In the BSO algorithm, the onlooker bees make up 48% of the swarm, the experienced forager bees 48%, and the scout bees 4%. In the BSO–RP algorithm, the control parameter P_rf is set to 0.8, and the maximum and minimum values of the parameter τ are set to τ_max = 1 and τ_min = 0.2. In the BSO–RPTVW algorithm, the maximum values of the dynamic weights are set to 1.5 (i.e. ω_b,max = ω_e,max = 1.5), and their minimum values are set to 0.5 (i.e. ω_b,min = ω_e,min = 0.5). The value of ω_b decreases linearly from its maximum to its minimum value, while the value of ω_e increases linearly from its minimum to its maximum value.

4.3. Experimental results

In the experiments, the number of iterations allowed to reach a predefined threshold was specified for each function. Different success criteria for different functions are presented in the literature. For Schaffer's f6 the success criterion was set to 0.00001, whereas for the other functions the stopping criterion was set to 0.0001. If the minimum value reached by the algorithm after the final iteration was not below the threshold, the trial was considered unsuccessful. The mean, standard deviation, success rate, and average number of iterations needed to reach the success criterion on each test function, as obtained by the BFA, ABC, BSO, BSO–RP and BSO–RPTVW algorithms, are given in Table 2 and Table 3. For ease of observation, the best results obtained by the algorithms are shown in bold. The X sign in Table 3 indicates that the corresponding algorithm failed to reach the success criterion in all trials. Average fitness values smaller than E−45 are reported as 0.
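The statistics reported in Tables 2 and 3 can be derived per algorithm/function pair from the raw trial data; a minimal sketch follows (the helper name and the dictionary layout are mine):

```python
def summarize_trials(final_values, iters_to_success):
    """Summarize one algorithm/function pair over independent trials.

    final_values     -- best objective value found in each trial
    iters_to_success -- iteration at which the success criterion was
                        first met, or None for an unsuccessful trial
    """
    n = len(final_values)
    mean = sum(final_values) / n
    std = (sum((v - mean) ** 2 for v in final_values) / n) ** 0.5
    hits = [it for it in iters_to_success if it is not None]
    return {
        "mean": mean,
        "std": std,
        "success_rate": len(hits) / n,
        # None plays the role of the 'X' sign: no trial ever succeeded
        "avg_iters_to_success": sum(hits) / len(hits) if hits else None,
    }
```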
