Flexible Wolf Pack Algorithm for Dynamic Multidimensional Knapsack Problems

Optimization in dynamic environments is a hot research area that has attracted notable attention in the past decades. It is clear from the dynamic optimization literature that most effort has been devoted to continuous dynamic optimization problems, although the majority of real-life problems are combinatorial. Moreover, many algorithms that succeed on stationary combinatorial optimization problems commonly show mediocre performance in a dynamic environment. In this study, a flexible binary wolf pack algorithm (FWPA) is proposed by combining the binary wolf pack algorithm (BWPA) with a flexible population updating strategy. FWPA is then used to solve a set of static multidimensional knapsack benchmarks and several dynamic multidimensional knapsack problems, which have numerous practical applications. To the best of our knowledge, this paper constitutes the first study of the performance of WPA on a dynamic combinatorial problem. Simulation experiments comparing FWPA with two state-of-the-art algorithms and the basic BWPA demonstrate that FWPA is a feasible and competitive algorithm for dynamic optimization problems.


Introduction
Most research in evolutionary computation focuses on static problems in which all problem-related data remain stationary throughout the optimization procedure [1][2][3]. However, numerous real-world optimization problems have a dynamic nature arising from the uncertainty of future events. Changes in dynamic optimization problems (DOPs) may occur in the decision variables, constraints, and objective function [4,5]. This requires optimization algorithms not only to detect and respond to changes of the optima as quickly as possible but also to keep track of the changing optima dynamically. Hence, the capability of continuously adapting the solution to a changing environment is necessary for optimization approaches [3,6]. Therefore, DOPs are more challenging to address than stationary optimization problems.
DOPs can generally be divided into two major fields, combinatorial and continuous [7][8][9]. Typical combinatorial DOPs include the dynamic travelling salesman problem (DTSP) [10], the dynamic vehicle routing problem (DVRP) [11], the dynamic job-shop scheduling problem (DJSSP) [12], and the dynamic knapsack problem (DKP) [13][14][15]. In fact, many practical problems can be abstracted as a specific type of dynamic multidimensional knapsack problem (DMKP) when multiple dynamic constraints need to be tackled, such as task allocation, investment decision, cargo loading, and budget management [9,16]. Given their wide application and complexity, DMKPs have important theoretical and practical value. Evolutionary algorithms (EAs) and swarm intelligence-based algorithms are expected to perform well on both combinatorial and continuous DOPs since evolutionary dynamics in nature also take place in a highly uncertain environment [8,17,18].
The wolf pack algorithm (WPA) [19] is a relatively new and promising member of the swarm intelligence family that models the cooperative hunting behavior of a wolf pack. It has proved to be an efficient optimizer for many nonlinear and complex optimization problems through successful applications in image processing [20], power system control [21], robot path planning [22], and static MKPs [23]. Many derivative versions of WPA have also been designed for different problems, such as the binary WPA (BWPA) for the 0-1 ordinary knapsack problem [24], the improved binary WPA (IBWPA) for MKPs [23], and the discrete WPA (DWPA) for the TSP [25]. In [26], an integer coding wolf pack algorithm (ICWPA) is proposed to cope with the combat task allocation problems of aerial swarms. In [27], an improved WPA (IWPA) is proposed to solve the VRP. Despite the high efficiency of BWPA in solving static MKPs, WPA has not yet been introduced into the area of DMKPs.
The key issue of handling DOPs using EAs is how to avoid population diversity loss problem and maintain population diversity while tracking the changing global optima [5,8,9,28]. In this regard, a flexible population updating strategy which is capable of introducing and maintaining diversity during execution is designed for BWPA to address the DMKPs in this study. Moreover, the flexible population updating strategy that generates new individuals by making use of the memory of previously found good solutions can be viewed as an explicit memory scheme [29,30].
Compared with the static case, there are far fewer publications on DMKPs. It is necessary to develop new solution approaches that address DMKPs more efficiently, as DMKPs have numerous practical implications; this is one of the main motivations of this study. Secondly, to the best of our knowledge, this is the first study that investigates the performance of BWPA and its improved version (as proposed in this paper) on MKPs in dynamic environments.
The remainder of this paper is organized as follows: Section 2 provides the literature review and the related concepts of DMKPs. The original BWPA and its variant FWPA are discussed in detail in Section 3. Section 4 presents the simulation experiments and analyzes the results. Finally, conclusions and some future research issues are given in Section 5.

Problem Definition and Related Work
In this section, we outline the necessary concepts of DMKPs and overview the related work about the MKPs in dynamic environments.

Definition of the Dynamic Multidimensional Knapsack Problem.
The MKP is an NP-hard problem and has been widely used as a combinatorial benchmark for EAs and swarm intelligence-based algorithms [31,32]. The MKP depends on the values of the profits p_j, the resource consumptions w_kj, and the resource constraints c_k. As a generalization of the ordinary knapsack problem, the MKP is more representative of real-world scenarios because multiple constraints are involved [33]. The static MKP can be formulated as follows [34]:

maximize f(x) = Σ_{j∈N} p_j x_j (1)

subject to Σ_{j∈N} w_kj x_j ≤ c_k, k ∈ M = {1, 2, ⋯, m}, x_j ∈ {0, 1}, j ∈ N = {1, 2, ⋯, n} (2)

where n is the number of items and m is the number of knapsack constraints with capacities c_k for k = 1, 2, ⋯, m. Each item j ∈ N requires w_kj units of resource consumption in the kth knapsack and yields p_j units of profit upon inclusion. The goal of the MKP is to find a subset of all items that yields maximum profit without exceeding the multidimensional resource capacities [34]. All entries are naturally nonnegative. More precisely, without loss of generality, it can be assumed that the constraints defined by (3) are satisfied:

w_kj ≤ c_k < Σ_{j∈N} w_kj, k ∈ M, j ∈ N (3)

If this is not the case, one or more variables can be fixed to 0 or 1.
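As a concrete illustration of the formulation above, the following sketch evaluates a candidate item selection against all knapsack constraints. The tiny instance is hypothetical, not one of the benchmark problems, and the function name is ours.

```python
import numpy as np

def evaluate_mkp(x, p, w, c):
    """Evaluate a candidate MKP solution.

    x : 0/1 vector of length n (item selection)
    p : profit vector of length n
    w : m-by-n resource consumption matrix
    c : capacity vector of length m

    Returns the total profit, or -inf for an infeasible selection.
    """
    x = np.asarray(x)
    if np.all(w @ x <= c):           # every knapsack constraint satisfied
        return float(p @ x)
    return float("-inf")             # infeasible: reject the selection

# Tiny hypothetical instance: 4 items, 2 constraints.
p = np.array([10, 7, 4, 9])
w = np.array([[3, 2, 4, 1],
              [2, 5, 1, 3]])
c = np.array([6, 8])
print(evaluate_mkp([1, 0, 0, 1], p, w, c))  # -> 19.0
```

Packing items 1 and 4 consumes (4, 5) units against capacities (6, 8), so the selection is feasible and yields profit 19.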
Dynamic instances of knapsack problems have been proposed before; however, these studies mainly focus either on one-dimensional problems or on a cyclic change of the resource constraint [35,36]. Inspired by [13,37], we construct the dynamic MKP by updating all parameters w_kj, p_j, and c_k after a predefined number of simulation time units, using normally distributed random variables with zero mean and standard deviation σ:

p_j^+ = p_j · (1 + N(0, σ_p)),
w_kj^+ = w_kj · (1 + N(0, σ_w)), (4)
c_k^+ = c_k · (1 + N(0, σ_c)),

In formula (4), p_j^+, w_kj^+, and c_k^+ denote the updated parameters of the MKP when a change occurs after the predefined number of simulation time units. A smaller number of simulation time units yields more frequent changes and vice versa [13,37-39]. The number of iterations allocated to each environment is usually adopted as the frequency of change.
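A minimal sketch of this environment update, assuming the multiplicative form of formula (4); the clipping that keeps parameters strictly positive is our addition, as the formula itself does not specify how nonpositive draws are handled.

```python
import numpy as np

rng = np.random.default_rng(0)

def change_environment(p, w, c, sigma=0.05):
    """Apply one environmental change of the dynamic MKP:
    every profit, weight, and capacity is scaled by (1 + N(0, sigma)).
    Results are clipped to stay strictly positive (our assumption)."""
    scale = lambda a: np.maximum(a * (1 + rng.normal(0.0, sigma, a.shape)), 1e-9)
    return scale(p), scale(w), scale(c)

# One change of a toy 2-item, 1-constraint instance.
p, w, c = np.array([10.0, 7.0]), np.array([[3.0, 2.0]]), np.array([6.0])
p, w, c = change_environment(p, w, c, sigma=0.05)
```

With σ = 0.05 most parameters move by only a few percent per change, while σ = 0.1 produces visibly more severe environments, matching the severity settings used in Section 4.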

Related Work on DMKPs.
In recent years, DMKPs have attracted growing interest from the optimization community owing to their wide applications and challenging solutions. The related research on DMKPs can be summarized as follows:

(1) Dynamic benchmark generators for DMKPs: many generators have been proposed to create changing environments for MKPs, translating a well-known static MKP into a dynamic version using specialized procedures. Branke et al. [37] designed a dynamic version of the MKP by using a normal distribution to update each parameter when a change occurs, as shown in formula (4). Yang and Yao [39] formalized a well-known dynamic problem generator that creates the required dynamics for a given static combinatorial problem using the bitwise exclusive-or (XOR) operator; this generator is also applicable to MKPs. Based on the XOR DOP generator, Li and Yang [40] proposed a generalized dynamic benchmark generator (GDBG) that can be instantiated in binary, real, and combinatorial spaces; in addition, the GDBG can present a set of different properties for testing algorithms by tuning some control parameters. Rohlfshagen and Yao [38] proposed a new benchmark problem for dynamic combinatorial optimization that takes both the underlying dynamics of the problem and the distances between successive global optima into consideration; the parameters of the MKP are changed over time by a set of difference equations.

(2) Effects of solution representation for DMKPs: the effects of different solution representations (i.e., weight coding, binary representation, and permutation representation) were compared on a set of DMKPs in [37]. Simulation results revealed that the solution representation greatly affects an algorithm's performance when solving DMKPs and that the binary representation performs relatively poorly.

(3) Dynamic variants of knapsack problems: in [41], a stochastic 0/1 KP was studied in which the values of the items p_j are deterministic but the unit resource consumptions w_kj are randomly distributed. He et al. [42] proposed a more generalized time-varying KP (TVKP), called the randomized TVKP (RTVKP), in which all parameters p_j, w_kj, and c_k change dynamically in a random way. Moreover, the dynamic version of the MKP that changes its parameters p_j, w_kj, and c_k by a normal distribution is the most widely used dynamic MKP benchmark [13,14,37,38].

(4) Solution approaches for DMKPs: both EAs and swarm intelligence-based algorithms have been applied to DMKPs, augmented with strategies that improve their adaptability to dynamic environments. In [42], an elitist model-based genetic algorithm (EGA) was integrated with a greedy optimization algorithm (GOA) to handle RTVKPs; the GOA is capable of avoiding infeasible solutions and improving the convergence rate. Ünal [43] adopted a random immigrant-based GA and a memory-based GA to solve DMKPs; compared with the random immigrant-based GA, the memory-based GA proved more effective at adapting to the changing environments of DMKPs. Afterward, Ünal and Kayakutlu [14] tested different partial random restarting approaches of the parthenogenetic algorithm (PGA) [44] on a set of MKPs in dynamic environments. When solving DMKPs with ant colony optimization (ACO), Randall [45] updated the pheromone trails indirectly according to the changes made to solutions during the solution repair period; partial knowledge of the previous environment is thereby preserved and adaptability to dynamic environments is enhanced. Baykasoğlu and Ozsoydan [13] proposed an improved firefly algorithm (FA) that introduces population diversity by partial random restarts and an adaptive move procedure; the simulation results showed that the improved FA is a very powerful algorithm for solving both static and dynamic MKPs.
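Several of the approaches above rely on a repair step to keep solutions feasible after an environmental change. A common greedy scheme, sketched below, drops packed items with the worst pseudo-utility until every constraint holds; this is a generic illustration, not the exact repair operator of any cited paper.

```python
import numpy as np

def greedy_repair(x, p, w, c):
    """Drop the packed item with the lowest profit-to-total-consumption
    ratio until all knapsack constraints are satisfied."""
    x = np.array(x)
    ratio = p / w.sum(axis=0)            # pseudo-utility of each item
    while np.any(w @ x > c):             # some capacity still exceeded
        packed = np.flatnonzero(x)
        x[packed[np.argmin(ratio[packed])]] = 0
    return x

# Hypothetical 4-item, 2-constraint instance.
p = np.array([10.0, 7.0, 4.0, 9.0])
w = np.array([[3.0, 2.0, 4.0, 1.0],
              [2.0, 5.0, 1.0, 3.0]])
c = np.array([6.0, 8.0])
repaired = greedy_repair([1, 1, 1, 1], p, w, c)  # the two worst-ratio items are dropped
```

Because the operator only ever removes items, it terminates as soon as the selection becomes feasible (the empty selection is always feasible for nonnegative data).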

Overview of Binary Wolf Pack Algorithm
The wolf pack algorithm (WPA) is a relatively new swarm intelligence-based optimizer that simulates the collaborative hunting behavior of a wolf pack [19]. The basic WPA was originally designed for continuous optimization problems. Owing to its simple implementation, robustness, and competitive global convergence performance on high-dimensional multimodal functions [19][20][21], WPA has attracted increasing attention, and various derivative versions for discrete problems have been developed in recent years. In [24], Wu et al. proposed a binary WPA (BWPA) based on a binary solution coding to solve classic 0-1 KPs. Afterward, they modified BWPA by adding a trying-loading solution repair operator to handle MKPs [23]. Inspired by the social hierarchy of biological wolves, individuals in WPA are divided into an artificial lead wolf, scout wolves, and ferocious wolves according to their roles during the search for the optimum. The optimization process of WPA can be summarized as scouting, calling, and besieging behavior. In each iteration, the lead wolf can be replaced by another wolf that attains better fitness, and the whole population is updated in order to increase diversity. The main operational procedure of BWPA is summarized as follows:

Step 1. Initialize the parameters of the algorithm: step coefficient S, distance determinant coefficient d_near, maximum number of repetitions in scouting behavior T_max, and population renewing proportional coefficient β. Randomly initialize the positions of the artificial wolves in the N × n binary space, where N is the number of wolves and n is the number of variables; the position of artificial wolf i is X_i = (x_i1, x_i2, ⋯, x_in). For the MKP, X_i is an n-bit binary string and represents a potential solution. Y_i = f(X_i) denotes the objective function value of wolf i. The wolf X_lead with the best objective function value Y_lead = max{Y_i} is selected as the lead wolf of the first generation.
Step 2. Scouting behavior models the broad search for prey in the wolf pack's hunting behavior under the command of the lead wolf. Except for the lead wolf, the remaining N − 1 wolves act as scout wolves and perform the scouting behavior by repeatedly applying the moving operator Θ until Y_i > Y_lead or the number of scouting repetitions T reaches T_max; then go to Step 3.
If Y_i > Y_lead, scout wolf i replaces the previous lead wolf and acts as the new lead wolf; else if Y_i ≤ Y_lead, scout wolf i takes a step towards h different directions and moves in the best direction p* (i.e., Y_ip* = max{Y_ip}). Here h is a positive integer randomly selected in the interval [h_min, h_max]. After taking a step in the pth scouting direction (p ∈ H, H = {1, 2, ⋯, h}), the position of scout wolf i is updated by

X_ip = Θ(X_i, M_a, step_a),

where X_i and step_a denote the position and step size of scout wolf i, respectively, and M_a = {1, 2, ⋯, n}. The moving operator Θ(X_i, M_a, step_a) updates X_i by reversing the values of step_a bits randomly selected from M_a.
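A minimal sketch of the moving operator Θ as described above: reverse step_a bit values chosen at random from the candidate bit set. Function and variable names are ours.

```python
import random

def theta(x, candidate_bits, step):
    """Moving operator: reverse `step` bit values of the binary solution
    `x`, selected at random without replacement from `candidate_bits`."""
    x_new = list(x)
    chosen = random.sample(list(candidate_bits), min(step, len(candidate_bits)))
    for j in chosen:
        x_new[j] = 1 - x_new[j]          # 0 -> 1, 1 -> 0
    return x_new

random.seed(1)
x = [0, 1, 0, 1, 1]
x_scout = theta(x, range(len(x)), 2)     # scouting: any bit may be flipped
```

The same operator serves the calling and besieging behaviors by restricting the candidate set to the bits at which the wolf differs from the lead wolf.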
Step 3. Except for the lead wolf, the remaining N − 1 wolves next act as ferocious wolves in the calling behavior. In order to hunt the prey, the lead wolf commands the ferocious wolves to gather towards its position X_lead by howling. The position of ferocious wolf i is updated by

X_i^new = Θ(X_i, M_b, step_b),

where X_i^new and step_b denote the updated position and step size of ferocious wolf i, respectively. M_b is the set of bit positions at which X_lead and X_i differ. Θ is the same moving operator as defined in Step 2.
If Y_i^new ≥ Y_lead, ferocious wolf i replaces the previous lead wolf and the calling behavior restarts; otherwise, ferocious wolf i continues running until d_is ≤ d_near, where d_is denotes the distance between X_lead and X_i; then go to Step 4.
Step 4. After the calling behavior, the wolves approach and surround the prey, and then the whole pack attacks and captures it. The position of wolf i is updated by

X_i^new = Θ(X_i, M_c, step_c),

where X_i^new and step_c denote the updated position and besieging step size of wolf i, respectively; M_c and Θ are the same as defined in Step 3. The relationship among step_a, step_b, and step_c is step_a = rand_int[step_c, step_b], where step_c is commonly set to 1 and rand_int indicates an integer randomly selected in this interval.
Step 5. Update the position of wolf pack with population renewing proportional coefficient β.
Step 6. Output the position and function value of lead wolf (i.e., the optimal solution) when termination condition is satisfied, otherwise go to Step 2. The pseudocode of BWPA is illustrated in Algorithm 1.

Proposed Flexible Wolf Pack Algorithm
Flexibility is the ability to respond to changing environments effectively. The flexible wolf pack algorithm (FWPA) does not pursue the ultimate convergence of the population; instead, it maintains population diversity throughout the evolution process, that is, it preserves a strong ability to open up new solution space, which is of course combined with an elite retention strategy. In this section, a flexible population updating strategy based on the convergence situation is designed for FWPA to develop its capability of adapting to changing environments.
In the original population updating strategy, new wolves are generated around the lead wolf, where X_lead denotes the position of the lead wolf, M = {1, 2, ⋯, n}, z(g) = 10g/MaxGen − 5, and ⌊·⌋ denotes rounding L_1 down to an integer. Population updating in the catastrophic situation means that, when the best objective function value has not improved for t_max consecutive iterations, R artificial wolves are randomly selected and deleted from the whole population, and R new wolves are then reproduced, where ⌈·⌉ denotes rounding L_2 up to an integer and f_avg is the average fitness value of the whole population; k_1 and k_2 are commonly set to 2 and 4, respectively.

Flexible Population Updating Strategy.
The original population updating strategy helps to increase population diversity to some degree; however, it also causes two problems: (1) L_1, L_2, and the average fitness value f_avg of the whole population must be evaluated in every generation, which increases the computational cost. (2) In the catastrophic situation, the best objective function value has not improved for t_max consecutive generations, from which it can be judged that the lead wolf has fallen into a local optimum.
Generating R new wolves from previously randomly selected wolves has only a tiny effect on escaping the current local optimum, because the updated wolves gather around the previous lead wolf with large probability. Therefore, the original population updating strategy is ineffective at introducing or maintaining population diversity in a catastrophic situation. Based on the above analysis, we design a simpler and more efficient population updating strategy using Cauchy-distributed random numbers. The Cauchy distribution is a well-known continuous probability distribution. Its probability density function and distribution function are given by formulas (11) and (12), respectively:

f(x) = (1/π) · τ / (τ^2 + (x − z)^2), (11)

F(x) = 1/2 + (1/π) · arctan((x − z)/τ), (12)
where z is the location parameter and τ is the scale (shape) parameter. The Cauchy distribution with z = 0 and τ = 1 is called the standard Cauchy distribution C(0, 1).
Cauchy random numbers can be obtained by converting formula (12) into its inverse function (13):

x = z + τ · tan(π(F(x) − 1/2)), (13)

where F(x) ∈ U(0, 1). The distribution of Cauchy random numbers over the iterations is shown in Figure 2.
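Inverse-transform sampling per formula (13) can be sketched as follows; the seed is arbitrary and the variable names are ours.

```python
import math
import random

random.seed(0)

def cauchy(z=0.0, tau=1.0):
    """Draw one Cauchy(z, tau) sample via the inverse CDF:
    x = z + tau * tan(pi * (u - 1/2)), with u ~ U(0, 1)."""
    u = random.random()
    return z + tau * math.tan(math.pi * (u - 0.5))

# Heavy tails: most |samples| are moderate, a few are large outliers.
samples = sorted(abs(cauchy()) for _ in range(1001))
median = samples[500]   # the median of |C(0, 1)| is exactly 1
```

The heavy tails are exactly the property exploited below: most draws produce small, smooth position updates, while occasional outliers produce the few strongly mutated wolves.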
As can be seen from Figures 2(a) and 2(b), the Cauchy random numbers consist of a few mutation (outlier) values and many smoothly fluctuating values. This distribution property is suitable for generating a few mutant wolves when updating positions during the search. The flexible population updating strategy can therefore be formulated as

X_new = Θ(X_lead, M, C_2), t ≤ t_max,
X_new = Θ(X_lead, M, C_1), t > t_max, (14)

where C_1 = ⌈|x|⌉ for a Cauchy random number x, C_1 = C_2/μ, X_new denotes the position of a newly generated wolf, and μ is the correlation coefficient that binds the population updating in the normal and catastrophic situations together. The flexible population updating strategy can be described as follows: in the normal situation (t ≤ t_max), similar to the original population updating strategy, the R worst wolves are deleted and R new wolves are generated based on the position of the lead wolf. In the catastrophic situation (t > t_max), contrary to the normal situation, the R current best wolves are deleted.
The pseudocode of flexible population updating strategy is shown as Algorithm 2.
In fact, C_1 and C_2 can be viewed as the distances between a reinitialized wolf and the previous lead wolf; larger C_1 and C_2 yield new wolves that differ more from the previous lead wolf. C_1 is larger than C_2 when μ lies in (0, 1). The newly generated wolves stay close to the previous lead wolf in the normal situation, which accelerates the convergence rate because positive individual information is reused, while in the catastrophic situation the new wolves are relatively far away from the previous lead wolf, so negative information is discarded and population diversity is consequently increased. The idea of this dynamic population updating strategy combines the merits of partial restart [46][47][48] and memory schemes [49,50]. The dynamic population updating strategy is superior to the original one in terms of increasing population diversity and reusing previous information, so the capability of BWPA to adapt to dynamic environments is improved.
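Under the interpretation above (C_1 = ⌈|x|⌉ for a Cauchy sample x and C_2 = μ·C_1, so C_2 ≤ C_1 for μ in (0, 1)), one application of the flexible population updating strategy can be sketched as follows. All names are ours, and the fitness of the regenerated wolves would be re-evaluated afterwards.

```python
import math
import random

random.seed(3)

def cauchy_abs():
    """|x| for one standard Cauchy sample (inverse-transform sampling)."""
    return abs(math.tan(math.pi * (random.random() - 0.5)))

def flip_bits(x, step):
    """Moving operator: flip `step` randomly chosen bits of binary string x."""
    x_new = list(x)
    for j in random.sample(range(len(x)), min(step, len(x))):
        x_new[j] = 1 - x_new[j]
    return x_new

def flexible_update(pop, fitness, lead, stagnated, R, mu=0.75):
    """One application of the flexible population updating strategy.
    Normal situation: delete the R worst wolves and regenerate them near
    the lead wolf (small step C2). Catastrophic situation (stagnated):
    delete the R best wolves and regenerate them far from the lead (C1)."""
    c1 = max(1, math.ceil(cauchy_abs()))
    c2 = max(1, math.ceil(mu * c1))          # C2 <= C1 for mu in (0, 1)
    order = sorted(range(len(pop)), key=lambda i: fitness[i])  # worst first
    victims = order[-R:] if stagnated else order[:R]
    step = c1 if stagnated else c2
    for i in victims:
        pop[i] = flip_bits(lead, step)       # re-evaluate fitness afterwards
    return pop

pop = [[0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
pop = flexible_update(pop, [5.0, 3.0, 8.0], lead=[1, 1, 1, 1],
                      stagnated=False, R=1)
```

Only the Cauchy draw and one sort are needed per generation, which is the computational saving claimed over the original strategy's per-generation evaluation of L_1, L_2, and f_avg.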

Adapting to Changing Environments.
All swarm intelligence-based algorithms are initially designed to converge to the optimum quickly and precisely. However, when solving DOPs, the capability of adapting to changing environments (i.e., detecting and tracking the changing optima quickly) is also necessary. An efficient approach to increasing and maintaining population diversity is significant for enhancing this adaptation capability; however, too high a level of diversity does not always lead to better performance.
Knowledge transfer and diversity maintenance should be well balanced. In this study, the proposed flexible population updating strategy is capable of generating new wolves at each generation, so the diversity loss problem can be well addressed. After generating the new wolves, the fitness values of the whole population are reevaluated at each generation, so the changed optima can be detected and tracked when all parameters of an MKP change. Moreover, the population is updated using previous positive information, which is also beneficial for converging to the new optima quickly. Therefore, no additional dynamic change detection method is required.

Design of the FWPA for DMKPs.
The pseudocode of the proposed FWPA is shown in Algorithm 3. To some extent, all dynamic methods try to balance diversification (global search) against intensification (local search), as well as accuracy against speed. FWPA shows a good performance on both trade-offs.

Simulation Experiments
To verify the performance of FWPA, we conduct both static and dynamic experiments using a set of MKP benchmarks.

Experimental Data Set.
For the stationary environment, we select 9 benchmark problems with different difficulty levels, available from the OR-LIBRARY website (http://people.brunel.ac.uk/mastjjb/jeb/orlib/files). The number of items, n, ranges from 100 to 500, and the number of constraints, m, varies from 5 to 30. These problems were also previously used in [14,51-53]. We denote each instance by the notation m.n.i, which indicates the ith instance with m constraints and n items. For example, 10.250.00 is the first instance of mknapcb5.txt, with 10 constraints, 250 items, and a tightness ratio of 0.25.
For the dynamic environment, similar to [13,37], dynamic instances of the MKP are designed by updating the parameters after a predefined number of simulation time units, as defined in Section 2.1. Instance 10.250.00 is adopted here as the initial, basic environment from which the changing environments are generated. When a change occurs, the parameters are updated by formula (4).

Experimental Setup and Parameter Setting.
For the static experiments, the correlation coefficient μ is set to 0.5, 0.75, 1, and 2, respectively, to measure its effect on the performance of the improved BWPA. Two state-of-the-art algorithms that have been used to solve MKPs are adopted for comparison: chaotic binary particle swarm optimization with time-varying acceleration coefficients (CBPSOTVAC) [51] and the parthenogenetic algorithm (PGA) [14]. The parameters of the algorithms are set as shown in Table 1. For each algorithm and each problem, 30 independent runs of 1000 iterations are executed; the population sizes of all algorithms are equal to 100.

Input: the parameters of BWPA
Output: the best objective value
1 Generate initial population and select the initial lead wolf
2 Set iteration counter for initial population g := 0
3 while g < MaxGen do
4 Scouting behavior
5 Calling behavior
6 Besieging behavior
7 Population updating based on the flexible population updating strategy
8 g++
9 Restart the scouting, calling, and besieging behaviors
10 end while
Algorithm 3: Flexible binary wolf pack algorithm.

For the dynamic experiments, the standard deviations σ of the normal distributions of the parameters are assumed to be equal. σ reflects the severity of the dynamic changes, so two settings, σ_p = σ_w = σ_c = 0.05 and σ_p = σ_w = σ_c = 0.1, are tested to assess the proposed algorithm's capability of adapting to different dynamic environments. In this study, for each algorithm and each problem, 30 independent runs of 2000 iterations are executed, and a period of 200 iterations is adopted as the frequency of change. Therefore, 10 different environments are generated from the basic environment (i.e., instance 10.250.00). The average best-of-generation is used to measure an algorithm's ability to find a better solution at each generation in dynamic environments.
Both static and dynamic experiments were executed in MATLAB on a personal computer with an Intel i7 1.6 GHz processor and 8 GB RAM.

Results on Static Environment.
The results of the static experiments are presented in Table 2. The best results for each instance achieved by the algorithms are denoted in bold.
According to the results presented in Table 2, FWPA proves superior to the other three approaches on the majority of the problems in terms of Best, Avg., and Std. With the introduction of the dynamic population updating strategy, the proposed algorithm is able to maintain population diversity and enhance its capability of jumping out of local optima. Therefore, FWPA can find better solutions, and the efficiency of the proposed strategy is confirmed.
Comparing the versions of FWPA with different values of μ, the FWPA with μ = 0.75 performs best, while the performance of the algorithm with μ = 0.5, 1, and 2 is similar to that of BWPA. For each test instance, the FWPA with μ = 0.75 achieves the best results in terms of Best and Avg. Therefore, the parameter μ can affect the performance of FWPA crucially. In the following dynamic experiments, the parameter μ is set to 0.75. In terms of Std, the FWPA with μ = 0.75 achieves better results than the compared algorithms on instances 10.100.14, 10.250.14, 10.500.0, and 30.100.0, which shows that the proposed algorithm has good stability.

Results on Dynamic Environment.
The average best-of-generation results on the dynamic environments generated from instance 10.250.00 are shown in Table 3. An efficient algorithm is expected to adapt to new environments quickly and track the moving optima. From the results presented in Table 3, it can be seen that the proposed algorithm outperforms the compared algorithms for both σ = 0.05 and σ = 0.1. By partially restarting new wolves based on the memory of previously stored information, the proposed algorithm efficiently maintains and introduces population diversity and is thus capable of tracking the changing optima quickly.
Comparing the results for σ = 0.05 and σ = 0.1, which reflect the severity of the change between two dynamic environments, the differences between two consecutive environments become larger as σ increases. The proposed algorithm remains capable of tracking the changing optima quickly and finds better results than the other algorithms; this can be attributed to the powerful capability of opening up new solution space provided by the dynamic population updating strategy.
Convergence graphs of the four algorithms for σ = 0.05 and σ = 0.1 are presented in Figures 3 and 4, respectively. For each change, FWPA achieves the best results. It is apparent from the figures that the proposed algorithm adapts to dynamic environments more efficiently.

Statistical Verification.
The statistical results of comparing the algorithms by a one-tailed t-test with 98 degrees of freedom at a 0.05 level of significance are given in Table 4. In Table 4, the t-test result comparing FWPA, BWPA, PGA, and CBPSOTVAC is shown as "+," "~," or "-" when one algorithm is significantly better than, statistically comparable to, or significantly worse than the other, respectively.
From the statistical verification presented in Table 4, we can conclude that FWPA outperforms the other three algorithms in both dynamic and stationary environments. This result demonstrates the effectiveness of the dynamic population updating strategy.

Conclusions and Future Work
This paper presents a flexible BWPA (FWPA) built on a novel and simpler flexible population updating strategy. The proposed strategy aims at addressing the diversity loss problem of WPA when solving dynamic optimization problems. In fact, the flexible population updating strategy is a hybridization of the partial restart and memory scheme strategies. Simulation experiments on a set of static MKPs prove the effectiveness of the proposed algorithm. Moreover, simulation experiments on dynamic MKP instances demonstrate that FWPA is capable of tracking the changing optima quickly and converging to good solutions.
To the best of our knowledge, this paper constitutes the first study of WPA on combinatorial dynamic optimization problems. Another contribution of the study is extending the family of approaches for dynamic optimization. One direction for future work is a comparative study with advanced algorithms such as jDE, SaDE, and hypermutation GA, which were designed particularly for dynamic optimization problems. Moreover, FWPA will be applied to various combinatorial dynamic optimization problems such as the DTSP, DVRP, and DJSSP. Continuous dynamic optimization is also an expected research issue for WPA. Besides, as a relatively new metaheuristic, WPA has room for improving its performance on dynamic optimization problems in the long run. The issues in addressing dynamic optimization problems more efficiently with WPA can be summarized as follows: (1) a more powerful capability of detecting and tracking dynamic events; (2) a faster convergence rate along with the capability of escaping from local optima; (3) taking advantage of gathered evolutionary information to decrease the computational cost and adapt to changing environments more efficiently; and (4) self-adaptive parameter tuning to decrease the difficulty of implementation.

Conflicts of Interest
We declare that we have no conflicts of interest regarding this work and that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.