Equation (3.1.1) is formed based on the graph of this function. As shown in Figure 4, when the value of x is below zero, the value of y is lower than one. Thus, if the magnitude is negative (meaning that the agent has found a better solution), the intensity is decreased, because it is multiplied by a number lower than unity. On the other hand, if the magnitude is positive, the intensity is increased, because it is multiplied by a number greater than unity. Consequently, the further from the optimum an agent is, the larger the increase of its intensity, resulting in bigger steps towards a better solution.
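As a minimal illustration of this update rule, the sketch below assumes that the function plotted in Figure 4 is the exponential $e^x$ (the exact form of equation (3.1.1) is not reproduced here); the names `intensity` and `magnitude` are illustrative only:

```python
import math

def update_intensity(intensity, magnitude):
    """Scale an agent's intensity by a factor below or above unity.

    Assuming the curve of Figure 4 is e^x: a negative magnitude (a better
    solution was found) gives a factor < 1 and shrinks the intensity, while
    a positive magnitude gives a factor > 1 and enlarges it.
    """
    return intensity * math.exp(magnitude)
```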
However, to transform this high intensity value into a more useful one, the physical analogue is implemented:
$SL_i = 10 \log_{10}\left(\frac{I_i}{TH}\right)$    (3.1.3)
where $I_i$ is the intensity of the $i$-th agent and $TH$ is the Threshold of Hearing [9], which is defined as:
$TH = 10^{-12}\ \mathrm{W/m^2}$    (3.1.4)
In our experiments, the Threshold of Hearing is set to the value given in (3.1.4), but it can be changed based on the problem.
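For illustration only, the following sketch assumes that the physical analogue of (3.1.3) is the standard sound-level relation $10\log_{10}(I/TH)$ and that the Threshold of Hearing defaults to the standard acoustic reference value; the names `sound_level` and `THRESHOLD_OF_HEARING` are illustrative:

```python
import math

THRESHOLD_OF_HEARING = 1e-12  # standard acoustic reference value; adjustable per problem

def sound_level(intensity, threshold=THRESHOLD_OF_HEARING):
    """Transform a raw intensity into a decibel-like sound level (cf. eq. (3.1.3))."""
    return 10.0 * math.log10(intensity / threshold)
```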
3.2 Effective radius
The initial value of the effective radius should be chosen with the solution space in mind. A small radius will drive the algorithm to perform smaller steps. On the other hand, the choice of a bigger radius will lead to longer jumps towards better optima, but with the risk of overlooking (bypassing) other solutions of desired quality.
By reversing equation (2.3), the effective radius is calculated as:
$r_i^{eff}(t) = \sqrt{\frac{A_i(t)}{4\pi}}$    (3.2.1)
where $A_i(t)$ is the area scanned by the $i$-th agent in the $t$-th iteration.
As can be seen, this model reflects the real relation between these measures: at higher intensity the scanned area is bigger than at lower intensity, where the effective radius is correspondingly smaller. The aim here is to increase the radius if no better solution is found, so that each agent searches further from its current position.
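A sketch of this step, under the assumption that equation (2.3) describes spherical spreading ($A = 4\pi r^2$), so that reversing it yields a square-root relation; the name `scanned_area` is illustrative:

```python
import math

def effective_radius(scanned_area):
    """Effective radius of an agent, assuming its scanned area obeys A = 4*pi*r^2.

    A larger scanned area (i.e., a higher intensity) yields a larger radius,
    so an agent that has not found a better solution reaches further away
    from its current position in the next scan.
    """
    return math.sqrt(scanned_area / (4.0 * math.pi))
```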
3.3 Full Scan Loop
In order to search wider areas of the solution space, in each iteration every agent checks the space around it that is limited by its effective radius. This process is called the full scan loop, because three steps are repeated until a full cyclic search has been completed. Beginning from an angle of zero, random rotations in each dimension are executed. Each rotation covers at most the maximum rotation angle $\theta_{max}$ and is calculated as follows:
$\theta_d^{new} = \theta_d + rand \cdot \theta_{max}$    (3.3.1)
where $rand$ is a random number drawn from the uniform distribution and $\theta_d$ is the rotation angle in dimension $d$. When any of the $\theta_d$ exceeds a full cycle, the loop is stopped. The vector of angles is then converted into a vector of movements in every dimension, so that:
(3.3.2)
where $R$ is the random radius inside the circle defined by the effective radius and $n$ is the number of dimensions of the problem. Figure 5 below presents an example of 36 points generated in one dimension for Ackley's function. Decreasing the maximum rotation angle $\theta_{max}$ leads to smaller rotations and, thus, to more generated points in every dimension. With this mechanism, each agent searches many points around its position, whereas the agents of other algorithms check one point per iteration.
The new position is calculated as:
$x_{i,d}^{new} = x_{i,d} + dx_d$    (3.3.3)
where $x_{i,d}$ is the position of the $i$-th agent in the $d$-th dimension and $dx_d$ is the $d$-th element of the vector of equation (3.3.2). In each of the rotation phases, the fitness of the new position is calculated and, if it is better than the best found so far by the current agent, the best position and its fitness are updated.
Fig. 5. Example of generated points checked in a single full scan loop by one agent for one dimension of Ackley’s function
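The sketch below illustrates one full scan loop for a single agent under several assumptions: the rotation angles accumulate by a random fraction of the maximum angle, a full cycle is $2\pi$, and the angle-to-movement conversion uses standard hyperspherical coordinates as a stand-in for equation (3.3.2); it is not a verbatim implementation of equations (3.3.1)–(3.3.3):

```python
import math
import random

def spherical_to_cartesian(radius, angles):
    """Convert a radius and a list of angles into a Cartesian offset vector
    (standard hyperspherical coordinates; a stand-in for eq. (3.3.2))."""
    offset, sin_prod = [], 1.0
    for theta in angles:
        offset.append(radius * sin_prod * math.cos(theta))
        sin_prod *= math.sin(theta)
    offset.append(radius * sin_prod)
    return offset  # len(angles) + 1 coordinates

def full_scan_loop(position, eff_radius, fitness, theta_max=math.radians(20)):
    """One full scan of an agent: rotate in random steps until a full cycle
    (2*pi) is exceeded in some dimension, evaluating every generated point
    and keeping the best one found (minimization assumed)."""
    dims = len(position)
    angles = [0.0] * max(dims - 1, 1)      # the scan starts from angle zero
    best_pos, best_fit = list(position), fitness(position)
    while all(a <= 2.0 * math.pi for a in angles):
        # each rotation adds at most theta_max in every angular dimension
        angles = [a + random.random() * theta_max for a in angles]
        r = random.random() * eff_radius   # random radius inside the effective radius
        move = spherical_to_cartesian(r, angles)[:dims]
        candidate = [x + dx for x, dx in zip(position, move)]
        fit = fitness(candidate)
        if fit < best_fit:
            best_pos, best_fit = candidate, fit
    return best_pos, best_fit
```

In the full algorithm, every agent would run such a loop once per iteration, with the effective radius taken from Section 3.2 and the best position found fed back into the intensity update.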
In order to avoid exceeding the bounds of the solution space, a correction mechanism has also been implemented. If a coordinate $x_{i,d}$ violates the bound constraints, it is relocated as:
(3.3.4)
in order to fulfil the relation $lb_d \le x_{i,d} \le ub_d$, where $lb_d$ and $ub_d$ are the lower and upper bounds of dimension $d$.
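Since the exact relocation rule of (3.3.4) is not reproduced here, the sketch below uses one common choice, redrawing an offending coordinate uniformly inside its bounds so that the relation above holds afterwards; treat it as an assumption rather than the paper's exact mechanism:

```python
import random

def correct_bounds(position, lower, upper):
    """Relocate every coordinate that violates its bound constraints.

    A coordinate outside [lower[d], upper[d]] is redrawn uniformly inside the
    interval (one common relocation choice, not necessarily equation (3.3.4)).
    """
    return [
        x if lo <= x <= up else random.uniform(lo, up)
        for x, lo, up in zip(position, lower, upper)
    ]
```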
4. Experimental Results
All experiments were conducted using Matlab on a Windows 10 Pro machine with a 3.6 GHz Intel Core i7 and 4 GB of RAM. In our experiments, we used the “hypersphere” function provided by Gianluca Dorini on Mathworks [21] to convert the vector of angles into a vector of Cartesian coordinates. The maximum rotation angle $\theta_{max}$ was set to 20, the small value was set to 0.0008 and the Threshold of Hearing was set as in Section 3.1 for all runs.
Table 1 shows the performance of the algorithm on 12 well-known test functions with a discrete solution space. These functions were selected as benchmarks because the overwhelming majority of real-world problems are described by decision variables defined within a discrete solution space. The parameters used for all test functions were:
- Number of iterations: 100
- Population size: 100
- Number of independent runs: 100
Table 1. Results of SIO in benchmark test functions
Test Function | Known Minimum | min | average | std
Ackley's function | 0 | 0.0003 | 0.0046 | 0.0023
Beale's function | 0 | 2.86e-11 | 1.34e-08 | 1.33e-08
Goldstein–Price function | 3 | 3.0000 | 3.0004 | 0.0004
Booth's function | 0 | 1.81e-10 | 1.69e-08 | 1.79e-08
Bukin function N.6 | 0 | 0.0237 | 0.1107 | 0.0558
Matyas function | 0 | 3.95e-12 | 3.55e-10 | 3.45e-10
Lévi function N.13 | 0 | 1.44e-11 | 3.31e-08 | 2.96e-08
Three-hump camel function | 0 | 2.35e-11 | 5.28e-09 | 5.53e-09
Easom function | -1 | -1.0000 | -0.0400 | 0.1969
Cross-in-tray function | -2.0626 | -2.0626 | -2.0609 | 0.0173
Schaffer function N. 2 | 0 | 5.29e-12 | 0.0627 | 0.0417
Schaffer function N. 4 | 0.2926 | 0.2926 | 0.3134 | 0.0152
In the rightmost columns of Table 1, the minimum, the average and the standard deviation over all runs of the algorithm are provided, while the second column gives the known minimum of each function.
As can be seen in Table 1, the proposed algorithm is able to find a solution very close to the global minimum and, in some cases, succeeds in finding the minimum itself. For the majority of these functions, SIO has a small standard deviation of results, and in many cases it provides highly accurate solutions.
Table 2. Results of SIO in Test Functions Compared with BA
| SIO | BA |
Test Function | min | average | std | min | average | std |
Ackley's function | 0.0003 | 0.0046 | 0.0023 | 0.0333 | 0.3785 | 0.2284 |
Beale's function | 2.86e-11 | 1.34e-08 | 1.33e-08 | 3.94e-06 | 0.0005 | 0.0005 |
Goldstein–Price funct. | 3.0000 | 3.0004 | 0.0004 | 3.0006 | 3.0719 | 0.0777 |
Booth's function | 1.81e-10 | 1.69e-08 | 1.79e-08 | 1.31e-05 | 0.0007 | 0.0007 |
Bukin function N.6 | 0.0237 | 0.1107 | 0.0558 | 0.4454 | 20.3861 | 9.4734 |
Matyas function | 3.95e-12 | 3.55e-10 | 3.45e-10 | 1.24e-07 | 2.31e-05 | 2.18e-05 |
Lévi function N.13 | 1.44e-11 | 3.31e-08 | 2.96e-08 | 2.70e-05 | 0.0025 | 0.0026 |
Three-hump camel fun. | 2.35e-11 | 5.28e-09 | 5.53e-09 | 6.45e-07 | 0.0003 | 0.0003 |
Easom function | -1.0000 | -0.0400 | 0.1969 | -1.0000 | -0.8898 | 0.3144 |
Cross-in-tray function | -2.0626 | -2.0609 | 0.0173 | -2.0626 | -2.0626 | 3.16e-05 |
Schaffer function N. 2 | 5.29e-12 | 0.0627 | 0.0417 | 1.20e-09 | 3.19e-05 | 0.0003 |
Schaffer function N. 4 | 0.2926 | 0.3134 | 0.0152 | 0.2926 | 0.2926 | 1.26e-05 |
Table 2 compares the proposed algorithm with the Bat Algorithm (BA) [8], with which SIO shares a similar concept. For these experiments, the published BA code from Mathworks [22] was used, with only the alterations needed for each benchmark function and for the loudness and pulse parameters. Many optimization algorithms are presented and compared with other well-known schemes, but it is often not clear whether those schemes have been altered in a way that affects their results and leads to the apparent outperformance of the proposed algorithm. This is why the exact version of the BA code used for comparison is mentioned here. This version is already tuned to provide good performance, so its parameters were not altered. In addition, the correction mechanism presented in equation (3.3.4) was added, in order to keep BA within the solution space.
Table 3. Convergence analysis of SIO in benchmark test functions compared with BA
Test Function | Min iteration | Average iteration | Standard deviation |
| SIO | BA | SIO | BA | SIO | BA |
Ackley’s function | 16 | 1 | 70.78 | 55.56 | 21.83 | 31.35 |
Beale's function | 7 | 3 | 55.44 | 50.68 | 29.05 | 28.07 |
Goldstein–Price func. | 13 | 4 | 65.41 | 53.30 | 22.58 | 27.61 |
Booth's function | 11 | 2 | 55.98 | 56.40 | 23.91 | 27.2 |
Bukin function N.6 | 12 | 2 | 69.38 | 2.32 | 21.69 | 0.62 |
Matyas function | 5 | 6 | 51.39 | 59.84 | 28.57 | 27.26 |
Lévi function N.13 | 5 | 3 | 57.75 | 51.44 | 26.54 | 28.96 |
Three-hump camel func. | 4 | 2 | 51.74 | 50.74 | 26.23 | 29.25 |
Easom function | 22 | 8 | 74.32 | 56.34 | 24.31 | 24.28 |
Cross-in-tray function | 7 | 2 | 63.28 | 54.2 | 24.39 | 28.6 |
Schaffer function N. 2 | 19 | 14 | 88.03 | 62.97 | 19.60 | 22.39 |
Schaffer function N. 4 | 18 | 9 | 83.61 | 61.63 | 20.53 | 22.55 |
Average Performance | 11.58 | 4.67 | 65.59 | 51.29 | 24.10 | 24.85 |
Finally, Table 3 presents a convergence analysis of the previous results. As can be seen, the proposed algorithm is fast in finding an optimum. On the other hand, the high value of the standard deviation shows that this optimum may be updated even in the last iterations (scans), due to the fact that SIO searches a large area of the solution space. The results of the previous tables support that statement.
As mentioned above, the parameters of BA are those chosen by its author. Increasing the population and the number of iterations would require re-tuning this algorithm to perform correctly. This would work in favor of SIO, because the attraction-based concept of BA tends to trap the algorithm in local optima, whereas the scattering method of the proposed scheme avoids this problem.
The Bat Algorithm appears to be faster in locating an optimum, but taking into consideration the above results, as well as the fact that an already well-tuned algorithm was used for comparison regardless of the problem at hand, SIO is somewhat slower yet performs a wider search, and thus its performance is encouraging.
5. Conclusions and Future Research
In this paper, a new meta-heuristic algorithm named Sonar Inspired Optimization (SIO) has been proposed, based on the concept of the SONAR mechanism used by warships to detect targets and mines. The proposed scheme is easy to implement, and its limited parameterization makes it useful for a wide range of problems. Another significant feature of SIO is the balance between exploration and exploitation, as the only parameter affecting the strictness of the algorithm is the intensity, which self-adapts during the algorithmic process.
SIO was tested on known benchmark functions and was also compared with the Bat Algorithm, an approach similar to the proposed one, and was found statistically comparable or superior in most of the cases. A convergence analysis of the proposed algorithm has also been provided. It can be seen that SIO follows the concept of evolutionary algorithms that keep evolving solutions, due to its good exploration and exploitation balance.
Furthermore, the main advantages should be highlighted: the minimal parameterization and the more extensive exploration of the solution space. Regarding the second feature, SIO's agents search many possible positions around their current location in each iteration, whereas in other algorithms agents check only one new point. This reflects the main idea behind the introduction of Sonar Inspired Optimization: increased exploration combined with ease of implementation, which should attract researchers to use this algorithm instead of one that is hard to implement or needs more parameterization.
Currently, work is underway on the application of Sonar Inspired Optimization to real-world problems. The hybridization of this algorithm with other known algorithms, in order to build effective hybrid schemes for specific problem areas, is also a research priority. Limited experiments in this direction have already taken place, in financial and industrial engineering problems. These experiments seem promising, as the algorithm converges fast to the best solution found (optimum or sub-optimum), while the concept of approaching the optimum in steps avoids the premature convergence that other competitive schemes face.
In addition, further improvement of the characteristics of SIO will be attempted, as well as proper parameter tuning of the rotation angles and of the checkpoint value of the approach. Finally, the effect of the Threshold of Hearing will be measured and studied.
Acknowledgments
The authors would like to thank Dr Jan Jantzen (Samso Energy Academy, Denmark and University of the Aegean, Chios, Greece), for his valuable comments and suggestions.
6. References
1. Yang, X. S.: Nature-inspired metaheuristic algorithms. Luniver press (2010)
2. Chiong, R. (Ed.): Nature-inspired algorithms for optimisation (Vol. 193). Springer (2009)
3. Liu, J., & Tsui, K. C.: Toward nature-inspired computing. Communications of the ACM, 49(10), 59-64 (2006)
4. Marrow, P.: Nature-inspired computing technology and applications. BT Technology Journal, 18(4), 13-23 (2000)
5. Yang, X. S.: Nature-inspired metaheuristic algorithms: Success and new challenges. arXiv preprint arXiv:1211.6658 (2012)
6. Kennedy, J. F., Kennedy, J., Eberhart, R. C., & Shi, Y.: Swarm intelligence. Morgan Kaufmann (2001)
7. Shah-Hosseini, H.: The intelligent water drops algorithm: a nature-inspired swarm-based optimization algorithm. Int. J. of Bio-Insp. Comp., 1(1-2), 71-79 (2009)
8. Nasir, A. N. K., Tokhi, M. O., Abd Ghani, N. M., & Raja Ismail, R. M. T.: Novel adaptive spiral dynamics algorithms for global optimization. In 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS), pp. 99-104. IEEE Press, Ireland (2012, August)
9. Rashedi, E., Nezamabadi-Pour, H., & Saryazdi, S.: GSA: a gravitational search algorithm. Inf. Sc., 179(13), 2232-2248 (2009)
10. Kaveh, A., & Talatahari, S.: A novel heuristic optimization method: charged system search. Acta Mechanica, 213(3), 267-289 (2010)
11. Birbil, Ş. İ., & Fang, S. C.: An electromagnetism-like mechanism for global optimization. J. of Gl. Opt., 25(3), 263-282 (2003)
12. Yang, X. S., Deb, S., Loomes, M., & Karamanoglu, M.: A framework for self-tuning optimization algorithm. Neural Comp. and App., 23(7-8), 2051-2057 (2013)
13. Crawford, B., Valenzuela, C., Soto, R., Monfroy, E., & Paredes, F.: Parameter tuning of metaheuristics using metaheuristics. Adv. Science Letters, 19(12), 3556-3559 (2013)
14. Fallahi, M., Amiri, S., & Yaghini, M.: A parameter tuning methodology for metaheuristics based on design of experiments. Int. J. of Eng. and Tech. Sciences, 2(6), 497-521 (2014)
15. Kaveh, A., & Farhoudi, N.: A new optimization method: Dolphin echolocation. Adv. in Eng. Soft., 59, 53-70 (2013)
16. Yang, X. S.: A new metaheuristic bat-inspired algorithm. In Nature inspired cooperative strategies for optimization (NICSO 2010), pp. 65-74. Springer Berlin Heidelberg, Spain (2010)
17. Vassiliadis, V., & Dounias, G.: Nature–Inspired Intelligence: A review of selected methods and applications. Int. J. on Art. Int. Tools, 18(04), 487-516 (2009)
18. Fister Jr, I., Yang, X. S., Fister, I., Brest, J., & Fister, D.: A brief review of nature-inspired algorithms for optimization. arXiv preprint arXiv:1307.4186 (2013)
19. Lurton, X.: An introduction to underwater acoustics: principles and applications. Springer Science & Business Media (2002)
20. Nilsson, M., & Snoad, N.: Optimal mutation rates in dynamic environments. Bull. of Math. Biol., 64(6), 1033-1043 (2002)
21. Mathworks File Exchange, https://www.mathworks.com/matlabcentral/fileexchange/5397-hypersphere
[1] Equation (3.1.2) is formed for minimization problems. In maximization problems, the signs of the corresponding terms are reversed.