2017 Volume 50 Issue 4 Pages 273-290
This paper presents a novel Teaching-Learning-Self-Study Optimization (TLSO) algorithm that not only converges quickly in terms of the number of iterations, but also converges consistently and with high accuracy to the global minimum in comparison with several other algorithms. The original Teaching-Learning-Based Optimization (TLBO) assigns a uniformly distributed, randomly selected weight to the amount of knowledge a learner acquires in each phase, i.e., the teacher phase and the learner phase. This random weighting causes the algorithm to converge the average cost of the learners only in a moderate number of iterations. In 2013, Li and coworkers intensified the teacher and learner phases by introducing weight parameters to improve the convergence speed in terms of iterations, naming the result Ameliorated Teaching-Learning-Based Optimization (ATLBO). A good evolutionary optimization algorithm should converge the cost of the objective function consistently; to do so, it must combine intensification for local search with diversification for global search, reducing the chance of becoming trapped in a local minimum. Some students naturally tend to study on their own, using library and Internet academic resources to enhance their knowledge. This phenomenon, termed self-study, is introduced into the learner phase of the proposed TLSO as a diversification factor (DF). Several other evolutionary algorithms, namely ACO, PSO, TLBO, ATLBO, and two variants of TLSO, are also developed and compared with TLSO in terms of consistency in converging to the global minimum. The results reveal that TLSO is consistent not only on a larger number of the 20 benchmark functions, but also in a NOx-prediction application. They also show that the NOx emissions predicted by an LSSVM tuned with TLSO are comparable with those obtained using the other algorithms considered in this work.
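The teacher/learner structure described above can be sketched as follows. This is a minimal illustration of the standard TLBO loop (teacher phase moving learners toward the best solution and away from the class mean, learner phase stepping toward better peers) augmented with a small random "self-study" perturbation scaled by a diversification factor `df`. The self-study step here is a hypothetical stand-in: the abstract does not give TLSO's exact update rule, so the perturbation form, the function `tlbo_self_study`, and its parameters are assumptions for illustration only.

```python
import random

def tlbo_self_study(f, dim, bounds, pop_size=20, iters=100, df=0.05, seed=0):
    """Minimize f over [lo, hi]^dim with a TLBO loop plus a hypothetical
    'self-study' random perturbation (not the paper's exact TLSO update)."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: min(max(v, lo), hi)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        # Teacher phase: move each learner toward the teacher (current best)
        # and away from TF times the class mean; keep the move if it improves.
        best = pop[cost.index(min(cost))]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            tf = rng.choice([1, 2])  # teaching factor, as in standard TLBO
            cand = [clip(pop[i][d] + rng.random() * (best[d] - tf * mean[d]))
                    for d in range(dim)]
            c = f(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
        # Learner phase: interact with one random peer, stepping toward a
        # better peer or away from a worse one.
        for i in range(pop_size):
            j = rng.randrange(pop_size)
            while j == i:
                j = rng.randrange(pop_size)
            sign = 1.0 if cost[j] < cost[i] else -1.0
            cand = [clip(pop[i][d] + sign * rng.random() * (pop[j][d] - pop[i][d]))
                    for d in range(dim)]
            # Hypothetical self-study step: small random exploration scaled
            # by the diversification factor df.
            cand = [clip(v + df * rng.uniform(-1.0, 1.0)) for v in cand]
            c = f(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
    b = cost.index(min(cost))
    return pop[b], cost[b]

# Usage: minimize the sphere benchmark, a common test function.
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = tlbo_self_study(sphere, dim=5, bounds=(-5.0, 5.0))
print(f_best)
```

Because candidate moves are accepted greedily (only improvements replace a learner), the self-study perturbation adds diversification without ever worsening a stored solution.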