This paper focuses on swarm intelligence (SI) algorithms for tackling dynamic optimization problems (DOPs), and aims to investigate the effectiveness of the proposed mechanisms by incorporating them into conventional SI algorithms. For this purpose, the paper first divides DOPs into two types, “sudden change,” where the optimal solution changes only once, and “continuous change,” where the optimal solution changes over time, and addresses the latter, which is more difficult than the former. In detail, it explores mechanisms for “solution change on the evaluation value axis,” where a local solution changes into the optimal solution and vice versa, and for “solution change on the design variable axis,” where the optimal solution moves gradually through the search space. To tackle these two kinds of solution change under continuous change, this paper proposes a mechanism for the former (the Adaptive Local Information Sharing (ALIS) mechanism, which tracks the solution change by limiting the search range) and a mechanism for the latter (the Jumping Over toward Future Best (JOFB) mechanism, which explores the search area by estimating the moving direction and range of the future optimal solution).
In intensive experiments with the proposed mechanisms on various functions whose solution landscape changes over time, the mechanisms are incorporated into three SI algorithms (Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), and Social Spider Optimization (SSO)), and the following implications are revealed: (1) the algorithms incorporating the ALIS mechanism (PSO-ALIS, ABC-ALIS, SSO-ALIS) can track the optimal solution change on the evaluation value axis by capturing multiple local solutions simultaneously; (2) the algorithms incorporating the JOFB mechanism (PSO-JOFB, ABC-JOFB, SSO-JOFB) can track the optimal solution change on the design variable axis by searching the direction and range of the future optimal solution in advance; and (3) PSO, ABC, and SSO with both the ALIS and JOFB mechanisms can track “continuous change” involving both axial changes.
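To make the dynamic-optimization setting above concrete, the following is a minimal sketch of baseline PSO tracking a single optimum that drifts along the design variable axis. This is not the ALIS or JOFB mechanism itself; it only illustrates the canonical velocity/position update and why stored personal bests must be re-evaluated against the current landscape in a DOP. The toy function, parameter values, and all names here are illustrative assumptions, not the paper's implementation.

```python
import random

def moving_peak(x, t):
    # Toy DOP: a single optimum that drifts along the design-variable axis.
    # At time step t the optimum sits at c = 0.01 * t in every dimension.
    c = 0.01 * t
    return -sum((xi - c) ** 2 for xi in x)

def pso_step(positions, velocities, pbest, gbest, t, w=0.7, c1=1.5, c2=1.5):
    # One canonical PSO update (inertia weight w, cognitive c1, social c2).
    for i, x in enumerate(positions):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - x[d])
                                + c2 * r2 * (gbest[d] - x[d]))
            x[d] += velocities[i][d]
        # In a DOP, stored bests go stale as the landscape moves, so both the
        # candidate and the remembered best are re-evaluated at the current t.
        if moving_peak(x, t) > moving_peak(pbest[i], t):
            pbest[i] = list(x)
    gbest = max(pbest, key=lambda p: moving_peak(p, t))
    return positions, velocities, pbest, gbest
```

Plain PSO of this kind tends to lose diversity once the swarm converges, which is exactly the failure mode that mechanisms such as ALIS (limited search ranges around multiple local solutions) and JOFB (anticipating the optimum's future position) are designed to address.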
Genetic programming (GP) has been applied to various problems, and many derived methods have been proposed. Like other meta-heuristic methods, GP is subject to the No Free Lunch (NFL) theorem, which states the importance of using problem knowledge to improve accuracy. GP with transfer learning has been proposed as a method of using such knowledge. However, when this method solves a problem, it needs to select a source problem. Another approach is to use knowledge through multi-task learning, but this requires solving multiple problems at the same time. In this paper, we propose a method for extracting knowledge from multiple source problems and selecting the appropriate knowledge. The method uses an island model to extract knowledge and a machine learning model to select knowledge. The advantages of this approach are that it is end-to-end and that it does not require source problem selection: it uses knowledge automatically. The experimental results show that the proposed method achieves a higher rank on the test data than GP without transfer learning on average over 70 real-world regression problems. In addition, the proposed method performs as well as popular machine learning methods such as random forest, gradient boosting, and XGBoost, with lower trial variance.
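The island model mentioned above can be sketched in its generic form: several subpopulations evolve independently and periodically exchange their best individuals, so that knowledge discovered on one island can flow to the others. The paper applies this to GP trees across multiple source problems; for brevity this sketch uses a fixed-length bitstring GA with a shared fitness function, and every name and parameter here is an illustrative assumption rather than the paper's method.

```python
import random

def evolve_island(pop, fitness, n_gen=1, mut_rate=0.05):
    # A simple elitist GA generation: binary tournament selection,
    # one-point crossover, bit-flip mutation, (mu + mu) truncation.
    for _ in range(n_gen):
        offspring = []
        for _ in range(len(pop)):
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            cut = random.randrange(1, len(p1))
            child = p1[:cut] + p2[cut:]
            child = [g ^ (random.random() < mut_rate) for g in child]
            offspring.append(child)
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:len(pop)]
    return pop

def migrate(islands, fitness, k=1):
    # Ring migration: each island sends copies of its k best individuals to
    # the next island, replacing that island's k worst. In a transfer-learning
    # setting, migration is the channel through which knowledge moves between
    # subpopulations.
    bests = [sorted(isl, key=fitness, reverse=True)[:k] for isl in islands]
    for i, isl in enumerate(islands):
        incoming = bests[(i - 1) % len(islands)]
        isl.sort(key=fitness)          # worst individuals first
        isl[:k] = [list(ind) for ind in incoming]
    return islands
```

Because truncation keeps each island's best and migration only replaces the worst individuals, the globally best solution is never lost between epochs.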
In recent years, deep neural networks have shown outstanding performance in a wide range of domains such as computer vision and natural language processing. However, several studies have demonstrated that, in the image classification domain, deep neural classification models are easily fooled by adversarial examples (AE): inputs designed to degrade the performance of a predictive machine learning model. As one of the black-box attacks on computer vision, a method of generating adversarial examples using Differential Evolution (DE) has been reported. This attack is very effective because the output of the model can be greatly changed by modifying only a few pixels of the input image. However, even if only a few pixels are perturbed, the AE can easily be detected with the naked eye when the change in pixel value (the amount of perturbation) is large. Therefore, in this paper, not only the induced misclassification but also the amount of perturbation applied to the image is considered when searching for AE with DE. In other words, we formalize AE generation as a constrained optimization problem that searches for AE under a bounded amount of perturbation. To this problem we apply DE with the ε constraint method, one of the constraint handling techniques. In addition, JADE, a kind of adaptive DE, is adopted to improve the search ability. To confirm the effectiveness of this approach, we carry out experiments on several typical machine learning models and show that ε constraint JADE can generate AE that are difficult to detect with the naked eye.
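The ε constraint handling described above can be sketched in isolation. Under the ε-level comparison, two candidates whose constraint violations are both within ε are ranked by objective value; otherwise the smaller violation wins, and ε is shrunk toward zero over the run so the search ends on feasible solutions. The sketch below pairs this comparison with plain DE/rand/1/bin rather than JADE, and uses a toy linear objective with a perturbation-budget-style constraint standing in for AE generation; all names, parameters, and the ε schedule are illustrative assumptions, not the paper's implementation.

```python
import random

def eps_better(f1, v1, f2, v2, eps):
    # ε-level comparison: objectives decide when both violations are within
    # eps (or tie exactly); otherwise the smaller violation is preferred.
    if (v1 <= eps and v2 <= eps) or v1 == v2:
        return f1 < f2
    return v1 < v2

def eps_de(obj, viol, dim, bounds, pop_size=30, gens=150, F=0.5, CR=0.9):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    # Initialize the ε-level from a low percentile of initial violations.
    eps0 = sorted(viol(x) for x in pop)[pop_size // 5]
    for g in range(gens):
        eps = eps0 * (1 - g / gens) ** 5   # ε shrinks to 0 over the run
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = random.randrange(dim)  # guarantee one mutated dimension
            trial = [min(hi, max(lo, a[d] + F * (b[d] - c[d])))
                     if (d == jrand or random.random() < CR) else pop[i][d]
                     for d in range(dim)]
            if eps_better(obj(trial), viol(trial), obj(pop[i]), viol(pop[i]), eps):
                pop[i] = trial
    # Final answer: least-violating individual, ties broken by objective.
    return min(pop, key=lambda x: (viol(x), obj(x)))
```

In the paper's actual setting, `obj` would measure how strongly the model is pushed toward misclassification and `viol` would measure the excess perturbation beyond the allowed budget; the DE engine would be JADE with its adaptive F and CR.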