2022, Vol. 30, No. 4, pp. 394-401
This paper presents an approach to extracting visual features from practical rotating machines based on a convolutional neural network (CNN). We first generate synchronous reluctance motor images via topology optimization with two objectives: maximizing average torque and minimizing torque ripple. Each image is assigned two class labels according to its average-torque and torque-ripple values. Then, using the pairs of images and their two types of labels, we train a CNN via multi-task learning so that it simultaneously predicts both classes. Finally, we visualize the features learned by the CNN using a class activation mapping method.
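The multi-task setup described above can be sketched as follows: a shared convolutional backbone feeds two classification heads, one for the average-torque class and one for the torque-ripple class, trained with a summed cross-entropy loss. This is a minimal illustrative sketch, not the authors' actual architecture; the layer sizes, number of classes, and variable names are assumptions. The global-average-pooling layer before the heads is the standard prerequisite for the class activation mapping (CAM) visualization mentioned in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskCNN(nn.Module):
    """Shared conv backbone with two heads (hypothetical layer sizes)."""
    def __init__(self, n_torque_classes=3, n_ripple_classes=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling, enables CAM
            nn.Flatten(),
        )
        self.torque_head = nn.Linear(32, n_torque_classes)  # avg-torque class
        self.ripple_head = nn.Linear(32, n_ripple_classes)  # torque-ripple class

    def forward(self, x):
        feats = self.backbone(x)
        return self.torque_head(feats), self.ripple_head(feats)

model = MultiTaskCNN()
images = torch.randn(4, 1, 64, 64)          # stand-in for motor cross-section images
torque_labels = torch.tensor([0, 1, 2, 1])  # hypothetical class labels
ripple_labels = torch.tensor([2, 0, 1, 1])

torque_logits, ripple_logits = model(images)
# Multi-task objective: sum of the per-task classification losses
loss = (F.cross_entropy(torque_logits, torque_labels)
        + F.cross_entropy(ripple_logits, ripple_labels))
```

In practice the two losses may be weighted to balance the tasks; the equal-weight sum here is the simplest choice.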