In this study, we propose a method for generating facial videos of various people with various expressions from a single video of one person. The proposed method is based on the First Order Motion Model (FOMM), extended with a mechanism that controls facial expression and gaze direction in the generated video. In addition, to save the effort of editing the facial expression on every frame, we implement a function that interpolates the weight values for each frame from weights manually assigned only to sparse keyframes. We confirm that the proposed method generates natural facial videos with variously controlled expressions.
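The sparse-keyframe weight interpolation described above can be sketched as follows. The function name, the keyframe dictionary format, and the choice of simple linear interpolation are illustrative assumptions, not the paper's actual implementation:

```python
# Sketch: interpolate per-frame expression weights from weights assigned
# only at sparse keyframes (linear interpolation between neighbors).
def interpolate_weights(keyframes, num_frames):
    """keyframes: dict mapping frame index -> manually assigned weight.
    Returns a list of num_frames interpolated weights."""
    idxs = sorted(keyframes)
    weights = []
    for f in range(num_frames):
        if f <= idxs[0]:
            weights.append(keyframes[idxs[0]])      # clamp before first keyframe
        elif f >= idxs[-1]:
            weights.append(keyframes[idxs[-1]])     # clamp after last keyframe
        else:
            for a, b in zip(idxs, idxs[1:]):        # find surrounding keyframes
                if a <= f <= b:
                    t = (f - a) / (b - a)
                    weights.append((1 - t) * keyframes[a] + t * keyframes[b])
                    break
    return weights

# e.g. ramp the expression weight up to frame 4 and back down by frame 8
w = interpolate_weights({0: 0.0, 4: 1.0, 8: 0.0}, 9)
```

A smoother schedule (e.g. spline interpolation) could be substituted without changing the interface.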
The Mini 4WD AI competition executive committee of the Japan Society for Fuzzy Theory and Intelligent Informatics has been organizing competitions for Mini 4WD AI vehicles equipped with microcomputers, sensors, motor drivers, and other components since around 2014. Maintaining a high average speed is crucial for cars, including Mini 4WD, to run fast on the course. Additionally, to further advance Mini 4WD AI research, it is important to present design principles for vehicles that can achieve the fastest speed, grounded in engineering theory, calculation formulas, and numerical values. The purpose of this research is to clarify the theory of the driving performance of Mini 4WD based on automotive engineering theory, to design an AI system using the Profit Sharing method, an experience-based reinforcement learning algorithm, and to study the learning results. The results showed that changing the vehicle speed in stages produces a learning effect that maintains a high speed. Furthermore, from the perspective of maneuverability, it was shown that improving the ability to re-accelerate after decelerating on a slope is necessary to raise the average speed.
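The Profit Sharing learning used above can be sketched as a tabular agent that, at the end of an episode, distributes the reward backwards along the trajectory with a geometrically decaying credit. The class name, the roulette-style action selection, and the decay ratio 1/(M+1) (chosen to satisfy the rationality condition for M actions) are illustrative assumptions, not the paper's exact design:

```python
import random
from collections import defaultdict

# Minimal sketch of Profit Sharing credit assignment (tabular form).
class ProfitSharingAgent:
    def __init__(self, actions, decay=None):
        self.actions = actions
        # Geometric decay with ratio below 1/M (M = number of actions)
        # is the standard rationality condition for Profit Sharing.
        self.decay = decay if decay is not None else 1.0 / (len(actions) + 1)
        self.weights = defaultdict(float)  # (state, action) -> rule weight

    def select(self, state):
        # Roulette selection proportional to weight plus a small bias,
        # so unvisited actions keep a nonzero chance of being explored.
        ws = [self.weights[(state, a)] + 1e-3 for a in self.actions]
        r = random.uniform(0.0, sum(ws))
        acc = 0.0
        for a, w in zip(self.actions, ws):
            acc += w
            if r <= acc:
                return a
        return self.actions[-1]

    def reinforce(self, episode, reward):
        # Distribute the episode reward backwards along the trajectory.
        credit = reward
        for state, action in reversed(episode):
            self.weights[(state, action)] += credit
            credit *= self.decay
```

In a Mini 4WD setting, a state might encode a coarse speed level and course segment, and the actions might be staged speed commands, with reward given for completing a lap without derailing.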
A method has been proposed that combines policy gradient reinforcement learning with a fuzzy controller to create decision-making policies. The method directly learns the weights of fuzzy rules and the outputs of membership functions in a policy so as to maximize the expected reward per episode. Its main advantage is high flexibility of fuzzy representation: within the same learning framework, it can automatically adjust both the shape of each membership function and the rule weight, which represents the reliability of each control rule. A previous study applied the method to the task of controlling the speed of an automobile and obtained appropriate policies by learning the rule weights. However, the membership functions were not learned; they were designed in advance based on a priori human knowledge. In this paper, we therefore propose representing the membership function by a neural network and training its weight parameters by reinforcement learning within the combined framework. Additionally, we illustrate the learning process with an example of controlling the speed of an automobile. Computational experiments on automobile speed control confirmed that the proposed method is capable of learning appropriate membership functions. These results suggest that the proposed method can be applied generally to automatically acquire concepts expressed by fuzzy representations, such as ‘long/short’ and ‘fast/slow’.
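The idea of a neural membership function trained by policy gradient can be sketched with a single sigmoid unit as the learnable membership function for 'speed is fast', two fuzzy rules combined by rule weights into a stochastic policy, and one REINFORCE-style update. The task setup, parameter names, reward, and the numerical gradient are all illustrative assumptions for the sketch, not the paper's exact formulation:

```python
import math

def membership_fast(v, a, b):
    """Learnable membership degree of 'speed is fast' (one sigmoid unit)."""
    return 1.0 / (1.0 + math.exp(-(a * v + b)))

def policy_brake_prob(v, params):
    """Stochastic policy from two weighted fuzzy rules:
    IF speed is fast THEN brake; IF speed is slow THEN accelerate."""
    a, b, w_fast, w_slow = params
    mu = membership_fast(v, a, b)
    s_brake = w_fast * mu
    s_accel = w_slow * (1.0 - mu)
    return s_brake / (s_brake + s_accel)

def log_prob(v, action, params):
    p = policy_brake_prob(v, params)
    return math.log(p if action == "brake" else 1.0 - p)

def reinforce_step(v, action, reward, params, lr=0.1, eps=1e-5):
    """One REINFORCE update: params += lr * reward * grad log pi.
    The gradient is computed by central differences to keep the sketch short."""
    grads = []
    for i in range(len(params)):
        plus, minus = list(params), list(params)
        plus[i] += eps
        minus[i] -= eps
        grads.append((log_prob(v, action, plus) - log_prob(v, action, minus)) / (2 * eps))
    return [p + lr * reward * g for p, g in zip(params, grads)]
```

Because the membership function's parameters (a, b) receive the same policy-gradient signal as the rule weights (w_fast, w_slow), the shape of 'fast' is learned jointly with the rules, which is the point the abstract makes.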