In recent years, model-based development (MBD) has been widely adopted in industry as an efficient product development methodology built around models. Within MBD, model-based controller design, known as model-based control (MBC), has traditionally been applied in various fields. MBC requires that the characteristics of the model used in the design match those of the actual plant; however, the desired control performance may not be achieved if disturbances or model errors arise in the actual plant due to various factors. To address this issue, a hierarchical control structure incorporating a compensator that suppresses disturbances and model errors has been proposed. This paper proposes a PID-type compensator based on generalized minimum variance control (GMVC) and verifies its effectiveness through numerical simulations and experiments on a slider-crank system. In the experimental verification, we also discuss a design method for the adjustable parameters that tune the sensitivity of the response to disturbances and model errors in the proposed method.
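To illustrate the kind of design the abstract describes, the following is a minimal sketch of a GMVC-based I-PD compensator in velocity form. The plant model, design polynomial, and weight λ are illustrative assumptions, not values from the paper; λ is the adjustable parameter that trades off sensitivity to disturbances and model errors.

```python
# Minimal sketch of a GMVC-based PID (I-PD) design, assuming a plant
#   A(z^-1) y(k) = z^-1 B(z^-1) u(k),  A = 1 + a1 z^-1 + a2 z^-2,  B = b0.
a1, a2, b0 = -1.5, 0.6, 0.2   # hypothetical identified model
p1, p2 = -1.2, 0.36           # design polynomial P = 1 + p1 z^-1 + p2 z^-2
lam = 0.1                     # adjustable weight: larger -> less sensitive response

# Diophantine equation P = Delta*A*E + z^-1 F with E = 1 (unit delay):
f0 = p1 - a1 + 1.0
f1 = p2 - a2 + a1
f2 = a2

# GMVC law (E*B + lam) * Du(k) = F(1) w(k) - F(z^-1) y(k), mapped to I-PD gains:
den = b0 + lam
Kp = -(f1 + 2.0 * f2) / den
Ki = (f0 + f1 + f2) / den
Kd = f2 / den

def ipd_step(w, y, y1, y2, u_prev):
    """One step of the velocity-form I-PD law: Du = Ki*e - Kp*Dy - Kd*D2y."""
    du = Ki * (w - y) - Kp * (y - y1) - Kd * (y - 2.0 * y1 + y2)
    return u_prev + du
```

Note that increasing lam shrinks all three gains together, which is one way the response sensitivity could be tuned by a single parameter.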
In our previous study, we proposed a method for detecting communication anomalies in event-triggered control systems using an integrator with an auxiliary signal. However, this detection method sometimes takes a long time and degrades control performance. This paper therefore introduces a faster detection method using a neural network (NN). In this method, the NN predicts the time at which an event occurs (the triggering time), and a communication anomaly is detected based on the error between the actual and predicted triggering times. Furthermore, we demonstrate the effectiveness of the proposed method by comparing it with conventional detection methods through numerical simulations.
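A minimal sketch of the idea follows. The feature window length, network size, detection threshold, and the synthetic event log are all illustrative assumptions; the abstract only specifies that an NN predicts triggering times and that anomalies are flagged from the prediction error.

```python
# Sketch: predict the next inter-event interval from recent intervals
# and flag an anomaly when the prediction error exceeds a threshold.
import numpy as np
from sklearn.neural_network import MLPRegressor

WINDOW = 5    # assumed number of past inter-event intervals used as input
DELTA = 0.05  # assumed anomaly threshold on |actual - predicted| [s]

def make_dataset(event_times):
    """Sliding windows of past intervals -> next interval."""
    tau = np.diff(event_times)
    X = np.array([tau[i:i + WINDOW] for i in range(len(tau) - WINDOW)])
    y = tau[WINDOW:]
    return X, y

# Stand-in event log recorded under normal (anomaly-free) operation.
rng = np.random.default_rng(0)
normal_event_times = np.cumsum(0.1 + 0.02 * rng.random(200))

X, y = make_dataset(normal_event_times)
nn = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)

def check_event(recent_intervals, actual_interval):
    """Return True if the new inter-event interval looks anomalous."""
    predicted = nn.predict(np.array(recent_intervals[-WINDOW:])[None, :])[0]
    return abs(actual_interval - predicted) > DELTA
```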
In this paper, we propose a reinforcement-learning-based one-shot algorithm for designing a dynamic output-feedback controller for unknown, locally uniformly observable discrete-time nonlinear systems. First, we show that the problem of designing a dynamic output-feedback controller for a nonlinear system with unmeasurable internal states can be equivalently transformed into the problem of designing a static state-feedback controller for a nonlinear input-affine system whose state is defined as a finite-length input-output history. We then exploit three key properties of the transformed system: its state is measurable, it is input-affine, and its input gain function is known. Based on these properties, we propose a one-shot policy iteration algorithm that learns a dynamic output-feedback controller from input-output data obtained in experiments with the initial controller, without collecting new data at each iteration. A key feature of the algorithm is that it reduces the number of subsequent experiments, thereby reducing the overall experimental cost. Furthermore, we theoretically prove that the proposed method converges to the optimal control law under ideal conditions. The effectiveness of the proposed approach is validated through numerical simulations.
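The following is a minimal sketch of the two ingredients the abstract makes explicit: building the measurable history state from input-output logs, and evaluating a value function from a single fixed dataset. The horizon N, discount factor, stage cost, and stand-in logs are assumptions, and the sketch deliberately omits the off-policy correction and policy-improvement step that the paper's one-shot policy iteration performs using the known input gain.

```python
import numpy as np

N = 3         # assumed observability horizon (history length)
GAMMA = 0.95  # assumed discount factor
rng = np.random.default_rng(0)

# Stand-in input-output logs from one experiment with the initial
# controller (in practice these come from the real closed loop).
u_log = rng.normal(size=200)
y_log = rng.normal(size=200)

def history_state(k):
    """z(k) = [y(k),...,y(k-N+1), u(k-1),...,u(k-N)]: the measurable
    state of the transformed input-affine system."""
    return np.concatenate([y_log[k - N + 1:k + 1][::-1],
                           u_log[k - N:k][::-1]])

def quad_features(z):
    """Upper-triangular monomials z_i * z_j of a quadratic value function."""
    return np.outer(z, z)[np.triu_indices(len(z))]

def stage_cost(z, u):
    """Illustrative quadratic stage cost c(z, u) = z'z + r u^2."""
    return z @ z + 0.1 * u ** 2

ks = range(N, len(y_log) - 1)
Z     = np.array([history_state(k) for k in ks])
Znext = np.array([history_state(k + 1) for k in ks])
C     = np.array([stage_cost(history_state(k), u_log[k]) for k in ks])

# LSTD-style policy evaluation on the fixed dataset: solve
#   theta' (phi(z_k) - gamma * phi(z_{k+1})) = c_k   by least squares.
Phi  = np.array([quad_features(z) for z in Z])
Phi1 = np.array([quad_features(z) for z in Znext])
theta, *_ = np.linalg.lstsq(Phi - GAMMA * Phi1, C, rcond=None)
```

Because every iteration reuses the same stored transitions, no new closed-loop experiment is needed between iterations, which is the source of the experimental-cost saving the abstract highlights.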
We constructed a fishing-condition prediction model that uses water temperatures in the coastal areas of the Kii Channel. The constructed model achieved a high accuracy of 94.0% in predicting the fishing conditions of Japanese spineless cuttlefish in the Kii Channel.
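Since the abstract does not specify the model type, the following is a purely hypothetical baseline sketch: a logistic-regression classifier over a window of recent coastal temperatures, with synthetic stand-in features and labels (1 = good catch, 0 = poor).

```python
# Hypothetical temperature-based fishing-condition classifier (assumption:
# the real model type and features are not stated in the abstract).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(18.0, 2.0, size=(500, 7))  # stand-in: 7 days of coastal temps [deg C]
y = (X.mean(axis=1) > 18.0).astype(int)   # stand-in labels, not real catch data

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")
```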