In recent years, Japanese primary and secondary schools have conducted calligraphy classes, but such classes have been in decline owing to a shortage of teachers. Moreover, it is difficult for students to improve their skills because too little class time is allotted to learning calligraphy. A system that automatically evaluates each written character could help solve this problem, enabling students to improve their calligraphy on their own, even without a teacher. Many methods for learning calligraphy have been proposed, but most require a special device and are difficult to use. We therefore propose an easily usable offline alternative; in this study, 'offline' refers to easy digitisation and visualisation using a smartphone. Our system targets primary school students.
We propose an adaptive stepsize rule for multi-agent consensus optimization that overcomes the drawbacks of conventional diminishing stepsize rules. The proposed rule is based on the degree of agreement among agent-wise gradients: it achieves both a fast approach in the early stage of iteration and convergence in the final stage. Because the stepsize at each iteration is computed from part of the agents' global information, a supervisory control architecture is required. We prove that the sequence generated by the consensus optimization algorithm with the proposed supervisory stepsize rule converges to the optimal solution. Moreover, we show experimentally that the proposed rule outperforms both diminishing and constant stepsize rules.
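The abstract does not spell out the rule's exact form, so the following is only a rough sketch under our own assumptions: the "agreement degree" is taken as the ratio between the norm of the averaged gradient and the average gradient norm, and a supervisor scales an assumed base stepsize `alpha_max` by it. When agents' gradients align (early iterations) the stepsize stays large; as they start to cancel near the optimum it shrinks, mimicking a diminishing rule without a fixed schedule. The quadratic local objectives and all parameter names are illustrative, not from the paper.

```python
import numpy as np

def agreement_degree(grads):
    """Norm of the mean gradient divided by the mean gradient norm.
    Close to 1 when agents' gradients point the same way, and near 0
    once they start to cancel out around the optimum."""
    denom = np.linalg.norm(grads, axis=1).mean()
    return np.linalg.norm(grads.mean(axis=0)) / denom if denom > 0 else 0.0

# Local objectives f_i(x) = (x - c_i)^2 / 2; the optimum of sum_i f_i
# is the mean of c, i.e. 4.0 here.
c = np.array([1.0, 3.0, 5.0, 7.0])
n = len(c)
W = np.full((n, n), 1.0 / n)   # doubly stochastic mixing matrix (complete graph)
x = np.zeros(n)                # agents' local estimates
alpha_max = 0.5                # assumed base stepsize

for _ in range(200):
    grads = (x - c).reshape(n, 1)      # one gradient row per agent
    # Supervisory step: the agreement degree needs the gradients of all agents.
    alpha = alpha_max * agreement_degree(grads)
    x = W @ x - alpha * grads.ravel()  # consensus step + gradient step

print(x)  # every agent ends up near the optimum 4.0
```

In this toy run the stepsize stays at `alpha_max` while all local gradients point the same way, then decays automatically once the agents straddle the optimum and their gradients begin to disagree.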
In this paper, we propose a new method for automatically extracting buildings from scenery images. The proposed method generates the initial input data for GrabCut by analyzing color clusters with a variational Bayesian Gaussian mixture model (VBGMM), under the assumption that the building appears at the center of the image. To confirm the effectiveness of the proposed method, we conducted a comparison experiment with a conventional method using 106 images from the public ZuBuD dataset. The proposed method improved the extraction accuracy by more than 12% compared with the conventional method. On the other hand, its accuracy was more than 5% lower than that of GrabCut with manually determined initial input data. To close this gap, we consider that post-processing focusing on building boundaries is necessary.
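As a rough, dependency-free illustration of how color clusters plus a center prior can seed a GrabCut mask: the paper clusters with a VBGMM, but a small k-means stands in here to avoid extra dependencies, and the mask labels follow OpenCV's `grabCut` convention (0 = sure background, 2 = probable background, 3 = probable foreground). The window size `centre_frac` and all helper names are assumptions for illustration only.

```python
import numpy as np

def kmeans(pixels, k=3, iters=10):
    """Tiny k-means on RGB pixels (a stand-in for the paper's VBGMM)."""
    # Deterministic init: spread initial centers along the brightness order.
    order = np.argsort(pixels.sum(axis=1))
    centers = pixels[order[np.linspace(0, len(pixels) - 1, k, dtype=int)]]
    for _ in range(iters):
        labels = np.argmin(((pixels[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

def initial_grabcut_mask(img, k=3, centre_frac=0.4):
    """Build an initial GrabCut mask assuming the building is at the center."""
    h, w, _ = img.shape
    labels = kmeans(img.reshape(-1, 3).astype(float), k).reshape(h, w)
    # Clusters over-represented in the central window -> probable foreground.
    ch, cw = int(h * centre_frac), int(w * centre_frac)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    centre = labels[y0:y0 + ch, x0:x0 + cw]
    fg_clusters = [j for j in range(k)
                   if (centre == j).mean() > (labels == j).mean()]
    mask = np.where(np.isin(labels, fg_clusters), 3, 2).astype(np.uint8)
    mask[[0, -1], :] = 0   # the image border is assumed sure background
    mask[:, [0, -1]] = 0
    return mask

# Toy image: a bright "building" block at the center of a dark background.
img = np.zeros((40, 40, 3), np.uint8)
img[10:30, 10:30] = 200
mask = initial_grabcut_mask(img)
```

The resulting mask would then seed `cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)` in place of a manually drawn rectangle.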