NIHON GAZO GAKKAISHI (Journal of the Imaging Society of Japan)
Online ISSN : 1880-4675
Print ISSN : 1344-4425
ISSN-L : 1344-4425
Volume 62, Issue 2
Regular Paper
  • Nobuyuki NAKAYAMA, Takeshi NAGAO, Shuhei KOBAYAKAWA, Toshihiko MITSUHA ...
    2023 Volume 62 Issue 2 Pages 100-107
    Published: April 10, 2023
    Released on J-STAGE: April 10, 2023
    JOURNAL FREE ACCESS

    Transfer density variation caused by the surface unevenness of rough papers such as embossed paper has been a persistent issue, especially in production print machines. We investigated a method for predicting this transfer density variation. To reproduce the dependence of transferability on the surface unevenness of the paper and on the second-transfer pressure, we developed prediction models and tools consisting of a compression deformation analysis of embossed paper and a transfer force analysis based on one-dimensional electric field analysis. To predict transfer efficiency, the number of transferred toner particles was counted using the estimated transfer force. The nonlinear deformation characteristics of embossed paper were clarified, and the improvement in transferability brought about by compression deformation was demonstrated.
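
    A minimal sketch of the two-stage idea, not the authors' tool: an assumed nonlinear compression law shrinks the air gap left by the emboss recesses under second-transfer pressure, a one-dimensional series-capacitor field model converts the remaining gap into an electrostatic force per toner particle, and transfer efficiency is the fraction of particles whose force exceeds adhesion. All constants are illustrative assumptions.

      import numpy as np

      Q_TONER = 3.0e-15   # charge per toner particle [C] (assumed)
      F_ADH = 1.1e-7      # toner-to-belt adhesion force [N] (assumed)

      def compressed_gap(gap0_um, pressure_mpa, stiffness=0.8):
          # Assumed exponential-softening compression of the emboss recesses.
          return gap0_um * np.exp(-stiffness * pressure_mpa)

      def transfer_force(v_bias, gap_um, paper_um=100.0, eps_paper=3.0):
          # 1-D series-capacitor model of a belt | air gap | paper stack:
          # field in the air gap is E = V / (d_air + d_paper / eps_paper).
          e_air = v_bias / (gap_um * 1e-6 + paper_um * 1e-6 / eps_paper)
          return Q_TONER * e_air          # Coulomb force on one particle [N]

      gaps0 = np.random.uniform(5.0, 60.0, 10_000)    # recess depths [um]
      gaps = compressed_gap(gaps0, pressure_mpa=2.0)  # effect of nip pressure
      forces = transfer_force(v_bias=1500.0, gap_um=gaps)
      print(f"predicted transfer efficiency: {np.mean(forces > F_ADH):.1%}")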

  • Takeshi NAGAO, Wataru SUZUKI, Masao OHMORI, Nobuyuki NAKAYAMA
    2023 Volume 62 Issue 2 Pages 108-113
    Published: April 10, 2023
    Released on J-STAGE: April 10, 2023
    JOURNAL FREE ACCESS

    A discharge model that takes the modes of air-gap discharge into account is newly developed. Electrostatic simulation based on Paschen's law has conventionally been applied to analyze discharge performance in the electrophotographic charging and transfer processes. However, it has been reported that air-gap discharge has multiple modes; in particular, a roller-plate system exhibits both a large discharge mode and a minute discharge mode. Through measurements focused on these discharge modes, a new law governing the large discharge is found in addition to Paschen's law. The new law is introduced into the simulation model, and the charging process is simulated numerically with it. The results replicate the spatial distribution of the large and minute discharges upstream of the charging roller nip.
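
    A minimal sketch of the baseline Paschen criterion that such models start from (the newly found law for the large discharge mode is not reproduced here). It uses the linear approximation of the Paschen curve for air common in electrophotography, Vb ≈ 312 + 6.2 d volts for a gap of d micrometers, with an assumed roller bias and photoconductor layer.

      import numpy as np

      def paschen_threshold(d_um):
          # Approximate air breakdown voltage for gaps above ~8 um.
          return 312.0 + 6.2 * d_um

      def air_gap_voltage(v_bias, d_um, layer_um=25.0, eps_r=3.0):
          # 1-D series-capacitor estimate of the voltage across the air
          # gap between a biased roller and a grounded photoconductor.
          return v_bias * d_um / (d_um + layer_um / eps_r)

      v_bias = -900.0                         # charging bias [V] (assumed)
      for d in np.arange(10.0, 101.0, 10.0):  # gaps approaching the nip [um]
          v_gap = abs(air_gap_voltage(v_bias, d))
          print(f"gap {d:5.1f} um: |V_gap| = {v_gap:6.1f} V, "
                f"discharge: {v_gap > paschen_threshold(d)}")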

Imaging Today
  • Keisuke OZAWA
    2023 Volume 62 Issue 2 Pages 115-120
    Published: April 10, 2023
    Released on J-STAGE: April 10, 2023
    JOURNAL FREE ACCESS

    Branding and embossing were originally the mainstream techniques for expressing letters, logos, and patterns on the surface of food; most were simple in color and shape and suited to mass production. In recent years the needs of the food industry have diversified, and particularly in the domestic confectionery industry, edible printing, in which various patterns and characters are printed on food to attract the attention of consumers, is becoming mainstream. Food printers equipped with edible ink play an important role in creating products tailored to individual needs in today's increasingly digital world. Our company has been developing and selling food printers for the edible printing market since 2005. In this article, we introduce the food printers we have developed, under the title “Food Printers that Create Added Value”.

  • Parinya PUNPONGSANON, Yamato MIYATAKE, Daisuke IWAI, Kosuke SATO
    2023 Volume 62 Issue 2 Pages 121-127
    Published: April 10, 2023
    Released on J-STAGE: April 10, 2023
    JOURNAL FREE ACCESS

    We present a method that utilizes the infill parameter of the 3D printing process to embed information inside food in a form that is difficult to recognize with the human eye. Our key idea is to use air spaces to generate a specific pattern inside the food without changing the model geometry. The method exploits these patterns as hidden edible tags that store data and can be added to an existing 3D printing pipeline. Our contribution also includes a framework that connects the user to a data-embedding interface through the food 3D printing process, and a decoding system that lets the user read the information inside the 3D-printed food using backlight illumination and a simple image processing technique. We demonstrate the method through example application scenarios.
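
    A minimal, illustrative sketch of the embed/decode idea, not the authors' pipeline: bits are mapped to a grid of per-cell infill densities, so a 1-bit becomes an air pocket that appears bright under backlight, and the decoder thresholds a backlit image to recover the bits. The grid size, densities, and threshold are assumptions.

      import numpy as np

      def embed(bits, grid=(4, 4), dense=0.9, sparse=0.1):
          # Per-cell infill density: sparse cells (air pockets) encode 1-bits.
          code = np.array(bits, dtype=float).reshape(grid)
          return np.where(code > 0, sparse, dense)  # density map for the slicer

      def decode(backlit_image, grid=(4, 4)):
          # Average brightness per cell; air pockets transmit more light.
          h, w = backlit_image.shape
          gh, gw = grid
          cells = backlit_image.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
          return (cells > cells.mean()).astype(int).ravel().tolist()

      bits = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1]
      density_map = embed(bits)
      # Simulate a backlit photo: sparse (air-filled) cells transmit more light.
      simulated = (1.0 - density_map).repeat(16, axis=0).repeat(16, axis=1)
      simulated += np.random.normal(0, 0.02, simulated.shape)  # camera noise
      assert decode(simulated) == bits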

  • Kei NAKAMOTO, Kohei KUMAZAWA, Sosuke AMANO, Hiroaki KARASAWA, Yoko YAM ...
    2023 Volume 62 Issue 2 Pages 128-138
    Published: April 10, 2023
    Released on J-STAGE: April 10, 2023
    JOURNAL FREE ACCESS

    We attempted to read changes in mental state from food images using image recognition. We created a dataset of 19,012 food records, 7,834 food images, and mental-state measurements from the same period. Experiments using nutritional values estimated from these images showed a significant correlation between nutritional values and changes in mental state within a subgroup of people sharing the same life background, suggesting that nutritional values inferred from food images can be used to predict mental states. In addition, on a dataset of people living alone, we showed that changes in stress can be predicted from features extracted directly from the food images substantially better than the baseline.
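
    A minimal sketch of this kind of analysis on synthetic stand-in data (the data layout and effect size are assumptions, not the authors' dataset): per-day nutrient estimates from food images are correlated with same-period mental-state scores.

      import numpy as np
      from scipy.stats import pearsonr

      rng = np.random.default_rng(0)
      days = 120
      protein_g = rng.normal(60, 15, days)   # nutrient estimated from images
      stress = 50 - 0.3 * protein_g + rng.normal(0, 8, days)  # synthetic scores

      r, p = pearsonr(protein_g, stress)     # significance of the association
      print(f"Pearson r = {r:.2f} (p = {p:.3g})")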

  • Yuma HONBU, Keiji YANAI
    2023 Volume 62 Issue 2 Pages 139-145
    Published: April 10, 2023
    Released on J-STAGE: April 10, 2023
    JOURNAL FREE ACCESS

    Health management applications have become popular and health awareness has increased in recent years. With this trend, identifying the food regions in food images has become an important step in calorie estimation. CNNs (convolutional neural networks) have greatly improved the performance of semantic segmentation tasks. However, the pixel-level annotation required to create segmentation training data is costly, and the countless number of food categories makes it difficult to collect sufficient data.

    To address these problems, we propose Unseen Food Segmentation (USFoodSeg), which consists of models pre-trained on a large amount of food data and can segment any kind of food given only its category text. Experiments showed that it achieves 90% accuracy on unseen food classes. In addition, we focus on the pre-trained knowledge contained in Stable Diffusion and propose StableSeg, which enables zero-shot segmentation of any class without additional training data; experiments showed that it reduces training cost and is particularly robust across food categories.
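
    A minimal sketch of the text-driven, zero-shot principle such systems build on, using the publicly available CLIP model (an illustrative assumption; USFoodSeg and StableSeg are the authors' own models): category texts are scored against an image, and applying the same image-text similarity per region yields a rough, annotation-free mask.

      import torch
      from PIL import Image
      from transformers import CLIPModel, CLIPProcessor

      model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
      processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

      image = Image.open("meal.jpg")          # assumed input photo
      texts = [f"a photo of {c}" for c in ("ramen", "sushi", "tempura")]

      inputs = processor(text=texts, images=image,
                         return_tensors="pt", padding=True)
      with torch.no_grad():
          # Image-text similarity over the candidate category texts.
          probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
      for c, p in zip(texts, probs):
          print(f"{c}: {p:.2%}")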

  • Yuanyuan WANG, Yukiko KAWAI, Kazutoshi SUMIYA
    2023 Volume 62 Issue 2 Pages 146-158
    Published: April 10, 2023
    Released on J-STAGE: April 10, 2023
    JOURNAL FREE ACCESS

    In recent years, many cooking-recipe sharing services have become popular, allowing users to post their recipes together with images and videos of their cooking. Users can search this vast amount of recipe data by keywords such as dish names. Beyond searching by dish name, search needs such as “quick dishes,” “elaborate dishes,” “staple dishes,” and “surprising dishes” are increasing, and studies on cooking-recipe data analysis are actively conducted both in Japan and overseas. Furthermore, as mobile devices become more prevalent, there is a rising desire, in Japan and abroad, to explore not only the text of recipe data but also short cooking videos. In this article, the authors present techniques for analyzing the unique characteristics of cooking-recipe data and discuss effective recipe recommendation methods, along with open issues and future directions.

  • Atsushi OKAMOTO, Katsufumi INOUE, Michifumi YOSHIOKA
    2023 Volume 62 Issue 2 Pages 159-164
    Published: April 10, 2023
    Released on J-STAGE: April 10, 2023
    JOURNAL FREE ACCESS

    Recently, systems that automatically create instruction manuals from videos have attracted attention as a way to support beginners. An automatic recipe creation system is one example; it is built by recognizing cooking activities and the objects related to them. To realize a more useful system, the activities must be recognized at levels from coarse to fine-grained, for example from “cut” down to “slicing,” “cutting into small pieces,” and so on. However, recognizing such fine-grained cooking activities is very challenging because the same utensils, such as kitchen knives, and similar hand motions are shared among the activities. To solve this problem, we focus on the sequential transformation of ingredients. Using this information, we introduce a new GAN (generative adversarial network)-based model to recognize fine-grained cooking activities and investigate its effectiveness by comparing it with a spatio-temporal network model.
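
    A rough PyTorch sketch of how ingredient-state transitions can be used in a GAN-style model (the architecture and feature sizes are assumptions, not the authors' network): a generator predicts the next ingredient-state feature for a given activity, while a discriminator both judges whether a (state, next state) pair is real and classifies the fine-grained activity from it.

      import torch
      import torch.nn as nn

      FEAT, N_ACT = 256, 10   # state-feature size and #activities (assumed)

      class Generator(nn.Module):
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(FEAT + N_ACT, 512), nn.ReLU(), nn.Linear(512, FEAT))

          def forward(self, state, act_onehot):
              # Predict the ingredient's next state under a given activity.
              return self.net(torch.cat([state, act_onehot], dim=1))

      class Discriminator(nn.Module):
          def __init__(self):
              super().__init__()
              self.body = nn.Sequential(nn.Linear(2 * FEAT, 512), nn.ReLU())
              self.adv = nn.Linear(512, 1)      # real vs. generated transition
              self.cls = nn.Linear(512, N_ACT)  # auxiliary fine-grained label

          def forward(self, state, next_state):
              h = self.body(torch.cat([state, next_state], dim=1))
              return self.adv(h), self.cls(h)

      G, D = Generator(), Discriminator()
      state = torch.randn(8, FEAT)   # e.g. CNN features of "before" frames
      act = nn.functional.one_hot(torch.randint(0, N_ACT, (8,)), N_ACT).float()
      fake_next = G(state, act)
      adv_logit, act_logit = D(state, fake_next)  # train with GAN + CE losses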

Imaging Highlight
Lectures in Science