Journal of Information Processing
Online ISSN : 1882-6652
ISSN-L : 1882-6652
Current issue
Displaying 1-50 of 62 articles from this issue
 
  • Takefumi Ogawa
    Article type: Special Issue of Collaboration technologies and network services to support human well-being and prosperous lives
2025 Volume 33 Pages 1
    Published: 2025
    Released on J-STAGE: January 15, 2025
    JOURNAL FREE ACCESS
    Download PDF (32K)
  • Akihiro Miyata
    Article type: Special Issue of Collaboration technologies and network services to support human well-being and prosperous lives
    Subject area: Medicine and Welfare
2025 Volume 33 Pages 2-8
    Published: 2025
    Released on J-STAGE: January 15, 2025
    JOURNAL FREE ACCESS

    This study compared an official accessibility map and a crowdsourced accessibility map for a university campus to identify differences between the two types of maps. By conducting a comprehensive analysis of the responses from the official map creator and accessibility problems reported by 35 crowd workers, we provided the following four suggestions: S1, it is desirable to avoid omitting the creation of official maps and to allow both types of maps to coexist; S2, it would be desirable for various stakeholders to participate as crowd workers in reporting accessibility problems; S3, problems along the routes leading to building entrances could be supplemented through crowdsourcing; and S4, dynamically changing accessibility problems could be captured through crowdsourcing. These suggestions based on detailed comparisons contribute to the improvement of both official and crowdsourced accessibility mapping.

    Download PDF (21042K)
  • Tengfei Shao, Yuya Ieiri, Shingo Takahashi
    Article type: Special Issue of Collaboration technologies and network services to support human well-being and prosperous lives
    Subject area: Information Systems for Society and Humans
2025 Volume 33 Pages 9-20
    Published: 2025
    Released on J-STAGE: January 15, 2025
    JOURNAL FREE ACCESS

    This study introduces a groundbreaking Motif and Time-Based Analysis Model to unravel the intricate dynamics within the e-commerce second-hand luxury goods market. By meticulously analyzing transactional data through the lens of network motifs and temporal patterns, our model unveils distinct consumer behaviors and market trends that traditional analyses often overlook. We focus on the evolving e-commerce model's impact on luxury goods transactions, highlighting the pivotal role of Return on Investment as an essential metric for assessing market efficacy. Utilizing e-commerce data collected in collaboration with leading companies, we identify statistically significant network motifs that reflect complex interaction patterns between consumers and goods. Our novel algorithm efficiently mines these motifs despite multiple constraints, offering new insights into transactional networks. Through rigorous statistical validation, our findings demonstrate the model's effectiveness in capturing the market's multifaceted nature. The study not only contributes to our understanding of the second-hand luxury goods market's dynamics but also provides actionable strategies for businesses aiming to enhance consumer experiences and market trend forecasting.

    Download PDF (8493K)
  • Tham Yik Foong, Danilo Vasconcellos Vargas
    Article type: Regular Paper
    Subject area: Information Mathematics
2025 Volume 33 Pages 21-30
    Published: 2025
    Released on J-STAGE: January 15, 2025
    JOURNAL FREE ACCESS

Self-oscillation is an emergent behavior naturally occurring in biological neural circuits, facilitating the coordination of complex locomotion and cognitive functions. Recently, numerous discrete dynamical system models, termed Self-Oscillatory Networks (SONs), have been proposed to model the functional behavior of such neural circuits. In brief, SONs are recurrent neural networks that generate spontaneous, self-sustaining rhythmic patterns without any input. However, the internal dynamics of SONs, especially in systems of high dimensionality, remain unexplored due to their complexity. This paper analyzes the robust nonlinear dynamics that arise within SONs. Through numerical analyses, we examine the influence of the spectral radius on the emergence of dynamic attractors, particularly limit cycles. We then identify the critical value of the spectral radius that induces a supercritical Hopf bifurcation in SONs. We also perform stability analysis using Lyapunov exponents and phase shifts to demonstrate that SONs exhibit robust behavior against perturbations. We therefore conclude that SONs contain cyclic attractors that maintain stable limit cycles, even under perturbations.
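The role the spectral radius plays in this abstract can be illustrated with a minimal sketch (an illustration of the general phenomenon, not the authors' model): a random recurrent network iterated as x_{t+1} = tanh(W x_t) with no external input. Rescaling W to a spectral radius below or above the critical value determines whether activity decays or self-sustains. The function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def rescale_to_spectral_radius(W, rho):
    """Rescale a weight matrix so its spectral radius equals rho."""
    return W * (rho / max(abs(np.linalg.eigvals(W))))

def final_activity(rho, n=50, steps=500, seed=0):
    """Iterate x_{t+1} = tanh(W x_t) with no external input and
    return the norm of the final state."""
    rng = np.random.default_rng(seed)
    W = rescale_to_spectral_radius(rng.standard_normal((n, n)), rho)
    x = 0.01 * rng.standard_normal(n)   # small initial perturbation
    for _ in range(steps):
        x = np.tanh(W @ x)
    return float(np.linalg.norm(x))

# Below the critical spectral radius, activity decays toward the origin;
# above it, self-sustained activity (fixed points or cycles) emerges.
print(final_activity(0.8))   # near zero
print(final_activity(1.5))   # clearly nonzero
```

Locating the exact bifurcation point, as the paper does, would require sweeping rho and inspecting the attractor, but the qualitative transition is already visible in this toy setup.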

    Download PDF (1670K)
  • Toshiki Onishi, Asahi Ogushi, Shunichi Kinoshita, Ryo Ishii, Atsushi F ...
    Article type: Regular Paper
    Subject area: Group Interaction Support and Groupware
2025 Volume 33 Pages 31-39
    Published: 2025
    Released on J-STAGE: January 15, 2025
    JOURNAL FREE ACCESS

Opportunities to remotely communicate have been increasing since the start of the COVID-19 pandemic. Praising behavior is considered an important element of daily life and social activities. However, many people are uncertain about the best way to praise a partner. Such individuals may have difficulty understanding how to behave in order to improve their own praising skills. To solve this problem, we aim to develop a system that automatically evaluates whether a person is praising the other person in a remote dialogue, and reviews the utterances in which the person is praising a partner. As a first step toward achieving this goal, we attempted to detect praising behaviors from speakers' multimodal information in remote dialogues. Specifically, we constructed machine learning models for detecting praising behaviors using a dialogue corpus that contains remote dialogue data and the results of judgments about praising behaviors. As a result, we showed that praising behaviors are detectable based on multimodal information in remote dialogues. Furthermore, we found that the highest detection performance was achieved with the praiser's linguistic information and the receiver's linguistic information.

    Download PDF (12863K)
  • Atsushi Tagami
    Article type: Special Issue of Network Services and Distributed Processing
2025 Volume 33 Pages 40
    Published: 2025
    Released on J-STAGE: February 15, 2025
    JOURNAL FREE ACCESS
    Download PDF (32K)
  • Ryusei Shiiba, Satoru Kobayashi, Osamu Akashi, Hiroki Shirokura, Kensu ...
    Article type: Special Issue of Network Services and Distributed Processing
    Subject area: Distributed Systems Operation and Management
2025 Volume 33 Pages 41-54
    Published: 2025
    Released on J-STAGE: February 15, 2025
    JOURNAL FREE ACCESS

In current large-scale networks (e.g., datacenter networks), packet forwarding is dynamically customized beyond traditional shortest-path routing to meet various application demands. Such forwarding behavior is tremendously complex to manage and sometimes causes serious network failures. We present Graft, a new realtime data plane verification framework to verify complex forwarding behavior on large-scale networks. For scalable realtime verification, we first propose an optimized algorithm to efficiently compute and manage large packet header spaces and their forwarding paths. Second, we propose a data plane model and algorithms with formal network semantics to precisely model the customized forwarding behavior. We validate its effectiveness using synthetic and production datacenter networks. To the best of our knowledge, we are the first to verify customized forwarding behavior in production large-scale networks. For scalability, we show that Graft is 100x faster than prior work on the synthetic networks and 20000x faster on the production network. For expressiveness, we demonstrate that Graft is expressive enough to model the customized forwarding behavior by verifying the correctness of SRv6-based SFCs in the production network. Finally, we demonstrate that Graft verifies a real failure of a distributed NAT system in the production network.

    Download PDF (865K)
  • Yuta Shimamoto, Ryota Yoshimoto, Mitsuaki Akiyama, Toshihiro Yamauchi
    Article type: Special Issue of Network Services and Distributed Processing
    Subject area: Contingency Management/Risk Management
2025 Volume 33 Pages 55-65
    Published: 2025
    Released on J-STAGE: February 15, 2025
    JOURNAL FREE ACCESS

The mainstream of existing security investigations for Internet of Things (IoT) devices is vulnerability testing based on the availability of security features and version numbers for all programs extracted from the firmware. However, it is known that IoT devices contain many programs in the file system that are never executed. Therefore, if many programs on IoT devices are not executed, a method that examines all programs may not allow for an accurate investigation of the programs that are actually used. This paper proposes an analysis method that combines static analysis and emulation to identify programs that are automatically executed in the startup process of IoT devices, enabling an efficient and accurate security investigation. This allows us to prioritize, for security feature investigation, the programs that are executed when IoT devices start up. Our evaluation confirmed that the proposed method identifies automatically executed programs in the startup process with high accuracy. In addition, we investigated the firmware of 201 IoT devices that use OpenWrt and found that prioritizing inspection targets greatly improves the efficiency of the investigation and has a significant impact on accurate security investigations.

    Download PDF (467K)
  • Jun Munemori
    Article type: Special Issue of Network Services and Distributed Processing
    Subject area: Intellectual Creative Tasks
2025 Volume 33 Pages 66-77
    Published: 2025
    Released on J-STAGE: February 15, 2025
    JOURNAL FREE ACCESS

The field of data science education is expanding, paralleled by a growing interest in decision science and a corresponding demand for education in this area. Decision science, grounded in data collection and idea generation, entails utilizing computer-supported technology for scientific decision-making. Various methods, such as brainstorming, have been explored extensively to foster creative thinking for decision-making. Brainstorming, a common ideation technique, typically involves groups of around 10 individuals. However, limitations in computer capacity and network speed often restricted participation in system-supported ideation methods to only two or three people. Moreover, users were often unfamiliar with screen sharing. Recently, with the widespread use of platforms like Zoom, screen sharing has become more familiar to the general public, enabling even larger groups of up to 10 people to hold virtual meetings regularly. In this study, the KJ method, a form of brainstorming, was applied to multiple groups of individuals (with 10 participants per group located remotely) using information and communication technology combined with GUNGEN-Web II, a distributed and collaborative support system for the KJ method, and Zoom. The study analyzed several metrics, including the number of idea labels, number of results from group organization, number of graphical symbols indicating relationships between results from group organization, number of characters in the sentences, and evaluation of sentences based on synthetic satisfaction using the analytical hierarchy process. The results indicated that sentences resulting from the ideation method received higher ratings when 10 people were involved. This increase in rating can be attributed to the enhanced fluency (i.e., the number of ideas generated) and flexibility (i.e., diverse viewpoints) resulting from the larger group size, leading to more thoughtful evaluations.
Additionally, it is hypothesized that as the number of graphical symbols increases, the overall structure and comprehensibility of the diagram improve, allowing for a more holistic understanding of the content. This, in turn, may result in higher synthetic satisfaction levels as the diagram becomes more structured, facilitating the addition of new ideas in concluding sentences.

    Download PDF (2059K)
  • Kenji Hisazumi
    Article type: Special Issue of Embedded Systems Engineering
2025 Volume 33 Pages 78
    Published: 2025
    Released on J-STAGE: February 15, 2025
    JOURNAL FREE ACCESS
    Download PDF (30K)
  • Yutaro Nozaki, Ryo Okamura, Ryotaro Koike, Takuya Azumi
    Article type: Special Issue of Embedded Systems Engineering
    Subject area: Computing System Technology
2025 Volume 33 Pages 79-90
    Published: 2025
    Released on J-STAGE: February 15, 2025
    JOURNAL FREE ACCESS

High-performance embedded systems, such as autonomous driving systems, require platforms that reduce power consumption and perform high-performance processing. Multi-/many-core processors are attracting attention as they satisfy both requirements. This paper focuses on clustered many-core processors represented by Kalray MPPA. Clustered many-core processors regard multiple cores as a single cluster, with each cluster having private memory. By using separate memory for each core, communication conflicts between tasks can be reduced, and the worst-case response time can be improved. This paper describes how to apply federated scheduling, which allocates dedicated cores to high-load tasks, to clustered many-core processors. In a clustered many-core processor, each core uses separate memory, thereby eliminating memory contention between tasks. However, if a task communicates between two different clusters, the communication time increases. Studies on clustered many-core processors often assume that the local memory capacity is sufficient for all tasks. However, local memory is small, so this paper also discusses the use of shared DDR memory. When cores are allocated across multiple clusters, the associated inter-cluster communication increases the execution time, which may change the number of cores required. Therefore, in such cases we introduce a mechanism to recalculate the required number of cores, including the communication time. We also propose a mechanism to reduce the number of cores required by analyzing the tasks accessing DDR when DDR is used in cases of insufficient local memory capacity. The evaluation results indicate that clustered many-core processors greatly reduce the number of cores required for tasks with a high communication burden. Tasks that require inter-cluster communication differ depending on the order in which the dedicated cores of the tasks are allocated. In a clustered many-core processor, scheduling therefore depends on the method for allocating a dedicated core in each cluster. In addition, by utilizing DDR memory, it is possible to perform scheduling even if a task exceeds the local memory capacity.

    Download PDF (771K)
  • Shota Yamanaka, Homei Miyashita
    Article type: Special Issue of Embedded Systems Engineering
    Subject area: Human-Interface Basics
2025 Volume 33 Pages 91-103
    Published: 2025
    Released on J-STAGE: February 15, 2025
    JOURNAL FREE ACCESS

We propose spatial-error models for pen-stroking motions to predict variable error VE and total variability TV. These models predict stroke linearity and closeness under different movement distances and times, which are crucial for (e.g.) determining timeout and variability thresholds in gesture recognition systems. Results from our line-tracing task with temporal constraints demonstrated a clear speed-accuracy tradeoff, i.e., spatial errors worsened with longer distances and shorter times. The best-fit model, which employs an exponential weight for time, yielded adjusted R2 values of 0.9278 for VE and 0.9339 for TV, outperforming the existing linear error-speed model.

    Download PDF (3008K)
  • Yixin Zhang, Yoko Yamakata, Keishi Tajima
    Article type: Regular Section
    Subject area: Special Section on Databases
2025 Volume 33 Pages 104-114
    Published: 2025
    Released on J-STAGE: February 15, 2025
    JOURNAL FREE ACCESS

Recognizing ingredients in cooking images is a challenging task due to the significant visual changes that ingredients undergo throughout the cooking process. As ingredients are prepared, cooked, and served, their appearances vary greatly between the beginning, intermediate, and finishing stages. Traditional object recognition methods, which assume constant object appearances, struggle with this variability and often fail to accurately identify ingredients at different cooking stages. To address this challenge, we propose a stage-aware recognition method specifically designed for dynamically changing ingredients in cooking images. Our approach introduces two techniques: 1. Stage-Wise Model Learning: This technique involves training separate models for each stage of the cooking process. By adapting models to specific stages, we can better capture the distinct visual characteristics of ingredients as their appearances change. 2. Stage-Aware Curriculum Learning: This technique begins training with data from the beginning cooking stages and progressively incorporates data from later stages. This gradual approach helps the model adapt to the evolving appearances of ingredients. Our experimental results, using our published dataset, demonstrate that our stage-aware methods significantly outperform models trained without stage considerations, achieving higher accuracy in ingredient recognition.
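The stage-aware curriculum schedule described in this abstract (start with beginning-stage data, progressively add later stages) can be sketched as a simple data-scheduling loop. The dataset names below are hypothetical, and this is a simplified illustration of the schedule, not the authors' training code.

```python
def stage_aware_curriculum(stage_datasets):
    """Yield growing training pools: start with beginning-stage data and
    progressively incorporate data from later cooking stages."""
    pool = []
    for stage_data in stage_datasets:   # ordered: beginning -> finishing
        pool = pool + stage_data        # add the next stage's samples
        yield list(pool)

# Hypothetical stage-ordered image labels for one ingredient.
stages = [["raw_carrot"], ["chopped_carrot"], ["glazed_carrot"]]
pools = list(stage_aware_curriculum(stages))
# pools[0] holds only beginning-stage data; pools[-1] holds all stages.
```

In a real pipeline, each yielded pool would be used for a phase of model training before the next stage's data is mixed in.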

    Download PDF (3115K)
  • Tianjia Ni, Kento Sugiura, Yoshiharu Ishikawa, Kejing Lu
    Article type: Regular Section
    Subject area: Special Section on Databases
2025 Volume 33 Pages 115-127
    Published: 2025
    Released on J-STAGE: February 15, 2025
    JOURNAL FREE ACCESS

Approximate query processing (AQP) has gained traction as an effective technique for executing queries on big data. Bounded approximate query processing (BAQ) is a recently proposed framework that stores a summary of an original table as a synopsis and ensures that its approximation errors remain below a user-specified threshold. We previously extended BAQ to BAQ± to guarantee strictly bounded errors for more diverse data. However, BAQ and BAQ± still have problems when constructing synopses. They require time-consuming data sorting for each numerical attribute and cannot summarize high-cardinality categorical attributes, such as spatiotemporal data. To overcome these problems, we propose in this paper a novel framework called Hierarchical BAQ (HBAQ) together with a synopsis construction method. HBAQ constructs multiple synopses based on the dimension tables of several categorical attributes and uses them to answer OLAP queries efficiently. We also introduce a new bucket definition to summarize numerical attributes effectively and support incremental updates for synopses. We conducted extensive experiments with several datasets. The experimental results show that HBAQ achieved half the construction time of BAQ with lower memory consumption. Furthermore, HBAQ could answer OLAP queries more efficiently than BAQ using hierarchically constructed synopses.
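The bounded-error synopsis idea underlying BAQ-style frameworks can be illustrated with a minimal sketch: greedily group sorted values into buckets whose single representative approximates every member within a user-specified relative error threshold. This is an illustrative simplification under assumed positive values, not the paper's actual bucket definition (HBAQ's buckets and hierarchy are more elaborate).

```python
def build_synopsis(values, eps):
    """Greedily group sorted, positive values into buckets whose single
    representative (the bucket minimum) approximates every member with
    relative error at most eps. Returns (representative, count) pairs."""
    buckets = []
    values = sorted(values)
    i = 0
    while i < len(values):
        rep = values[i]
        j = i
        # extend the bucket while the error bound still holds
        while j < len(values) and (values[j] - rep) / values[j] <= eps:
            j += 1
        buckets.append((rep, j - i))
        i = j
    return buckets

# Every value in a bucket is within 10% relative error of its representative.
synopsis = build_synopsis([1, 1.05, 1.2, 5, 5.4, 10], eps=0.1)
```

Aggregates answered from `(representative, count)` pairs then inherit the per-value error bound, which is the guarantee that distinguishes bounded AQP from sampling-based AQP.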

    Download PDF (1212K)
  • Shidao Zhao, Tomochika Ozaki
    Article type: Regular Section
    Subject area: Special Section on Consumer Device & System
2025 Volume 33 Pages 128-138
    Published: 2025
    Released on J-STAGE: February 15, 2025
    JOURNAL FREE ACCESS

This paper proposes a control method called the virtual touchpad for AR glasses that mirror the screen of a PC. Using our method, a user can control the mouse cursor with gestures similar to those used on a physical touchpad. Unlike with a physical touchpad, it is hard for users to recognize the hand's position and operation status when using a virtual touchpad. To solve these problems, we introduce a flexible position mechanism and visual feedback. From the results of a volunteer study, we confirm that our virtual touchpad is an effective method for controlling AR glasses.

    Download PDF (3014K)
  • Wataru Matsuda, Mariko Fujimoto, Takuho Mitsunaga, Kenji Watanabe
    Article type: Regular Section
    Subject area: Special Section on digital practices
2025 Volume 33 Pages 139-155
    Published: 2025
    Released on J-STAGE: February 15, 2025
    JOURNAL FREE ACCESS

In recent years, control systems have rapidly advanced and increasingly tend to be connected to IT networks and the Internet. In environments where IT and Industrial Control Systems (ICS) are interconnected, there is a risk of intrusion via the IT network. Nowadays, IT technologies are integrated into ICS, so it is crucial to consider IT attack risks in ICS environments in addition to ICS-specific attacks. A vast amount of information on attack tools and cyberattack reports has been published. Security analysts must analyze or meticulously read this information to determine whether the attacks are relevant to their organization and how they should be defended against, necessitating a curation process. However, properly understanding the content of all published attack methods and reports requires significant resources, including costs and skills based on experience. Therefore, this research investigates the practical use of Large Language Models (LLMs) for efficiently extracting information beneficial to an organization's security measures. Specifically, we examined whether it is possible to identify protocols and ports from public information that could be exploited in attacks. This information is helpful for preventing or monitoring these attacks using tools such as firewalls, even if timely security updates are difficult. This examination was conducted from the following two perspectives:

    ・Extracting port numbers to be protected and monitored against attacks targeting IT networks, especially Windows environments, based on Proof of Concept (PoC) information on the Internet.

    ・From the perspective of ICS networks, extracting exploited protocols, port numbers, and product names from past ICS-related reports.

The goal of this research is to prepare for attacks in advance by identifying exploitable products and protocols. The results obtained from the proposed method can be utilized for mitigation and enhanced monitoring. Furthermore, they can also be applied to risk assessment and penetration testing. Using the proposed method, we were able to extract port numbers with a potential for misuse in IT attacks with a 60.0% correct response rate. For ICS, we achieved an 81.8% correct response rate in extracting potentially exploited port numbers and protocol names, and a 72.7% correct response rate in identifying target products.

    Download PDF (4292K)
  • Wataru Matsuda, Mariko Fujimoto, Takuho Mitsunaga, Kenji Watanabe
    Article type: Regular Section
    Subject area: Special Section on digital practices
2025 Volume 33 Pages 156-167
    Published: 2025
    Released on J-STAGE: February 15, 2025
    JOURNAL FREE ACCESS

Active Directory (AD) is widely used as an authentication server in many organizations. AD centrally manages users and computers and their authentication. It is thus very useful but also tends to be leveraged by attackers. Attacks leveraging AD, such as the Silver Ticket, have been observed, but they are challenging to detect since they abuse specifications of Kerberos authentication rather than vulnerabilities.

Recently, some organizations have been migrating their AD environment to Azure Active Directory (Azure AD) and operating hybrid environments of on-premise AD and Azure AD. In some forms of hybrid environment, seamless single sign-on (SSSO) can be an option; SSSO allows the on-premise AD Kerberos Ticket to be used for authentication to some Azure services as well. In such an environment, attackers can compromise the on-premise AD and abuse the Silver Ticket to spread the compromise to Azure services.

Detection of Silver Ticket exploits is difficult not only in a hybrid environment but also in an on-premise environment. In this study, we focus on the behavior whereby client computers request a Service Ticket from the Domain Controller every time they access Azure services if the Kerberos Ticket expiration time is reduced to 10 minutes (the default is 600 minutes). Our study introduces a method to detect abuse of the Silver Ticket in SSSO environments with high detection accuracy. We also introduce a detection method for malicious access to the on-premise Domain Controller with the Silver Ticket.
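The detection principle in this abstract can be sketched as log correlation over a hypothetical, simplified event format (timestamp, event kind, user, service): with a short ticket lifetime, every legitimate service access should be preceded by a recent Service Ticket request to the Domain Controller, so an access without one is suspicious. This illustrates the idea only, not the authors' implementation.

```python
def detect_silver_ticket(events, lifetime_min=10):
    """Flag service accesses not preceded by a Service Ticket request to
    the Domain Controller within the ticket lifetime: with a 10-minute
    lifetime, a legitimate client must have requested a fresh ticket, so
    a missing request suggests a forged (Silver) ticket."""
    last_request = {}   # (user, service) -> time of last ticket request
    alerts = []
    for t, kind, user, service in sorted(events):
        if kind == "tgs_request":
            last_request[(user, service)] = t
        elif kind == "service_access":
            t_req = last_request.get((user, service))
            if t_req is None or t - t_req > lifetime_min:
                alerts.append((t, user, service))
    return alerts

# Hypothetical log (times in minutes): mallory accesses the service
# without ever requesting a Service Ticket from the Domain Controller.
events = [
    (0, "tgs_request", "alice", "web"),
    (5, "service_access", "alice", "web"),
    (30, "service_access", "mallory", "web"),
]
```

Here `detect_silver_ticket(events)` would flag only mallory's access; shortening the ticket lifetime is what makes the absence of a request meaningful.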

    Download PDF (2113K)
  • Takashi Norimatsu, Yuichi Nakamura, Toshihiro Yamauchi
    Article type: Regular Section
    Subject area: Special Section on digital practices
2025 Volume 33 Pages 168-183
    Published: 2025
    Released on J-STAGE: February 15, 2025
    JOURNAL FREE ACCESS

Developers of OAuth 2.0's authorization server or OpenID Connect 1.0's OpenID provider software that support multiple OAuth 2.0-based security profiles need their products to pass conformance tests provided by the OpenID Foundation. However, they usually encounter several challenges. Specifically, they require extensive man-hours to create programs other than the product targeted for the conformance tests, provide support for execution of a new conformance test if required by a new security profile, and execute multiple conformance tests. Together with the Open-source Software community OAuth Special Interest Group, we developed a conformance test execution platform to resolve these issues, using Keycloak as the target for conformance tests. We evaluated the platform and confirmed that it resolves these issues. Using the platform, we executed conformance tests of the Financial-grade API (FAPI) and Open Banking security profiles against Keycloak and confirmed that Keycloak passed the conformance tests of these security profiles. This implies that Keycloak complies with their specifications. Our evaluation of the platform confirmed that automating the execution of a conformance test reduced its completion time by 56.8%, that parallelizing the execution of nine conformance tests reduced their completion time by 62.4%, and that the platform reduced the lines of code a developer needs to write by 85.7%. Finally, we published the platform in a GitHub repository for public use.

    Download PDF (2802K)
  • Noriaki Yoshiura
    Article type: Special Issue of Network operational technologies with awareness raising
2025 Volume 33 Pages 184
    Published: 2025
    Released on J-STAGE: March 15, 2025
    JOURNAL FREE ACCESS
    Download PDF (33K)
  • Shuntaro Ema, Yuta Sato, Hinata Nishino, Keita Emura, Toshihiro Ohigas ...
    Article type: Special Issue of Network operational technologies with awareness raising
    Subject area: System Security
2025 Volume 33 Pages 185-196
    Published: 2025
    Released on J-STAGE: March 15, 2025
    JOURNAL FREE ACCESS

In identity-based encryption (IBE), a key generation center (KGC) issues a secret key for an identity. Although any value can be used as a public key, the KGC has the potential to decrypt all ciphertexts even if it is not the actual destination. To solve this key escrow problem, Emura, Katsumata, and Watanabe (EKW) proposed IBE schemes with security against the KGC (ESORICS 2019/TCS 2022) in two constructions: a pairing-based construction extending the Boneh-Franklin IBE scheme (CRYPTO 2001) and a lattice-based construction extending the Gentry-Peikert-Vaikuntanathan (GPV) IBE scheme (STOC 2008). Though the KGC can issue a secret key without knowing the user's identity, additional communication (between the user and the identity-certifying authority (ICA)) and additional computation by the KGC are required compared to conventional IBE schemes. In this paper, we implement the two EKW-IBE schemes and show that the additional costs are insignificant compared to the underlying IBE schemes. It should be noted that, in exchange for solving the key escrow problem, EKW-IBE requires that an identity be sampled from a source of sufficiently high min-entropy (e.g., a random value). Since any value (such as a name or an e-mail address) can be employed in IBE, this requirement detracts from the merit of IBE. Thus, we also consider an application of the EKW-IBE schemes in which this requirement does not cause a problem.

    Download PDF (1927K)
  • Yuka Kato
    Article type: Special Issue of Young Researchers' Papers
2025 Volume 33 Pages 197
    Published: 2025
    Released on J-STAGE: March 15, 2025
    JOURNAL FREE ACCESS
    Download PDF (33K)
  • Takuna Uemura, Yasunobu Sumikawa
    Article type: Special Issue of Young Researchers' Papers
    Subject area: Implementation Techniques for Programming Languages
2025 Volume 33 Pages 198-209
    Published: 2025
    Released on J-STAGE: March 15, 2025
    JOURNAL FREE ACCESS

Partial redundancy elimination (PRE) eliminates redundant expressions that repeatedly compute the same values. Following redundancy elimination, the application of copy propagation reveals additional redundancy, known as second-order effects. Eliminating this type of redundancy requires the iterative application of PRE and copy propagation. To eliminate many second-order effects within a short analysis time, demand-driven PRE (DDPRE) has been proposed. However, existing DDPREs assume that all expressions are executed an equal number of times, potentially spending analysis time on expressions whose elimination has limited impact and generating spills in register allocation that reduce the effects of redundancy elimination. This study proposes a novel type of DDPRE called profile-guided DDPRE (PDPRE) that utilizes runtime information to selectively apply DDPRE to areas where redundancy elimination is effective. Additionally, to eliminate second-order effects without executing a combination of redundancy elimination and copy propagation, PDPRE initially applies global value numbering. Subsequently, it visits expressions in the order of high execution counts, and then analyzes the redundancy of each expression. To evaluate the effectiveness of PDPRE, we applied PDPRE and existing DDPREs to the programs of the SPEC CPU2000 benchmark. We found that PDPRE achieves shorter analysis times than existing DDPREs and yields better execution times in many programs.
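The global value numbering step mentioned in this abstract, and how it can expose second-order effects without a separate copy-propagation pass, can be sketched on straight-line code. The tiny `(dst, op, lhs, rhs)` tuple IR below is an assumption for illustration (it also assumes each destination is assigned only once, as in SSA form), not the PDPRE implementation.

```python
def value_number(instrs):
    """Hash-based value numbering over straight-line code given as
    (dst, op, lhs, rhs) tuples (assumes single assignment per name).
    Returns the destinations that recompute an already-known value."""
    vn = {}         # operand -> value number
    table = {}      # (op, lhs number, rhs number) -> value number
    redundant = []
    counter = [0]

    def number_of(operand):
        if operand not in vn:
            vn[operand] = counter[0]
            counter[0] += 1
        return vn[operand]

    for dst, op, lhs, rhs in instrs:
        key = (op, number_of(lhs), number_of(rhs))
        if key in table:
            redundant.append(dst)     # same op on same values: redundant
            vn[dst] = table[key]      # dst inherits the existing number
        else:
            vn[dst] = counter[0]
            counter[0] += 1
            table[key] = vn[dst]
    return redundant

prog = [
    ("t1", "+", "x", "y"),
    ("t2", "+", "x", "y"),   # plainly redundant with t1
    ("t3", "+", "t2", "y"),
    ("t4", "+", "t1", "y"),  # second-order effect: t1 and t2 share a
                             # value number, so t4 matches t3 directly
]
```

Because `t2` inherits `t1`'s value number, `t4 = t1 + y` hashes to the same key as `t3 = t2 + y`, so the redundancy that PRE would only find after copy propagation is caught in a single pass.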

    Download PDF (1223K)
  • Mury F. Dewantoro, Febri Abdullah, Ruck Thawonmas
    Article type: Special Issue of Young Researchers' Papers
    Subject area: Game Informatics
2025 Volume 33 Pages 210-218
    Published: 2025
    Released on J-STAGE: March 15, 2025
    JOURNAL FREE ACCESS

This research proposes a procedural content generation (PCG) method for generating Angry Birds-like levels with controllable similarity to a given example. Ensuring the believability of generated levels is a key challenge in PCG. Using a believable example level, the challenge can be addressed by ensuring that a certain degree of similarity to the example is retained. Here, we focus on a method to control the degree of similarity. With controllable similarity, it is possible to adjust the resemblance of generated levels to an example level while maintaining diversity among the generated levels. To achieve this, our method partitions the example level into rows containing one or multiple blocks. Our method alters this level by adding new blocks between rows. To control the similarity, we employ grammar-based rules extracted from the example to select the new blocks. Two evaluation metrics are employed: similarity score, which measures how well the generated levels retain similarity in human perception, and diversity score, which measures the variety among generated structures. Our results suggest that the proposed method is capable of generating levels with controllable similarity to a given example while maintaining the diversity among the generated ones.

    Download PDF (1862K)
  • Tengfei Shao, Yuya Ieiri, Shingo Takahashi
    Article type: Special Issue of Young Researchers' Papers
    Subject area: Information Systems for Society and Humans
    2025Volume 33 Pages 219-230
    Published: 2025
    Released on J-STAGE: March 15, 2025
    JOURNAL FREE ACCESS

    This study introduces and validates the Network Motifs and Multiple Attributes (NMMA) model, an analytical approach designed to explore and analyze multi-attribute network motifs in the context of secondary luxury product markets by systematically constructing transaction topologies and analyzing interactions through various attributes such as profit, cost, Return on Investment (ROI), transaction frequency, brand, and item type. The model leverages real-world data collected in collaboration with a commercial partner, encompassing both e-commerce (EC) and brick-and-mortar transactions. Statistical methods were employed to analyze the validation results, highlighting distinct performance and strategic implications of various trading types in EC versus traditional retail settings. Findings suggest a generally higher ROI in EC, attributed to online sales' efficiency and lower operational costs. The study also examines how brand and item types influence consumer purchasing behavior and market trends through network motifs. Applying the NMMA model enhances understanding of market dynamics and supports optimizing business strategies, particularly in improving transaction efficiency and market share.

    Download PDF (21084K)
  • Yu Han, Kazuyuki Nakamura
    Article type: Regular Section
    Subject area: Information Mathematics
    2025Volume 33 Pages 231-244
    Published: 2025
    Released on J-STAGE: March 15, 2025
    JOURNAL FREE ACCESS

    Particle filtering methods are widely utilized for state estimation in nonlinear non-Gaussian state space models. However, traditional particle filtering methods often suffer from weight degeneracy issues in high-dimensional situations. To overcome this challenge, various particle filtering methods have been developed to enhance state estimation performance in high-dimensional situations. Among these, the Sequential Markov Chain Monte Carlo (SMCMC) methods, which utilize composite Metropolis-Hastings (MH) kernels, significantly improve state estimation performance. In SMCMC, different Markov Chain Monte Carlo (MCMC) samplers can be incorporated into composite MH kernels as the proposal distribution. Specifically, as a special type of MCMC sampler, the Piecewise Deterministic Markov Process (PDMP) samplers can construct more efficient proposal distributions by utilizing non-reversible Markov processes and have already been applied in the design of composite MH kernels within SMCMC. In this study, we propose a novel approach to further improve state estimation performance in high-dimensional situations. We integrated the Zig-Zag Sampler (ZZS) —a special PDMP sampler—into the SMCMC framework and employed it to construct proposal distributions for the composite MH kernels. This integration fully leverages the advantages of both the ZZS and SMCMC. We assess the efficacy of our proposed method through numerical experiments on challenging high-dimensional state estimation tasks. The results demonstrate that our method significantly improves estimation accuracy and computational efficiency compared to existing state-of-the-art filtering techniques in high-dimensional situations.
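
    The weight degeneracy the abstract mentions is commonly quantified by the effective sample size (ESS); a small self-contained illustration (background for the problem, not part of the paper's method):

```python
def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized importance weights. Values
    far below the particle count indicate weight degeneracy: a few
    particles carry nearly all of the posterior mass."""
    total = sum(weights)
    norm = [w / total for w in weights]
    return 1.0 / sum(w * w for w in norm)

uniform = [1.0] * 100              # healthy: every particle contributes
degenerate = [1.0] + [1e-8] * 99   # one particle dominates the weight

effective_sample_size(uniform)      # ~100.0
effective_sample_size(degenerate)   # ~1.0
```

    In high dimensions the degenerate case becomes the norm for plain particle filters, which is what motivates the SMCMC-style methods extended in the paper.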

    Download PDF (1415K)
  • Daiki Kohama, Yoshiteru Nagata, Kazushige Yasutake, Shin Katayama, Ken ...
    Article type: Regular Section
    Subject area: Computer Graphics
    2025Volume 33 Pages 245-258
    Published: 2025
    Released on J-STAGE: March 15, 2025
    JOURNAL FREE ACCESS

    The manual creation of a ceiling plan consumes considerable human resources to confirm the current state of existing buildings for renovation. Identifying the positions and types of existing fixtures is crucial for drawing a ceiling plan. Therefore, to assist in drawing a ceiling plan, an efficient method that generates a reliable photorealistic image of the whole ceiling surface is essential. We present a method of fixture-aware panoramic ceiling image synthesis using an omnidirectional camera. A fixture-aware panoramic ceiling image is a photorealistic image that depicts the whole ceiling surface with no seams on ceiling fixtures, ensuring its reliability for assisting in drawing a ceiling plan. The proposed method requires remarkably little effort because users perform only two procedures to synthesize the image: 1) shooting an omnidirectional video underneath the ceiling and 2) measuring fixtures in the image. In the experiment, we evaluated the resulting image for the positional and shape accuracy of the ceiling fixtures and the time efficiency of the measurement. Our proposed method demonstrated positional accuracy 2.25 times higher, shape accuracy 5.87 times higher, and measurement time efficiency 11.09 times higher than the baseline.

    Download PDF (91777K)
  • Toru Kano, Takako Akakura
    Article type: Regular Section
    Subject area: Education
    2025Volume 33 Pages 259-263
    Published: 2025
    Released on J-STAGE: March 15, 2025
    JOURNAL FREE ACCESS

    Congenitally visually impaired individuals, especially in Japan, have difficulty understanding some onomatopoeia because Japanese onomatopoeia is often formed from visual information. To solve this problem, we developed a system that enables users to experience and understand the movements of objects through a haptic device. The results of the evaluation experiment suggested that our system could convey the image of onomatopoeia to users using only force feedback. In other words, the results indicate that congenitally visually impaired individuals would be able to understand onomatopoeia based on visual information by using our system.

    Download PDF (1417K)
  • Jingjing Rao, Songpon Teerakanok, Tetsutaro Uehara
    Article type: Regular Section
    Subject area: ITS
    2025Volume 33 Pages 264-275
    Published: 2025
    Released on J-STAGE: April 15, 2025
    JOURNAL FREE ACCESS

    With the popularity of digital images in communications and media, image tampering detection has become an important research topic in the field of computer vision. This study uses the DeepLabV3+ model to explore the impact of dilated (atrous) convolution rate changes and attention mechanisms on the accuracy of image tampering localization and particularly emphasizes the application of independently created mobile image tampering datasets in experiments. First, we verified the effectiveness of DeepLabV3+ on basic image segmentation tasks and then applied it to the more complex task of image tampering detection. Through a series of experiments, we found that reducing the dilation rate can reduce model complexity and improve training efficiency without significantly affecting accuracy. Furthermore, we integrate channel attention and spatial attention mechanisms, aiming to enhance the model's recognition accuracy of tampered areas. In particular, the mobile datasets we developed contain images shot with smartphones and then tampered with using the phones' built-in editing tools. These datasets play a key role in validating the model's ability to handle real-world tampering scenarios.
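
    For intuition on the dilation rate: a sketch of the effective kernel extent of a stride-1 atrous convolution. The rates (6, 12, 18) are the commonly cited ASPP defaults in DeepLabV3+; the reduced rates shown are illustrative, not the study's exact configuration:

```python
def effective_kernel(k, rate):
    """Effective spatial extent of a k x k atrous (dilated) convolution:
    the kernel taps span rate * (k - 1) + 1 pixels. Lowering the rate
    shrinks the receptive field and the captured context, which is the
    complexity/accuracy trade-off the study explores."""
    return rate * (k - 1) + 1

[effective_kernel(3, r) for r in (6, 12, 18)]   # [13, 25, 37]
[effective_kernel(3, r) for r in (3, 6, 9)]     # [7, 13, 19]
```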

    Download PDF (8380K)
  • Yuji Suga
    Article type: Regular Section
    Subject area: Security Infrastructure
    2025Volume 33 Pages 276-283
    Published: 2025
    Released on J-STAGE: April 15, 2025
    JOURNAL FREE ACCESS

    Card-based cryptographic protocols are useful for performing secure computations using physical cards instead of digital systems and are well-suited for educational purposes, especially for those new to studying multi-party computation (MPC). In this paper, we investigate the use of cards (such as business cards or mahjong tiles) that share the same design on the back but whose front sides can face different directions. These cards are defined as those whose backs are indistinguishable and whose fronts can be differentiated by their top and bottom. Mahjong tiles, painted the same color on the back, cannot be differentiated from the back even when swapped. Thus, tiles whose fronts show different designs when swapped can be used as up-down cards. Here, we examine the practical feasibility of implementing such protocols, focusing on whether the shuffle is practical. We present a realistic method to determine whether protocols using up-down cards can be implemented by replacing cards with mahjong tiles. Additionally, we introduce the construction of a new protocol specifically for shuffling mahjong tiles. This study aims to provide a practical approach to utilizing up-down cards in secure and efficient card protocols, demonstrating their versatility and applicability in real-world scenarios.
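
    As a toy illustration of why a physical 180-degree turn can serve as a shuffle for up-down cards (this is our own illustration, not the protocol proposed in the paper): rotating both cards of a two-card commitment flips both encoded bits, hiding the individual values while preserving their XOR.

```python
import random

def random_turn(pair, rng):
    """With probability 1/2, rotate both cards 180 degrees. Orientation
    encodes a bit (0 = upright, 1 = rotated): the turn hides each
    individual bit while preserving the XOR of the pair."""
    if rng.random() < 0.5:
        return tuple(1 - c for c in pair)
    return pair

committed = (1, 0)
for seed in range(4):
    shuffled = random_turn(committed, random.Random(seed))
    # XOR is invariant: shuffled is (1, 0) or (0, 1), never (1, 1) / (0, 0)
    assert shuffled[0] ^ shuffled[1] == 1
```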

    Download PDF (3003K)
  • Hongkuan Zhang, Koichi Takeda, Ryohei Sasano
    Article type: Regular Section
    Subject area: Natural Language Processing
    2025Volume 33 Pages 284-294
    Published: 2025
    Released on J-STAGE: April 15, 2025
    JOURNAL FREE ACCESS

    Driving video captioning aims to automatically generate descriptions for videos from driving recorders. Driving video captions are generally required to describe first-person driving behaviors which implicitly characterize the driving videos but are challenging to anchor to concrete visual evidence. To generate captions with better driving behavior descriptions, existing work has introduced behavior-related in-vehicle sensors into a captioning model for behavior-aware captioning. However, a better method for fusing the sensor modality with visual modalities has not been fully investigated, and the accuracy and informativeness of generated behavior-related descriptions remain unsatisfactory. In this paper, we compare three modality fusion methods by using a Transformer-based video captioning model and propose two training strategies to improve both the accuracy and the informativeness of generated behavior descriptions: 1) joint training the captioning model with multilabel behavior classification by explicitly using annotated behavior tags; and 2) weighted training by assigning weights to reference captions (references) according to the informativeness of behavior descriptions in references. Experiments on a Japanese driving video captioning dataset, City Traffic (CT), show the efficacy and positive interaction of the proposed training strategies. Moreover, larger improvements on out-of-distribution data demonstrate the improved generalization ability.

    Download PDF (27038K)
  • Shogo Hiromatsu, Masahiro Yasugi, Tasuku Hiraishi, Kento Emoto
    Article type: Regular Section
    Subject area: Special Section on Programming
    2025Volume 33 Pages 295-311
    Published: 2025
    Released on J-STAGE: April 15, 2025
    JOURNAL FREE ACCESS

    High productivity, scalability, load balancing, and fault tolerance are all important issues in massively parallel computing. A parallel execution model called HOPE, which we are proposing, addresses these issues through “hierarchical omission of redundant computations”. Every HOPE worker performs the entire divide-and-conquer computation in its own planned order but can omit subcomputations whose results are obtained from other workers at runtime, achieving fault tolerance and parallel efficiency as a team of workers. In most random number generation algorithms, a random number is generated on the basis of the internal state left by the previous random number generation. Therefore, simple reordering does not preserve coherent results/conditions among workers that depend on the properties of the utilized random numbers. In order to always use the same sequence for each hierarchical subcomputation, this study explores coherent random number utilization schemes, using Sobol' sequences or the PCG random number generator, which allow generation to start from a latter part of the sequence by skipping the first part. In addition, the Monte Carlo method is an important application of random numbers. In this study, we examine a HOPE application that computes high-dimensional numerical integrals for pricing securities known as MBS (mortgage-backed securities) using the Monte Carlo method, where the convergence speed of integration errors is independent of dimensionality.
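
    The skip-ahead property the abstract relies on can be illustrated with a plain LCG (the constants are the well-known MMIX parameters; PCG's actual advance routine differs in detail): jumping k steps in O(log k) lets a worker start a subcomputation's subsequence without generating the prefix.

```python
M = 2 ** 64
A_MUL, C_INC = 6364136223846793005, 1442695040888963407   # MMIX LCG

def lcg_step(x):
    return (A_MUL * x + C_INC) % M

def lcg_jump(x, k):
    """Advance the LCG by k steps in O(log k) via square-and-multiply
    on the affine map x -> a*x + c (mod m)."""
    acc_a, acc_c = 1, 0           # accumulated map: x -> acc_a*x + acc_c
    cur_a, cur_c = A_MUL, C_INC   # current map = step^(2^i)
    while k:
        if k & 1:
            acc_a, acc_c = (acc_a * cur_a) % M, (acc_c * cur_a + cur_c) % M
        cur_a, cur_c = (cur_a * cur_a) % M, ((cur_a + 1) * cur_c) % M
        k >>= 1
    return (acc_a * x + acc_c) % M

state = 42
for _ in range(1000):
    state = lcg_step(state)
lcg_jump(42, 1000) == state   # True: same state, without the 1000 steps
```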

    Download PDF (1608K)
  • Akira Goto
    Article type: Special Issue of Information Systems
    2025Volume 33 Pages 312-313
    Published: 2025
    Released on J-STAGE: May 15, 2025
    JOURNAL FREE ACCESS
    Download PDF (34K)
  • Ayana Uematsu, Minshun Yang, Hironori Washizaki, Naoyasu Ubayashi, Jui ...
    Article type: Special Issue of Information Systems
    Subject area: Software Processes
    2025Volume 33 Pages 314-324
    Published: 2025
    Released on J-STAGE: May 15, 2025
    JOURNAL FREE ACCESS

    Agile software development (ASD) methods, which are currently the primary development approach, use various metrics for quality management. However, few existing studies directly measure improvements in user satisfaction as a result of the ASD process, even though the user is the top priority in ASD. Thus, it is important to identify those ASD processes that affect user satisfaction. To support such analysis, we constructed an information system, AS-ASD, which examines relationships between time-series ASD process metrics calculated on the basis of issues and pull requests in GitHub and user satisfaction calculated on the basis of user reviews. A case study applied AS-ASD to open-source software (OSS) development, which follows the same lines of thought and practices as ASD methods. The results revealed a correlation between issue processing time and user satisfaction, but no significant correlation between velocity and user satisfaction. These findings, supported by quantitative analysis of user reviews, suggest that ASD processes that actively detect and address user feedback, as well as those capable of responding swiftly to user concerns, can improve user satisfaction.

    Download PDF (1606K)
  • Yui Maruyama, Tatsuya Amano, Hirozumi Yamaguchi
    Article type: Regular Section
    Subject area: Mobile Computing
    2025Volume 33 Pages 325-335
    Published: 2025
    Released on J-STAGE: May 15, 2025
    JOURNAL FREE ACCESS

    Hybrid-style metaverses, integrating physical and virtual spaces, face a critical challenge in managing shared 3D object quality across multiple users with diverse preferences and limited network resources. This paper addresses the problem of allocating limited bandwidth for transmitting point cloud representations while maximizing overall user satisfaction. We propose a distributed optimization method that dynamically adjusts 3D object quality based on contextual importance, available resources, and user preferences. Our approach uses Input Convex Neural Networks (ICNN) to model user utility functions and employs the Alternating Direction Method of Multipliers (ADMM) for distributed optimization. Key advantages include scalability, adaptability, and improved quality of experience. Evaluation using real-world data captured by our team and open datasets demonstrates significant improvements in user satisfaction and resource utilization compared to baseline approaches. Our method achieves 93-94.6% accuracy in modeling user utility and shows up to 60% faster convergence for scenarios with 30 users, contributing to the balance between high-fidelity representation and efficient data management in hybrid metaverses.

    Download PDF (29907K)
  • Ryota Matsui, Yutaka Yanagisawa, Yoshinari Takegawa
    Article type: Regular Section
    Subject area: Machine Learning & Data Mining
    2025Volume 33 Pages 336-344
    Published: 2025
    Released on J-STAGE: May 15, 2025
    JOURNAL FREE ACCESS

    In this study, we propose a new method for extracting differences between songs that takes advantage of the strengths of WAV and MIDI data. The proposed method converts acoustic signals into mel-spectrogram images and extracts differences by applying anomaly detection techniques. Although studies have been conducted to detect similarities between musical pieces, none have been conducted to identify differences in the musical performances of different pieces. In a verification experiment, we recorded 100 piano performances of a single piece and compared them with a model performance to see whether intentional mistakes could be detected. The results revealed an accuracy of 93.6% or higher in correctly identifying discrepancies. The proposed method is compatible with traditional methods of instrumental performance instruction and proves to be more adaptable to the teaching field than evaluating performance on an absolute scale.

    Download PDF (16062K)
  • Shogo Nakashima, Tomoya Mori, Ruiming Li, Tatsuya Akutsu
    Article type: Regular Section
    Subject area: Information Mathematics
    2025Volume 33 Pages 345-356
    Published: 2025
    Released on J-STAGE: June 15, 2025
    JOURNAL FREE ACCESS

    Network theory has been applied in fields such as social networks to extract important information, and various graph-theoretic concepts have been utilized. At the same time, it has also been applied to biological data. One widely applied concept is the feedback vertex set (FVS), which is useful for identifying driver nodes for network control in single-layer networks, with the problem of finding the minimum feedback vertex set (MFVS) being important for determining the smallest number of driver nodes. Recently, multilayer networks, in addition to single-layer networks, have been used to represent biological data. In this paper, we study the minimum common feedback vertex set (MCFVS) as an extension of MFVS to multilayer networks and develop an integer linear programming-based method for computing MCFVS. In order to efficiently handle larger networks, we further introduce graph compression and cycle detection methods. To examine the usefulness of MCFVS, we compare the number of elements in the MCFVS to the number of elements in the union of the MFVSs of each layer (Union) using artificially generated networks and real biological networks. The results suggest that MCFVS may offer fewer driver nodes than traditional approaches such as Union.
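
    A brute-force illustration of the common feedback vertex set idea on a toy two-layer network (the paper uses an integer linear programming formulation plus graph compression to scale; the exhaustive search here is only for exposition):

```python
from itertools import combinations

def is_acyclic(nodes, edges):
    """Kahn's algorithm: True iff the directed graph has no cycle."""
    indeg = {v: 0 for v in nodes}
    for u, v in edges:
        indeg[v] += 1
    queue = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(nodes)

def minimum_common_fvs(nodes, layers):
    """Smallest vertex set whose removal makes every layer acyclic."""
    for k in range(len(nodes) + 1):
        for cand in combinations(sorted(nodes), k):
            rest = nodes - set(cand)
            if all(is_acyclic(rest, [(u, v) for u, v in layer
                                     if u in rest and v in rest])
                   for layer in layers):
                return set(cand)

layer1 = [(1, 2), (2, 3), (3, 1)]   # a 3-cycle in layer 1
layer2 = [(1, 3), (3, 2), (2, 1)]   # the reversed 3-cycle in layer 2
minimum_common_fvs({1, 2, 3}, [layer1, layer2])   # {1}
```

    Removing node 1 breaks the cycle of each layer simultaneously, so a single shared driver node suffices here even though the layers' cycles run in opposite directions.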

    Download PDF (1247K)
  • So Morozumi, Shigeyuki Sato, Kenjiro Taura
    Article type: Regular Section
    Subject area: Special Section on Programming
    2025Volume 33 Pages 357-367
    Published: 2025
    Released on J-STAGE: June 15, 2025
    JOURNAL FREE ACCESS

    For creating presentation slides, in which visual expressions are emphasized, WYSIWYG editors, such as PowerPoint, are preferred because they provide immediate feedback on fine layout adjustments. In contrast, for typesetting documents, typesetting systems that enable us to program object rendering and placement in a high-level manner are more effective. Although programming environments for high-level interactive typesetting are desirable for leveraging their advantages in slide creation, such environments are, unfortunately, not well developed. In this work, we present SATYSFI NOTEBOOK by extending the typesetting system SATYSFI, which is based on a functional programming language, to support incremental evaluation and by integrating it into the interactive programming environment JupyterLab. SATYSFI NOTEBOOK enables the incremental creation of slide decks while interactively typesetting and previewing slides represented as high-level program fragments. In this paper, we provide the design and implementation of SATYSFI NOTEBOOK, along with observations obtained during its development.

    Download PDF (721K)
  • Katsuhiro Ueno, Haru Karato
    Article type: Regular Section
    Subject area: Special Section on Programming
    2025Volume 33 Pages 368-376
    Published: 2025
    Released on J-STAGE: June 15, 2025
    JOURNAL FREE ACCESS

    Motivated by the goal of making functional reactive programming straightforward and accessible even to beginners in functional programming, this paper proposes a compact yet novel framework that allows users to write reactive animations simply as compositions of functions of time. In this framework, any value that changes over time—including the entire animation and external states such as keyboard and mouse input—is represented by a function of time, making its time dependency explicit. Since time-varying values are ordinary functions, they are naturally composed using standard language constructs without requiring specialized combinators, in contrast to Elliott and Hudak's functional reactive programming and its successors. To encapsulate external states as declarative functions of time, the framework defines a time value as a dynamically extensible record consisting of values observed from the external world at specific moments. A field selector of such records is a function of time and hence represents a time-varying value. The framework provides two primitives to organize the records: event, for creating external event sources, and stream, for accumulating past values. This paper describes the framework's design and implementation in Standard ML and demonstrates its descriptive power through examples.
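
    The core idea — every time-varying value is an ordinary function of time, composed with ordinary language constructs — can be sketched in any functional language. Here is a Python rendering with illustrative names of our own (the paper's framework is in Standard ML):

```python
import math

def circle_x(t):
    """x-coordinate of a point moving on a unit circle: a time-varying
    value is just an ordinary function of time."""
    return math.cos(t)

def scaled(anim, factor):
    """Ordinary higher-order function, no special combinator needed."""
    return lambda t: factor * anim(t)

def delayed(anim, dt):
    """Time transformation: shift the whole animation by dt."""
    return lambda t: anim(t - dt)

wave = delayed(scaled(circle_x, 2.0), math.pi / 2)
wave(math.pi / 2)   # 2.0 = 2 * cos(0)
```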

    Download PDF (250K)
  • Yoshikazu Sakamaki
    Article type: Regular Section
    Subject area: Machine Learning & Data Mining
    2025Volume 33 Pages 377-386
    Published: 2025
    Released on J-STAGE: July 15, 2025
    JOURNAL FREE ACCESS

    One of the most widely used methods in machine learning is the support vector machine (SVM). The SVM is a data classification algorithm based on labeled training data; it is widely used in fields such as image classification. In the SVM, parameter estimation is frequently performed with mathematical programming methods such as Sequential Minimal Optimization (SMO), and studies have proposed various other algorithms, including interior-point methods. However, although the variables used in an SVM model are generally interrelated, traditional SVMs based on optimization problems cannot consider the relationships between the variables in the model, such as the variance-covariance relationship observed in regression analysis. To address this, we attempted to estimate the parameters of an SVM by considering the relationships between the variables through Markov Chain Monte Carlo (MCMC) simulations. Furthermore, through verification experiments using real data, we demonstrated the possibility of estimating the variance and covariance of parameters from sampled data and of improving the estimation accuracy through data over-sampling.
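
    A toy sketch of sampling classifier parameters with MCMC instead of point-optimizing them. Everything here is illustrative: a logistic model stands in for the paper's SVM formulation, and the prior, step size, and data are our own choices. The point is that the sample cloud yields parameter variances and covariances directly, which a single SMO or interior-point optimum does not.

```python
import math, random

def log_posterior(w, b, data):
    """Log posterior of a 1-D logistic classifier with N(0, 10^2) priors."""
    lp = -(w * w + b * b) / (2.0 * 10.0 ** 2)
    for x, y in data:                           # labels y in {-1, +1}
        lp -= math.log1p(math.exp(-y * (w * x + b)))
    return lp

def mh_sample(data, n=2000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings over (w, b)."""
    rng = random.Random(seed)
    w = b = 0.0
    lp = log_posterior(w, b, data)
    samples = []
    for _ in range(n):
        w2, b2 = w + rng.gauss(0, step), b + rng.gauss(0, step)
        lp2 = log_posterior(w2, b2, data)
        if rng.random() < math.exp(min(0.0, lp2 - lp)):   # accept/reject
            w, b, lp = w2, b2, lp2
        samples.append((w, b))
    return samples

data = [(-2, -1), (-1, -1), (1, 1), (2, 1)]   # separable toy set
ws = [w for w, _ in mh_sample(data)]
mean_w = sum(ws) / len(ws)                    # posterior mean slope
var_w = sum((w - mean_w) ** 2 for w in ws) / len(ws)   # its uncertainty
```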

    Download PDF (3191K)
  • Hirotoshi Tamori, Takeshi Fukaya, Takeshi Iwashita
    Article type: Regular Section
    Subject area: Special Section on Advanced Computing Systems
    2025Volume 33 Pages 387-397
    Published: 2025
    Released on J-STAGE: July 15, 2025
    JOURNAL FREE ACCESS

    This study investigates the effectiveness of the Error vector Sampling Subspace Correction (ES-SC) preconditioning method for asymmetric matrices in a standard multi-threaded parallel computation environment. The ES-SC preconditioning is a technique to accelerate the solution processes for a sequence of linear systems having an identical asymmetric coefficient matrix. In the present study, we develop a multi-threaded BiCGSTAB solver preconditioned by the ILU and ES-SC methods. Because the ILU preconditioning cannot be straightforwardly parallelized, the block Jacobi and multi-color ordering methods are employed for its parallelization. We examine the effect of the ES-SC preconditioning when used with the parallel ILU preconditioner for the BiCGSTAB method from the viewpoints of its convergence and solution time. Numerical results confirm the effectiveness of the ES-SC preconditioning in a parallel computation environment.

    Download PDF (676K)
  • Yuya Kudo, Yuki Satake, Takeshi Fukaya, Takeshi Iwashita
    Article type: Regular Section
    Subject area: Special Section on Advanced Computing Systems
    2025Volume 33 Pages 398-409
    Published: 2025
    Released on J-STAGE: July 15, 2025
    JOURNAL FREE ACCESS

    Various scientific computations require solving a system of linear equations with a large and sparse coefficient matrix, for which Krylov subspace methods like the CG method are widely used. In such iterative methods, a computed approximate solution is typically evaluated using the (relative) residual norm; however, the (relative) error norm is also of interest. There is a relationship between these two values, with the condition number playing a crucial role in this context. Motivated by this, the present study focuses on a linear system with a large, sparse, real, symmetric, and positive definite coefficient matrix, and considers methods for estimating the condition number of the coefficient matrix during the solution process of the linear system. Specifically, we consider approaches based on the Lanczos method and the ES (Error vector Sampling) method. Through numerical experiments using sufficiently large sparse matrices, we evaluate the two approaches in terms of accuracy and execution time.
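
    A stand-in illustration of estimating a condition number from eigenvalue estimates: power iteration replaces the Lanczos process used in the study, and the matrix is a tiny SPD toy example, so this shows only the shape of the computation.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def rayleigh_max(A, iters=500):
    """Power iteration + Rayleigh quotient: largest eigenvalue of a
    symmetric matrix (a simple stand-in for the Lanczos process)."""
    v = [1.0] * len(A)
    for _ in range(iters):
        w = matvec(A, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return sum(a * b for a, b in zip(matvec(A, v), v))

A = [[4.0, 1.0], [1.0, 3.0]]        # SPD; eigenvalues (7 ± sqrt(5)) / 2
lam_max = rayleigh_max(A)
# largest eigenvalue of (lam_max * I - A) is lam_max - lam_min:
B = [[lam_max * (i == j) - A[i][j] for j in range(2)] for i in range(2)]
lam_min = lam_max - rayleigh_max(B)
cond = lam_max / lam_min            # ~1.939 for this matrix
```

    For SPD systems the condition number bounds how far the relative error can exceed the relative residual, which is why estimating it during the solve is useful.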

    Download PDF (486K)
  • Yuta Kawakami, Takumi Matsumoto, Yuichi Takano
    Article type: Regular Section
    Subject area: Information Mathematics
    2025Volume 33 Pages 410-418
    Published: 2025
    Released on J-STAGE: August 15, 2025
    JOURNAL FREE ACCESS

    Datasets collected for analysis often contain a certain amount of incomplete instances, where some feature values are missing. Since many statistical analyses and machine learning algorithms depend on complete datasets, missing values need to be imputed in advance. Bertsimas et al. (2018) proposed a high-performance method that combines machine learning and mathematical optimization algorithms for imputing missing values. We extensively revise this imputation method based on the nearest neighbors algorithm by using not only neighborhoods of data instances but also neighborhoods of features. Specifically, we first formulate an optimization model using the instance-and-feature neighborhoods for missing value imputation. We next design an alternating optimization algorithm to find high-quality solutions to our optimization model for missing value imputation. We also develop a warm-start strategy to efficiently find a sequence of solutions for various neighborhood sizes. Experimental results demonstrate the excellent imputation accuracy of our method with instance-and-feature neighborhoods and the computational efficiency of our alternating optimization algorithm with the warm-start strategy.
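
    A minimal sketch of the nearest-neighbors half of the idea (instance neighborhoods only; the paper's optimization model additionally uses feature neighborhoods and an alternating optimization algorithm with warm starts):

```python
def knn_impute(data, row, col, k=2):
    """Impute data[row][col] (None) as the mean of that column over the
    k nearest rows, with distance computed on the observed features."""
    target = data[row]
    def dist(other):
        s = n = 0
        for j, (a, b) in enumerate(zip(target, other)):
            if j != col and a is not None and b is not None:
                s += (a - b) ** 2
                n += 1
        return s / n if n else float("inf")
    candidates = [r for i, r in enumerate(data)
                  if i != row and r[col] is not None]
    candidates.sort(key=dist)
    nearest = candidates[:k]
    return sum(r[col] for r in nearest) / len(nearest)

data = [
    [1.0, 2.0, 3.0],
    [1.1, 2.1, None],   # missing value to impute
    [1.2, 2.2, 3.2],
    [9.0, 9.0, 9.0],    # distant instance; should not influence
]
knn_impute(data, row=1, col=2)   # (3.0 + 3.2) / 2 = 3.1
```

    The instance-and-feature extension studied in the paper would also exploit the fact that columns 0 and 1 move together, rather than relying on row distances alone.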

    Download PDF (1460K)
  • Masahiro Suzuki, Yusuke Fukazawa
    Article type: Regular Section
    Subject area: Embedded System Technology
    2025Volume 33 Pages 419-428
    Published: 2025
    Released on J-STAGE: August 15, 2025
    JOURNAL FREE ACCESS

    In this paper, we propose a method for predicting hourly bike availability for the coming month at bikeshare stations, by utilizing a large language model (LLM). Our approach begins by converting historical bikeshare station data into text. The training dataset includes information such as station ID, date, time, timeslot, day of the week, weekday/holiday classification, national holidays, temperature, weather conditions, station location, days since establishment, station penetration in the area, comfort level, wind speed, precipitation, user registration type, movement between stations, and bike inflow/outflow at each station. These elements are combined into text format, with the number of available bikes per station per hour serving as the target label. We then fine-tune BERT, an LLM, to predict these labels. Our method achieved an approximate 3.1% improvement in average RMSE compared to machine learning models trained on text data, and an approximate 16.4% improvement in average RMSE compared to machine learning models trained on tabular data. These results demonstrate the effectiveness of converting historical bikeshare data into text and fine-tuning LLMs for demand prediction.
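
    The text-serialization step can be sketched as follows; the field names and sentence template are our own illustration, not the paper's exact prompt format, which covers many more attributes (inflow/outflow, penetration, comfort level, and so on):

```python
def record_to_text(rec):
    """Serialize one station-hour record into the kind of sentence an
    LLM can be fine-tuned on, with the hourly bike count as the label."""
    return (f"Station {rec['station_id']} on {rec['date']} "
            f"({rec['day_of_week']}, {rec['holiday']}) at {rec['hour']}:00, "
            f"temperature {rec['temp_c']}C, weather {rec['weather']}.")

rec = {"station_id": "S042", "date": "2024-06-01",
       "day_of_week": "Saturday", "holiday": "weekend",
       "hour": 14, "temp_c": 23, "weather": "sunny"}
text = record_to_text(rec)
```

    Flattening tabular rows into sentences like this is what lets a pretrained language model bring its text understanding to bear on what is otherwise a tabular regression task.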

    Download PDF (8019K)
  • Thuy Thi Thanh Phan, Norihide Matsubara, Akihiro Yamamoto
    Article type: Regular Section
    Subject area: Application Systems
    2025Volume 33 Pages 429-444
    Published: 2025
    Released on J-STAGE: August 15, 2025
    JOURNAL FREE ACCESS

    Various knowledge graphs are now available, and useful knowledge can be extracted from them. Measuring the similarity between two nodes in a knowledge graph is one of the fundamental steps required for extracting knowledge from it. In this paper, we propose a novel measure for representing the similarity between two nodes: LRoleSim. We define LRoleSim as an extension of RoleSim in a deductive manner, and we do not use inductive methods such as embedding graphs into vector spaces. This means that LRoleSim inherits the axiomatic properties of RoleSim and functions as an admissible role similarity measure. The RoleSim measure was proposed for measuring the similarity between two nodes in homogeneous information networks, where all nodes and edges are treated as being of the same type or as untyped. In contrast, every node in a knowledge graph has its own type, and every edge has a direction and a type. LRoleSim is designed so that it captures all of these types and directions as well as the topological structure of each node's neighborhood. Experiments on real-world knowledge graph datasets verified that LRoleSim captures the semantic meaning of node similarity. Moreover, our experiments show that LRoleSim outperforms RoleSim in terms of computational efficiency.

    Download PDF (4138K)
  • Kenji Saotome, Koji Nakazawa
    Article type: Regular Section
    Subject area: Special Section on Programming
    2025Volume 33 Pages 445-460
    Published: 2025
    Released on J-STAGE: August 15, 2025
    JOURNAL FREE ACCESS

    Separation logic is an extension of Hoare logic for verifying memory-manipulating programs. Formulas for pre- and post-conditions in separation logic are often restricted to symbolic heaps with inductive predicates for automated verification. The entailment checking problem between symbolic heaps has been actively investigated in this context. One potential solution is to construct entailment provers based on cyclic-proof systems. Cyclic-proof systems are a reasonable way to reason about entailments with inductive predicates. However, several cyclic-proof systems, including that for symbolic-heap separation logic, do not satisfy the cut-elimination property. Hence, a cut-free proof search is insufficient, and a heuristic search for cut formulas is required to apply the cut rule. This paper investigates the search space for cut formulas in the cyclic-proof system for symbolic heaps. We prove that the proof system does not satisfy the cut-restriction property for initial-signature cuts, whose cut formulas contain only the inductive predicates in the signature of the conclusion entailments. In other words, provability is properly weakened by restricting cut formulas to those in the initial signature. From this, it follows that it may be necessary to introduce new inductive predicates while finding cut formulas during proof search.

    Download PDF (1547K)
  • Ya Mone Zin, Yukiyoshi Kameyama
    Article type: Regular Section
    Subject area: Special Section on Programming
    2025Volume 33 Pages 461-470
    Published: 2025
    Released on J-STAGE: August 15, 2025
    JOURNAL FREE ACCESS

    This paper introduces Mayuzin, a new multi-stage programming (MSP) language designed to integrate both generative and analytical capabilities for run-time metaprogramming while preserving type safety. While code generation and code analysis play complementary roles in metaprogramming practice, the theoretical foundation for code analysis has been largely overlooked in the literature, with a notable exception by Stucki et al., who studied code analysis in compile-time metaprogramming. In this study, we fill this gap by presenting a metaprogramming language that supports both runtime code generation and analysis, ensuring type safety throughout. Mayuzin introduces three key features: (1) a code analysis framework that leverages pattern matching to enable dynamic code inspection and transformation, (2) integration of ML-style let polymorphism within the multi-stage setting using the Hindley-Milner type system as a foundation, and (3) support for manipulation of open codes, which is crucial for generating efficient code. Our design extends traditional MSP by providing robust facilities for runtime code analysis while aiming to maintain type guarantees across stages. We illustrate Mayuzin's capabilities through examples of runtime code optimization in domain-specific applications, such as eliminating redundant computations in generated numerical code. These examples demonstrate how Mayuzin's unified approach to metaprogramming facilitates efficient domain-specific optimizations, with a rigorous proof of type safety ensuring safe runtime code transformations.

    Download PDF (613K)
  • Satsuki Kasuya, Yudai Tanabe, Hidehiko Masuhara
    Article type: Regular Section
    Subject area: Special Section on Programming
    2025Volume 33 Pages 471-486
    Published: 2025
    Released on J-STAGE: August 15, 2025
    JOURNAL FREE ACCESS

    Programming with Version (PWV) is a programming paradigm that allows programmers to safely use multiple versions of the same package within a single program, facilitating flexible version updates of dependent packages. Existing PWV languages leverage the type system of the base language to ensure consistent version usage so that software behavior is not broken. Dynamically typed languages, however, need an efficient mechanism for ensuring consistent version usage without a type system. To bring PWV features to dynamically typed languages, we propose a dynamic version checking (DVC) mechanism. It records version information in each value, propagates that information during evaluation, and checks for inconsistencies using the version information recorded in values. When an inconsistency is detected, the mechanism suggests how to modify the program to resolve potential semantic errors arising from the inconsistency. We develop Vython, a Python-based PWV language with DVC, and implement its compiler, which translates a Vython program into a Python program with bitwise operations. Our performance measurements show that the DVC mechanism's overhead is scalable and acceptable for small programs but requires further optimization for real-world use. Additionally, we conduct a case study and discuss future directions for facilitating smoother updates in practical development.
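
    The record-propagate-check idea behind DVC can be sketched as follows (a minimal sketch with hypothetical names, not Vython's actual bitwise-encoded implementation): each value carries the package versions it came from, operations merge that provenance, and a clash raises an error with a repair hint.

```python
# Toy dynamic version checking: values are tagged with {package: version}
# provenance; combining values from conflicting versions is an error.

class VersionError(Exception):
    pass

class VVal:
    """A value tagged with the package versions it depends on."""
    def __init__(self, value, versions=None):
        self.value = value
        self.versions = dict(versions or {})

    def _merge(self, other):
        """Propagate provenance; detect inconsistent version usage."""
        merged = dict(self.versions)
        for pkg, ver in other.versions.items():
            if pkg in merged and merged[pkg] != ver:
                raise VersionError(
                    f"{pkg}: value from v{merged[pkg]} mixed with v{ver}; "
                    f"consider converting one operand first")
            merged[pkg] = ver
        return merged

    def __add__(self, other):
        return VVal(self.value + other.value, self._merge(other))

a = VVal(10, {"numlib": "1.0"})
b = VVal(32, {"numlib": "1.0"})
c = VVal(5,  {"numlib": "2.0"})

print((a + b).value)  # 42 -- consistent versions propagate silently
try:
    a + c             # mixes numlib 1.0 with numlib 2.0
except VersionError as e:
    print("detected:", e)
```

    Vython's compiler avoids the per-value dictionary overhead sketched here by encoding version sets as bit vectors and checking them with bitwise operations.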

    Download PDF (955K)
  • Waka Ito, Yui Obara, Miyu Sato, Kimio Kuramitsu
    Article type: Regular Section
    Subject area: Special Section on Programming
    2025Volume 33 Pages 487-493
    Published: 2025
    Released on J-STAGE: August 15, 2025
    JOURNAL FREE ACCESS

    Large language models (LLMs) are expected to bring automation and efficiency to software development, including programming. However, LLMs suffer from a problem known as “hallucination,” in which they produce incorrect content or outputs that deviate from the input requirements. SelfCheckGPT is one method designed to detect hallucinations; its key feature is the ability to infer the occurrence of hallucinations without requiring reference data or test cases. Although SelfCheckGPT has been evaluated and applied in natural language processing tasks such as text summarization and question answering, its performance in code generation has not yet been explored. In this study, we applied SelfCheckGPT to the HumanEval dataset, a standard benchmark for code generation, and investigated its evaluation performance by comparing it with execution-based evaluations. The results revealed that calculating similarity using BLEU, ROUGE-L, and EditSim is adequate for predicting the correctness of generated code, in other words, for detecting hallucinations.
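
    The consistency idea underlying SelfCheckGPT can be sketched as follows (an illustrative toy, not the study's implementation): sample several generations for the same prompt, score a candidate by its average similarity to the samples, and treat low scores as likely hallucinations. Edit similarity is approximated here with `difflib`'s ratio; the study also uses BLEU and ROUGE-L.

```python
# Reference-free hallucination scoring: a faithful answer tends to be
# similar to other samples for the same prompt; a hallucinated one does not.
from difflib import SequenceMatcher

def edit_sim(a: str, b: str) -> float:
    """Edit-based similarity in [0, 1] (stand-in for EditSim)."""
    return SequenceMatcher(None, a, b).ratio()

def consistency_score(candidate: str, samples: list[str]) -> float:
    """Mean similarity of the candidate to independently sampled outputs."""
    return sum(edit_sim(candidate, s) for s in samples) / len(samples)

samples = [
    "def add(a, b):\n    return a + b",
    "def add(x, y):\n    return x + y",
    "def add(a, b):\n    return a + b",
]
faithful = "def add(a, b):\n    return a + b"
hallucinated = "def add(a, b):\n    launch_rockets()\n    return b"

print(consistency_score(faithful, samples) >
      consistency_score(hallucinated, samples))  # True
```

    No reference solution or test case appears anywhere above; the signal comes entirely from agreement among the model's own samples, which is what makes the approach attractive when execution-based evaluation is unavailable.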

    Download PDF (1065K)
  • Jing Xu, Tasuku Hiraishi, Zhengyang Bai, Keiichiro Fukazawa, Masahiro ...
    Article type: Regular Section
    Subject area: Special Section on Programming
    2025Volume 33 Pages 494-506
    Published: 2025
    Released on J-STAGE: August 15, 2025
    JOURNAL FREE ACCESS

    General-purpose computing on graphics processing units (GPGPU) has become increasingly prevalent, with hybrid CPU-GPU systems at the forefront of parallel computing. Dynamic load balancing is highly effective in maximizing CPU and GPU utilization in such environments. Backtracking-based load balancing, which utilizes work-stealing, offers a promising strategy for task parallelism. However, Tascell, a task-parallel language that implements this mechanism, currently lacks GPU support, limiting its potential for hybrid CPU-GPU parallelism and constraining its application to computationally intensive tasks. In this paper, we present an experimental study on enabling Tascell to fully utilize the computational power of CPU-GPU hybrid environments by writing both CPU-oriented and GPU-oriented code for workers, allowing each worker to selectively execute either implementation based on the task size and GPU availability. Using this technique, we implemented hybrid CPU-GPU programs for three applications in Tascell: recursive block matrix multiplication, 2D stencil computations, and Mandelbrot set calculations. The GPU-oriented code was implemented using OpenACC or the NVBLAS library. We conducted performance evaluations on both high-performance and workstation-grade CPU-GPU hybrid computing environments. The results demonstrated that in the workstation-grade environment, the hybrid approach outperformed both CPU-only and GPU-only configurations. Notably, hybrid CPU-GPU executions achieved performance improvements of up to 12.9% and 25.2% in 2D stencil applications compared to GPU-only and CPU-only executions, respectively. These findings provide valuable insight into the effective use of hybrid CPU-GPU systems within a backtracking-based load balancing framework.
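
    The per-worker dispatch idea can be sketched as follows (a schematic toy with hypothetical names and threshold, not Tascell code): each task has both a CPU-oriented and a GPU-oriented implementation, and a worker picks one based on task size and GPU availability, so small stolen tasks stay on the CPU where kernel-launch overhead would dominate.

```python
# Toy dual-implementation dispatch for a hybrid CPU-GPU worker.

GPU_AVAILABLE = False      # assume no GPU in this toy run
GPU_THRESHOLD = 1_000_000  # hypothetical cutoff: below this, launch
                           # overhead outweighs GPU speedup

def run_cpu(data):
    """CPU-oriented implementation (here: sum of squares)."""
    return sum(x * x for x in data)

def run_gpu(data):
    """GPU-oriented implementation; in Tascell this would be an
    OpenACC- or NVBLAS-backed kernel, stubbed out here."""
    return sum(x * x for x in data)

def execute(data):
    """Worker-side choice between the two implementations."""
    if GPU_AVAILABLE and len(data) >= GPU_THRESHOLD:
        return run_gpu(data)
    return run_cpu(data)

print(execute(range(10)))  # 285, computed on the CPU path
```

    Because both implementations compute the same result, work-stealing can hand any subtask to any worker and let the dispatch decide locally, which is what lets the backtracking-based load balancer span the CPU-GPU boundary.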

    Download PDF (1542K)
  • Atsuki Maruta, Nao Tagai, Makoto P. Kato
    Article type: Regular Section
    Subject area: Special Section on Databases
    2025Volume 33 Pages 507-521
    Published: 2025
    Released on J-STAGE: August 15, 2025
    JOURNAL FREE ACCESS

    Data analysis is crucial for extracting valuable information from large datasets and making strategic decisions. Effective data analysis requires various types of knowledge, and a lack of user knowledge can lead to incorrect analyses or misinterpretations. Therefore, a data analysis system that provides appropriate information is of great benefit to users without sufficient expertise. However, existing studies have not focused on providing information during data analysis, and it remains unclear what information users need. To address this challenge, this study investigates the information needs that arise during spreadsheet data analysis, in order to design data analysis tools that compensate for users' lack of knowledge. We aim to understand what information users search for, when and how they search, and what web pages they read. To this end, we conducted a laboratory study in which participants analyzed data and drafted reports on their findings. The behaviors of the participants were coded, and post-task interviews provided deeper insights into the information needs during the analysis. Our findings include: (1) Six categories of information needs arose during spreadsheet data analysis, and each category had a different level of difficulty to satisfy. (2) Each information need category co-occurred with significantly different user behaviors, such as browsing web pages, reading spreadsheets, and writing a report. (3) Information need categories had significant effects on search behaviors; participants faced different types of difficulties especially when searching for evidence to explain data trends and when searching for analysis methods. (4) Participants read web pages with significantly different readability for each information need category; in particular, when searching for analysis methods, they read web pages containing more complex terms. (5) The overlap of words between the reports and the web pages participants read showed significant differences for each information need category; their reports were influenced by the evidence found on the web to explain data trends. Based on these findings, we discuss suggestions for improving data analysis tools.

    Download PDF (907K)