IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Current issue
Regular Section
  • Xiaoxiao ZHOU, Yukinori SATO
    Article type: PAPER
    Subject area: Computer System
    2025 Volume E108.D Issue 5 Pages 411-419
    Published: May 01, 2025
    Released on J-STAGE: May 01, 2025
    Advance online publication: November 13, 2024
    JOURNAL FREE ACCESS

    Neural simulation is a method that mimics the features and functionality of biological brains and reproduces them on computers. With properly adjusted parameters, the activity of neurons can be simulated from equations derived from the underlying neuron model. Neuron models are represented by ordinary differential equations, which are generally solved numerically on computers. In this paper, we focus on designing a dedicated FPGA accelerator that solves the numerical methods used for neural simulation, and we evaluate the trade-off between performance and accuracy when implementing two different numerical methods: the Euler method and the Runge-Kutta method. The Euler method is simple and easy to implement, while the Runge-Kutta method is compute-intensive but more precise. We compare the performance of these FPGA accelerators on a neural network consisting of 32,768 neurons. The results confirm that fully pipelined dedicated solvers can be realized on a single FPGA board. Furthermore, the FPGA implementation of the Runge-Kutta method achieves a 9.92x performance gain over the Euler method at the same accuracy. Compared with the less accurate Euler baseline, the FPGA-based Runge-Kutta method is 1.98 times faster, whereas the CPU implementation of Runge-Kutta is 1.53 times slower than that baseline. This trade-off between performance and accuracy is unique to the FPGA implementation and differs from the results typically seen on general-purpose CPUs.
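
    As an aside on the two integrators compared above: the abstract does not state which neuron model is accelerated, so the minimal sketch below assumes a leaky integrate-and-fire equation, dV/dt = (-(V - V_rest) + R*I)/tau, with illustrative constants. It shows why Runge-Kutta tolerates much larger time steps than Euler: each step costs four derivative evaluations instead of one, but is fourth-order accurate.

    # A minimal sketch, not the authors' FPGA design: forward Euler vs.
    # classical RK4 on an assumed leaky integrate-and-fire neuron.
    def dVdt(V, V_rest=-65.0, R=10.0, I=2.0, tau=10.0):
        """Membrane-potential derivative (mV/ms) for an illustrative model."""
        return (-(V - V_rest) + R * I) / tau

    def euler_step(V, h):
        # First-order step: one derivative evaluation per step.
        return V + h * dVdt(V)

    def rk4_step(V, h):
        # Classical fourth-order step: four derivative evaluations,
        # but accurate enough to permit a much larger h.
        k1 = dVdt(V)
        k2 = dVdt(V + 0.5 * h * k1)
        k3 = dVdt(V + 0.5 * h * k2)
        k4 = dVdt(V + h * k3)
        return V + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    V_euler = V_rk4 = -65.0
    for _ in range(1000):   # Euler: 1000 steps of h = 0.01 ms to reach t = 10 ms
        V_euler = euler_step(V_euler, 0.01)
    for _ in range(10):     # RK4: 10 steps of h = 1.0 ms reach the same t
        V_rk4 = rk4_step(V_rk4, 1.0)
    print(V_euler, V_rk4)

    On a fully pipelined FPGA datapath the four evaluations per RK4 step can overlap in hardware, which is consistent with the reported result that RK4 outpaces Euler on the FPGA while the opposite holds on a CPU.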

  • Kento WATANABE, Masataka GOTO
    Article type: PAPER
    Subject area: Music Information Processing
    2025 Volume E108.D Issue 5 Pages 420-430
    Published: May 01, 2025
    Released on J-STAGE: May 01, 2025
    Advance online publication: November 13, 2024
    JOURNAL FREE ACCESS

    This paper proposes a text-to-lyrics generation method that supports lyric writing by suggesting generated lyrics to users who struggle to find the right words to convey their message. Previous studies on lyrics generation have focused on generating lyrics under semantic constraints such as specific keywords, lyric styles, and topics. These methods are limited, however, because users cannot freely express their intentions as input text. Even when such intentions can be given as input text, the generated lyrics tend to reuse similar wording, making it difficult to inspire the user. Our method is therefore designed to generate lyrics that (1) convey a message similar to the input text and (2) use wording different from the input text. A straightforward approach of training a text-to-lyrics encoder-decoder is not feasible because no text-lyric paired data exists for this purpose. To overcome this issue, we divide text-to-lyrics generation into a two-step pipeline that eliminates the need for such paired data. (a) First, we use an existing text-to-image generation technique as a text analyzer to obtain an image that captures the meaning of the input text while ignoring its wording. (b) Next, we use our proposed image-to-lyrics encoder-decoder (I2L) to generate lyrics from the obtained image while preserving its meaning. Training the I2L model requires only pairs of lyrics and images generated from those lyrics, which can be readily prepared. In addition, we propose, for the first time, a lyrics generation method that reduces the risk of plagiarism by prohibiting the generation of phrases that are uncommon in the training data. Experimental results show that the proposed method can generate lyrics whose phrasing differs from the input text while conveying a similar message.
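
    The two-step pipeline lends itself to a compact structural sketch. The function names below are hypothetical stand-ins, not the paper's code: step (a) is an existing text-to-image model used as a text analyzer, and step (b) is the proposed image-to-lyrics encoder-decoder (I2L); both are stubbed here to show only the data flow and the plagiarism guard.

    # A structural sketch under assumed interfaces, not the authors' models.
    def generate_image(text: str) -> bytes:
        # Step (a): render the *meaning* of the text as an image,
        # discarding its wording. Stub for a text-to-image model.
        return text.encode()

    def image_to_lyrics(image: bytes, uncommon_phrases: set) -> str:
        # Step (b): decode lyrics from the image. The decoder vetoes
        # candidates containing phrases uncommon in the training data
        # (the plagiarism guard); here this is a simple filter stub.
        candidate = "placeholder lyric line"
        if any(p in candidate for p in uncommon_phrases):
            raise ValueError("candidate rejected: uncommon phrase")
        return candidate

    def text_to_lyrics(text: str) -> str:
        image = generate_image(text)  # meaning survives, wording does not
        return image_to_lyrics(image, uncommon_phrases=set())

    print(text_to_lyrics("a message the user wants to convey"))

    Note the training asymmetry this pipeline buys: the I2L decoder needs only pairs of lyrics and images generated from those lyrics, which can be produced automatically, so no human-curated text-lyric corpus is required.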

  • Takashi YOKOTA, Kanemitsu OOTSU
    Article type: LETTER
    Subject area: Computer System
    2025 Volume E108.D Issue 5 Pages 431-435
    Published: May 01, 2025
    Released on J-STAGE: May 01, 2025
    Advance online publication: November 12, 2024
    JOURNAL FREE ACCESS

    Communication performance is a common concern in parallel computers, and many studies seek better interconnection networks. These studies discuss a wide range of topologies and routing algorithms; however, they often evaluate them with only a limited set of traffic patterns. Typically, two contrasting categories of traffic patterns are used: irregular and regular. To understand general performance characteristics, the evaluation results of these two categories must be interpolated. This paper aims to fill the gap between regular and irregular traffic patterns. It addresses the regularity of traffic patterns and proposes a perturbation concept that relaxes strong regularity. Our preliminary evaluation on 2D-torus networks reveals that the perturbation can control the level of regularity/randomness through a single parameter. This paper also shows a promising side effect: the perturbation can improve collective communication performance for some regular traffic patterns.
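
    A minimal sketch of the perturbation idea, under assumptions: the regular pattern is taken here to be the transpose pattern on a k x k 2D torus, and the perturbation replaces each destination with a uniformly random node with probability p. The letter's exact operator may differ; the point is that a single parameter p sweeps from fully regular (p = 0) to fully random (p = 1).

    import random

    def transpose_pattern(k):
        # Regular pattern on a k x k torus: node (x, y) sends to (y, x).
        return {(x, y): (y, x) for x in range(k) for y in range(k)}

    def perturb(pattern, p, k, seed=0):
        # With probability p, redirect each flow to a random destination.
        rng = random.Random(seed)
        nodes = [(x, y) for x in range(k) for y in range(k)]
        return {src: (rng.choice(nodes) if rng.random() < p else dst)
                for src, dst in pattern.items()}

    traffic = perturb(transpose_pattern(8), p=0.25, k=8)  # 25% perturbed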

  • Wonho LEE, Jong Wook KWAK
    Article type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2025 Volume E108.D Issue 5 Pages 436-439
    Published: May 01, 2025
    Released on J-STAGE: May 01, 2025
    Advance online publication: November 14, 2024
    JOURNAL FREE ACCESS

    In this letter, we propose Asymmetric Padded Winograd (APW), designed to enhance the computational efficiency of Winograd-based convolution algorithms on SIMT architectures. The approach resolves thread divergence, which typically delays execution due to an uneven distribution of computation across threads. By applying asymmetric padding to both filters and inputs, APW unifies the sizes of sub-filters and sub-inputs. This uniformity keeps the execution path consistent across threads throughout the Winograd-based convolution process, effectively minimizing thread divergence. Our experimental results demonstrate that APW reduces the thread divergence observed in previous work to nearly zero and cuts total execution time by up to 17.78%.
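
    The divergence problem and its padding fix can be illustrated concretely, with assumed sizes (the letter's tile configuration may differ). Splitting a 5x5 filter into 3x3 Winograd sub-filters leaves uneven 2x3, 3x2, and 2x2 remainders, so threads assigned different sub-filters do different amounts of work; padding the filter asymmetrically (right and bottom only) to 6x6 makes every sub-filter a uniform 3x3.

    import numpy as np

    # A sketch of the asymmetric-padding idea, not the authors' kernels.
    def pad_to_multiple(filt, m=3):
        # Zero-pad on the bottom/right so both sides become multiples of m.
        h, w = filt.shape
        return np.pad(filt, ((0, (-h) % m), (0, (-w) % m)))

    f = np.ones((5, 5))            # uneven sub-filter split without padding
    fp = pad_to_multiple(f)        # 6x6 after asymmetric padding
    subs = [fp[i:i+3, j:j+3] for i in range(0, 6, 3) for j in range(0, 6, 3)]
    assert all(s.shape == (3, 3) for s in subs)  # uniform work per thread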

  • Zezhong LI, Jianjun MA, Fuji REN
    Article type: LETTER
    Subject area: Natural Language Processing
    2025 Volume E108.D Issue 5 Pages 440-443
    Published: May 01, 2025
    Released on J-STAGE: May 01, 2025
    Advance online publication: November 19, 2024
    JOURNAL FREE ACCESS

    The performance of Neural Machine Translation (NMT) depends heavily on the severity of the data uncertainty present in the training examples. By its cause, data uncertainty can be categorized as intrinsic or extrinsic; both kinds increase the learning difficulty of NMT and degrade translation performance. To cope with this challenge, we propose a simple yet effective method that estimates data uncertainty and incorporates it into the adaptive training of NMT, mitigating the harm caused by uncertain data. Our method consists of two modules: 1) an mBERT-based model that jointly estimates token-level and sentence-level data uncertainty and is trained with multi-task learning; 2) heterogeneous ways of incorporating the derived multi-level uncertainties into NMT training through soft-masked embeddings and a weighted loss. Extensive experiments on Japanese↔Chinese translation show that the proposed methods substantially outperform strong baselines in terms of BLEU score and verify the effectiveness of modeling data uncertainty in NMT.
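
    The two incorporation mechanisms can be sketched with assumed formulas (the abstract does not spell them out): a soft-masked embedding that mixes each token embedding with a [MASK] embedding in proportion to its estimated uncertainty, and a loss that down-weights uncertain tokens and sentences.

    import numpy as np

    # Illustrative formulas only; the paper's exact weighting may differ.
    def soft_masked_embeddings(tok_emb, mask_emb, u):
        # tok_emb: (T, d); mask_emb: (d,); u: (T,) uncertainties in [0, 1].
        # Uncertain tokens are pushed toward the [MASK] embedding.
        return u[:, None] * mask_emb + (1.0 - u[:, None]) * tok_emb

    def uncertainty_weighted_nll(log_probs, u, sent_u):
        # Token NLLs scaled by (1 - u), then by sentence-level confidence.
        w = 1.0 - u
        return (1.0 - sent_u) * np.sum(w * (-log_probs)) / np.sum(w)

    u = np.array([0.1, 0.8, 0.2, 0.0])
    emb = soft_masked_embeddings(np.random.rand(4, 8), np.zeros(8), u)
    loss = uncertainty_weighted_nll(np.log(np.random.rand(4)), u, sent_u=0.1)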
