IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E107.D, Issue 2
Displaying 1-9 of the 9 articles in the selected issue
Regular Section
  • Ying ZHAO, Youquan XIAN, Yongnan LI, Peng LIU, Dongcheng LI
    Article type: PAPER
    Category: Data Engineering, Web Information Systems
    2024 Volume E107.D Issue 2 Pages 169-179
    Published: 2024/02/01
    Released on J-STAGE: 2024/02/01
    JOURNAL FREE ACCESS

    Record/replay is an essential tool in cloud environments, providing capabilities such as fault tolerance, software debugging, and security analysis by recording an execution into a log and later replaying it deterministically. In virtualized environments, however, the log grows rapidly because a considerable amount of I/O data must be saved, introducing significant storage costs. To mitigate this problem, this paper proposes RR-Row, a redirect-on-write based virtual machine disk for record/replay scenarios. During normal execution, RR-Row appends written data to new blocks rather than overwriting the original blocks, so all written data are preserved on the disk. In this way, the record system saves only the block id instead of the full content, and the replay system can fetch the data directly from the disk rather than from the log, substantially reducing the log size. In addition, we propose several optimizations that improve I/O performance so that RR-Row is also suitable for normal execution. We implement RR-Row for QEMU and conduct a set of experiments. The results show that RR-Row reduces the log size by 68% compared with the commonly used Raw/QCow2 disks without compromising I/O performance.
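    The redirect-on-write idea the abstract describes can be sketched in a few lines. This is a hypothetical toy model (the class and method names are illustrative, not the paper's implementation): writes allocate fresh physical blocks, the record log stores only block ids, and replay fetches the data back from the disk.

```python
# Hypothetical sketch of the redirect-on-write idea behind RR-Row.
# Names and structure are illustrative, not the paper's actual code.

class RoWDisk:
    """Disk that appends each write to a fresh physical block instead of
    overwriting, so every written version remains available for replay."""

    def __init__(self):
        self.blocks = []   # append-only physical block store
        self.mapping = {}  # virtual block id -> latest physical block index

    def write(self, vblock, data):
        # Redirect-on-write: put the data in a brand-new physical block.
        self.blocks.append(data)
        phys = len(self.blocks) - 1
        self.mapping[vblock] = phys
        return phys        # the record log needs to store only this id

    def read(self, phys):
        return self.blocks[phys]


# Recording: the log keeps block ids instead of full I/O payloads.
disk, log = RoWDisk(), []
log.append(disk.write(0, b"config-v1"))
log.append(disk.write(0, b"config-v2"))   # the old version is preserved

# Replay: fetch the data from the disk by id, not from the log.
replayed = [disk.read(phys) for phys in log]
```

    Because overwritten versions are never destroyed, the log shrinks to a sequence of integers while replay stays deterministic.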

  • Zejing ZHAO, Bin ZHANG, Hun-ok LIM
    Article type: PAPER
    Category: Artificial Intelligence, Data Mining
    2024 Volume E107.D Issue 2 Pages 180-190
    Published: 2024/02/01
    Released on J-STAGE: 2024/02/01
    JOURNAL FREE ACCESS

    In this study, a Coanda-drone with a length, width, and height of 121.6, 121.6, and 191 mm was designed; its total mass was 1166.7 g. Using four propulsion devices, it could produce a maximum thrust of 5428 g. Its structure differs greatly from that of conventional drones because it combines the jet-engine design of a fixed-wing jet drone with the fuselage layout of a rotary-wing drone. The high-propulsion advantage of jet drones is retained, so the craft can output greater thrust for the same variation in the PWM waveform output. In this design, the propulsion device performs high-speed jetting, and the airflow around the propulsion device is also ejected downward along the direction of the flow.

  • Chengyang YE, Qiang MA
    Article type: PAPER
    Category: Artificial Intelligence, Data Mining
    2024 Volume E107.D Issue 2 Pages 191-200
    Published: 2024/02/01
    Released on J-STAGE: 2024/02/01
    JOURNAL FREE ACCESS

    Representation learning is a crucial and complex task in multivariate time series analysis, with a wide range of applications including trend analysis, time series search, and forecasting. In practice, unsupervised learning is strongly preferred owing to sparse labeling. However, most existing studies focus on the representation of individual subseries without considering relationships between different subseries, which in certain scenarios can lead to downstream task failures. Here, an unsupervised representation learning model is proposed for multivariate time series that considers the semantic relationships among subseries. Specifically, the covariance calculated by a Gaussian process (GP) is introduced into the self-attention mechanism to capture relationship features among subseries. Additionally, a novel unsupervised method is designed to learn the representation of multivariate time series. To address the challenge of variable-length input subseries, a temporal pyramid pooling (TPP) method is applied to construct input vectors of equal length. The experimental results show that our model has substantial advantages over other representation learning models. We evaluated the proposed algorithm and baseline algorithms on two downstream tasks: classification and retrieval. In the classification task, the proposed model achieved the best performance on seven of ten datasets, with an average accuracy of 76%. In the retrieval task, it achieved the best performance across different datasets and hidden sizes. An ablation study further demonstrates the significance of semantic relationships in multivariate time series representation learning.
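    The temporal pyramid pooling step mentioned above can be illustrated with a short NumPy sketch. The pyramid level sizes (1, 2, 4) and max pooling are assumptions for illustration; the point is that any input length maps to the same fixed output length.

```python
import numpy as np

# Illustrative sketch of temporal pyramid pooling (TPP): a variable-length
# subseries is pooled at several temporal resolutions and the results are
# concatenated into a fixed-length vector. Levels (1, 2, 4) are assumed.

def temporal_pyramid_pool(x, levels=(1, 2, 4)):
    """x: (T, d) series of arbitrary length T -> (sum(levels) * d,) vector."""
    pooled = []
    for n in levels:
        # Split the time axis into n roughly equal segments, max-pool each.
        for seg in np.array_split(x, n, axis=0):
            pooled.append(seg.max(axis=0))
    return np.concatenate(pooled)

rng = np.random.default_rng(0)
a = temporal_pyramid_pool(rng.random((37, 8)))
b = temporal_pyramid_pool(rng.random((112, 8)))
# Both vectors have length (1 + 2 + 4) * 8 = 56 regardless of input length.
```

    Equal-length outputs are what allow subseries of different lengths to feed one self-attention model.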

  • Koki TSUBOTA, Kiyoharu AIZAWA
    Article type: PAPER
    Category: Image Processing and Video Processing
    2024 Volume E107.D Issue 2 Pages 201-211
    Published: 2024/02/01
    Released on J-STAGE: 2024/02/01
    JOURNAL FREE ACCESS

    While deep image compression performs better than traditional codecs such as JPEG on natural images, as a learning-based approach it faces a challenge: compression performance drops drastically on out-of-domain images. To investigate this problem, we introduce a novel task that we call universal deep image compression, which involves compressing images from arbitrary domains, such as natural images, line drawings, and comics. Furthermore, we propose a content-adaptive optimization framework to tackle this task. The framework adapts a pre-trained compression model to each target image at test time to address the domain gap between pre-training and testing. For each input image, we insert adapters into the decoder of the model and optimize both the latent representation extracted by the encoder and the adapter parameters with respect to rate-distortion, transmitting the adapter parameters per image. To evaluate the proposed task, we constructed a benchmark dataset containing uncompressed images from four domains: natural images, line drawings, comics, and vector arts. We compare the proposed method with non-adaptive and existing adaptive compression methods, and the results show that ours outperforms them. Our code and dataset are publicly available at https://github.com/kktsubota/universal-dic.
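    The per-image rate-distortion optimization described above can be sketched with a toy model. This is not the paper's codec: the frozen linear "decoder" stands in for a learned decompressor and an L1 penalty stands in for the entropy model, but the loop shows the shape of the idea, i.e. minimizing distortion plus a rate proxy over a per-image latent by gradient descent.

```python
import numpy as np

# Toy sketch of per-image latent optimization under a rate-distortion
# objective: loss = distortion + lambda * rate. The linear decoder and the
# L1 rate proxy are illustrative stand-ins for a learned codec.

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 8))   # frozen "decoder" weights: latent -> pixels
x = rng.normal(size=16)        # target "image"
z = np.zeros(8)                # per-image latent to be optimized
lam, lr = 0.01, 0.005

def rd_loss(z):
    recon = D @ z
    return np.sum((recon - x) ** 2) + lam * np.sum(np.abs(z))

losses = [rd_loss(z)]
for _ in range(300):
    # Gradient of the squared error plus the subgradient of the L1 rate term.
    grad = 2 * D.T @ (D @ z - x) + lam * np.sign(z)
    z -= lr * grad
    losses.append(rd_loss(z))
```

    In the actual framework the adapter parameters would be optimized alongside the latent and transmitted with the bitstream.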

  • Jie LUO, Chengwan HE, Hongwei LUO
    Article type: PAPER
    Category: Natural Language Processing
    2024 Volume E107.D Issue 2 Pages 212-219
    Published: 2024/02/01
    Released on J-STAGE: 2024/02/01
    JOURNAL FREE ACCESS

    Text classification is a fundamental task in natural language processing with extensive applications in domains such as spam detection and sentiment analysis. Syntactic information can be used effectively to help neural network models understand the semantics of text. Chinese text exhibits a high degree of syntactic complexity, with individual words often possessing multiple parts of speech. In this paper, we propose BRsyn-caps, a capsule network-based Chinese text classification model that leverages both BERT and dependency syntax. Our approach obtains word representations from the pre-trained BERT model for semantic information, extracts contextual information with a long short-term memory (LSTM) network, encodes syntactic dependency trees with a graph attention network, and uses a capsule network to integrate these features for classification. Additionally, we propose an algorithm for constructing a character-level syntactic dependency tree adjacency matrix, which introduces syntactic information into character-level representations. Experiments on five datasets demonstrate that BRsyn-caps effectively integrates semantic, sequential, and syntactic information, confirming the effectiveness of the proposed method for Chinese text classification.
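    One plausible form of the character-level adjacency construction is sketched below. It is an assumption about the algorithm, not the paper's exact rule: each word-level dependency edge is expanded so that every character of the head word connects to every character of the dependent word, with characters of the same word also linked.

```python
import numpy as np

# Hypothetical sketch: expand a word-level dependency parse into a
# character-level adjacency matrix. The toy sentence and edges are
# illustrative, not taken from the paper.

words = ["我", "喜欢", "自然语言"]       # characters per word: 1, 2, 4
dep_edges = [(1, 0), (1, 2)]             # word-level (head, dependent) pairs

# Map each word index to its character index span in the flat sentence.
spans, pos = [], 0
for w in words:
    spans.append(range(pos, pos + len(w)))
    pos += len(w)

n = pos                                   # total characters: 7
adj = np.eye(n, dtype=int)                # self-loops

# Characters within the same word are fully connected.
for span in spans:
    for i in span:
        for j in span:
            adj[i, j] = 1

# Each dependency edge links all character pairs across the two words.
for head, dep in dep_edges:
    for i in spans[head]:
        for j in spans[dep]:
            adj[i, j] = adj[j, i] = 1
```

    A matrix like this lets a graph attention network operate directly on character-level BERT representations while still respecting the word-level parse.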

  • Yiping TANG, Kohei HATANO, Eiji TAKIMOTO
    Article type: PAPER
    Category: Biocybernetics, Neurocomputing
    2024 Volume E107.D Issue 2 Pages 220-228
    Published: 2024/02/01
    Released on J-STAGE: 2024/02/01
    JOURNAL FREE ACCESS

    We introduce the Hexagonal Convolutional Neural Network (HCNN), a modified CNN that is robust against rotation. HCNN utilizes a hexagonal kernel and a multi-block structure that shares rotation information across more orientations than standard convolution layers. Our structure is easy to use and does not alter the original architecture of the network. We achieve complete rotational invariance on a recognition task with simple pattern images, and demonstrate better performance than previous methods on recognition tasks with rotated MNIST images, synthetic biomarker images, and microscopic cell images, where robustness to rotation matters.
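    One common way to realize a hexagonal kernel on a square pixel grid is to mask two opposite corners of a 3×3 window, leaving the 7 taps of a hexagonal neighbourhood. The mask convention below is an assumption for illustration, not necessarily the one used in the paper.

```python
import numpy as np

# Illustrative sketch of a hexagonal kernel on a square grid: a 3x3 kernel
# with two opposite corners zeroed out keeps the 7 taps of a hexagonal
# neighbourhood (this offset convention is an assumption).

HEX_MASK = np.array([[0, 1, 1],
                     [1, 1, 1],
                     [1, 1, 0]])

def hex_conv2d(image, kernel):
    """Valid-mode 2D correlation using only the hexagonal taps."""
    k = kernel * HEX_MASK                  # zero out the masked corners
    H, W = image.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * k)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
out = hex_conv2d(img, np.ones((3, 3)))
```

    Because a hexagonal neighbourhood has six-fold symmetry, rotating the taps by 60° permutes them onto each other, which is what makes rotation information easier to share across orientations.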

  • Sunwoo JANG, Young-Kyoon SUH, Byungchul TAK
    Article type: LETTER
    Category: Software System
    2024 Volume E107.D Issue 2 Pages 229-233
    Published: 2024/02/01
    Released on J-STAGE: 2024/02/01
    JOURNAL FREE ACCESS

    This letter presents a technique that observes the system call mapping behavior of the proxy kernel layer of secure container runtimes. We applied it to the file system operations of a secure container runtime, gVisor. We found that gVisor's operations can become far more expensive than native execution, issuing 48× more syscalls for open and 6× more for read and write.

  • Zhuo ZHANG, Donghui LI, Lei XIA, Ya LI, Xiankai MENG
    Article type: LETTER
    Category: Software Engineering
    2024 Volume E107.D Issue 2 Pages 234-238
    Published: 2024/02/01
    Released on J-STAGE: 2024/02/01
    JOURNAL FREE ACCESS

    With the growing complexity and scale of software, detecting and repairing errant behaviors at an early stage is critical to reducing the cost of software development. In fault localization practice, a typical process includes three steps: execution of input-domain test cases, construction of model-domain test vectors, and suspiciousness evaluation. The quality of the model-domain test vectors is significant for locating faulty code. However, test vectors with failing labels usually account for only a small portion, which inevitably degrades the effectiveness of fault localization. In this paper, we propose PVaug, a data augmentation method that uses fault propagation context and a variational autoencoder (VAE). Our empirical results on 14 programs show that PVaug improves the effectiveness of fault localization.
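    The VAE-based augmentation hinges on sampling new latent vectors near the encoding of a real failing test vector. Below is a minimal sketch of that single step, the reparameterization trick; the encoder outputs here are hard-coded stand-ins for what a trained model would produce, and the decoding back to a synthetic test vector is omitted.

```python
import numpy as np

# Minimal sketch of the reparameterization step a VAE uses to synthesize
# latents near a real failing test vector: z = mu + sigma * eps with
# eps ~ N(0, I). The mu/log_var values are illustrative stand-ins for a
# trained encoder's outputs.

rng = np.random.default_rng(7)

def reparameterize(mu, log_var, n_samples):
    """Draw n_samples differentiable latent samples around mu."""
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal((n_samples, mu.shape[0]))
    return mu + sigma * eps

# Pretend a trained encoder mapped one failing test vector to this latent.
mu = np.array([0.5, -1.2, 0.3])
log_var = np.array([-2.0, -2.0, -2.0])   # small variance: stay nearby

z = reparameterize(mu, log_var, n_samples=100)
# A trained decoder would map each z back to a synthetic failing vector,
# rebalancing the scarce failing class before suspiciousness evaluation.
```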

  • Zhishu SUN, Zilong XIAO, Yuanlong YU, Luojun LIN
    Article type: LETTER
    Category: Image Recognition, Computer Vision
    2024 Volume E107.D Issue 2 Pages 239-243
    Published: 2024/02/01
    Released on J-STAGE: 2024/02/01
    JOURNAL FREE ACCESS

    Facial Beauty Prediction (FBP) is a significant pattern recognition task that aims to assess facial attractiveness consistently with human perception. Convolutional Neural Networks (CNNs) have become the mainstream method for FBP. The training objective of most conventional CNNs is to learn static convolution kernels, which makes it difficult for the network to capture global attentive information and thus often causes it to ignore key facial regions, e.g., the eyes and nose. To tackle this problem, we devise a new convolution scheme, Dynamic Attentive Convolution (DyAttenConv), which integrates dynamic and attention mechanisms into convolution at the kernel level, with the aim of making the convolution kernels adapt to each face dynamically. DyAttenConv is a plug-and-play module that can be flexibly combined with existing CNN architectures, enabling beauty-related features to be acquired more globally and attentively. Extensive ablation studies show that our method is superior to other fusion and attention mechanisms, and comparison with other state-of-the-art methods also demonstrates the effectiveness of DyAttenConv on the facial beauty prediction task.
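    The general shape of kernel-level dynamic convolution can be sketched as attention over a bank of candidate kernels. This is a simplified illustration in the spirit of DyAttenConv, not the paper's module: the pooling and attention head below are assumptions.

```python
import numpy as np

# Simplified sketch of kernel-level dynamic convolution: attention weights
# computed from the input select a per-sample mixture of K candidate
# kernels. The global-average-pooling attention head is an assumption.

rng = np.random.default_rng(1)
K, ksize = 4, 3
kernel_bank = rng.normal(size=(K, ksize, ksize))   # K candidate kernels
attn_proj = rng.normal(size=(1, K))                # toy attention head

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def dynamic_kernel(image):
    """Aggregate the kernel bank with input-dependent attention weights."""
    pooled = np.array([image.mean()])              # global average pooling
    weights = softmax(pooled @ attn_proj)          # (K,) weights over bank
    return np.tensordot(weights, kernel_bank, axes=1)

k1 = dynamic_kernel(np.ones((8, 8)))
k2 = dynamic_kernel(np.full((8, 8), 5.0))
# Different inputs yield different effective kernels, unlike a static CNN.
```

    The resulting kernel is then used in an ordinary convolution, which is what makes such a module plug-and-play inside existing CNN architectures.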
