Journal of Information Processing
Online ISSN : 1882-6652
ISSN-L : 1882-6652
Current issue
Showing 1-41 of 41 articles from the selected issue
  • Masashi Saito
    Type: Special Issue of Intelligent Transportation Systems and Mobile Communication for Realizing Smart Cities
    2020 Volume 28 Pages 1-2
    Published: 2020
    Released: January 15, 2020
    JOURNALS FREE ACCESS
    Download PDF (38K)
  • Tomoya Kitani
    Type: Special Issue of Intelligent Transportation Systems and Mobile Communication for Realizing Smart Cities
    Subject area: Invited Papers
    2020 Volume 28 Pages 3-15
    Published: 2020
    Released: January 15, 2020
    JOURNALS FREE ACCESS

    In this paper, we describe the current situation of the motorcycle industry, the world's markets, and trends in motorcycle research. The purpose is to promote research and development on motorcycles so that motorcycles become safer and more convenient, and riders obtain a good user experience (UX) with their motorcycles. We summarize the current research issues and social issues concerning motorcycles, and then introduce a solution based on informatics and its related science and engineering. To make a motorcycle safer and to enhance its mobility, it is essential to investigate motorcycle dynamics. The motion of the rider is also essential, because it affects the motorcycle's motion, whereas the motion of a car driver seldom affects the car's motion. This is because a rider's weight is large relative to the motorcycle's, and a rider moves extensively to operate the motorcycle. To investigate the dynamics of the motorcycle system, which consists of the motorcycle itself and the rider, appropriate sensing data of both must be obtained so that the data can improve our knowledge of the dynamics. Such data and knowledge can also be applied to other applications and services, such as sensing road traffic conditions. In this paper, we introduce the concept of the research project Bikeinformatics and the capability of GNSS precise positioning to attach adequate labels to data measured with low-cost sensors.

    Download PDF (2040K)
  • Yusuke Fukazawa, Naoki Yamamoto, Takashi Hamatani, Keiichi Ochiai, Aki ...
    Type: Special Issue of Intelligent Transportation Systems and Mobile Communication for Realizing Smart Cities
    Subject area: Invited Papers
    2020 Volume 28 Pages 16-30
    Published: 2020
    Released: January 15, 2020
    JOURNALS FREE ACCESS

    Monitoring mental health has received considerable attention as a countermeasure against the increasing occurrence of mental illness worldwide. However, current monitoring services incur costs because users are required to attach wearable devices or answer questions. To reduce such costs, many studies have used smartphone-based passive sensing technology to capture a user's mental state. This paper reviews those studies from the perspective of machine learning and statistical analysis. Forty-four studies published since 2011 have been reviewed and summarized from three perspectives: designed features, machine learning algorithms, and evaluation methods. The features considered include location and mobility, activity, speech, sleep, phone usage, and context features. Tasks are classified as correlation analysis, regression tasks, and classification tasks. The machine learning algorithm used for each task is summarized, as are evaluation metrics and cross-validation methods. For those who are not machine learning experts, we aim to provide information on a typical machine learning framework for smartphone-based mental state estimation. For experts in the field, we hope this review will be a helpful tool to check for potential omissions.

    Download PDF (421K)
  • Tetsushi Matsuda, Susumu Ishihara
    Type: Special Issue of Intelligent Transportation Systems and Mobile Communication for Realizing Smart Cities
    Subject area: Network Quality and Control
    2020 Volume 28 Pages 31-43
    Published: 2020
    Released: January 15, 2020
    JOURNALS FREE ACCESS

    When multiple applications at two sites connected by a best-effort network through gateway (GW) equipment communicate simultaneously, the receive rate of a high-priority application flow can fall below its necessary bandwidth (BW). We refer to this problem as the deficit in bandwidth of a high-priority flow (DBHPF) problem. To handle it, we consider controlling the BW assigned to each flow based on the available bandwidth (ABW) estimated by the GW. Because of estimation error, the estimated ABW can be larger than the actual ABW; thus, the receive rate of a high-priority application can fall below its necessary BW even when the actual ABW exceeds that BW. In this paper, we propose a priority-based BW control method that estimates the receive rate of each flow using the estimated ABW and related information, and mitigates the DBHPF problem by controlling the transmission BW of each flow so as to compensate for the difference between the estimated receive rate and the necessary BW according to flow priorities. We call the proposed method estimated-receive-rate-based bandwidth control (eR2BC). We also propose an ABW estimation method with less overhead than existing methods. Experiments in a virtual network constructed with virtual machines confirmed that the proposed methods mitigate the DBHPF problem better than existing methods.
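    The priority-based compensation idea can be illustrated with a minimal sketch (not the authors' eR2BC implementation; flow names, priorities, and bandwidth figures are hypothetical): grant each flow its necessary bandwidth in priority order out of the estimated ABW.

```python
def allocate_bandwidth(flows, estimated_abw):
    """Assign transmission bandwidth in priority order (lower number =
    higher priority), granting each flow its necessary bandwidth while
    estimated capacity remains. A hypothetical illustration of
    priority-based BW control, not eR2BC itself.

    flows: list of dicts with 'name', 'priority', 'necessary_bw' (Mbps).
    """
    remaining = estimated_abw
    allocation = {}
    for f in sorted(flows, key=lambda f: f["priority"]):
        grant = min(f["necessary_bw"], remaining)
        allocation[f["name"]] = grant
        remaining -= grant
    return allocation

flows = [
    {"name": "video", "priority": 0, "necessary_bw": 6.0},
    {"name": "backup", "priority": 2, "necessary_bw": 8.0},
]
# the high-priority flow keeps its full 6.0; the low-priority flow
# absorbs the deficit caused by the limited estimated ABW
print(allocate_bandwidth(flows, 10.0))
```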

    Download PDF (2791K)
  • Kosuke Yotsuya, Katsuhiro Naito, Naoya Chujo, Tadanori Mizuno, Katsuhi ...
    Type: Special Issue of Intelligent Transportation Systems and Mobile Communication for Realizing Smart Cities
    Subject area: Mobile Computing
    2020 Volume 28 Pages 44-54
    Published: 2020
    Released: January 15, 2020
    JOURNALS FREE ACCESS

    Building structure information is essential for achieving various indoor location-based services (ILBSs). Our approach integrates a large number of pedestrian trajectories acquired by pedestrian dead reckoning (PDR) to generate a pedestrian network structure. To generate highly accurate pedestrian network structures, the accuracy of each trajectory must be improved. In this paper, we propose a method that improves the accuracy of indoor PDR trajectories by using many such trajectories. First, we select reliable trajectories based on the stability of the sensing data. Next, by analyzing the trend of the step lengths, we correct the length of the trajectories. Finally, from same-route trajectories, we generate an average trajectory for each route. We evaluated the method experimentally on HASC-IPSC and found that it improved the accuracy of the trajectories: the cumulative error rate of the original pedestrian trajectories was 0.1111 m/s, which improved to 0.0622 m/s after applying the proposed method.
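    The length-correction and same-route averaging steps can be sketched roughly as follows (a simplified illustration with hypothetical trajectories, not the authors' method; trajectories are assumed to already be resampled to the same number of points):

```python
def correct_and_average(trajectories, true_length):
    """Scale each same-route trajectory so its total path length matches
    the known route length (step-length correction), then average the
    trajectories point by point. Simplified sketch; inputs hypothetical.

    trajectories: equal-length lists of (x, y) points for one route.
    """
    def path_length(tr):
        return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                   for (x1, y1), (x2, y2) in zip(tr, tr[1:]))

    corrected = []
    for tr in trajectories:
        k = true_length / path_length(tr)       # correction factor
        x0, y0 = tr[0]                          # scale about the start point
        corrected.append([(x0 + k * (x - x0), y0 + k * (y - y0))
                          for x, y in tr])
    n = len(corrected)
    return [(sum(p[i][0] for p in corrected) / n,
             sum(p[i][1] for p in corrected) / n)
            for i in range(len(corrected[0]))]

t1 = [(0.0, 0.0), (0.9, 0.0), (1.8, 0.0)]   # step length underestimated
t2 = [(0.0, 0.0), (1.1, 0.0), (2.2, 0.0)]   # step length overestimated
print(correct_and_average([t1, t2], 2.0))   # both corrected toward 2 m
```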

    Download PDF (3180K)
  • Yusuke Hara, Ryosuke Hasegawa, Akira Uchiyama, Takaaki Umedu, Teruo Hi ...
    Type: Special Issue of Intelligent Transportation Systems and Mobile Communication for Realizing Smart Cities
    Subject area: ITS
    2020 Volume 28 Pages 55-64
    Published: 2020
    Released: January 15, 2020
    JOURNALS FREE ACCESS

    In this paper, we propose FlowScan, a pedestrian flow estimation technique based on a dashboard camera. Grasping flows of people is important for various purposes such as city planning and event detection. FlowScan can estimate pedestrian flows on sidewalks at low cost. Dashboard cameras have become popular for preserving evidence of traffic accidents and for security reasons. FlowScan assumes that an application that analyzes video from the camera is installed on an on-board device. To realize such an application, we need a method for pedestrian recognition and occlusion-robust tracking of pedestrians. For pedestrian recognition, the application uses deep learning techniques: convolutional neural networks (CNNs) and long short-term memory (LSTM). In this process, pedestrians' faces and the backs of their heads are detected separately in the video, yielding not only the number of pedestrians but also their directions. Then, the series of detected head positions are linked into tracks based on the similarity of locations and colors, using knowledge about the movement of the vehicle and pedestrians. We evaluated FlowScan using real video data recorded by a dashboard camera. The mean absolute error rate for flow estimation in both directions was 18.5%, highlighting its effectiveness compared with the state of the art.

    Download PDF (4098K)
  • Jun Yajima, Yasuhiko Abe, Takayuki Hasebe, Takao Okubo
    Type: Special Issue of Intelligent Transportation Systems and Mobile Communication for Realizing Smart Cities
    Subject area: Network Security
    2020 Volume 28 Pages 65-74
    Published: 2020
    Released: January 15, 2020
    JOURNALS FREE ACCESS

    This paper proposes cumulative sum detection, which can detect cyberattacks on the Controller Area Network (CAN). Well-known existing attack detection techniques cause false positives and false negatives when long delays or early arrivals occur in normal periodic message reception. The proposed technique detects attacks with almost no false positives or false negatives; that is, it remains highly accurate even in the presence of long delays or early arrivals. This paper evaluates the detection accuracy of existing techniques and the proposed technique by computer simulation with CAN data obtained from actual vehicles. Considering the evaluation results and the ease of parameter adjustment, we show that cumulative sum detection is the best of these techniques.
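    The core idea of cumulative sum detection can be sketched as follows (a simplified illustration with hypothetical timings and threshold, not the paper's exact detector): deviations of inter-arrival times from the nominal period are accumulated, so normal jitter cancels out, while an injected frame shifts every later arrival and keeps accumulating.

```python
def cusum_detect(arrival_times, period, threshold):
    """Flag an attack when the cumulative deviation of inter-arrival
    times from the nominal period exceeds a threshold. Late-then-early
    jitter cancels in the sum; an injected extra frame does not.
    Sketch with hypothetical parameters."""
    s = 0.0
    for i in range(1, len(arrival_times)):
        s += (arrival_times[i] - arrival_times[i - 1]) - period
        if abs(s) > threshold:
            return True  # attack suspected
    return False

normal = [0.0, 0.0102, 0.0198, 0.0301, 0.0400]       # jitter around a 10 ms period
attacked = [0.0, 0.010, 0.013, 0.020, 0.030, 0.040]  # injected frame at 13 ms
print(cusum_detect(normal, 0.010, 0.005),
      cusum_detect(attacked, 0.010, 0.005))  # → False True
```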

    Download PDF (702K)
  • Yusuke Sakumoto, Ittetsu Taniguchi
    Type: Regular Papers
    Subject area: Algorithm Theory
    2020 Volume 28 Pages 75-85
    Published: 2020
    Released: January 15, 2020
    JOURNALS FREE ACCESS

    In order to utilize renewable energy effectively, generated surplus energy should be stored in batteries and transferred to distant places with high demand in a microgrid. As a scalable mechanism for such energy transfer (energy interchange), we previously proposed an autonomous decentralized mechanism (ADM) based on Markov chain Monte Carlo (MCMC) and clarified that our ADM accomplishes the global objective of quickly supplying energy appropriate to demand all over the microgrid. In this paper, toward a resilient microgrid, we propose a method of directional energy interchange for use in our ADM. We first design the method so that it can quickly transfer energy in an appropriate direction, on the basis of the advection-diffusion equation used in physics. Then, we investigate the performance of the proposed method through simulation experiments considering energy-shortage and emergency situations. Simulation results show that the proposed method (a) can quickly supply energy from a traditional centralized grid to a microgrid under energy-shortage situations, and (b) can quickly gather distributed energy to a specific place (e.g., a safe shelter) under emergency situations.

    Download PDF (1135K)
  • Akihito Kitadai
    Type: Special Issue of Computer and Humanities
    2020 Volume 28 Pages 86
    Published: 2020
    Released: February 15, 2020
    JOURNALS FREE ACCESS
    Download PDF (39K)
  • Takeshi Miura, Katsubumi Tajima
    Type: Special Issue of Computer and Humanities
    Subject area: ITS
    2020 Volume 28 Pages 87-90
    Published: 2020
    Released: February 15, 2020
    JOURNALS FREE ACCESS

    Distance cartograms are deformed maps in which the distance of each preselected point pair in the geographic map is changed to match a specified value. In distance cartogram construction, preselected points such as train stations are fixed in the first step, and other points, such as those comprising railroads, are fixed in the second step. This paper proposes a new point location conversion approach for the second step. In this approach, a triangle in the geographic map consisting of two points already fixed in the first step and a point to be fixed in the second step is converted into a similar triangle in the cartogram. Experimental results demonstrate its effectiveness.
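    The similar-triangle conversion of the second step can be sketched as the similarity transform determined by the two already-fixed points (a minimal illustration with hypothetical coordinates; points are represented as complex numbers):

```python
def map_point(a, b, p, a2, b2):
    """Map point p from the geographic map into the cartogram by the
    similarity transform that carries segment a->b onto a2->b2, so that
    triangles (a, b, p) and (a2, b2, p') are similar. Sketch only;
    points are complex numbers x + yj."""
    return a2 + (b2 - a2) * (p - a) / (b - a)

# stations a, b were fixed in the first step; railroad point p is fixed
# in the second step; the cartogram doubles the a-b distance
a, b, p = 0 + 0j, 2 + 0j, 1 + 1j
a2, b2 = 0 + 0j, 4 + 0j
print(map_point(a, b, p, a2, b2))  # → (2+2j)
```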

    Download PDF (762K)
  • Tatsuki Sekino
    Type: Special Issue of Computer and Humanities
    Subject area: Applications in Humanities
    2020 Volume 28 Pages 91-99
    Published: 2020
    Released: February 15, 2020
    JOURNALS FREE ACCESS

    Time periods are frequently used to specify time in metadata and retrieval. However, it is not easy to describe and retrieve information about periods, because the temporal ranges they represent are often ambiguous: these ranges do not have fixed beginning and end points. To solve this problem, this study developed basic logics to describe and process uncertain time intervals. An uncertain time interval is represented as a set of time intervals, each indicating a state in which the uncertain time interval is determined. Based on this concept, a logic to retrieve uncertain time intervals satisfying a given condition was established, and it was revealed that retrieval results fall into three states: reliable, impossible, and possible matches. Additionally, to describe data about uncertain periods, an ontology (the HuTime Ontology) was constructed based on this logic. The ontology is characterized by the fact that uncertain time intervals can be defined recursively. It is expected that more data about time periods will be created and released using the results of this study.
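    The three-state retrieval logic can be sketched for an overlap condition (a simplified illustration, not the HuTime Ontology itself; here an uncertain interval is reduced to a range of possible start points and a range of possible end points):

```python
def classify_match(s_lo, s_hi, e_lo, e_hi, q0, q1):
    """Classify whether an uncertain interval (start somewhere in
    [s_lo, s_hi], end somewhere in [e_lo, e_hi]) overlaps query [q0, q1]:
    'reliable' if every possible determination overlaps, 'impossible'
    if none does, 'possible' otherwise. Sketch of the three-state idea."""
    always = s_hi <= q1 and e_lo >= q0   # even the latest start / earliest end overlap
    never = s_lo > q1 or e_hi < q0       # even the extreme determinations miss
    return "reliable" if always else ("impossible" if never else "possible")

# a period known to start in 1603-1605 and end in 1867-1868
print(classify_match(1603, 1605, 1867, 1868, 1700, 1750))  # → reliable
print(classify_match(1603, 1605, 1867, 1868, 1900, 1950))  # → impossible
print(classify_match(1603, 1605, 1867, 1868, 1604, 1604))  # → possible
```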

    Download PDF (1129K)
  • Keiichi Yasumoto
    Type: Special Issue of Network Services and Distributed Processing
    2020 Volume 28 Pages 100-101
    Published: 2020
    Released: February 15, 2020
    JOURNALS FREE ACCESS
    Download PDF (36K)
  • Hiroko Nagashima, Yuka Kato
    Type: Special Issue of Network Services and Distributed Processing
    Subject area: Mobile Computing
    2020 Volume 28 Pages 102-111
    Published: 2020
    Released: February 15, 2020
    JOURNALS FREE ACCESS

    Large volumes of data are typically used during analyses. Data preprocessing, which involves detecting outliers, handling missing data, data formatting, integration, and normalization, is essential for achieving accurate results. Many tools and methods are available for reducing preprocessing time, but most analysts face difficulties when using them. This paper proposes a method for handling outliers and missing data, called Automated PRE-Processing for Sensor Data (APREP-S). To reduce analysis resources, we combine programming by example with machine learning via Bayesian inference, feeding human knowledge to APREP-S as examples and calculating a proper proportion by Bayesian inference. We also use k-Shape to calculate the similarity of time-series data. In the evaluation, we use sensor data of temperature and humidity and compare the sum of squared errors between the original data and the outputs of four methods: (1) APREP-S, (2) the mean of the entire data, (3) the mean of the around-the-target imputation data, and (4) spline interpolation. We verify that APREP-S is a more suitable preprocessing method for humidity data than for temperature data; we consider the reason to be that humidity data have more change points.

    Download PDF (1997K)
  • Hikaru Ichise, Yong Jin, Katsuyoshi Iida, Yoshiaki Takai
    Type: Special Issue of Network Services and Distributed Processing
    Subject area: Network Security
    2020 Volume 28 Pages 112-122
    Published: 2020
    Released: February 15, 2020
    JOURNALS FREE ACCESS

    DNS (Domain Name System) based name resolution is one of the most fundamental Internet services for both Internet users and Internet service providers. In the normal DNS name resolution process, the corresponding NS (name server) records are required before a DNS query is sent to an authoritative DNS server. In recent years, however, DNS-based botnet communication has been observed in which botnet-related network traffic is transferred via DNS queries and responses. In particular, some types of malware send DNS queries directly to C&C servers by IP address without obtaining the corresponding NS records in advance. In this paper, we propose a novel mechanism to detect and block abnormal DNS traffic by analyzing the NS record history obtained in an intranet. In the proposed mechanism, all DNS traffic of the intranet is captured and analyzed to extract legitimate NS records and the corresponding glue A records (the IP addresses of name servers), which are stored in a white-list database. All outgoing DNS queries are then checked, and those destined to IP addresses not included in the white list are blocked as abnormal DNS traffic. We implemented a prototype system and evaluated its functionality in an SDN-based experimental network. The results showed that the prototype system worked as expected, and accordingly we consider the proposed mechanism capable of detecting and blocking some specific types of abnormal DNS-based botnet communication.
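    The white-list check on outgoing queries can be sketched as follows (a simplified illustration with example IP addresses from documentation ranges, not the prototype system):

```python
def check_outgoing_query(dst_ip, whitelist):
    """Allow an outgoing DNS query only if its destination IP appears in
    the white list of glue A records extracted from observed legitimate
    NS records; otherwise block it as suspected direct-to-C&C traffic.
    Simplified sketch of the filtering rule."""
    return "allow" if dst_ip in whitelist else "block"

# glue A records harvested from legitimate NS responses (example IPs)
whitelist = {"192.0.2.53", "198.51.100.53"}
print(check_outgoing_query("192.0.2.53", whitelist))   # → allow
print(check_outgoing_query("203.0.113.7", whitelist))  # → block
```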

    Download PDF (1808K)
  • Hironori Nakajo
    Type: Special Issue of Embedded Systems Engineering
    2020 Volume 28 Pages 123
    Published: 2020
    Released: February 15, 2020
    JOURNALS FREE ACCESS
    Download PDF (32K)
  • Shuichi Sato, Shogo Hattori, Hiroyuki Seki, Yutaka Inamori, Shoji Yuen
    Type: Special Issue of Embedded Systems Engineering
    Subject area: Software Analysis and Design
    2020 Volume 28 Pages 124-135
    Published: 2020
    Released: February 15, 2020
    JOURNALS FREE ACCESS

    We propose a method to automate the detection of signal disturbances for a given unsafe property. To incorporate a signal disturbance, we introduce an auxiliary variable, called a cushion variable, for each signal variable to store a value altered by the disturbance that causes unintended state transitions. A signal disturbance is defined by negating the equalities between signal variables and their cushion variables. We develop a method that uses a weighted partial maximum satisfiability modulo theories (Max-SMT) technique to efficiently detect a signal disturbance, that is, a set of variables altered by faults that results in an undesirable condition. By assigning weights properly to the equations, we control the derivation of signal disturbance patterns with the required property. We present an experimental application of our method to a simplified cruise control system as a practical case study in two well-known methods of safety analysis, namely system-theoretic process analysis (STPA) and fault tree analysis (FTA), for the automatic detection of time-series signal disturbances.

    Download PDF (1053K)
  • Takehiro Wakabayashi, Shuji Morisaki, Norimitsu Kasai, Noritoshi Atsum ...
    Type: Special Issue of Embedded Systems Engineering
    Subject area: Development Environments and Automated Technologies
    2020 Volume 28 Pages 136-149
    Published: 2020
    Released: February 15, 2020
    JOURNALS FREE ACCESS

    This article proposes a tool-supported approach to detecting omitted requirements, i.e., requirements not implemented in the corresponding architectural design document, using difference sets of words or word senses between a software requirements specification document and a software architectural design document. First, the proposed approach uses a natural language processing tool to extract the sets of single words, multi-words, and word senses that appear in a requirements specification document but not in the corresponding design document. Then, an inspector of the architectural design document validates, using these sets as guides, whether the requirements specified with those single words, multi-words, or word senses are implemented in the design document. Evaluation 1 investigated whether omitted requirements can be detected in design documents using the proposed approach. Evaluation 2 investigated the number of words that inspectors need to check. The result of Evaluation 1 shows that omitted requirements were detected in all three pairs of real requirements specification documents and design documents. The result of Evaluation 2 shows that the ratio of words in the difference sets to those in the requirements specification documents varies from 18% to 83% over the nine pairs of requirements specification documents and design documents.
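    The difference-set idea can be sketched for single words (a simplified illustration with made-up document snippets; the actual approach also handles multi-words and word senses):

```python
import re

def difference_words(requirements_text, design_text):
    """Return words that appear in the requirements specification but
    not in the architectural design document; such words hint at
    requirements possibly omitted from the design. Single-word sketch
    of the proposed approach."""
    def tokenize(text):
        return set(re.findall(r"[a-z]+", text.lower()))
    return tokenize(requirements_text) - tokenize(design_text)

req = "The system shall log every failed login attempt and lock the account."
des = "The login component records failed attempts in the audit log."
print(sorted(difference_words(req, des)))
```

Note that the naive token match treats "attempt" and "attempts" as different words; this is exactly why the full approach also compares word senses rather than surface forms alone.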

    Download PDF (1063K)
  • Hideki Takase, Tomoya Mori, Kazuyoshi Takagi, Naofumi Takagi
    Type: Special Issue of Embedded Systems Engineering
    Subject area: Embedded System Technology
    2020 Volume 28 Pages 150-160
    Published: 2020
    Released: February 15, 2020
    JOURNALS FREE ACCESS

    The Robot Operating System (ROS) has attracted attention as a design platform for robot software development. One problem with ROS is that it requires a Linux environment to operate, and hence high-performance, power-hungry devices. This paper proposes a novel solution called mROS, a lightweight runtime environment for ROS nodes that executes robot software components on mid-range embedded devices. mROS consists of a real-time operating system (RTOS) and a TCP/IP protocol stack that together provide a compact ROS communication library. It provides connectivity from the edge node to the host and other nodes through the native ROS protocol. Additionally, we design mROS APIs that are compatible with ROS 1, so native ROS nodes can be ported from Linux-based systems to RTOS-based systems as mROS nodes. Experimental results confirmed that mROS meets the performance requirements for practical applications. Moreover, we showed that the library constituting mROS is small enough for target embedded devices. We further conducted a case study to validate the portability of ROS nodes to mROS. Our work is expected to contribute to power saving and real-time performance enhancement in mobile robot systems.

    Download PDF (3095K)
  • Neda Gholami, Mohammad Mahdi Dehshibi, Andrew Adamatzky, Antonio Rueda ...
    Type: Regular Papers
    Subject area: Computational Theory
    2020 Volume 28 Pages 161-168
    Published: 2020
    Released: February 15, 2020
    JOURNALS FREE ACCESS

    For reconstructing CT images in the clinical setting, an 'effective energy' is usually used instead of the total X-ray spectrum. This approximation reduces accuracy. We propose quantizing the total X-ray spectrum into irregular intervals to preserve accuracy. A phantom consisting of skull, rib bone, and lung tissues was irradiated in a CT configuration in GATE/GEANT4. We applied the inverse Radon transform to the obtained sinogram to construct a pixel-based attenuation matrix (PAM). The PAM was then used to weight the Hounsfield unit (HU) values computed at each interval's representative energy. Finally, we multiplied the computed HUs by the associated normalized photon flux of each interval. The performance of the proposed method was evaluated through complexity and visual analyses. Entropy measurements, Kolmogorov complexity, and morphological richness were calculated to evaluate complexity. Quantitative visual criteria (i.e., PSNR, FSIM, SSIM, and MSE) were reported to show the effectiveness of the fuzzy C-means approach in the segmentation task.
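    The flux-weighted combination of per-interval Hounsfield units can be sketched as follows (a simplified illustration with made-up attenuation coefficients, not the paper's GATE/GEANT4 pipeline):

```python
def spectrum_weighted_hu(mu_tissue, mu_water, flux):
    """Combine per-energy-interval Hounsfield units into one value by
    weighting each interval's HU with its normalized photon flux.
    mu_tissue, mu_water: linear attenuation coefficients at each
    interval's representative energy; flux: photon counts per interval.
    Sketch of the weighting step with hypothetical values."""
    total = sum(flux)
    hu = 0.0
    for mt, mw, f in zip(mu_tissue, mu_water, flux):
        hu += (f / total) * 1000.0 * (mt - mw) / mw  # standard HU formula
    return hu

# three spectrum intervals (made-up coefficients for bone-like tissue)
print(spectrum_weighted_hu([0.60, 0.40, 0.30], [0.30, 0.25, 0.20], [2, 5, 3]))
```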

    Download PDF (5978K)
  • Tomoharu Ugawa, Taiki Fujimoto
    Type: Regular Papers
    Subject area: Special Section on Programming
    2020 Volume 28 Pages 169-177
    Published: 2020
    Released: February 15, 2020
    JOURNALS FREE ACCESS

    For accurate garbage collection (GC), all pointers belonging to the root set must be found. In a virtual machine (VM) implemented in C, local C variables may contain pointers. Thus, some VMs add the values or addresses of local variables to a table visible to the GC. However, this approach is error-prone because local variables must be added and removed correctly throughout the entire source code of the VM. In this research, we check whether local variables are added and removed correctly by pattern matching against control flow graphs of the VM's source code. We applied this check to the VM of a JavaScript subset we are developing and found that it could identify many cases of missing and redundant additions.

    Download PDF (1115K)
  • Renzhi Wang, Mizuho Iwaihara
    Type: Regular Papers
    Subject area: Special Section on Databases
    2020 Volume 28 Pages 178-191
    Published: 2020
    Released: February 15, 2020
    JOURNALS FREE ACCESS

    Wikipedia is the largest online encyclopedia, in which articles are edited by different volunteers with different thoughts and styles. Sometimes two or more articles have different titles but exactly the same or strongly similar themes. Administrators and editors are supposed to detect such article pairs and determine whether they should be merged. We call an article pair mergeable if it is discussed for a possible merge, and merged if the pair is actually merged. In this paper, we propose a method to automatically determine whether an article pair is mergeable or merged. According to the Wikipedia guidelines for article merges, in the duplicate case the articles cover exactly the same content, while in the overlap case they cover related subjects with a significant overlap. The content of an overlapping part is similar, but the wording in the pair can be extensively different, so methods that exploit semantic relatedness are necessary. We consider various textual similarities and semantic relatedness measures. To integrate word embeddings trained on the target dataset and on a large global corpus, we propose linear and non-linear combinations of multiple embedding results and rebuild word vectors for evaluating semantic relatedness. We clarify the differences between our method and previous research on combining multiple word embeddings. We also handle overlap cases by computing the Jaccard similarity between article pairs. We combine Jaccard similarity, common-link article counts, and word embedding-based relatedness to predict whether an article pair should be merged. We explore the relationship between segment-level (paragraph-level) similarity and mergeable/merged article pairs, and then propose Multimodal Similarity-Based Merge Prediction (MSBMP), which combines the proposed features with a random forest to predict mergeable/merged article pairs. Our evaluations are performed on real mergeable and merged article pairs. MSBMP shows clear superiority, with apparent improvements over the WikiSearch, TF-IDF, and word embedding baselines.
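    The Jaccard similarity used as one of the merge-prediction features can be sketched as follows (a minimal illustration over word sets; in MSBMP it is combined with common-link counts and embedding-based relatedness):

```python
def jaccard(text_a, text_b):
    """Jaccard similarity between the word sets of two articles:
    |intersection| / |union|. Minimal sketch of one feature."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

print(jaccard("solar power generation", "solar energy generation"))  # → 0.5
```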

    Download PDF (2373K)
  • Rin-ichiro Taniguchi
    Type: Special Issue of Young Researchers' Papers
    2020 Volume 28 Pages 192
    Published: 2020
    Released: March 15, 2020
    JOURNALS FREE ACCESS
    Download PDF (34K)
  • Nattaon Techasarntikul, Photchara Ratsamee, Jason Orlosky, Tomohiro Ma ...
    Type: Special Issue of Young Researchers' Papers
    Subject area: Application Systems
    2020 Volume 28 Pages 193-202
    Published: 2020
    Released: March 15, 2020
    JOURNALS FREE ACCESS

    Packing optimization is a challenging and time-consuming task for a number of industry and logistics applications. Efficient packing can reduce the cost of storage and shipping and also guarantee that damage will not occur during shipping. To help address this problem, we propose a spatial augmented reality-based support system for assisting workers with packing optimization. Our packing support system first uses an RGB-D camera to acquire color and depth information of the items to be packed and the destination container. Then, object segmentation and dimension estimation are carried out simultaneously, and the position and orientation of packing items inside the container are calculated using a bin-packing algorithm. Finally, the optimized packing instructions are projected onto the user's work area. We developed and tested two user interfaces (UIs) for visualizing instructions, called Rotation and Object Movement. Experimental results showed that both methods reduce packing time, by up to 57.89% with Rotation and 55.63% with Object Movement, compared to a non-UI method.

    Download PDF (3560K)
  • Takashi Imaizumi
    Type: Special Issue of the Internet and operation technologies for utilization of IoT
    2020 Volume 28 Pages 203
    Published: 2020
    Released: March 15, 2020
    JOURNALS FREE ACCESS
    Download PDF (34K)
  • Takashi Yamanoue
    Type: Special Issue of the Internet and operation technologies for utilization of IoT
    Subject area: Operation
    2020 Volume 28 Pages 204-213
    Published: 2020
    Released: March 15, 2020
    JOURNALS FREE ACCESS

    This paper describes a method of monitoring servers or server rooms with an Internet of Things (IoT) system that can configure and control terminal sensors behind a network address translation (NAT) router through a Wiki page on the Internet. The IoT system consists of Wiki pages and a bot (Wiki Bot) that runs on a Raspberry Pi with sensors. A Wiki Bot can be placed behind the NAT router to resist various online attacks, and the system can monitor servers behind a NAT router over the Internet. A Wiki Bot is controlled by commands sent from the Wiki page: it acquires data from its sensors and processes the data via a sequence of commands. The sensor settings and the data sampling rate can be changed remotely by changing the commands on the Wiki page.

    Download PDF (2871K)
  • Motoyuki Ohmori, Koji Okamura
    Type: Special Issue of the Internet and operation technologies for utilization of IoT
    Subject area: Operation
    2020 Volume 28 Pages 214-221
    Published: 2020
    Released: March 15, 2020
    JOURNALS FREE ACCESS

    Even in the era of Software Defined Networking (SDN) and Software Defined Infrastructure (SDI), network edge switches still need to be rebooted for various reasons, e.g., updating firmware or configuring special behavior. It is therefore necessary to clarify how to shorten the downtime of a campus network when many of its switches require reboots. To this end, this paper proposes the equal deepest vertex first reboot with vertex contraction, which can simultaneously reboot many network switches with little overhead downtime. The paper expresses a campus network in graph-theoretic terms, reduces downtime overhead by vertex contraction, and proves that all switches can be rebooted within a finite number of rebooting procedures. The paper presents an implementation of the proposed procedures and evaluates the method in an actual campus network with more than 300 switches installed: the equal deepest vertex first reboot with vertex contraction rebooted all switches with only 16 seconds of additional overhead out of 109 seconds of total downtime, where the ideal minimum downtime was 93 seconds.
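    The deepest-first grouping of reboots can be sketched on a tree-shaped network (a simplified illustration without vertex contraction; switch names are hypothetical): switches at the greatest depth reboot together first, so no switch reboots while a descendant is still down.

```python
from collections import deque

def reboot_rounds(tree, root):
    """Group switches into reboot rounds, deepest level first: all
    switches at the greatest depth reboot simultaneously, then the next
    level up, and so on. Sketch of the equal-deepest-vertex-first idea.

    tree: adjacency dict mapping a switch to its child switches;
    root: the uplink switch."""
    depth = {root: 0}
    q = deque([root])
    while q:                                 # BFS to compute depths
        v = q.popleft()
        for c in tree.get(v, []):
            depth[c] = depth[v] + 1
            q.append(c)
    rounds = {}
    for v, d in depth.items():
        rounds.setdefault(d, []).append(v)
    return [sorted(rounds[d]) for d in sorted(rounds, reverse=True)]

tree = {"core": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
print(reboot_rounds(tree, "core"))  # → [['a1', 'a2', 'b1'], ['a', 'b'], ['core']]
```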

    Download PDF (311K)
  • Sanouphab Phomkeona, Koji Okamura
    Type: Special Issue of the Internet and operation technologies for utilization of IoT
    Subject area: Security Infrastructure
    2020 Volume 28 Pages 222-229
    Published: 2020
    Released: March 15, 2020
    JOURNALS FREE ACCESS

    Cyber attackers use email as a tool to trick recipients and to inject or drop malicious software onto their devices. Users face phishing and malicious emails every day, and it can become a huge problem for an entire organization if even one user clicks a single link in such an email. The difficulty lies in classifying and detecting malicious emails among ordinary ones, especially spear-phishing emails designed for a particular target, or zero-day malicious emails that have never been observed before. In this paper, we introduce a way to classify and detect zero-day malicious emails by using deep learning on features extracted from the email header and body, combined with dynamic analysis information. Email datasets in four different languages are used to train and test the system, simulating real-world diversity and zero-day malicious email attacks. We obtained a satisfactory detection accuracy for both zero-day malicious email types and ordinary spam.

    Download PDF (393K)
  • Ariel Rodriguez, Koji Okamura
    Type: Special Issue of the Internet and operation technologies for utilization of IoT
    Subject area: Machine Learning & Data Mining
    2020 Volume 28 Pages 230-238
    Published: 2020
    Released: March 15, 2020
    JOURNALS FREE ACCESS

    The Internet is constantly evolving, producing many new data sources that can be used to help us gain insights into the cyber threat landscape and in turn, allow us to better prepare for cyberattacks. With this in mind, we present an end-to-end real-time cyber situational awareness system which aims to retrieve security-relevant information from the social networking site Twitter.com. This system classifies and aggregates the data extracted and provides real-time cyber situational awareness information based on sentiment analysis and data analytics techniques. This research will assist security analysts in rapidly and efficiently evaluating the level of cyber risk in their organization and allow them to proactively take actions to plan and prepare for potential attacks before they happen.

    Download PDF (1135K)
  • Masato Yamashita, Minoru Nakazawa, Yukinobu Nishikawa, Noriyuki Abe
    Type: Regular Papers
    Subject area: System Security
    2020 Volume 28 Pages 239-246
    Published: 2020
    Released: March 15, 2020
    JOURNALS FREE ACCESS

    Recently, brain-machine interface (BMI) technology, which communicates with humans and operates robots using human brain information, has been actively studied. Authentication functions using BMI have been examined in previous research. Although many studies focus on feature extraction and learning-model creation, few discuss the effectiveness of preprocessing. In this study, we implemented an EEG biometric function using an image-stimulation method. In this paper, we propose a biometric authentication system using EEG measured at the time of an image stimulus. We also evaluated the change in authentication accuracy in order to verify the preprocessing methods (digital filtering, artifact countermeasures, and epoching) in the authentication system. As a result, authentication accuracy is improved by performing the proposed preprocessing. In addition, it was shown that convenience and security were improved when using the system.

    Download PDF (2900K)
  • Tetsuro Kitahara
    Type: Special Issue of Increasingly Developing Music Informatics
    2020 Volume 28 Pages 247
    Published: 2020
    Released: April 15, 2020
    JOURNALS FREE ACCESS
    Download PDF (35K)
  • Kosuke Nakamura, Takashi Nose, Yuya Chiba, Akinori Ito
    Type: Special Issue of Increasingly Developing Music Informatics
    Subject area: Music composition/arrangement
    2020 Volume 28 Pages 248-257
    Published: 2020
    Released: April 15, 2020
    JOURNALS FREE ACCESS

    In this paper, we deal with melody completion, a technique which smoothly completes partially masked melodies. Melody completion can help people compose or arrange pieces of music in several ways, such as editing existing melodies or connecting two other melodies. In recent years, various methods have been proposed for realizing high-quality completion via neural networks. In this research, we examine a method of melody completion based on an image completion network. We represent melodies as images and train a completion network to complete those images. The completion network consists of convolution layers and is trained in the framework of generative adversarial networks. We also use the chord progressions of the musical pieces as conditions. The experimental results confirmed that the network can generate an original melody as a completion result and that the quality of the generated melody is not significantly worse than that of a simple example-based melody completion method.

    Download PDF (2845K)
  • Christoph M. Wilk, Shigeki Sagayama
    Type: Special Issue of Increasingly Developing Music Informatics
    Subject area: Music composition/arrangement
    2020 Volume 28 Pages 258-266
    Published: 2020
    Released: April 15, 2020
    JOURNALS FREE ACCESS

    In this paper, we propose harmony generation according to user input parameters, based on fundamental harmonic properties, as a new approach to the problem of automatic music completion (the automatic generation of music pieces from any incomplete fragments of music), which we have proposed as a generalization of conventional music information problems such as automatic melody generation and harmonization. The goal is to enable possibly inexperienced users to turn partial musical ideas into complete pieces for quick exploration of musical possibilities. The focus therefore lies on responding to intuitive modes of input, allowing the user to intentionally shape the generated music. To that end, parameterized harmony generation utilizes fundamental musical principles understandable by both user and computer, instead of conventional probabilistic models (which imply imitation of a style or data corpus) or restrictive rule-based models. We apply this approach to the automatic completion of four-part chorales, using the harmonic concepts of active tones, cadences, and key modulation. We implemented a system that jointly optimizes harmony and voicing, considering both user input and music theory. Our system was evaluated by a professional composer and in a subjective evaluation experiment. We also invite the reader to use our system at http://160.16.202.131/automatic_music_completion.

    Download PDF (563K)
  • Takashi Ishio
    Type: Special Issue of Software Engineering
    2020 Volume 28 Pages 267
    Published: 2020
    Released: April 15, 2020
    JOURNALS FREE ACCESS
    Download PDF (34K)
  • Haruto Tanno, Yu Adachi, Yu Yoshimura, Katsuyuki Natsukawa, Hideya Iwa ...
    Type: Special Issue of Software Engineering
    Subject area: Testing and Maintenance
    2020 Volume 28 Pages 268-278
    Published: 2020
    Released: April 15, 2020
    JOURNALS FREE ACCESS

    Visual regression testing (VRT) is a useful method for confirming that application screens are correctly displayed. VRT systems detect differences between the screens of an old version and a new version of an application to support the tester in detecting failures on the screen of the new version. One approach to VRT is image-based; i.e., before and after screenshot images are compared. It is particularly promising because screenshots are independent of the application's environment (operating system, web browser, etc.). Existing image-based VRT systems simply compare two images in pixel units and highlight pixels with differences, so if there are changes that affect the entire screen (e.g., parallel movements of screen elements), a large number of unessential differences are detected, and the essential differences are buried within them. An image-based VRT method named ReBDiff is presented that solves this problem. Before and after screen images are each divided into multiple regions, and appropriate matchings are made between corresponding regions in the two images. For each matching, differences such as shift, alteration, and addition, if any, are detected. In addition, suitable views are provided on the basis of the detected differences. By observing these views, the tester can efficiently identify the essential differences even when there are changes that affect the entire screen, e.g., parallel movements of screen elements. Experiments on a prototype system using websites for PCs and smartphones and an application screen of an Electron application demonstrated the effectiveness of the proposed method.
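
    The core idea of region-based matching, as opposed to pixel-by-pixel comparison, can be sketched in miniature. The following is my simplified illustration (not the ReBDiff algorithm itself): images are tiny 2-D pixel grids, each is split into tiles, and a tile of the new image is labeled "shifted" rather than "altered" if identical content exists elsewhere in the old image:

```python
def tiles(img, size):
    """Split a 2-D pixel grid into (row, col) -> tile-of-pixels mappings."""
    h, w = len(img), len(img[0])
    return {(r, c): tuple(tuple(img[r + i][c:c + size]) for i in range(size))
            for r in range(0, h, size) for c in range(0, w, size)}

def classify(old, new, size=2):
    """Label each region of `new`: 'same', 'shifted' (content found at another
    position in `old`), or 'altered' (content not present in `old` at all)."""
    t_old, t_new = tiles(old, size), tiles(new, size)
    positions = {}                    # tile content -> first position in old
    for pos, t in t_old.items():
        positions.setdefault(t, pos)
    report = {}
    for pos, t in t_new.items():
        if t_old.get(pos) == t:
            report[pos] = "same"
        elif t in positions:
            report[pos] = "shifted"
        else:
            report[pos] = "altered"
    return report

# 4x4 "screenshots": the 1-block and 3-block swapped places, 4 became 9.
old = [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
new = [[3, 3, 2, 2], [3, 3, 2, 2], [1, 1, 9, 9], [1, 1, 9, 9]]
print(classify(old, new))
```

    A pixel-level diff would flag three of the four regions as changed; the region-level view separates the moved blocks from the one essential alteration, which is the distinction the paper's views surface for the tester.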

    Download PDF (1017K)
  • Peraphon Sopahtsathit
    Type: Special Issue of Software Engineering
    Subject area: Evaluation and Management
    2020 Volume 28 Pages 279-291
    Published: 2020
    Released: April 15, 2020
    JOURNALS FREE ACCESS

    Software engineering is a diverse and highly flexible discipline that can be practiced using a development model of the developer's choosing. Unfortunately, existing state-of-the-practice software development models do not take human effort into consideration, since there are no applicable metrics to gauge the associated manual activity. This study presents a novel discretization technique as a software analytic to estimate the manual effort expended on the software development process. The proposed technique classifies manual activity into three domains, namely, abstract, concrete, and unclassified. The units of classification are called Developer Work Elements (DevWE). A sequence of DevWE forms a development analytic presented in three visual aids, namely, a symbolic flow map, an operation chart, and a workload breakdown chart. These enable the determination of the effort expended, measured in COSMIC Function Points. The result can be combined with traditionally measurable software activities to yield an accurate total project effort estimate. The major contributions of this work encompass (1) a discretization DevWE analytic for manual effort estimation, (2) visual chart aids for operation tracing, monitoring, improvement, and control, and (3) the discovery that almost half of the estimated effort stems from manual activity.

    Download PDF (2915K)
  • Hideo Nishimura, Yoshihiko Omori, Takao Yamashita
    Type: Regular Papers
    Subject area: Network Security
    2020 Volume 28 Pages 292-301
    Published: 2020
    Released: April 15, 2020
    JOURNALS FREE ACCESS

    Public-key-based Web authentication can be securely implemented using modern mobile devices as secure storage of private keys with hardware-assisted trusted environments, such as a trusted execution environment (TEE). Since a private key is strictly kept secret within the TEE and never leaves the device, the user must register the key separately for each combination of device and Web account, which is burdensome for users who want to switch devices. The aim of this research was to provide a solution for key management with enhanced usability by relaxing the restriction that keys can never leave the device and allowing private keys to be shared across devices while still maintaining an acceptable level of security. We propose a secure method for sharing keys across the TEEs of devices. The method has two functions: 1) trusted third party (TTP)-based device owner identification, which involves a TTP that is responsible for supervising key sharing across devices in an authentication system, and 2) secure key copy, which enables the duplication of keys in a device that were originally stored in another device through a direct secure transport channel between the TEEs of the devices. A TTP identifies the owner of each device to mitigate the risk of the keys being illegally shared. In this study, we evaluated the secure-key-copy function of our proposed method by implementing it in the ARM TrustZone-based TEE, showing that this function is feasible for commercially available smartphones.

    Download PDF (1234K)
  • Junji Fukuhara, Munehiro Takimoto
    Type: Regular Papers
    Subject area: Special Section on Programming
    2020 Volume 28 Pages 302-309
    Published: 2020
    Released: May 15, 2020
    JOURNALS FREE ACCESS

    The Single Instruction Multiple Data (SIMD) execution model on GPUs enables programs to execute efficiently. Nevertheless, efficiency may decrease through branch divergence, which occurs when SIMD threads follow different paths at branches. Once branch divergence occurs, some threads have to wait for the completion of other threads. This inefficiency is caused by the instructions inside branches, which may be increased by some traditional code optimizations based on code motion. Partial Redundancy Elimination (PRE) is one such code-motion method: it inserts expressions into some paths within branches and thereby increases branch divergence. Thus, we propose a new PRE approach, called Speculative Sparse Code Motion (SSCM), which not only removes redundant expressions but also reduces branch divergence. SSCM builds on two properties: Sparse Code Motion (SCM), which reduces the static number of expressions in addition to performing PRE, and speculative code motion, which hoists expressions out of branches. The SCM property reduces branch divergence because it hoists the expressions on the true and false paths of a branch as a single expression. Moreover, the speculation property hoists expressions not hoisted by SCM, removing more redundant expressions where speculation is not harmful in divergent branches. Furthermore, SSCM enables the selective application of speculative code motion to improve programs with divergent and/or non-divergent branches. To prove the effectiveness of our method, we applied it to benchmarks with divergent branches. Our experimental results demonstrate an efficiency improvement of more than 8% for some programs.
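
    The effect of hoisting a common expression out of a branch, the core of the code motion the abstract describes, can be shown with a scalar analogy. This is an illustrative sketch only: real SSCM operates on GPU intermediate code, not Python:

```python
# Before: x * y is computed on both branch paths. On a GPU, divergent
# SIMD threads would each spend cycles inside their own path.
def before(cond, x, y):
    if cond:
        return x * y + 1
    else:
        return x * y - 1

# After SSCM-style motion: the common expression is hoisted above the
# branch (speculatively: it is evaluated even on a path that might not
# need it), shrinking the divergent region to the cheap +1 / -1 steps.
def after(cond, x, y):
    t = x * y                  # single evaluation shared by both paths
    return t + 1 if cond else t - 1

# Both versions compute the same result.
assert before(True, 3, 4) == after(True, 3, 4) == 13
assert before(False, 3, 4) == after(False, 3, 4) == 11
```

    The benefit on a GPU comes from the divergent region containing fewer instructions, so threads that take different paths wait on each other for less time.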

    Download PDF (812K)
  • Ryuichi Saito, Shinichiro Haruyama
    Type: Regular Papers
    Subject area: Special Section on Databases
    2020 Volume 28 Pages 310-319
    Published: 2020
    Released: May 15, 2020
    JOURNALS FREE ACCESS

    Since 2010, in-memory cluster computing platforms have been increasingly used by firms and research institutions to analyze large datasets within a short time. On such platforms, unexpected errors can generate loads that exceed what supporting infrastructure, such as a monitoring system, was designed for, owing to multithreaded execution, the assignment of divided datasets to multiple nodes, and their storage in in-memory spaces. In this research, we propose a method that notifies administrators with only the information needed to understand the situation in a short period, by eliminating duplicates among the numerous application error logs for that period and clustering the messages with the unsupervised k-means method on the in-memory cluster computing framework Apache Spark. By implementing this method, we demonstrate that duplicated error messages can be eliminated by 93% on average compared with conventional methods, and that significant messages can be extracted from the application error messages and reported to administrators within an average of 4.2 minutes of the occurrence of the error.
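
    The dedup-then-cluster pipeline can be sketched with standard-library Python. This is a toy stand-in for the paper's Spark-based system: exact duplicates are dropped, messages are turned into bag-of-words vectors (with digits masked so messages differing only in IDs collapse together), and a tiny k-means groups them:

```python
import re
from collections import Counter

def vectorize(msgs):
    """Bag-of-words vectors over a shared vocabulary; digits are masked so
    'node 12' and 'node 37' yield the same token pattern."""
    tokens = [re.sub(r"\d+", "N", m).lower().split() for m in msgs]
    vocab = sorted({t for ts in tokens for t in ts})
    return [[Counter(ts)[v] for v in vocab] for ts in tokens]

def kmeans(vecs, k, iters=10):
    """Tiny k-means: seed with the first k distinct vectors, then iterate
    Euclidean assignment and centroid update."""
    cents = []
    for v in vecs:
        if v not in cents:
            cents.append(v)
        if len(cents) == k:
            break
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vecs:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(v, cents[i])))
            groups[i].append(v)
        cents = [[sum(col) / len(g) for col in zip(*g)] if g else cents[i]
                 for i, g in enumerate(groups)]
    return groups

logs = ["timeout on node 12", "timeout on node 37",
        "disk full on node 3", "timeout on node 12"]
unique = list(dict.fromkeys(logs))          # drop exact duplicates
groups = kmeans(vectorize(unique), k=2)
print([len(g) for g in groups])             # [2, 1]
```

    An administrator would then be shown one representative message per cluster rather than the full flood; the paper does this at scale on Spark, where both the deduplication and the k-means steps parallelize naturally.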

    Download PDF (2724K)
  • Jun-Li Lu, Makoto P. Kato, Takehiro Yamamoto, Katsumi Tanaka
    Type: Regular Papers
    Subject area: Special Section on Databases
    2020 Volume 28 Pages 320-332
    Published: 2020
    Released: May 15, 2020
    JOURNALS FREE ACCESS

    We address the problem of searching for microblogs referring to events, which are difficult to find because microblogs may refer to events without using the event's contents, and a searcher may not use suitable queries for a search engine. We therefore propose a dynamic search process based on a Markov decision process (MDP) that takes query strategies optimized for the current search state. As key components of the dynamic search process, we propose an RNN-based model for predicting the long-term returns of a search process, and a DNN-based model that matches the representations of microblogs with those of events to identify relevant microblogs. Experimental results suggest that the dynamic search process can effectively search for microblogs, especially for implicitly referenced events. Moreover, we show the high applicability of our approach to unseen events for which no relevant microblogs were available in the training phase.

    Download PDF (1153K)
  • Thinh Minh Do, Yasuko Matsubara, Yasushi Sakurai
    Type: Regular Papers
    Subject area: Special Section on Databases
    2020 Volume 28 Pages 333-342
    Published: 2020
    Released: May 15, 2020
    JOURNALS FREE ACCESS

    Given a large, online stream of multiple co-evolving online activities, such as Google search queries, consisting of d keywords/activities for l locations over a duration n, how can we analyze temporal patterns and relationships among all these activities? How can we capture non-linear evolutions and forecast long-term future patterns? For example, assume that we have the online search volume for multiple keywords, e.g., “HTML/Java/SQL/HTML5” or “iPhone/Samsung Galaxy/Nexus/HTC”, for 236 countries/territories from 2004 to 2015. Our goal is to capture important patterns and rules and to answer the following questions: (a) Are there any periodic/seasonal activities? (b) How can we automatically and incrementally detect signs of competition between two different keywords in the data streams? (c) Can we obtain a real-time snapshot of the stream and forecast long-range future dynamics at both the global and local levels? In this paper, we present RFCAST, a unifying adaptive non-linear method for forecasting future patterns of co-evolving data streams. Extensive experiments on real datasets show that RFCAST does indeed perform long-range forecasts, and it surpasses other state-of-the-art forecasting tools in terms of accuracy and execution speed.

    Download PDF (2476K)
  • Akitoshi Okumura, Susumu Handa, Takamichi Hoshino, Naoki Tokunaga, Mas ...
    Type: Paper (Consumer Systems)
    Subject area: Special Section on Consumer Device & System
    2020 Volume 28 Pages 343-353
    Published: 2020
    Released: June 15, 2020
    JOURNALS FREE ACCESS

    This paper proposes an identity-verification system for attendees of large-scale events using continuous face recognition, improved by managing the facial directions and eye contact (eyes open or closed) of the attendees. Identity-verification systems are required to prevent illegal resale such as ticket scalping. The problem in verifying ticket holders is how to verify identities efficiently while preventing individuals from impersonating others at a large-scale event in which tens of thousands of people participate. We previously developed two ticket ID systems for identifying the purchaser and holder of a ticket. These systems use two face-recognition systems: a one-stop face-recognition system with a single camera and a non-stop face-recognition system with two cameras. The average face-recognition accuracy was 90% and 91%, respectively, and the average time for identity verification from check-in to entry admission was 7 and 2.8 seconds per person, respectively. One-stop systems have a lower equipment cost than non-stop systems because they require fewer cameras for face recognition. Since both systems proved effective for preventing illegal resale by verifying attendees of large concerts, they have been used at more than 110 concerts. The remaining problem with both systems is face-recognition accuracy. It can be improved by securing clear facial photos, because face recognition fails when unclear facial photos are obtained, i.e., when event attendees have their eyes closed, are not looking directly forward, or have their faces covered with hair or items such as facemasks and mufflers. In this paper, we propose a system that secures facial photos of attendees directly facing a camera by leading them to scan their check-in codes on a code reader placed close to the camera just before face recognition is executed. The system also takes two photos of each attendee with the single camera, at an interval of about 0.5 seconds, to obtain facial photos with the eyes open. The system achieved 93% face-recognition accuracy with an average time of 2.7 seconds per person for identity verification when it was used to verify 8,461 attendees of a concert by a popular singer. The system made it possible to complete identity verification with higher accuracy than the previous systems and with a shorter average time than the non-stop system while using a single camera, i.e., with low equipment cost. Survey results obtained from the attendees showed that 96.4% felt it provided more equity in ticket purchasing than methods without face recognition, 87.1% felt it provided added convenience in verification, and 95.4% felt it would effectively prevent illegal resale.

    Download PDF (2347K)