-
Shigeo KANEDA
2018 Volume E101.D Issue 7 Pages 1723-1724
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
-
Yuma MATSUMOTO, Takayuki OMORI, Hiroya ITOGA, Atsushi OHNISHI
Article type: PAPER
2018 Volume E101.D Issue 7 Pages 1725-1732
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
To verify the correctness of functional requirements, we have been developing a method for verifying functional requirements specifications using the Requirements Frame model. In this paper, we propose a method for verifying non-functional requirements specifications, specifically time-response requirements written in natural language. We established the verification method by extending the Requirements Frame model and have developed a prototype system based on the method in Java. The extended Requirements Frame model and the verification method are illustrated with examples.
-
Natthawute SAE-LIM, Shinpei HAYASHI, Motoshi SAEKI
Article type: PAPER
2018 Volume E101.D Issue 7 Pages 1733-1742
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Code smells are indicators of design flaws or problems in the source code. Various tools and techniques have been proposed for detecting code smells. Because these tools generally detect a large number of code smells, approaches have also been developed for prioritizing and filtering them. However, the lack of empirical data detailing how developers filter and prioritize code smells hinders improvements to these approaches. In this study, we conducted a study with ten professional developers to determine the factors they use for filtering and prioritizing code smells in an open source project under the condition that they complete a list of five tasks. In total, we obtained 69 responses for code smell filtration and 50 responses for code smell prioritization from the ten professional developers. We found that Task relevance and Smell severity were most commonly considered during code smell filtration, while Module importance and Task relevance were employed most often for code smell prioritization. These results may help further research into code smell detection, prioritization, and filtration to better focus on the actual needs of developers.
-
Cheng-Zen YANG, Cheng-Min AO, Yu-Han CHUNG
Article type: PAPER
2018 Volume E101.D Issue 7 Pages 1743-1750
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Bug report summarization has been explored in past research to help developers comprehend the important information in bug reports during the bug resolution process. As text mining technology advances, many summarization approaches have been proposed to provide substantial summaries of bug reports. In this paper, we propose an enhanced summarization approach called TSM that first extends the semantic model used in AUSUM with the anthropogenic and procedural information in bug reports and then integrates the extended semantic model with the shallow textual information used in BRC. We have conducted experiments with a dataset of realistic software projects. Compared with the baseline approaches BRC and AUSUM, TSM achieves relative improvements of 34.3% and 7.4% in the F1 measure, respectively. The experimental results show that TSM effectively improves summarization performance.
-
Kunihiro NODA, Takashi KOBAYASHI, Noritoshi ATSUMI
Article type: PAPER
2018 Volume E101.D Issue 7 Pages 1751-1765
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Behaviors of an object-oriented system can be visualized as reverse-engineered sequence diagrams from execution traces. This approach is a valuable tool for program comprehension tasks. However, owing to the massive amount of information contained in an execution trace, a reverse-engineered sequence diagram often suffers from a scalability issue. To address this issue, many trace summarization techniques have been proposed, most of which focus on reducing the vertical size of the diagram. To cope with the scalability issue, decreasing the horizontal size of the diagram is also very important; nonetheless, few studies have addressed this point, and there is a significant need for further development of horizontal summarization techniques. In this paper, we present a method for identifying core objects for trace summarization by analyzing reference relations and dynamic properties. By visualizing only the interactions related to core objects, we can obtain a horizontally compact reverse-engineered sequence diagram that contains the system's key behaviors. To identify core objects, we first detect and eliminate temporary objects that are trivial for the system by analyzing the reference relations and lifetimes of objects. Then, estimating the importance of each non-trivial object based on its dynamic properties, we identify the highly important ones (i.e., core objects). We implemented our technique in a tool and evaluated it using traces from various open-source software systems. The results showed that our technique was much more effective for the horizontal reduction of a reverse-engineered sequence diagram than the state-of-the-art trace summarization technique: the horizontal compression ratio of our technique was 134.6 on average, whereas that of the state-of-the-art technique was 11.5. The runtime overhead imposed by our technique was 167.6% on average, which is relatively small compared with recent scalable dynamic analysis techniques and shows the practicality of our technique. Overall, our technique can achieve a significant reduction of the horizontal size of a reverse-engineered sequence diagram with a small overhead and is expected to be a valuable tool for program comprehension.
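As an illustration of the two-phase idea (eliminate temporaries, then rank by dynamic properties), here is a minimal sketch; the lifetime cutoff and the importance formula are hypothetical stand-ins, not the paper's estimators:

```python
# Sketch: phase 1 drops short-lived "temporary" objects, phase 2 ranks the
# remainder by dynamic properties. Cutoff and scoring are illustrative.
from dataclasses import dataclass

@dataclass
class TraceObject:
    oid: int
    lifetime: int        # trace steps between creation and last use
    in_refs: int         # references held by other objects
    interactions: int    # messages sent/received in the trace

def core_objects(objs, lifetime_cutoff=10, top_k=5):
    """Phase 1: eliminate temporaries; phase 2: keep the top-k important objects."""
    non_trivial = [o for o in objs if o.lifetime >= lifetime_cutoff]
    # Hypothetical importance score: long-lived, highly referenced, chatty objects.
    score = lambda o: o.in_refs * 2 + o.interactions
    return sorted(non_trivial, key=score, reverse=True)[:top_k]

trace = [TraceObject(1, 500, 8, 120), TraceObject(2, 3, 0, 2),
         TraceObject(3, 400, 1, 15)]
print([o.oid for o in core_objects(trace)])   # temporaries like oid=2 are dropped
```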
-
Panita MEANANEATRA, Songsakdi RONGVIRIYAPANISH, Taweesup APIWATTANAPON ...
Article type: PAPER
2018 Volume E101.D Issue 7 Pages 1766-1779
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
An important step in improving software analyzability is applying refactorings during the maintenance phase to remove bad smells, especially the long method bad smell, which occurs most frequently and is a root cause of other bad smells. However, no previous research has proposed an approach that repeats refactoring identification, suggestion, and application until all long method bad smells have been removed completely without reducing software analyzability. This paper proposes an effective approach, called the long method remover (LMR), to identifying refactoring opportunities and suggesting an effective refactoring set for the complete removal of the long method bad smell without reducing code analyzability. LMR uses refactoring enabling conditions based on program analysis and code metrics to identify four refactoring techniques and uses a technique embedded in JDeodorant to identify extract method. For effective refactoring set suggestion, LMR uses two criteria: the code analyzability level and the number of statements impacted by the refactorings. LMR also uses side effect analysis to ensure behavior preservation. To evaluate LMR, we applied it to the core package of a real-world Java application. Our evaluation criteria are 1) the preservation of code functionality, 2) the removal rate of long method characteristics, and 3) the improvement in analyzability. The results showed that the methods to which the suggested refactoring sets were applied completely remove the long method bad smell, preserve behavior, and do not decrease analyzability. We conclude that LMR meets its objectives in almost all classes, and we also discuss the issues found during the evaluation as lessons learned.
-
Shinpei HAYASHI, Fumiki MINAMI, Motoshi SAEKI
Article type: PAPER
2018 Volume E101.D Issue 7 Pages 1780-1789
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Utilizing software architecture patterns is important for reducing maintenance costs. However, maintaining code according to the constraints defined by the architecture patterns is time-consuming work. As described herein, we propose a technique to detect code fragments that are noncompliant with the architecture as fine-grained architectural violations. The inputs of the technique are a dependence graph among code fragments extracted from the source code and inference rules derived from the architecture. A set of candidate components to which a code fragment can be affiliated is attached to each node of the graph and is updated step by step. The inference rules express the components' responsibilities and dependency constraints; they remove from each node the candidate components that do not satisfy the constraints, based on the current estimated state of the surrounding code fragments. If the inferred role of a code fragment does not include the component that the code fragment currently belongs to, it is detected as a violation. We have implemented our technique for the Model-View-Controller for Web Application architecture pattern. By applying the technique to web applications implemented using Play Framework, we obtained accurate detection results. We also investigated how much each inference rule contributes to the detection of violations.
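A minimal sketch of the propagation step described in this abstract, assuming an MVC-style rule set; the allowed-dependency table is an illustrative stand-in for the paper's inference rules:

```python
# Sketch: each dependence-graph node carries a candidate set of architecture
# roles; rules prune candidates inconsistent with the neighbors' candidates.
ALLOWED_DEPS = {("Controller", "Model"), ("Controller", "View"),
                ("View", "Model"), ("Model", "Model"),
                ("View", "View"), ("Controller", "Controller")}

def prune(candidates, edges):
    """Iteratively remove candidate roles that cannot legally depend on any
    candidate role of a depended-upon fragment."""
    changed = True
    while changed:
        changed = False
        for src, dst in edges:
            ok_src = {r for r in candidates[src]
                      if any((r, s) in ALLOWED_DEPS for s in candidates[dst])}
            if ok_src != candidates[src]:
                candidates[src], changed = ok_src, True
    return candidates

candidates = {"f1": {"Model", "View", "Controller"}, "f2": {"View"}}
edges = [("f1", "f2")]            # code fragment f1 depends on f2
print(prune(candidates, edges)["f1"])   # -> {'Controller', 'View'}
# A fragment is flagged as a violation when its declared component is no
# longer in its pruned candidate set.
```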
-
Junko SHIROGANE, Misaki MATSUZAWA, Hajime IWATA, Yoshiaki FUKAZAWA
Article type: PAPER
2018 Volume E101.D Issue 7 Pages 1790-1800
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Various applications have been realized on mobile computers such as smartphones and tablet computers. Because mobile computers have smaller monitors than conventional computers, user interface design strategies differ from those for conventional computer applications; for example, the contents of a window are reduced or divided into multiple windows on mobile computers. To realize usable applications in this situation, usability evaluations are important. Although various usability evaluation methods for mobile computers have been proposed, few evaluate applications and identify problems automatically. Herein we propose a systematic usability evaluation method in which users' operation histories are recorded and analyzed to identify steps with usability problems. Our method analyzes usability problems automatically, allowing usability evaluations in software development to be implemented easily and economically. As a case study, operation histories were recorded and analyzed while 20 subjects operated an application on a tablet computer. Our method automatically identified many usability problems, confirming its effectiveness.
-
Takafumi TANAKA, Hiroaki HASHIURA, Atsuo HAZEYAMA, Seiichi KOMIYA, Yuk ...
Article type: PAPER
2018 Volume E101.D Issue 7 Pages 1801-1810
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Conceptual data modeling is an important activity in database design, but it is difficult for novice learners to master its skills. In conceptual data modeling, learners must detect and correct errors in their artifacts by themselves because modeling tools do not assist these activities. We call such activities self checking, which is also an important process; however, previous research has not focused on self checking or on collecting data about it. Collecting data on self checks is difficult because self checking is an internal activity and self checks are not usually expressed. Therefore, we developed a method to help learners express their self checks by reflecting on their artifact-making processes, and we implemented this method in a system called KIfU3. We conducted an evaluation experiment that showed the effectiveness of the method. From the experimental results, we found that (1) novice learners conduct self checks during their conceptual data modeling tasks; (2) it is difficult for them to detect errors in their artifacts; (3) they cannot necessarily correct the errors even if they can identify them; and (4) there is no relationship between the number of self checks by a learner and the quality of their artifacts.
-
Ping ZENG, Qingping TAN, Haoyu ZHANG, Xiankai MENG, Zhuo ZHANG, Jianju ...
Article type: LETTER
2018 Volume E101.D Issue 7 Pages 1811-1815
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Deep neural named entity recognition models automatically learn and extract the features of entities, avoiding traditional models' heavy reliance on complex feature engineering and obscure domain knowledge; this area has become a hot topic in recent years. However, existing deep neural models involve only simple character feature learning and extraction methods, which limits their capability. To further explore the performance of deep neural models, we propose two character feature learning models based on convolutional neural networks and long short-term memory networks. These two models consider the local semantic and positional features of word characters. Experiments conducted on the CoNLL-2003 dataset show that the proposed models outperform traditional ones and demonstrate excellent performance.
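A minimal sketch of a character-level CNN feature extractor of the kind the letter builds on, written in PyTorch; the dimensions and the character-id encoding are toy assumptions:

```python
# Sketch: embed characters, convolve over positions, max-pool to obtain a
# fixed-size word feature vector capturing local character semantics.
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, n_chars=128, char_dim=16, n_filters=32, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel, padding=1)

    def forward(self, char_ids):            # (batch, word_len)
        x = self.embed(char_ids)            # (batch, word_len, char_dim)
        x = self.conv(x.transpose(1, 2))    # (batch, n_filters, word_len)
        return x.max(dim=2).values          # max over positions -> (batch, n_filters)

word = torch.tensor([[ord(c) for c in "tokyo"]])   # toy char-id encoding
print(CharCNN()(word).shape)                        # torch.Size([1, 32])
```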
-
Yaohui CHANG, Chunhua GU, Fei LUO, Guisheng FAN, Wenhao FU
Article type: PAPER
Subject area: Fundamentals of Information Systems
2018 Volume E101.D Issue 7 Pages 1816-1827
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Virtual Machine Placement (VMP) plays an important role in ensuring efficient resource provisioning of physical machines (PMs) and energy efficiency in Infrastructure as a Service (IaaS) data centers. Efficient server consolidation assisted by virtual machine (VM) migration can raise server utilization and switch idle PMs to sleep mode to save energy. The trade-off between energy and performance is difficult because consolidation may cause performance degradation and even service level agreement (SLA) violations. A novel residual available capacity (RAC) resource model is proposed to resolve the VM selection and allocation problem from the cloud service provider (CSP) perspective. Furthermore, a novel heuristic VM selection policy for server consolidation, named Minimized Square Root available Resource (MISR), and an efficient RAC-based VM allocation policy, named Balanced Selection (BS), are proposed. The effectiveness of the BS-MISR combination is validated on CloudSim with real workloads from the CoMon project. Experimental results show that the proposed BS-MISR combination can significantly reduce energy consumption, by an average of 36.35% compared to the Local Regression and Minimum Migration Time (LR-MMT) combination policy. Moreover, BS-MISR ensures a reasonable level of SLAs compared to the benchmarks.
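A minimal sketch of the flavor of the two policies; the selection and balance expressions below are hypothetical stand-ins for the paper's MISR and BS formulas:

```python
# Sketch: MISR-style selection picks a migration candidate by a square-root
# criterion over resources; BS-style allocation keeps residual CPU/RAM balanced.
import math

def select_vm(vms):
    """Pick the candidate with minimal sqrt(cpu*ram) demand (stand-in for the
    paper's Minimized Square Root available Resource criterion)."""
    return min(vms, key=lambda v: math.sqrt(v["cpu"] * v["ram"]))

def allocate(vm, pms):
    """Place vm on the PM whose residual CPU/RAM stay most balanced (stand-in
    for the paper's RAC-based Balanced Selection)."""
    def imbalance(pm):
        cpu_left = pm["cpu_free"] - vm["cpu"]
        ram_left = pm["ram_free"] - vm["ram"]
        if cpu_left < 0 or ram_left < 0:
            return float("inf")             # does not fit
        return abs(cpu_left - ram_left)     # smaller = more balanced residuals
    return min(pms, key=imbalance)

vms = [{"id": 1, "cpu": 0.2, "ram": 0.1}, {"id": 2, "cpu": 0.5, "ram": 0.6}]
pms = [{"id": "pm1", "cpu_free": 0.6, "ram_free": 0.3},
       {"id": "pm2", "cpu_free": 0.4, "ram_free": 0.35}]
vm = select_vm(vms)
print(vm["id"], allocate(vm, pms)["id"])
```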
-
Jenn-Yang KE
Article type: PAPER
Subject area: Fundamentals of Information Systems
2018 Volume E101.D Issue 7 Pages 1828-1834
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
A vertex subset F ⊆ V(G) is called a cyclic vertex-cut set of a connected graph G if G-F is disconnected and at least two components of G-F contain cycles. The cyclic vertex connectivity is the cardinality of a minimum cyclic vertex-cut set. In this paper, we show that the cyclic vertex connectivity of the trivalent Cayley graph TGn is equal to eight for n ≥ 4.
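In symbols, the quantity studied can be restated directly from the definition above (the shorthand κ_c is the standard notation, assumed here rather than quoted from the paper):

```latex
\kappa_c(G) \;=\; \min\bigl\{\,|F| \;:\; F \subseteq V(G),\ G - F \text{ is disconnected and at least two components of } G - F \text{ contain cycles}\,\bigr\}
```

and the paper's main result reads κ_c(TGn) = 8 for n ≥ 4.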
-
Lijing ZHU, Kun WANG, Duan ZHOU, Liangkai LIU, Huaxi GU
Article type: PAPER
Subject area: Fundamentals of Information Systems
2018 Volume E101.D Issue 7 Pages 1835-1842
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Ring-based topologies are popular for optical networks-on-chip (ONoCs). However, network congestion is serious for ring topologies, especially when optical circuit switching is employed. In this paper, we propose an algorithm to build a low-congestion multi-ring architecture for optical networks-on-chip without additional wavelength or scheduling overhead. A network congestion model is established with a newly defined network congestion factor, and an algorithm is developed to optimize the low-congestion multi-ring topology. Finally, a case study is presented, and OPNET simulation results verify its superiority over the traditional ONoC architecture.
-
Huiming ZHANG, Junzo WATADA
Article type: PAPER
Subject area: Fundamentals of Information Systems
2018 Volume E101.D Issue 7 Pages 1843-1859
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
This paper focuses mainly on issues related to the pricing of American options under a fuzzy environment, taking into account the clustering of the underlying asset price volatility, the leverage effect, and stochastic jumps. By treating the volatility as a parabolic fuzzy number, we constructed a Levy-GJR-GARCH model based on an infinite pure jump process and combined the model with fuzzy simulation technology to perform numerical simulations based on the least squares Monte Carlo approach and the fuzzy binomial tree method. An empirical study was performed using American put option data from the Standard & Poor's 100 index. The findings are as follows: under a fuzzy environment, the option valuation is more precise than under a crisp environment; pricing simulations of short-term options have higher precision than those of medium- and long-term options; the least squares Monte Carlo approach yields more accurate valuations than the fuzzy binomial tree method; and the simulation effects of different Levy processes indicate that the NIG and CGMY models are superior to the VG model. Moreover, the option price increases as the time to expiration is extended and the exercise price increases, the membership function curve is asymmetric with a leftward tilt, and the fuzzy interval narrows as the level set α and the exponent of the membership function n increase. In addition, the results demonstrate that the quasi-random number and Brownian bridge approaches can improve the convergence speed of the least squares Monte Carlo approach.
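A minimal sketch of the crisp least squares Monte Carlo backbone (Longstaff-Schwartz with plain geometric Brownian motion paths); the paper replaces this dynamic with a Levy-GJR-GARCH process and fuzzy volatility, which are not reproduced here:

```python
# Sketch: price an American put by simulating GBM paths and regressing the
# continuation value on in-the-money paths at each step (Longstaff-Schwartz).
import numpy as np

def lsm_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     steps=50, paths=20000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    S = np.hstack([np.full((paths, 1), S0), S])
    payoff = np.maximum(K - S[:, -1], 0.0)          # exercise value at maturity
    for t in range(steps - 1, 0, -1):
        itm = K - S[:, t] > 0                       # regress only on ITM paths
        X = S[itm, t]
        Y = payoff[itm] * np.exp(-r * dt)           # discounted continuation cashflow
        if X.size > 3:
            coef = np.polyfit(X, Y, 2)              # quadratic basis, as in classic LSM
            continuation = np.polyval(coef, X)
            exercise = K - X
            ex_now = exercise > continuation
            idx = np.where(itm)[0]
            payoff[idx[ex_now]] = exercise[ex_now]
            payoff[idx[~ex_now]] = Y[~ex_now]
            payoff[~itm] *= np.exp(-r * dt)
        else:
            payoff *= np.exp(-r * dt)
    return np.exp(-r * dt) * payoff.mean()

print(round(lsm_american_put(), 4))
```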
-
Ruicong ZHI, Ghada ZAMZMI, Dmitry GOLDGOF, Terri ASHMEADE, Tingting LI ...
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2018 Volume E101.D Issue 7 Pages 1860-1869
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
The accurate assessment of infants' pain is important for understanding their medical conditions and developing suitable treatment. Pediatric studies have reported that inadequate treatment of infants' pain may cause various neuroanatomical and psychological problems. The fact that infants cannot communicate verbally motivates increasing interest in developing automatic pain assessment systems that provide continuous and accurate assessment. In this paper, we propose a new set of pain facial activity features to describe infants' facial expressions of pain. Both dynamic facial texture features and dynamic geometric features are extracted from video sequences and utilized to classify the facial expressions of infants as pain or no pain. For the dynamic analysis of facial expressions, we construct a spatiotemporal-domain representation for the texture features and a time series representation (i.e., time series of frame-level features) for the geometric features. Multiple facial features are combined through both feature fusion and decision fusion schemes to evaluate their effectiveness in infants' pain assessment. Experiments are conducted on videos acquired from NICU infants, and the best accuracy of the proposed pain assessment approaches is 95.6%. Moreover, we find that although decision fusion does not perform better than feature fusion overall, the false negative rate of decision fusion (6.2%) is much lower than that of feature fusion (25%).
-
Yukihiro TAGAMI, Hayato KOBAYASHI, Shingo ONO, Akira TAJIMA
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2018 Volume E101.D Issue 7 Pages 1870-1879
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Modeling user activities on the Web is a key problem for various Web services, such as news article recommendation and ad click prediction. In our work-in-progress paper [1], we introduced an approach that summarizes each sequence of user Web page visits using Paragraph Vector [3], considering users and URLs as paragraphs and words, respectively. The learned user representations are shared among the user-related prediction tasks. In this paper, on the basis of an analysis of our Web page visit data, we propose Backward PV-DM, a modified version of Paragraph Vector. We show experimental results on two ad-related data sets based on logs from Web services of Yahoo! JAPAN. Our proposed method achieved better results than those of existing vector models.
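A minimal sketch of the baseline setup, treating each user's URL sequence as a document and learning user vectors with standard PV-DM via gensim (the proposed Backward PV-DM is not available in gensim); the visit logs are toy data:

```python
# Sketch: users as "paragraphs", visited URLs as "words", trained with PV-DM.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy visit logs: user id -> chronological list of visited URLs (hypothetical).
visit_logs = {
    "user_a": ["news.example/1", "shop.example/x", "news.example/2"],
    "user_b": ["shop.example/x", "shop.example/y"],
}

docs = [TaggedDocument(words=urls, tags=[user]) for user, urls in visit_logs.items()]
model = Doc2Vec(docs, vector_size=32, window=2, min_count=1, dm=1, epochs=50)

user_vec = model.dv["user_a"]   # learned user representation (gensim >= 4 API)
print(user_vec[:5])
```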
-
Zhe LIU, Xinjun MAO, Shuo YANG
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2018 Volume E101.D Issue 7 Pages 1880-1893
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Certain open issues challenge the software engineering of autonomous robot software (ARS). One issue is providing enabling software technologies to support the autonomous and rational behaviours of robots operating in an open environment; another is developing an effective engineering approach that manages the complexity of ARS to simplify its development, deployment, and evolution. We introduce the software framework AutoRobot to address these issues. It provides an abstraction and a model of accompanying behaviours to formulate the behaviour patterns of autonomous robots and to strengthen the coherence between task behaviours and observation behaviours, thereby improving the robot's capability to obtain and use feedback about changes. A dual-loop control model is presented to support flexible interactions among the control activities and continuous adjustment of the robot's behaviours. A multi-agent software architecture is proposed to encapsulate the fundamental software components. Unlike most existing research, in AutoRobot the ARS is designed as a multi-agent system in which the software agents interact and cooperate with each other to accomplish the robot's task. AutoRobot provides reusable software packages to support the development of ARS and infrastructure integrated with ROS to support its decentralized deployment and execution. We developed a sample ARS to illustrate how to use the framework and validate its effectiveness.
-
Kazuaki KONDO, Genki MIZUNO, Yuichi NAKAMURA
Article type: PAPER
Subject area: Human-computer Interaction
2018 Volume E101.D Issue 7 Pages 1894-1905
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
This study proposes a mathematical model of a gesture-based pointing interface system for simulating pointing behaviors in various situations. We treat the interaction between a pointing interface and a user as a human-in-the-loop system and describe it using feedback control theory. The model is formulated as a hybrid of a target value follow-up component and a disturbance compensation component, induced from the same feedback loop but with different parameter sets so as to describe human pointing characteristics well. The two optimal parameter sets were determined individually to represent actual pointing behaviors accurately for step input signals and random walk disturbance sequences, respectively. The calibrated model is used to simulate pointing behaviors for arbitrary input signals expected in practical situations. Through experimental evaluations, we quantitatively analyzed how accurately the proposed hybrid model simulates actual pointing behaviors and discuss its advantage over the basic non-hybrid model. Model refinements for further accuracy are also suggested based on the evaluation results.
-
Takashi WATANABE, Takumi TADANO
Article type: PAPER
Subject area: Rehabilitation Engineering and Assistive Technology
2018 Volume E101.D Issue 7 Pages 1906-1914
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Rehabilitation training with a pedaling wheelchair in combination with functional electrical stimulation (FES) can be effective in significantly decreasing the risk of falling. Automatic adjustment of cycling speed and making turns without a standstill have been desired for practical applications of training with mobile FES cycling. This study aimed at developing a closed-loop control system for the cycling speed of the pedaling wheelchair. Considering practical clinical use without extensive modifications of the wheelchair, a method of measuring cycling speed with inertial measurement units (IMUs) was introduced, and a fuzzy controller that adjusts stimulation intensity to regulate cycling speed was designed. The developed prototype of the closed-loop FES control system achieved appropriate cycling speeds for the different target speeds in most control trials with neurologically intact subjects. In addition, all control trials of low-speed cycling, including U-turns, maintained the target speed without a standstill. Cycling distance and cycling time increased with the closed-loop control of low cycling speed, which compensated for the decrease in cycling speed caused by muscle fatigue. From these results, the developed closed-loop fuzzy FES control system was suggested to work reliably in mobile FES cycling.
-
Hao ZHU, Qing YOU, Wenjie CHEN
Article type: PAPER
Subject area: Image Processing and Video Processing
2018 Volume E101.D Issue 7 Pages 1915-1923
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Many vision systems are embedded in devices around us, such as mobile phones, vehicles, and UAVs, and many of them still require interactive operation by human users. However, specifying accurate object information can be a challenging task due to video jitter caused by camera shake and target motion. In this paper, we first collect practical hand-drawn bounding boxes on real-life videos captured by hand-held and UAV-based cameras and take a close look at human-computer interactive operations on unstable images. The collected data show that human input suffers from heavy deviations that harm interaction accuracy. To achieve robust interactions on unstable platforms, we propose a target-focused video stabilization method that utilizes a proposal-based object detector and a tracking-based motion estimation component. This method starts with a single manual click and outputs a stabilized video stream in which the specified target stays almost stationary. Our method removes not only camera jitter but also target motion simultaneously, therefore offering a comfortable environment for users to perform further interactive operations. The experiments demonstrate that the proposed method effectively eliminates image vibrations and significantly increases human input accuracy.
-
Suk-Hwan LEE
Article type: PAPER
Subject area: Image Processing and Video Processing
2018 Volume E101.D Issue 7 Pages 1924-1932
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Real-time weather radar imaging technology is required for generating short-term weather forecasts. Moreover, such technology plays an important role in critical-weather warning systems that are based on vast amounts of Doppler weather radar data. In this study, we propose a weather radar imaging method that uses multi-layer contour detection and segmentation based on MAP-MRF estimation. The proposed method consists of three major steps. The first step generates reflectivity and velocity data from the Doppler radar in the form of raw sweep-unit data images in the polar coordinate system; contour lines are then detected on multiple layers using an adaptive median filter and a modified Canny detector based on curvature consistency. The second step interpolates the contours into the Cartesian coordinate system using 3D scattered data interpolation and then segments the contours for each layer based on MAP-MRF prediction and the Metropolis algorithm. The final step integrates the segmented contour layers and generates PPI images in sweep units. Experimental results show that the proposed method produces a visually improved PPI image in 45% of the time required by conventional methods.
-
Hui BI, Yibo JIANG, Hui LI, Xuan SHA, Yi WANG
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2018 Volume E101.D Issue 7 Pages 1933-1937
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Ultrasound image segmentation is a crucial task in many clinical applications. However, ultrasound images are difficult to segment due to image inhomogeneity caused by the ultrasound imaging technique. In this paper, to deal with image inhomogeneity while taking ultrasound image properties into account, a Local Rayleigh Distribution Fitting (LRDF) energy term is newly introduced into the traditional level set method. The curve evolution equation is derived for energy minimization, and self-driven uterus contour extraction is achieved on the ultrasound images. Experimental segmentation results on synthetic images and in-vivo ultrasound images show that the proposed approach is effective and accurate, with a Dice Score Coefficient (DSC) of 0.95 ± 0.02.
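For reference, the Rayleigh density around which a local Rayleigh fitting term is built is the standard one below; the exact LRDF energy functional is defined in the paper, and the local estimation of σ is an assumption of this sketch:

```latex
p(x \mid \sigma) \;=\; \frac{x}{\sigma^{2}} \exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right), \qquad x \ge 0,
```

with σ fitted within a local window around each contour point.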
-
Jianyong DUAN, Tianxiao JI, Hao WANG
Article type: PAPER
Subject area: Natural Language Processing
2018 Volume E101.D Issue 7 Pages 1938-1945
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Automatic correction of users' search terms is an important aspect of improving search engine retrieval efficiency, accuracy, and user experience. In the era of big data, we can analyze and mine massive search engine logs to uncover the knowledge hidden in them, and better results can be obtained through statistical modeling of query errors in search engine log data. However, when an erroneous query cannot be found in the log, the log information cannot be exploited to correct the query. These undiscovered erroneous queries are called Bad Cases. This paper combines an error correction algorithm model with search engine query log mining. First, we explored Bad Cases in the query error correction process through the search engine query logs. Then we quantified the characteristics of these Bad Cases and built a model that allows search engines to automatically mine Bad Cases with these features. Finally, we applied the Bad Cases to an N-gram error correction algorithm model to check the impact of Bad Case mining on error correction. The experimental results show that error correction based on Bad Case mining noticeably improves the precision and recall of automatic error correction, improving the user experience and making the interaction friendlier.
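A minimal sketch of the generic N-gram correction idea the letter builds on — a noisy-channel-style corrector scored by a bigram language model; the toy candidate sets and probabilities are assumptions, and the paper's Bad Case mining is not reproduced:

```python
# Sketch: enumerate per-word rewrite candidates and keep the combination with
# the highest bigram language-model score.
from itertools import product

BIGRAM_LM = {("cheap", "flights"): 0.01, ("cheap", "fights"): 0.0001}  # toy probs
CANDIDATES = {"fights": ["fights", "flights"], "cheap": ["cheap"]}      # edit-1 sets

def correct(query, lm=BIGRAM_LM, cands=CANDIDATES, floor=1e-8):
    """Return the candidate rewrite with the highest bigram LM score."""
    words = query.split()
    best, best_score = words, float("-inf")
    # Enumerate all combinations of per-word candidates (fine for short queries).
    for combo in product(*(cands.get(w, [w]) for w in words)):
        score = 1.0
        for bigram in zip(combo, combo[1:]):
            score *= lm.get(bigram, floor)
        if score > best_score:
            best, best_score = list(combo), score
    return " ".join(best)

print(correct("cheap fights"))  # -> "cheap flights"
```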
-
Xiayang CHEN, Chaojing TANG, Jian WANG, Lei ZHANG, Qingkun MENG
Article type: LETTER
Subject area: Fundamentals of Information Systems
2018 Volume E101.D Issue 7 Pages 1946-1949
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Although the Wolf Pack Algorithm (WPA) is a novel optimization algorithm with good performance, there is still room for improvement with respect to its convergence. In order to speed up its convergence and strengthen its search ability, we improve WPA with a Differential Evolution (DE) elite set strategy; the new algorithm is called WPADEES for short. WPADEES converges faster than WPA and adapts more readily to various optimization problems. Six standard benchmark functions are applied to verify the effects of these improvements. Our experiments show that the performance of WPADEES is superior to standard WPA and other intelligent optimization algorithms, such as GA, DE, PSO, and ABC, in several situations.
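A minimal sketch of the DE ingredient that an elite set strategy borrows (DE/rand/1/bin mutation and crossover over an elite set); how the elite set feeds back into the wolf pack is specific to the paper and omitted here:

```python
# Sketch: one DE/rand/1/bin generation over an elite set of solutions.
import numpy as np

def de_step(elite, f=0.5, cr=0.9, rng=np.random.default_rng(0)):
    """Mutate and cross over each elite member (rows = solutions)."""
    n, dim = elite.shape
    out = elite.copy()
    for i in range(n):
        a, b, c = elite[rng.choice(n, size=3, replace=False)]  # distinct donors
        mutant = a + f * (b - c)                    # differential mutation
        mask = rng.random(dim) < cr                 # binomial crossover
        mask[rng.integers(dim)] = True              # ensure >=1 gene from mutant
        out[i] = np.where(mask, mutant, elite[i])
    return out

sphere = lambda x: float(np.sum(x**2))              # toy benchmark function
elite = np.random.default_rng(1).uniform(-5, 5, (10, 4))
trial = de_step(elite)
# Greedy selection: keep the better of parent and trial, as in standard DE.
keep = np.array([sphere(t) < sphere(e) for t, e in zip(trial, elite)])
elite[keep] = trial[keep]
print(min(sphere(e) for e in elite))
```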
-
Pilsung KANG
Article type: LETTER
Subject area: Software Engineering
2018 Volume E101.D Issue 7 Pages 1950-1953
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
We present a modular way of implementing adaptive decisions in scientific simulations. The proposed method employs modern software engineering mechanisms to allow for better software management in scientific computing, where software adaptation has often been implemented manually by the programmer or by using in-house tools, which complicates software management over time. By applying the aspect-oriented programming (AOP) paradigm, we consider software adaptation as a separate concern and, using popular AOP constructs, implement adaptive decisions separately from the original code base, thereby improving software management. We demonstrate the effectiveness of our approach with applications to stochastic simulation software.
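As a rough illustration of the separation of concerns, here is a Python decorator analogy of AOP advice; the paper uses dedicated AOP constructs rather than decorators, and the step-size rule below is hypothetical:

```python
# Sketch: adaptive decision logic lives in a decorator ("advice") kept apart
# from the simulation code base, mimicking advice woven around a join point.
import functools

def adaptive_step(fn):
    """Advice around a simulation step: shrink dt while the step's own error
    estimate exceeds a tolerance (hypothetical adaptation rule)."""
    @functools.wraps(fn)
    def wrapper(state, dt):
        result, err = fn(state, dt)
        while err > 1e-3:                 # the adaptive decision, as a separate concern
            dt *= 0.5
            result, err = fn(state, dt)
        return result, dt
    return wrapper

@adaptive_step
def euler_step(state, dt):
    """Toy Euler step for dx/dt = -2x, returning (new_state, error_estimate)."""
    new_state = state + dt * (-2.0 * state)
    half = state + 0.5 * dt * (-2.0 * state)
    two_half = half + 0.5 * dt * (-2.0 * half)    # two half-steps for error est.
    return new_state, abs(new_state - two_half)

state, dt = 1.0, 0.5
state, dt = euler_step(state, dt)
print(state, dt)
```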
-
Yuma NAGAO, Nobutaka SUZUKI
Article type: LETTER
Subject area: Data Engineering, Web Information Systems
2018 Volume E101.D Issue 7 Pages 1954-1958
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
MathML is a standard markup language for describing math expressions. MathML consists of two sets of elements: Presentation Markup and Content Markup. The former is widely used to display math expressions in Web pages, while the latter is more suited to the calculation of math expressions. In this letter, we focus on the former and consider classifying Presentation MathML expressions. Identifying the classes of given Presentation MathML expressions is helpful for several applications, e.g., Presentation to Content MathML conversion and text-to-speech. We propose a method for classifying Presentation MathML expressions using a multilayer perceptron. Experimental results show that our method classifies MathML expressions with high accuracy.
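A minimal sketch of the pipeline shape (featurize Presentation MathML, classify with a multilayer perceptron) using scikit-learn; the character n-gram features and the class labels are assumptions, not the authors' feature design:

```python
# Sketch: character n-grams over the markup feed an MLP classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

train_mathml = [
    "<mrow><mi>x</mi><mo>+</mo><mn>1</mn></mrow>",            # arithmetic expr
    "<mrow><mi>f</mi><mo>(</mo><mi>x</mi><mo>)</mo></mrow>",  # function application
]
train_labels = ["sum", "application"]                         # hypothetical classes

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 5)),     # char n-grams over markup
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
clf.fit(train_mathml, train_labels)
print(clf.predict(["<mrow><mi>y</mi><mo>+</mo><mn>2</mn></mrow>"]))
```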
-
Lei ZHANG, Qingfu FAN, Guoxing ZHANG, Zhizheng LIANG
Article type: LETTER
Subject area: Data Engineering, Web Information Systems
2018 Volume E101.D Issue 7 Pages 1959-1962
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Existing trajectory prediction methods suffer from data sparsity and neglect time awareness, which leads to low accuracy. To address this problem, we propose a fast time-aware sparse trajectory prediction method with tensor factorization (TSTP-TF). First, we synthesize trajectories based on trajectory entropy and put the synthesized trajectories into the original trajectory space, which resolves the sparsity of the trajectory data and makes the new trajectory space more reliable. Then, we introduce multidimensional tensor modeling into the Markov model to add the time dimension, and adopt tensor factorization to infer the transition probabilities of missing regions to further address data sparsity. Because of the scale of the tensor, we design a divide-and-conquer tensor factorization model to reduce memory consumption and speed up the decomposition. Experiments with a real dataset show that TSTP-TF improves prediction accuracy by as much as 9% and 2% compared to the Baseline algorithm and the ESTP-MF algorithm, respectively.
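A minimal sketch of the tensor-completion step: factor a region × region × time-slot transition tensor with a rank-R CP model and read missing transition probabilities from the reconstruction; the rank, loss, and plain gradient updates (without nonnegativity constraints) are illustrative choices, not the paper's scheme:

```python
# Sketch: CP factorization of a partially observed transition-count tensor.
import numpy as np

rng = np.random.default_rng(0)
I, J, T, R = 6, 6, 4, 3                      # regions x regions x time slots, CP rank
counts = rng.poisson(2.0, (I, J, T)).astype(float)
mask = rng.random((I, J, T)) > 0.3           # observed entries (True = observed)

A, B, C = (0.1 * rng.standard_normal(s) for s in ((I, R), (J, R), (T, R)))
lr = 0.01
for _ in range(2000):
    X_hat = np.einsum('ir,jr,tr->ijt', A, B, C)      # CP reconstruction
    E = np.where(mask, X_hat - counts, 0.0)          # error on observed entries only
    A -= lr * np.einsum('ijt,jr,tr->ir', E, B, C)    # gradient steps on factors
    B -= lr * np.einsum('ijt,ir,tr->jr', E, A, C)
    C -= lr * np.einsum('ijt,ir,jr->tr', E, A, B)

X_hat = np.einsum('ir,jr,tr->ijt', A, B, C)
# Row-normalize per time slot to obtain transition probabilities.
P = X_hat / X_hat.sum(axis=1, keepdims=True).clip(1e-9)
print(P[0, :, 0].round(3))                           # region 0 transitions, slot 0
```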
-
Yu ZHANG, Pengyuan ZHANG, Qingwei ZHAO
Article type: LETTER
Subject area: Speech and Hearing
2018 Volume E101.D Issue 7 Pages 1963-1967
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
In this letter, we explore the use of spatio-temporal information in one unified framework to improve the performance of multichannel speech recognition. Generalized cross correlation (GCC) serves as a spatial feature compensation, and an attention mechanism across time is embedded within long short-term memory (LSTM) neural networks. Experiments on the AMI meeting corpus show that the proposed method provides an 8.2% relative improvement in word error rate (WER) over the model trained directly on the concatenation of multiple microphone outputs.
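A minimal sketch of the GCC-PHAT computation that such spatial features are typically based on; frame and FFT sizes are arbitrary choices:

```python
# Sketch: circular cross-correlation of two microphone channels with phase
# transform (PHAT) weighting; the argmax index is the delay of x2 vs. x1.
import numpy as np

def gcc_phat(x1, x2, n_fft=512, eps=1e-8):
    X1 = np.fft.rfft(x1, n=n_fft)
    X2 = np.fft.rfft(x2, n=n_fft)
    cross = np.conj(X1) * X2
    cross /= np.abs(cross) + eps      # PHAT: discard magnitude, keep phase
    return np.fft.irfft(cross, n=n_fft)

rng = np.random.default_rng(0)
sig = rng.standard_normal(512)
delayed = np.roll(sig, 5)             # simulate a 5-sample inter-mic delay
corr = gcc_phat(sig, delayed)
print(np.argmax(corr))                # -> 5 (peak at the circular lag)
```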
-
Chun-Yu LIU, Wei-Hao LIAO, Shanq-Jang RUAN
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2018 Volume E101.D Issue 7 Pages 1968-1971
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Abnormal crowd behavior detection is an important research topic in computer vision for improving the response time to critical events. In this letter, we introduce a novel method to detect and localize crowd gathering in surveillance videos. The proposed foreground stillness model combines the foreground object mask with dense optical flow to measure the instantaneous crowd stillness level. We then obtain the long-term crowd stillness level with a leaky bucket model, and crowd gathering behavior is detected by threshold analysis. Experimental results indicate that our proposed approach can detect and locate crowd gathering events and is capable of distinguishing between standing and walking crowds. Experiments in realistic scenes achieve 88.65% accuracy in detecting gathering frames, showing that our method is effective for crowd gathering behavior detection.
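A minimal sketch of the leaky bucket step described above; the fill, leak, and threshold constants are illustrative assumptions:

```python
# Sketch: per-frame stillness scores fill the bucket, a constant leak drains
# it, and an alarm fires once the level crosses a threshold.
def detect_gathering(stillness_scores, leak=0.05, threshold=3.0, cap=5.0):
    """Yield (frame_index, alarm) from instantaneous stillness scores."""
    level = 0.0
    for i, s in enumerate(stillness_scores):
        level = min(cap, max(0.0, level + s - leak))  # fill with score, leak a bit
        yield i, level >= threshold                   # long-term stillness alarm

# Toy run: low stillness (walking) then sustained high stillness (gathering).
scores = [0.02] * 30 + [0.4] * 30
alarms = [i for i, alarm in detect_gathering(scores) if alarm]
print(alarms[0] if alarms else "no gathering")
```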
-
Xiaoyuan REN, Libing JIANG, Xiaoan TANG, Junda ZHANG
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2018 Volume E101.D Issue 7 Pages 1972-1975
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
Extracting 3D information from a single image is an interesting but ill-posed problem. Especially for artificial objects with little texture, such as smooth metal devices, the lack of object detail makes the problem more challenging. Aiming at texture-less objects with symmetric structures, this paper proposes a novel method for 3D pose estimation from a single image by introducing implicit structural symmetry and context constraints as prior knowledge. First, through a parameterized representation, the texture-less object is decomposed into a series of sub-objects with regular geometric primitives. Accordingly, the problem of 3D pose estimation is converted into a parameter estimation problem, which is solved by a primitive fitting algorithm. Then, the context prior among sub-objects is introduced for parameter refinement via augmented Lagrangian optimization. The effectiveness of the proposed method is verified by experiments on simulated and measured data.
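For reference, the generic augmented Lagrangian template that such constrained refinement typically minimizes (the specific cost f and constraints c are the paper's; this form is the standard one):

```latex
\mathcal{L}_{\mu}(\theta, \lambda) \;=\; f(\theta) \;+\; \lambda^{\top} c(\theta) \;+\; \frac{\mu}{2}\,\lVert c(\theta) \rVert^{2},
```

where θ collects the primitive parameters, c(θ) the symmetry and context constraints, λ the multipliers, and μ > 0 the penalty weight.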
-
Jingjie YAN, Bei WANG, Ruiyu LIANG
Article type: LETTER
Subject area: Multimedia Pattern Processing
2018 Volume E101.D Issue 7 Pages 1976-1979
Published: July 01, 2018
Released on J-STAGE: July 01, 2018
In this paper, we establish a novel bimodal emotion database of physiological signals and facial expressions, named PSFE. The physiological signals and facial expressions in the PSFE database are recorded simultaneously, by a BIOPAC MP 150 and a Kinect for Windows, respectively. The PSFE database records 32 subjects in total, 11 women and 21 men, with ages ranging from 20 to 25. Moreover, the database covers three basic emotion classes, calmness, happiness, and sadness, which correspond to the neutral, positive, and negative emotion states, respectively. The total number of samples in the PSFE database is 288, and each emotion class contains 96 samples.