IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E91.D, Issue 4
Showing 41 articles from the selected issue
Special Section on Knowledge-Based Software Engineering
  • Takahira YAMAGUCHI
    2008 Volume E91.D Issue 4 Pages 879-880
    Published: April 01, 2008
    Released: July 01, 2018
    JOURNALS FREE ACCESS
  • Atsushi OHNISHI
    Type: PAPER
    Subject area: Software Engineering
    2008 Volume E91.D Issue 4 Pages 881-887
    Published: April 01, 2008
    Released: March 01, 2010
This paper proposes a method to generate exceptional scenarios from a normal scenario written in a scenario language. The method includes (1) generation of exceptional plans and (2) generation of an exceptional scenario through the user's selection of these plans. The proposed method enables users to reduce the omission of possible exceptional scenarios in the early stages of development. The method is illustrated with several examples.
  • Osamu MIZUNO, Tohru KIKUNO
    Type: PAPER
    Subject area: Software Engineering
    2008 Volume E91.D Issue 4 Pages 888-896
    Published: April 01, 2008
    Released: March 01, 2010
This paper describes a novel approach for detecting fault-prone modules using a spam filtering technique. Detecting fault-prone modules in source code is important for assuring software quality. Most previous fault-prone detection approaches have been based on software metrics. Such approaches, however, have difficulties in collecting the metrics and constructing mathematical models based on them. Driven by the growing need for spam e-mail detection, spam filtering has matured into a convenient and effective text-mining technique. In our approach, fault-prone modules are detected by treating source code modules as text files and applying them to the spam filter directly. To show the applicability of our approach, we conducted experiments using source code repositories of Java-based open source developments. The experimental results show that our approach can correctly predict 78% of actual fault-prone modules as fault-prone.
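The spam-filter idea above can be sketched as a naive Bayes text classifier applied directly to source text. This is an illustrative sketch, not the authors' implementation: the toy corpus, whitespace tokenization, Laplace smoothing, and the label names `fp`/`ok` are all assumptions.

```python
import math
from collections import Counter

def train(modules, labels):
    """modules: list of source-code strings; labels: 'fp' (fault-prone) or 'ok'."""
    counts = {"fp": Counter(), "ok": Counter()}
    totals = Counter(labels)
    for text, label in zip(modules, labels):
        counts[label].update(text.split())
    return counts, totals

def classify(counts, totals, text):
    """Return the more likely label for a module under naive Bayes."""
    vocab = set(counts["fp"]) | set(counts["ok"])
    scores = {}
    for label in ("fp", "ok"):
        n = sum(counts[label].values())
        # log prior plus log likelihood of each token, Laplace-smoothed
        score = math.log(totals[label] / sum(totals.values()))
        for tok in text.split():
            score += math.log((counts[label][tok] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```

A spam filter works the same way, with "spam"/"ham" in place of "fp"/"ok"; the point of the paper is that no software metrics need to be collected first.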
  • Haruhiko KAIYA, Akira OSADA, Kenji KAIJIRI
    Type: PAPER
    Subject area: Software Engineering
    2008 Volume E91.D Issue 4 Pages 897-906
    Published: April 01, 2008
    Released: March 01, 2010
We present a method to identify stakeholders and their preferences about non-functional requirements (NFRs) by using use case diagrams of existing systems. We focus on changes in NFRs because such changes help stakeholders to identify their preferences. Comparing different use case diagrams of the same domain helps us to find changes that may occur. We utilize the Goal-Question-Metric (GQM) method to identify variables that characterize NFRs, and we can systematically represent changes in NFRs using these variables. Use cases that represent system interactions help us to bridge the gap between goals and metrics (variables), so we can easily construct measurable NFRs. To validate and evaluate our method, we applied it to the application domain of Mail User Agent (MUA) systems.
  • Junko SHIROGANE, Hajime IWATA, Kazuhiro FUKAYA, Yoshiaki FUKAZAWA
    Type: PAPER
    Subject area: Software Engineering
    2008 Volume E91.D Issue 4 Pages 907-920
    Published: April 01, 2008
    Released: March 01, 2010
To develop usable software, it is necessary to develop Graphical User Interfaces (GUIs) in iterative steps, such as evaluating the usability of GUIs and improving them. In improving GUIs, developers are often required to modify both the GUI and the logic code of the software. In our research, to facilitate GUI improvement, we propose a method of automatically searching for code to be modified and suggesting how to modify it. To locate the appropriate code, we define the roles of widgets according to their purpose, along with patterns for how GUIs change. In our method, a GUI change is specified, and then the parts of the source program that require modification are searched for. We also classify the methods of each widget according to their functions; using this classification, a way of modifying the located code is suggested.
  • Kazuma YAMAMOTO, Motoshi SAEKI
    Type: PAPER
    Subject area: Software Engineering
    2008 Volume E91.D Issue 4 Pages 921-932
    Published: April 01, 2008
    Released: March 01, 2010
During software requirements analysis, developers and stakeholders face many alternative requirements to be achieved and must decide which alternative to select. Two significant points should be considered in supporting these decision-making processes: 1) dependencies among alternatives and 2) evaluation based on multiple criteria and their trade-offs. This paper proposes a technique that addresses both issues by using an extended version of goal-oriented analysis. In goal-oriented analysis, elicited goals and their dependencies are represented with an AND-OR acyclic directed graph, and we use this technique to model the dependencies among the alternatives. Furthermore, we associate attribute values and their propagation rules with the nodes and edges in a goal graph in order to evaluate the alternatives. The attributes and their calculation rules depend greatly on the characteristics of a development project; thus, in our approach, we select and use the attributes and rules appropriate for the project. The TOPSIS method is adopted to present alternatives and their resulting attribute values.
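TOPSIS ranks alternatives by relative closeness to an ideal point. Below is a minimal sketch of the standard TOPSIS computation (not the paper's goal-graph attribute propagation); the matrix values, weights, and benefit/cost flags are illustrative assumptions.

```python
import math

def topsis(matrix, weights, benefit):
    """matrix: rows = alternatives, columns = criteria.
    weights[j]: importance of criterion j; benefit[j]: True if larger is better.
    Returns the relative closeness of each alternative to the ideal solution."""
    ncols = len(weights)
    # vector-normalize each column, then apply weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    # ideal (best) and anti-ideal (worst) points per criterion
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))
    return [dist(row, worst) / (dist(row, ideal) + dist(row, worst)) for row in v]
```

An alternative that dominates on every criterion gets closeness 1, a dominated one gets 0; intermediate alternatives fall in between, which is what makes the method suitable for presenting trade-offs among requirement alternatives.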
  • Shinpei HAYASHI, Junya KATADA, Ryota SAKAMOTO, Takashi KOBAYASHI, Moto ...
    Type: PAPER
    Subject area: Software Engineering
    2008 Volume E91.D Issue 4 Pages 933-944
    Published: April 01, 2008
    Released: March 01, 2010
One approach to improving program understanding is to extract which design patterns are used in existing object-oriented software. This paper proposes a technique for efficiently and accurately detecting occurrences of design patterns in source code. We use both static and dynamic analyses to achieve detection with high accuracy. Moreover, to reduce computation and maintenance costs, detection conditions are specified hierarchically based on Pree's meta patterns, the common structures underlying design patterns. Using Prolog to represent the detection conditions enables us to easily add and modify them. Finally, we have implemented an automated tool as an Eclipse plug-in and conducted experiments with Java programs. The experimental results show the effectiveness of our approach.
  • Takeshi MORITA, Naoki FUKUTA, Noriaki IZUMI, Takahira YAMAGUCHI
    Type: PAPER
    Subject area: Knowledge Engineering
    2008 Volume E91.D Issue 4 Pages 945-958
    Published: April 01, 2008
    Released: March 01, 2010
In this paper, we propose an interactive domain ontology development environment called DODDLE-OWL. DODDLE-OWL refers to existing ontologies and supports the semi-automatic construction of taxonomic and other relationships in domain ontologies from documents. Integrating several modules, DODDLE-OWL is a practical and interactive domain ontology development environment. To evaluate its efficiency, we compared DODDLE-OWL with a popular manual building method. To evaluate its scalability, we used DODDLE-OWL to construct a large ontology of over 34,000 concepts in the field of rocket operation. Through these evaluations, we confirmed the efficiency and scalability of DODDLE-OWL. Currently, DODDLE-OWL is open-source software written in Java and has more than 100 users in more than 20 countries.
  • Hiroyuki SAKAI, Shigeru MASUYAMA
    Type: PAPER
    Subject area: Knowledge Engineering
    2008 Volume E91.D Issue 4 Pages 959-968
    Published: April 01, 2008
    Released: March 01, 2010
We propose a method of extracting cause information from Japanese financial articles concerning business performance. Our method acquires cause information such as “zidousya no uriage ga koutyou” (“sales of cars were good”). Cause information is useful for investors in selecting companies to invest in. Our method automatically extracts cause information in the form of causal expressions by using statistical information and initial clue expressions. It can extract causal expressions without predetermined patterns or complex hand-crafted rules, and it is expected to be applicable to other tasks of acquiring phrases with a particular meaning, not limited to cause information. We compared our method with our previous one, originally proposed for extracting phrases concerning traffic accident causes, and experimental results showed that the new method outperforms the previous one.
  • Hyo-Jung OH, Bo-Hyun YUN
    Type: PAPER
    Subject area: Knowledge Engineering
    2008 Volume E91.D Issue 4 Pages 969-975
    Published: April 01, 2008
    Released: March 01, 2010
This paper presents a knowledge acquisition method that uses sentence topics for question answering. We define templates for information extraction semi-automatically using the Korean concept network. Moreover, we propose a two-phase information extraction model based on hybrid machine learning with maximum entropy and conditional random fields (CRFs). In our experiments, we examined the role of sentence topics in the template-filling task for information extraction. Our experimental results show an improvement of 18% in F-score and 434% in training speed over the plain CRF-based method for the extraction task. In addition, our results show an improvement of 8% in F-score on the subsequent QA task.
  • Christian HOAREAU, Ichiro SATOH
    Type: PAPER
    Subject area: Ubiquitous Computing
    2008 Volume E91.D Issue 4 Pages 976-985
    Published: April 01, 2008
    Released: March 01, 2010
We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once this is extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach differs from existing research in that it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and discussed.
  • Masanobu TSURUTA, Hiroyuki SAKAI, Shigeru MASUYAMA
    Type: LETTER
    2008 Volume E91.D Issue 4 Pages 986-989
    Published: April 01, 2008
    Released: March 01, 2010
We propose a method of identifying informative DOM (Document Object Model) subtrees in Web pages from unfamiliar Web sites. Our method uses layout data of DOM nodes generated by a generic Web browser. The results show that our method outperforms a baseline method and can robustly identify informative DOM subtrees in Web pages.
Regular Section
  • Kenya UENO
    Type: PAPER
    Subject area: Computation and Computational Models
    2008 Volume E91.D Issue 4 Pages 990-995
    Published: April 01, 2008
    Released: March 01, 2010
We characterize the gap between the time and space complexity of functions by operators and completeness. First, we introduce a new notion of operators for function complexity classes based on recursive function theory and construct an operator which generates FPSPACE from FP. Then, we introduce new function classes composed of functions whose output lengths are bounded by the input length plus some constant. We characterize FP and FPSPACE by using these classes and operators. Finally, we define a new notion of completeness for FPSPACE and show an FPSPACE-complete function.
  • Yasuhiko TAKENAGA, Nao KATOUGI
    Type: PAPER
    Subject area: Algorithm Theory
    2008 Volume E91.D Issue 4 Pages 996-1002
    Published: April 01, 2008
    Released: March 01, 2010
A tree-shellable function is a positive Boolean function that can be represented by a binary decision tree in which the number of paths from the root to a leaf labeled 1 equals the number of prime implicants. In this paper, we consider the tree-shellability of DNFs with restrictions. We show that, for read-k DNFs, the number of terms in a tree-shellable function is at most k². We also show that, for k-DNFs, recognition of ordered tree-shellable functions is NP-complete for k=4, while tree-shellable functions can be recognized in polynomial time for constant k.
  • Yasuto SUZUKI, Keiichi KANEKO
    Type: PAPER
    Subject area: Algorithm Theory
    2008 Volume E91.D Issue 4 Pages 1003-1009
    Published: April 01, 2008
    Released: March 01, 2010
Bubble-sort graphs are variants of Cayley graphs. A bubble-sort graph is suitable as a topology for massively parallel systems because of its simple and regular structure. In this study, we focus on n-bubble-sort graphs and propose an algorithm that obtains n-1 disjoint paths between two arbitrary nodes in time bounded by a polynomial in n, where n is the degree of the graph plus one. We estimate the time complexity of the algorithm and the sum of the path lengths after proving its correctness. In addition, we report the results of computer experiments evaluating the average performance of the algorithm.
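For reference, the n-bubble-sort graph has the permutations of n symbols as its nodes, with an edge between two permutations exactly when they differ by one swap of adjacent positions, so every node has degree n-1. A small sketch of the adjacency structure (not the paper's disjoint-path algorithm):

```python
def neighbors(p):
    """Nodes adjacent to permutation p (a tuple) in the bubble-sort graph:
    all permutations obtained by swapping one pair of adjacent positions."""
    out = []
    for i in range(len(p) - 1):
        q = list(p)
        q[i], q[i + 1] = q[i + 1], q[i]  # one bubble-sort step
        out.append(tuple(q))
    return out
```

Since each node has exactly n-1 neighbors, n-1 is also the maximum possible number of internally disjoint paths between two nodes, which is what the paper's algorithm constructs.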
  • Jun YAO, Shinobu MIWA, Hajime SHIMADA, Shinji TOMITA
    Type: PAPER
    Subject area: Computer Systems
    2008 Volume E91.D Issue 4 Pages 1010-1022
    Published: April 01, 2008
    Released: March 01, 2010
Recently, a method called pipeline stage unification (PSU) has been proposed to reduce the energy consumption of mobile processors by inactivating and bypassing some of the pipeline registers, thus adopting a shallower pipeline. It is designed to be an energy-efficient method, especially for processors under future process technologies. In this paper, we present a mechanism for a PSU controller that dynamically predicts a suitable configuration based on program phase detection. Our results show that the designed predictor achieves a PSU degree prediction accuracy of 84.0%, averaged over the SPEC CPU2000 integer benchmarks. With this dynamic control mechanism, we obtain an 11.4% Energy-Delay-Product (EDP) reduction in a processor that adopts a PSU pipeline, compared to the baseline processor, even after the application of complex clock gating.
  • Koji KOBATAKE, Shigeaki TAGASHIRA, Satoshi FUJITA
    Type: PAPER
    Subject area: Computer Systems
    2008 Volume E91.D Issue 4 Pages 1023-1031
    Published: April 01, 2008
    Released: March 01, 2010
A P2P DHT (Peer-to-Peer Distributed Hash Table) is a typical technique for efficiently managing shared resources distributed over a network and for keyword search over such networks in a fully distributed manner. In this paper, we propose a new method for supporting conjunctive queries in a P2P DHT. The basic idea of the proposed technique is to share global information on past trials by locally caching search results for conjunctive queries and registering that fact in the global DHT. Such result caching is expected to significantly reduce the amount of transmitted data compared with conventional schemes. The effect of the proposed method is evaluated experimentally by simulation. The results indicate that with the proposed method, the amount of returned data is reduced by 60% compared with a conventional P2P DHT that does not support conjunctive queries.
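The caching idea above can be sketched with a toy in-memory table standing in for the distributed hash table: a single-keyword index plus a cache of past conjunctive results that is registered back for reuse. The class and method names are illustrative assumptions, not the paper's protocol.

```python
class ConjunctiveDHT:
    """Toy stand-in for a P2P DHT with result caching for AND-queries."""
    def __init__(self):
        self.index = {}   # keyword -> set of resource ids
        self.cache = {}   # frozenset of keywords -> cached result set

    def put(self, resource, keywords):
        """Register a shared resource under each of its keywords."""
        for k in keywords:
            self.index.setdefault(k, set()).add(resource)

    def query(self, keywords):
        """Conjunctive (AND) query; reuses and registers past results."""
        key = frozenset(keywords)
        if key in self.cache:          # a past trial answered this already
            return self.cache[key]
        result = set.intersection(*(self.index.get(k, set()) for k in keywords))
        self.cache[key] = result       # register the result for future queries
        return result
```

In the real system the cache entry lives in the DHT itself, so a repeated conjunctive query is answered from one node's cached result instead of shipping every per-keyword posting set across the network.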
  • Guangwei WANG, Kenji ARAKI
    Type: PAPER
    Subject area: Data Mining
    2008 Volume E91.D Issue 4 Pages 1032-1041
    Published: April 01, 2008
    Released: March 01, 2010
In this paper, we propose an improved SO-PMI (Semantic Orientation Using Pointwise Mutual Information) algorithm for use in Japanese weblog opinion mining. SO-PMI is an unsupervised approach proposed by Turney that has been shown to work well for English. When this algorithm was naively translated into Japanese, most phrases, whether positive or negative in meaning, received a negative SO. To deal with this skew, we propose three improvements: expanding the reference words to sets of words, introducing a balancing factor, and detecting neutral expressions. In our experiments, the proposed improvements produced a well-balanced result: both positive and negative accuracy exceeded 62% when evaluated on 1,200 opinion sentences sampled from three different domains (reviews of electronic products, cars, and travel from Kakaku.com). In a comparative experiment on the same corpus, a supervised approach (SA-Demo) achieved accuracy very similar to our method's. This shows that our approach effectively adapts SO-PMI to Japanese and also demonstrates the generality of SO-PMI.
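The core SO-PMI computation can be sketched as follows. A phrase's semantic orientation is the log-ratio of its co-occurrence with positive versus negative reference words; using word *sets* rather than a single reference word on each side reflects the first improvement described above. The `hits` callback, toy counts, and the 0.01 smoothing constant (from Turney's original formulation) are illustrative assumptions.

```python
import math

def so_pmi(hits, phrase, pos_words, neg_words):
    """Semantic orientation of `phrase`. hits(p, w) returns a co-occurrence
    count; hits(None, w) returns the plain count of w alone. Positive result
    means positive orientation, negative means negative."""
    pos = sum(hits(phrase, w) for w in pos_words)        # co-occurrence with positive set
    neg = sum(hits(phrase, w) for w in neg_words)        # co-occurrence with negative set
    pos_base = sum(hits(None, w) for w in pos_words)     # baseline frequencies
    neg_base = sum(hits(None, w) for w in neg_words)
    # 0.01 avoids log(0) for phrases never seen with one of the sets
    return math.log2(((pos + 0.01) * neg_base) / ((neg + 0.01) * pos_base))
```

The slanting phenomenon the paper addresses corresponds to this score coming out negative for almost all Japanese phrases when the reference sets are naive translations; the balancing factor shifts the decision threshold accordingly.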
  • Hieu Trung HUYNH, Yonggwan WON
    Type: PAPER
    Subject area: Data Mining
    2008 Volume E91.D Issue 4 Pages 1042-1049
    Published: April 01, 2008
    Released: March 01, 2010
Single-hidden-layer feedforward neural networks (SLFNs) are frequently used in machine learning because they can form decision boundaries of arbitrary shape if the activation function of the hidden units is chosen properly. Most learning algorithms for these networks are based on gradient descent and are slow because of the many learning steps required. Recently, a learning algorithm called the extreme learning machine (ELM) has been proposed for training SLFNs to overcome this problem. It randomly chooses the input weights and hidden-layer biases, and analytically determines the output weights by a matrix inverse operation. This algorithm achieves good generalization performance with high learning speed in many applications. However, it often requires a large number of hidden units and takes a long time to classify new observations. In this paper, a new approach for training SLFNs called the least-squares extreme learning machine (LS-ELM) is proposed. Unlike gradient descent-based algorithms and the ELM, our approach analytically determines the input weights, hidden-layer biases, and output weights based on linear models. For training with a large number of input patterns, an online training scheme using sub-blocks of the training set is also introduced. Experimental results on real applications show that our proposed algorithm offers high classification accuracy with a smaller number of hidden units and extremely high speed in both learning and testing.
  • Dong Seong KIM, Jong Sou PARK
    Type: PAPER
    Subject area: Application Information Security
    2008 Volume E91.D Issue 4 Pages 1050-1057
    Published: April 01, 2008
    Released: March 01, 2010
Previous approaches to modeling Intrusion Detection Systems (IDSs) have been twofold, improving detection models in terms of (i) feature selection of audit data through wrapper and filter methods and (ii) parameter optimization of the detection model design, based on classification, clustering algorithms, etc. In this paper, we present three approaches to modeling IDSs in the context of feature selection and parameter optimization. First, we present Fusion of Genetic Algorithm (GA) and Support Vector Machines (SVM) (FuGAS), which combines GA and SVM through genetic operations and is capable of building an optimal detection model with only selected important features and optimal parameter values. Second, we present Correlation-based Hybrid Feature Selection (CoHyFS), which utilizes a filter method in conjunction with GA for feature selection in order to reduce the long training time. Third, we present Simultaneous Intrinsic Model Identification (SIMI), which adopts Random Forest (RF) and shows better intrusion detection rates and feature selection results with no additional computational overhead. We show experimental results and analysis of the three approaches on the KDD 1999 intrusion detection dataset.
  • Jabeom GU, Jaehoon NAH, Hyeokchan KWON, Jonsoo JANG, Sehyun PARK
    Type: PAPER
    Subject area: Application Information Security
    2008 Volume E91.D Issue 4 Pages 1058-1073
    Published: April 01, 2008
    Released: March 01, 2010
The various advantages of cooperative peer-to-peer networks are strongly counterbalanced by the open nature of a distributed, serverless network. In such networks, it is relatively easy for an attacker to launch attacks such as misrouting, corrupting, or dropping messages as a result of a successful identifier forgery. The impact of identifier forgery is particularly severe because the whole network can be compromised by attacks such as Sybil or Eclipse. In this paper, we present an identifier authentication mechanism called random visitor, which uses one or more randomly selected peers as delegates of identity proof. Our scheme uses identity-based cryptography and identity ownership proof mechanisms collectively to create multiple, cryptographically protected indirect bindings between two peers, instantly when needed, through the delegates. Because of these bindings, an attacker cannot mount an identifier forgery attack against interacting peers without breaking the bindings. Therefore, our mechanism efficiently limits the possibility of identifier forgery attacks by disabling an attacker's ability to break the bindings. The design rationale and framework details are presented. A security analysis shows that our scheme is strong against identifier-related attacks and that its strength increases when there are many peers (more than several thousand) in the network.
  • Tsang-Long PAO, Yu-Te CHEN, Jun-Heng YEH
    Type: PAPER
    Subject area: Human-computer Interaction
    2008 Volume E91.D Issue 4 Pages 1074-1081
    Published: April 01, 2008
    Released: March 01, 2010
It is said that technology comes out of humanity. What is humanity? The very definition of humanity is emotion. Emotion is the basis for all human expression and the underlying theme behind everything that is done, said, thought, or imagined. If computers can perceive and respond to human emotion, human-computer interaction will become more natural. Several classifiers have been adopted for automatically assigning an emotion category, such as anger, happiness, or sadness, to a speech utterance. These classifiers were designed independently and tested on various emotional speech corpora, making it difficult to compare and evaluate their performance. In this paper, we first compare several popular classification methods and evaluate their performance by applying them to a Mandarin speech corpus consisting of five basic emotions: anger, happiness, boredom, sadness, and neutral. The extracted feature streams contain MFCC, LPCC, and LPC. The experimental results show that the proposed WD-MKNN classifier achieves an accuracy of 81.4% for 5-class emotion recognition and outperforms other classification techniques, including KNN, MKNN, DW-KNN, LDA, QDA, GMM, HMM, SVM, and BPNN. Then, to verify the advantage of the proposed method, we compared these classifiers on another Mandarin expressive speech corpus consisting of two emotions. The experimental results again show that the proposed WD-MKNN outperforms the others.
  • Lu ZHEN, Zuhua JIANG
    Type: PAPER
    Subject area: Educational Technology
    2008 Volume E91.D Issue 4 Pages 1082-1090
    Published: April 01, 2008
    Released: March 01, 2010
This paper is mainly concerned with a knowledge supply model in a knowledge grid environment for realizing global knowledge sharing. By integrating the members, roles, and tasks in a workflow, three sorts of knowledge demands are obtained. Based on this knowledge demand information, a knowledge supply model is proposed for the purpose of delivering the right knowledge to the right person. The knowledge grid, acting as a platform for implementing the knowledge supply, is also discussed, mainly from the view of knowledge space. A prototype knowledge supply system has been implemented and applied in product development.
  • Lina, Tomokazu TAKAHASHI, Ichiro IDE, Hiroshi MURASE
    Type: PAPER
    Subject area: Pattern Recognition
    2008 Volume E91.D Issue 4 Pages 1091-1100
    Published: April 01, 2008
    Released: March 01, 2010
We propose the construction of an appearance manifold with an embedded view-dependent covariance matrix to recognize 3D objects affected by geometric distortions and quality degradation. The appearance manifold is used to capture pose variability, while the covariance matrix is used to learn the distribution of samples for noise invariance. However, since the appearance of an object in the captured image differs for each pose, the covariance matrix value also differs for each pose position. Therefore, it is important to embed view-dependent covariance matrices in the manifold of an object. We propose two models for constructing an appearance manifold with a view-dependent covariance matrix, called the View-dependent Covariance matrix by training-Point Interpolation (VCPI) and View-dependent Covariance matrix by Eigenvector Interpolation (VCEI) methods. In the VCPI method, the embedded view-dependent covariance matrix is obtained by interpolating the training points from one pose to the corresponding training points in a consecutive pose. In the VCEI method, it is obtained by interpolating only the eigenvectors and eigenvalues, without considering the correspondences of each training image. As they embed the covariance matrix in the manifold, our view-dependent covariance matrix methods are robust to pose changes and are also noise-invariant. Our main goal is to construct a robust and efficient manifold with an embedded view-dependent covariance matrix for recognizing objects in images affected by various degradation effects.
  • Lazaro S. P. BUSAGALA, Wataru OHYAMA, Tetsushi WAKABAYASHI, Fumitaka K ...
    Type: PAPER
    Subject area: Pattern Recognition
    2008 Volume E91.D Issue 4 Pages 1101-1109
    Published: April 01, 2008
    Released: March 01, 2010
Feature transformation in automatic text classification (ATC) can lead to better classification performance, and dimensionality reduction is also important in ATC. Hence, feature transformation and dimensionality reduction are performed to obtain lower computational costs with improved classification performance. However, feature transformation and dimension reduction techniques have conventionally been considered in isolation, in which case classification performance can be lower than when they are integrated. We therefore propose an integrated feature analysis approach that improves classification performance at lower dimensionality. Moreover, we propose a multiple feature integration technique that also improves classification effectiveness.
  • Chang-Chu CHEN, Chin-Chen CHANG
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2008 Volume E91.D Issue 4 Pages 1110-1116
    Published: April 01, 2008
    Released: March 01, 2010
Steganography aims to hide secret data in an innocuous cover-medium for transmission so that an attacker cannot easily recognize the presence of the secret data. Even if the stego-medium is captured by an eavesdropper, the slight distortion is hard to detect. LSB-based data hiding is one such steganographic method, embedding secret data into the least significant bits of the pixel values of a cover image. In this paper, we propose an LSB-based scheme using the reflected Gray code, which is applied to determine the embedded bit from the secret information. Under the transformation rule, the LSBs of the stego-image are not always equal to the secret bits, and our experiments show that they differ in up to almost 50% of cases. According to mathematical deduction and experimental results, the proposed scheme has the same image quality and payload as the simple LSB substitution scheme. In fact, our data hiding scheme in the case of the G1 (one-bit Gray code) system is equivalent to the simple LSB substitution scheme.
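A minimal sketch consistent with the description above (the paper's scheme may differ in detail): the pixel is adjusted so that the LSB of its reflected Gray code, rather than the pixel's own LSB, carries the secret bit. Since flipping a pixel's LSB also flips the LSB of its Gray code, the distortion is at most ±1 per pixel, the same as simple LSB substitution, yet the stored pixel LSB often disagrees with the secret bit.

```python
def gray(x):
    """Reflected binary Gray code of x."""
    return x ^ (x >> 1)

def embed(pixel, bit):
    """Adjust the pixel so the LSB of its Gray code equals `bit`."""
    if (gray(pixel) & 1) == bit:
        return pixel
    return pixel ^ 1  # flipping the pixel LSB flips the Gray-code LSB too

def extract(pixel):
    """Recover the embedded bit from the Gray code of the stego pixel."""
    return gray(pixel) & 1
```

For example, embedding secret bit 1 into pixel value 2 leaves the pixel unchanged, even though its plain LSB is 0, which illustrates why the stego-image LSBs differ from the secret bits in roughly half of the cases.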
  • Sang-Neon LEE, Hyuk-Jae LEE
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2008 Volume E91.D Issue 4 Pages 1117-1126
    Published: April 01, 2008
    Released: March 01, 2010
Motion-compensated frame interpolation (MCFI) is widely used to smoothly display low frame rate video sequences by synthesizing and inserting new frames between existing frames. The temporal shift interpolation technique (TSIT) is popular for frame interpolation of video sequences encoded by a block-based video coding standard such as MPEG-4 or H.264/AVC. TSIT assumes the existence of a motion vector (MV) and may not produce high-quality interpolation for intra-mode blocks, which have no MVs. This paper proposes a new frame interpolation algorithm designed mainly for intra-mode blocks. To improve the accuracy of pixel interpolation, the new algorithm introduces sub-pixel interpolation and the reuse of MVs for their refinement. In addition, it employs two different interpolation modes for inter-mode and intra-mode blocks, respectively. The use of the two modes reduces ghost artifacts but potentially increases blocking effects between blocks interpolated in different modes. To reduce blocking effects, the proposed algorithm searches for the boundary of an object and interpolates all blocks in the object in the same mode. Simulation results show that the proposed algorithm improves PSNR by an average of 0.71 dB compared with TSIT with MV refinement and also significantly improves the subjective quality of pictures by reducing ghost artifacts.
  • Yoshinori SUZUKI, Choong Seng BOON, Thiow Keng TAN
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2008 Volume E91.D Issue 4 Pages 1127-1134
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    In video compression, the information transmitted from the encoder to the decoder can be classified into two categories: side information, which carries action instructions to be performed, and data such as the residual error of the texture. As video compression technology has matured, better compression has been achieved by increasing the ratio of side information to data while reducing the overall bit rate. However, there is a limit to this method, because the side information becomes a significant fraction of the overall bit rate. In recent video compression technologies, the decoder tends to share the burden of decision making in order to achieve a higher compression ratio. To further improve the coding efficiency, we give the decoder a more active role in reducing the amount of data. In this approach, reconstructed pixels surrounding a target block are used to produce a better sample predictor of the target block, reducing both the amount of side information and the residual error of the texture. Furthermore, multiple candidates of the sample predictor are utilized to create a better sample predictor without increasing the amount of side information. In this paper, we employ a template matching method that makes the decoder more active. The template matching method is applied to a conventional video codec to improve the prediction performance of intra, inter, and bi-directional pictures. The results show that improvements in coding efficiency of up to 5.8% are achieved.
    Download PDF (5517K)
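The decoder-side prediction idea in this abstract, building a predictor for a target block from the reconstructed pixels that surround it, can be sketched with a generic template matching search. This is an illustrative sketch with invented names and parameters, not the codec integration described in the paper:

```python
import numpy as np

def template_match_predict(recon, y, x, bs=4, tw=2, search=8):
    """Predict the bs x bs block at (y, x) from already-reconstructed
    pixels: find the position whose inverse-L template (tw rows above,
    tw columns left) best matches the target's template, and copy the
    block found there. Both encoder and decoder can run this search on
    reconstructed data, so no extra side information is sent."""
    def template(py, px):
        top = recon[py - tw:py, px - tw:px + bs]
        left = recon[py:py + bs, px - tw:px]
        return np.concatenate([top.ravel(), left.ravel()])

    target_t = template(y, x)
    best, best_cost = None, None
    # restrict candidates to the causal (already decoded) area
    for cy in range(max(tw, y - search), y + 1):
        for cx in range(max(tw, x - search), x + 1):
            if cy == y and cx == x:
                continue
            cost = np.abs(template(cy, cx) - target_t).sum()
            if best_cost is None or cost < best_cost:
                best, best_cost = (cy, cx), cost
    by, bx = best
    return recon[by:by + bs, bx:bx + bs]
```

On periodic or self-similar content the matched template tends to sit in front of a block similar to the target, which is what makes the predictor useful without transmitting a motion vector.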
  • Akinobu MAEJIMA, Shuhei WEMLER, Tamotsu MACHIDA, Masao TAKEBAYASHI, Sh ...
    Type: PAPER
    Subject area: Computer Graphics
    2008 Volume E91.D Issue 4 Pages 1135-1148
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    We have developed a visual entertainment system called “Future Cast” which enables anyone to easily participate in a pre-recorded or pre-created film as an instant CG movie star. This system provides audiences with the amazing opportunity to join the cast of a movie in real-time. The Future Cast System can automatically perform all the processes required to make this possible, from capturing participants' facial characteristics to rendering them into the movie. Our system can also be applied to any movie created using the same production process. We conducted our first experimental trial demonstration of the Future Cast System at the Mitsui-Toshiba pavilion at the 2005 World Exposition in Aichi Japan.
    Download PDF (13261K)
  • Bo ZHENG, Jun TAKAMATSU, Katsushi IKEUCHI
    Type: PAPER
    Subject area: Computer Graphics
    2008 Volume E91.D Issue 4 Pages 1149-1158
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    When large-scale, complex 3D objects are obtained by range finders, it is often necessary to represent them by algebraic surfaces for purposes such as data compression, multi-resolution, noise elimination, and 3D recognition. Representing 3D data with algebraic surfaces of an implicit polynomial (IP) has proved advantageous: IP representation can easily encode geometric properties with the desired smoothness, few parameters, algebraic/geometric invariants, and robustness to noise and missing data. Unfortunately, generating a high-degree IP surface for a whole complex 3D shape is impossible because of the high computational cost and numerical instability. In this paper we propose a 3D segmentation method based on a cut-and-merge approach. Two cutting procedures adopt low-degree IPs to divide and fit the surface segments simultaneously while avoiding the generation of highly curved segments, and a merging procedure merges similar adjacent segments to avoid over-segmentation. To demonstrate the effectiveness of this segmentation method, we open up some new vistas for 3D applications such as matching, recognition, and registration.
    Download PDF (7343K)
  • Hiroshi YASUDA, Ryota KAIHARA, Suguru SAITO, Masayuki NAKAJIMA
    Type: PAPER
    Subject area: Computer Graphics
    2008 Volume E91.D Issue 4 Pages 1159-1167
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    Because motion capture systems have enabled us to capture a large number of human motions, the demand for a method to easily browse captured motion databases has been increasing. In this paper, we propose a method to generate simple visual outlines of motion clips for the purpose of efficient motion data browsing. Our method unfolds a motion clip into a 2D stripe of keyframes along a timeline, based on semantic keyframe extraction and the selection of the best viewpoint for each keyframe. With our visualization, the timing and order of actions in the motions are clearly visible, and the contents of multiple motions are easily comparable. In addition, because our method is applicable to a wide variety of motions, it can generate outlines for a large amount of motion data fully automatically.
    Download PDF (5762K)
  • Tadayoshi HORITA, Itsuo TAKANAMI, Masatoshi MORI
    Type: PAPER
    Subject area: Biocybernetics, Neurocomputing
    2008 Volume E91.D Issue 4 Pages 1168-1175
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    Two simple but useful methods, called the deep learning methods, are proposed for making multilayer neural networks tolerant to multiple link-weight and neuron-output faults. The methods make the output errors in the learning phase smaller than those in practical use. The fault tolerance of the multilayer neural networks in practical use is analyzed through the relationship between the output errors in the learning phase and those in practical use. The analytical result shows that the multilayer neural networks have complete (100%) fault tolerance to multiple weight-and-neuron faults in practical use. Simulation results concerning the rate of successful learning, the fault-tolerance ability, and the learning time are also shown.
    Download PDF (2162K)
  • Supaporn KIATTISIN, Kosin CHAMNONGTHAI
    Type: PAPER
    Subject area: Biological Engineering
    2008 Volume E91.D Issue 4 Pages 1176-1184
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    Bone Mineral Density (BMD) is an indicator of osteoporosis, an increasingly serious disease, particularly for the elderly. To calculate BMD, we need to measure the volume of the femur in a noninvasive way. In this paper, we propose a noninvasive bone volume measurement method using x-ray attenuation on radiography and medical knowledge. The absolute thickness at one reference pixel and the relative thickness at all pixels of the bone in the x-ray image are used to calculate the volume and the BMD. First, the absolute bone thickness of one particular pixel is estimated from the known geometric shape of a specific bone part, used as medical knowledge. The relative bone thicknesses of all pixels are then calculated from the x-ray attenuation at each pixel. Finally, given the absolute bone thickness of the reference pixel, the absolute bone thickness of all pixels is mapped. To evaluate the performance of the proposed method, experiments on 300 subjects were performed. We found that the method provides good estimates of the real BMD values of the femur. The estimates show a high linear correlation of 0.96 between the volume Bone Mineral Density (vBMD) of CT-SCAN and the computed vBMD (all P<0.001). The BMD results reveal a 3.23% difference in volume from the BMD of CT-SCAN.
    Download PDF (8057K)
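The two-step thickness recovery in this abstract, relative thickness from per-pixel attenuation scaled by one reference pixel of known absolute thickness, can be sketched under the Beer-Lambert attenuation model. The model choice and all names here are our illustration, not the paper's exact formulation:

```python
import math

def absolute_thickness_map(intensities, i0, ref_idx, ref_thickness_mm):
    """Map x-ray intensities to absolute bone thickness per pixel.
    Under the Beer-Lambert law I = I0 * exp(-mu * t), the relative
    thickness at each pixel is proportional to ln(I0 / I); scaling by
    one pixel whose absolute thickness is known (e.g. from the known
    geometric shape of the bone) yields absolute thickness everywhere."""
    rel = [math.log(i0 / i) for i in intensities]  # proportional to mu * t
    scale = ref_thickness_mm / rel[ref_idx]        # anchors the scale
    return [scale * r for r in rel]
```

Note the attenuation coefficient mu cancels out of the final map, which is why a single reference pixel suffices to convert relative thickness into absolute thickness.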
  • Youbean KIM, Kicheol KIM, Incheol KIM, Hyunwook SON, Sungho KANG
    Type: LETTER
    Subject area: Computer Components
    2008 Volume E91.D Issue 4 Pages 1185-1188
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    This paper presents a new low-power BIST TPG scheme for reducing scan transitions. It uses a transition freezing and melting method, implemented with a transition freezing block and a MUX. When random test patterns are generated by an LFSR, the transitions of those patterns follow a pseudo-random Gaussian distribution. The proposed technique freezes transitions of patterns using a freezing value. Experimental results on the ISCAS'89 benchmark circuits show that the proposed BIST TPG scheme reduces average power by about 60% without performance loss and peak power by about 30%.
    Download PDF (2317K)
  • Myeong-Hoon OH, Seongwoon KIM
    Type: LETTER
    Subject area: VLSI Systems
    2008 Volume E91.D Issue 4 Pages 1189-1192
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    For a globally asynchronous locally synchronous (GALS) system, data transfer mechanisms based on current-mode multiple-valued logic (CMMVL) have been studied to reduce the complexity and power dissipation of wires. However, these schemes consume a considerable amount of power even in idle states because of the static power caused by their inherent structure. In this paper, new encoder and decoder circuits using CMMVL are suggested to reduce the static power. The effectiveness of the proposed data transfer scheme is validated by comparisons with the previous CMMVL scheme and conventional voltage-mode schemes such as dual-rail and 1-of-4 encodings, through simulation with a 0.25-μm CMOS technology. Simulation results demonstrate that the proposed CMMVL scheme significantly reduces the power consumption of the previous one and is superior to the dual-rail and 1-of-4 schemes at wire lengths over 2 mm and 4 mm, respectively.
    Download PDF (2184K)
  • Huazhi GONG, Kitae NAHM, JongWon KIM
    Type: LETTER
    Subject area: Networks
    2008 Volume E91.D Issue 4 Pages 1193-1196
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    In IEEE 802.11 networks, the access point (AP) selection based on the strongest signal strength often results in the extremely unfair bandwidth allocation among mobile users (MUs). In this paper, we propose a distributed AP selection algorithm to achieve a fair bandwidth allocation for MUs. The proposed algorithm gradually balances the AP loads based on max-min fairness for the available multiple bit rate choices in a distributed manner. We analyze the stability and overhead of the proposed algorithm, and show the improvement of the fairness via computer simulation.
    Download PDF (838K)
  • Dong-Sup SONG, Jin-Ho AHN, Tae-Jin KIM, Sungho KANG
    Type: LETTER
    Subject area: Dependable Computing
    2008 Volume E91.D Issue 4 Pages 1197-1200
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    This paper proposes the minimum transition random X-filling (MTR-fill) technique, a new X-filling method, to reduce the power dissipated during scan-based testing. To model the power dissipated during scan load/unload cycles, the total weighted transition metric (TWTM) is introduced, calculated as the sum of the weighted transitions in the scan-load of a test pattern and the scan-unload of a test response. The proposed MTR-fill is implemented by a simulated annealing method. During the annealing process, the TWTM of a pair consisting of a test pattern and a test response is minimized. Simultaneously, the MTR-fill attempts to increase the randomness of the test patterns in order to reduce the number of test patterns needed to achieve adequate fault coverage. The effectiveness of the proposed technique is shown through experiments on the ISCAS'89 benchmark circuits.
    Download PDF (684K)
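A common formulation of a weighted transition metric for scan vectors, in which each transition is weighted by how far it travels through the scan chain, can be sketched as follows. The paper's exact TWTM weighting may differ; this is an illustrative reconstruction:

```python
def wtm(vector):
    """Weighted transition metric for one scan vector of bits: each
    adjacent-bit transition is weighted by the number of scan cells it
    shifts through (one common weighting for scan power models)."""
    L = len(vector)
    return sum((vector[i] ^ vector[i + 1]) * (L - 1 - i)
               for i in range(L - 1))

def twtm(pattern, response):
    """TWTM of a test pattern / test response pair: the sum of the
    weighted transitions in the scan-load and the scan-unload."""
    return wtm(pattern) + wtm(response)
```

An X-filling method such as MTR-fill would assign values to the don't-care bits of `pattern` so that this quantity, evaluated after fault simulation produces `response`, is as small as possible.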
  • Jong Shill LEE, Baek Hwan CHO, Young Joon CHEE, In Young KIM, Sun I. K ...
    Type: LETTER
    Subject area: Application Information Security
    2008 Volume E91.D Issue 4 Pages 1201-1205
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    We propose a new approach to personal identification using the derived vectorcardiogram (dVCG). The dVCG was calculated from the recorded ECG using the inverse Dower transform, and twenty-one features were extracted from the resulting dVCG. To analyze the effect of each feature and to improve efficiency while maintaining performance, we performed feature selection over these 21 features using the Relief-F algorithm. The set of the eight highest-ranked features and the full set of 21 features were each used in SVM training and testing. The classification accuracy using the entire feature set was 99.53%; using only the eight highest-ranked features, it was 99.07%, a decrease of only 0.46%. With only the eight highest-ranked features, the conventional ECG method resulted in a 93% recognition rate, whereas our method achieved a recognition rate above 99%, more than 6% higher. Our experiments show that personal identification is possible using only eight features extracted from the dVCG.
    Download PDF (788K)
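The feature-ranking step in this letter uses the Relief-F algorithm; a minimal two-class Relief sketch conveys the core idea (Relief-F proper averages over k nearest neighbors and handles multiple classes, which this simplified illustration omits):

```python
import random

def relief_weights(X, y, n_iters=None, seed=0):
    """Minimal two-class Relief feature weighting: for each sampled
    instance, reward features that differ at the nearest miss (other
    class) and penalize features that differ at the nearest hit (same
    class). Features that separate the classes accumulate large
    weights and rank highest."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    # per-feature value ranges, used to normalize feature differences
    ranges = [max(r[f] for r in X) - min(r[f] for r in X) or 1.0
              for f in range(d)]
    diff = lambda f, a, b: abs(a[f] - b[f]) / ranges[f]
    dist = lambda a, b: sum(diff(f, a, b) for f in range(d))
    w = [0.0] * d
    for _ in range(n_iters or n):
        i = rng.randrange(n)
        hits = [j for j in range(n) if j != i and y[j] == y[i]]
        misses = [j for j in range(n) if y[j] != y[i]]
        nh = min(hits, key=lambda j: dist(X[i], X[j]))    # nearest hit
        nm = min(misses, key=lambda j: dist(X[i], X[j]))  # nearest miss
        for f in range(d):
            w[f] += diff(f, X[i], X[nm]) - diff(f, X[i], X[nh])
    return w
```

Selecting the top-ranked features by this weight, as the letter does with its eight highest-ranked dVCG features, shrinks the SVM input dimension at little cost in accuracy.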
  • Hernán AGUIRRE, Masahiko SATO, Kiyoshi TANAKA
    Type: LETTER
    Subject area: Artificial Intelligence and Cognitive Science
    2008 Volume E91.D Issue 4 Pages 1206-1210
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    In this paper, we propose δ-similar elimination to improve the search performance of multiobjective evolutionary algorithms on combinatorial optimization problems. This method eliminates similar individuals in objective space in order to distribute selection fairly among the different regions of the instantaneous Pareto front. We investigate four elimination methods, analyzing their effects using NSGA-II. In addition, we compare the search performance of NSGA-II enhanced by our method with that of NSGA-II enhanced by controlled elitism.
    Download PDF (2314K)
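One plausible elimination rule of the kind this letter studies (it compares four; this sketch is our own simple variant, not necessarily one of theirs) keeps an individual only if its objective vector is farther than δ from every individual already kept, thinning crowded regions of the instantaneous Pareto front:

```python
def delta_similar_elimination(objs, delta):
    """Thin a population in objective space: keep an objective vector
    only if it is farther than delta (Euclidean distance) from every
    vector already kept, so selection pressure is spread across the
    front rather than concentrated in crowded regions."""
    kept = []
    for v in objs:
        if all(sum((a - b) ** 2 for a, b in zip(v, k)) ** 0.5 > delta
               for k in kept):
            kept.append(v)
    return kept
```

Applied before selection in an algorithm such as NSGA-II, this prevents near-duplicate individuals from dominating the mating pool.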
  • Kyung-Mi PARK, Hae-Chang RIM
    Type: LETTER
    Subject area: Natural Language Processing
    2008 Volume E91.D Issue 4 Pages 1211-1214
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    In this paper, we propose new external context features for the semantic classification of bio-entities. In previous approaches, the words located in the left or right context of a bio-entity are frequently used as external context features. In our prior experiments, however, external contexts in a flat representation did not improve performance. In this study, we incorporate predicate-argument features into the training of an ME-based classifier. Through parsing and argument identification, we recognize biomedical verbs that have argument relations with the constituents containing a bio-entity, and then use the predicate-argument structures as external context features. The extraction of predicate-argument features involves two identification tasks: biomedically salient word identification, which determines whether a word is biomedically salient, and target verb identification, which identifies biomedical verbs that have argument relations with the constituents containing a bio-entity. Experiments show that the performance of semantic classification in the bio domain can be improved by utilizing such predicate-argument features.
    Download PDF (2217K)
  • Jeong-Yong AHN, Kill-Sung MUN, Young-Hyun KIM, Sun-Young OH, Beom-Soo ...
    Type: LETTER
    Subject area: Biological Engineering
    2008 Volume E91.D Issue 4 Pages 1215-1217
    Published: April 01, 2008
    Released: March 01, 2010
    JOURNALS FREE ACCESS
    In this note we propose a fuzzy diagnosis of headache based on the relations between symptoms and diseases. We suggest a new diagnosis measure using the occurrence information of patients' symptoms and develop an improved interview chart with fuzzy degrees assigned according to the relations between symptoms and three labels of headache. The proposed method is illustrated by two examples.
    Download PDF (567K)