-
Takako NAKATANI
2012 Volume E95.D Issue 4 Pages
909-910
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
-
Ryo TAKAOKA, Masayuki SHIMOKAWA, Toshio OKAMOTO
Article type: PAPER
2012 Volume E95.D Issue 4 Pages
911-920
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
Many studies and systems that incorporate elements such as “pleasure” and “fun” into games to improve a learner's motivation have been developed in the field of learning environments. However, few studies address situations where many learners gather at a single computer and participate in a game-based learning environment (GBLE), and where the GBLE designs the learning process by controlling the interactions between learners, such as competition, collaboration, and learning by teaching. Therefore, the purpose of this study is to propose a framework of educational control that intentionally induces and activates interaction between learners to create learning opportunities based on the knowledge understanding model of each learner. In this paper, we explain the design philosophy and the framework of our GBLE, called “Who becomes the king in the country of mathematics?”, from a game viewpoint and describe the method of learning support control in the learning environment. In addition, we report the results of a learning experiment with our GBLE, which we carried out in a junior high school, and include some comments by a principal and a teacher. From the results of the experiment and these comments, we observed that a game may play a significant role in weakening the existing learning relationships among students and creating new relationships in the world of the game. Furthermore, we found that the learning support control of the GBLE activated the interaction between learners to some extent.
-
Theerayut THONGKRAU, Pattarachai LALITROJWONG
Article type: PAPER
2012 Volume E95.D Issue 4 Pages
921-931
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
The development of an ontology at the instance level requires extracting the terms that define the instances from various data sources. These instances are then linked to the concepts of the ontology, and relationships are created between the instances in the next step. Before establishing links among data, however, ontology engineers must classify the terms or instances from a web document into an ontology concept. A tool that helps ontology engineers in this task is called an ontology population system. Existing approaches are not suitable for ontology development applications because of long processing times or poor handling of large or noisy data sets. The OntoPop system introduces a methodology to solve these problems, which comprises two parts. First, we select meaningful features from syntactic relations, which produces more significant features than other methods. Second, we differentiate feature meanings and reduce noise based on latent semantic analysis. Experimental evaluation demonstrates that OntoPop works well, achieving an accuracy of 49.64%, a learning accuracy of 76.93%, and an execution time of 5.46 seconds per instance.
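The noise-reduction step described above, latent semantic analysis, boils down to a truncated SVD of the instance-by-feature matrix. The following is a minimal sketch of that operation only; the toy matrix, the helper name `lsa_reduce`, and the choice of `k` are illustrative assumptions, not details from the paper.

```python
import numpy as np

def lsa_reduce(X, k):
    """Project a feature matrix onto its top-k latent dimensions via
    truncated SVD, the core operation of latent semantic analysis."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k]  # instance coordinates in latent space

# Toy instance-by-feature matrix (rows: candidate instances).
X = np.array([[1.0, 1.0, 0.0, 0.0],
              [1.0, 0.9, 0.1, 0.0],
              [0.0, 0.1, 1.0, 1.0]])
Z = lsa_reduce(X, k=2)
print(Z.shape)  # (3, 2)
```

In the latent space, instances with similar feature profiles (rows 0 and 1) end up close together while dissimilar ones (row 2) stay apart, which is what makes the reduced representation less noisy for classification.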
-
Makoto NAKATSUJI, Akimichi TANAKA, Toshio UCHIYAMA, Ko FUJIMURA
Article type: PAPER
2012 Volume E95.D Issue 4 Pages
932-941
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
Users increasingly find new interests by checking the content published or mentioned by their immediate neighbors in social networking services. We propose semantics-based link navigation: links guide the active user to potential neighbors who may provide new interests. Our method first creates a graph that has users as nodes and shared interests as links. It then divides the graph by link pruning to extract a practical number of interest-sharing groups, i.e., communities of interest (COIs), that the active user can navigate. Next, it attaches a different semantic tag, which best reflects the interests of the COIs they belong to, to the link to each representative user and to the link to each immediate neighbor of the active user. Finally, it calculates link attractiveness by analyzing the semantic tags on links. The active user can select which link to access by checking the semantic tags and link attractiveness. User interests extracted from large-scale actual blog entries are used to confirm the efficiency of our proposal. Results show that navigation based on link attractiveness and representative users allows the user to find new interests much more accurately than is otherwise possible.
-
Naoyasu UBAYASHI, Yasutaka KAMEI
Article type: PAPER
2012 Volume E95.D Issue 4 Pages
942-958
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
AspectM, an aspect-oriented modeling (AOM) language, provides not only basic modeling constructs but also an extension mechanism called the metamodel access protocol (MMAP), which allows a modeler to modify the metamodel. MMAP consists of metamodel extension points, extension operations, and primitive predicates for navigating the metamodel. Although the notion of MMAP is useful, it needs tool support. This paper proposes a method for implementing an MMAP-based AspectM support tool, which consists of a model editor, a model weaver, and a model verifier. We introduce the notions of edit-time structural reflection and extensible model weaving. Using these mechanisms, a modeler can easily construct domain-specific languages (DSLs). We show a case study using the AspectM support tool, a UML-based DSL for describing the external contexts of embedded systems, and discuss the effectiveness of the extension mechanism provided by MMAP.
-
Kunihiro NODA, Takashi KOBAYASHI, Shinichiro YAMAMOTO, Motoshi SAEKI, ...
Article type: PAPER
2012 Volume E95.D Issue 4 Pages
959-969
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
Program comprehension using dynamic information is one of the key tasks of software maintenance. Software visualization with sequence diagrams is a promising technique for helping developers comprehend the behavior of object-oriented systems effectively. Many tools support the automatic generation of a sequence diagram from execution traces. However, it is still difficult to understand the behavior because sequence diagrams automatically generated from massive execution traces tend to be beyond a developer's capacity. In this paper, we propose an execution trace slicing and visualization method. Our method calculates slices based on a behavior model that captures dependencies derived from both static and dynamic analysis, and it supports various programs, including those that use exceptions and multi-threading. We also introduce a tool that performs the proposed slice calculation on the Eclipse platform. We show the applicability of our method by applying the tool to two Java programs as case studies. As a result, we confirm the effectiveness of our method for understanding the behavior of object-oriented systems.
-
Sombut FOITONG, Ouen PINNGERN, Boonwat ATTACHOO
Article type: PAPER
2012 Volume E95.D Issue 4 Pages
970-981
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
Feature selection (FS) plays an important role in pattern recognition and machine learning. FS is applied for dimensionality reduction, and its purpose is to select a subset of the original features of a data set that is rich in the most useful information. Most existing FS methods based on rough set theory focus on the dependency function, which evaluates the goodness of a feature subset using the lower approximation alone. However, by taking information only from the positive region and neglecting the boundary region, much relevant information may remain invisible. This paper proposes the maximal lower approximation (Max-Certainty) - minimal boundary region (Min-Uncertainty) criterion, a feature selection method based on rough sets and mutual information that uses both the lower approximation information and the information contained in the boundary region. Using this idea results in higher predictive accuracy than measures based on the positive region (certainty region) alone, demonstrating that much valuable information can be extracted from the boundary region. Experimental results are presented for discrete, continuous, and microarray data and compared with other FS methods in terms of subset size and classification accuracy.
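The lower approximation and boundary region contrasted above are standard rough-set constructions, and a minimal sketch makes the distinction concrete. The equivalence classes are induced here by an arbitrary key function, and the example universe and target set are invented for illustration:

```python
def approximations(universe, target, key):
    """Lower/upper approximation and boundary region of `target` under
    the equivalence classes induced by `key`."""
    classes = {}
    for x in universe:
        classes.setdefault(key(x), set()).add(x)
    lower, upper = set(), set()
    for c in classes.values():
        if c <= target:   # class certainly inside the target concept
            lower |= c
        if c & target:    # class possibly inside the target concept
            upper |= c
    return lower, upper, upper - lower  # boundary = uncertainty region

# Equivalence classes {0,1}, {2,3}, {4,5}; target concept {0,1,2}.
lower, upper, boundary = approximations(range(6), {0, 1, 2},
                                        key=lambda x: x // 2)
print(lower, boundary)  # {0, 1} {2, 3}
```

A dependency-function-based method scores a feature subset using only `lower`; the criterion in this paper also exploits the information carried by `boundary`.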
-
Lidong WANG, Yuan JIE
Article type: PAPER
2012 Volume E95.D Issue 4 Pages
982-988
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
In digital library (DL) applications, digital book clustering is an important and urgent research task. However, it is difficult to perform effectively because of the great length of digital books. To cluster digital books correctly, we propose a novel method based on a probabilistic topic model. First, we build a topic model named LDAC, whose main goal is to effectively extract topics from digital books. Gibbs sampling is then applied for parameter inference. Once the model parameters are learned, each book is assigned to the cluster that maximizes the posterior probability. Experimental results demonstrate that our LDAC-based approach achieves significant improvement over related methods.
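Once the model parameters are learned, the final assignment step described above is a simple argmax over posterior scores. A minimal sketch under invented numbers; the log-likelihoods and priors below are placeholders, not values from the paper:

```python
import numpy as np

def assign_books(log_likelihood, log_prior):
    """Assign each book (row) to the cluster maximizing the log
    posterior: argmax_c [ log p(book | c) + log p(c) ]."""
    return np.argmax(log_likelihood + log_prior, axis=1)

# Three books, two clusters (toy numbers).
log_lik = np.log(np.array([[0.7, 0.3],
                           [0.2, 0.8],
                           [0.6, 0.4]]))
log_prior = np.log(np.array([0.5, 0.5]))
print(assign_books(log_lik, log_prior))  # [0 1 0]
```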
-
Osamu TAKAKI, Izumi TAKEUTI, Noriaki IZUMI, Koiti HASIDA
Article type: PAPER
2012 Volume E95.D Issue 4 Pages
989-1002
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
In this paper, we discuss a fundamental theory of incremental verification for workflows. Incremental verification is a method that helps multiple designers share and collaborate on huge workflows while maintaining their consistency. To this end, we introduce passbacks in workflows and their consistency property from the control-flow perspective. Passbacks indicate the redoing of work, and workflows with passbacks naturally represent human work. To define the consistency property, we define the normality of workflows with passbacks and the total correctness of normal workflows based on transition-system-based semantics of normal workflows. We further extend workflows to sorted workflows and define their vertical division and composition. We also extend total correctness to normal sorted workflows, for the sake of incremental verification of large-scale workflows with passbacks via vertical division and composition.
-
Abelyn Methanie R. LAURITO, Shingo TAKADA
Article type: PAPER
2012 Volume E95.D Issue 4 Pages
1003-1011
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
The identification of functional and non-functional concerns is an important activity during requirements analysis. However, there may be conflicts between the identified concerns, and these must be discovered and resolved through trade-off analysis. Aspect-Oriented Requirements Engineering (AORE) has trade-off analysis as one of its goals, but most AORE approaches do not actually support it; they focus on describing concerns and generating their composition. This paper proposes an approach for trade-off analysis based on AORE, using use cases and the Requirements Conflict Matrix (RCM) to represent compositions. The RCM shows the positive or negative effects of non-functional concerns on use cases and on other non-functional concerns. Our approach is implemented in a tool called E-UCEd (Extended Use Case Editor). We also show the results of evaluating our tool.
-
Shinpei HAYASHI, Daisuke TANABE, Haruhiko KAIYA, Motoshi SAEKI
Article type: PAPER
2012 Volume E95.D Issue 4 Pages
1012-1020
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
Requirements changes frequently occur at any time during a software development process, and their management is crucial for developing high-quality software. Meanwhile, goal-oriented analysis techniques are being put into practice to elicit requirements. In this situation, change management of goal graphs and support for it are necessary. This paper presents a technique for the change management of goal graphs, realizing impact analysis on a goal graph when modifications occur. Our impact analysis detects conflicts that arise when a new goal is added, and investigates the achievability of the other goals when an existing goal is deleted. We have implemented a supporting tool that automates the analysis. Two case studies suggest the effectiveness of the proposed approach.
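The deletion analysis described above, checking whether the remaining goals stay achievable, can be pictured as reachability over an AND/OR goal graph. This is only a schematic reconstruction under assumed semantics, not the paper's algorithm; the graph encoding and goal names are invented:

```python
def achievable(goal, graph, removed=frozenset()):
    """A goal decomposed by AND needs all subgoals achievable; by OR,
    any one. Leaf goals are achievable unless they were deleted."""
    if goal in removed:
        return False
    kind, children = graph.get(goal, ("LEAF", []))
    if not children:
        return True
    results = [achievable(c, graph, removed) for c in children]
    return all(results) if kind == "AND" else any(results)

graph = {
    "root": ("AND", ["g1", "g2"]),
    "g2": ("OR", ["g3", "g4"]),
}
print(achievable("root", graph))                        # True
print(achievable("root", graph, removed={"g3"}))        # True (g4 covers g2)
print(achievable("root", graph, removed={"g3", "g4"}))  # False
```

Deleting one alternative under an OR decomposition leaves the root achievable; deleting both does not, which is exactly the kind of impact such an analysis must surface.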
-
Takako NAKATANI, Narihito KONDO, Junko SHIROGANE, Haruhiko KAIYA, Shoz ...
Article type: PAPER
2012 Volume E95.D Issue 4 Pages
1021-1030
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
Requirements are elicited step by step during the requirements engineering (RE) process. However, some types of requirements are completely elicited only after the scheduled requirements elicitation process is finished; such a situation is problematic. In our study, the difficulty of eliciting various kinds of requirements is observed by component. We refer to the components as observation targets (OTs) and introduce the term “requirements maturation,” which refers to when and how requirements are completely elicited in a project. Requirements maturation is discussed for both physical and logical OTs. OTs viewed from a logical viewpoint, e.g., quality requirements, are called logical OTs. The requirements of physical OTs, e.g., modules, components, and subsystems, include functional and non-functional requirements. They are influenced by their requesters' environmental changes as well as by developers' technical changes. In order to infer the requirements maturation period of each OT, we need to know how much these factors influence the OTs' requirements maturation. Based on observations of actual past projects, we defined the PRINCE (Pre Requirements Intelligence Net Consideration and Evaluation) model, which aims to guide developers in observing the requirements maturation of OTs. We quantitatively analyzed actual cases of the requirements elicitation process and extracted the essential factors that influence requirements maturation. The results of interviews with project managers were analyzed with WEKA, a data mining system, from which a decision tree was derived. This paper introduces the PRINCE model and the categories of logical OTs to be observed. The decision tree that helps developers infer the maturation type of an OT is also described. We evaluate the tree on real projects and discuss its ability to infer requirements maturation types.
-
Haruhiko KAIYA, Atsushi OHNISHI
Article type: PAPER
2012 Volume E95.D Issue 4 Pages
1031-1043
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
Defining quality requirements completely and correctly is more difficult than defining functional requirements because stakeholders do not state most quality requirements explicitly. We thus propose a method to measure a requirements specification and identify the amount of quality requirements it contains. We also propose another method to recommend quality requirements to be defined in such a specification. We expect that stakeholders can identify missing and unnecessary quality requirements when the measured quality requirements differ from the recommended ones. We use a semi-formal language called X-JRDL to represent requirements specifications because it is suitable for analyzing quality requirements. We applied our methods to a requirements specification and found that they contribute to defining quality requirements more completely and correctly.
-
Masayuki MAKINO, Atsushi OHNISHI
Article type: PAPER
2012 Volume E95.D Issue 4 Pages
1044-1051
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
A method of generating scenarios using differential scenario information is presented. The behaviors of normal scenarios with similar purposes are quite similar to each other, while the actors and data differ among these scenarios. We derive the differential information between scenarios and apply it to generate new alternative/exceptional scenarios. Our method is illustrated with examples. This paper describes (1) a language for describing scenarios based on a simple case grammar of actions, (2) the introduction of the differential scenario, and (3) a method and examples of scenario generation using the differential scenario.
-
Chaochao FENG, Zhonghai LU, Axel JANTSCH, Minxuan ZHANG, Xianju YANG
Article type: PAPER
Subject area: Computer System
2012 Volume E95.D Issue 4 Pages
1052-1061
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
In this paper, we propose three Deflection-Routing-based Multicast (DRM) schemes for a bufferless NoC. The DRM scheme without packet replication (DRM_noPR) sends multicast packets through a non-deterministic path. The DRM schemes with adaptive packet replication (DRM_PR_src and DRM_PR_all) replicate multicast packets at the source or at intermediate nodes, according to the destination positions and the state of the output ports, to reduce the average multicast latency. We also provide fault tolerance in these schemes through a reinforcement-learning-based method that reconfigures the routing table to tolerate permanent link faults in the network. Simulation results illustrate that the DRM_PR_all scheme achieves 41%, 43%, and 37% less latency on average than the DRM_noPR scheme, and 27%, 29%, and 25% less latency on average than the DRM_PR_src scheme, under three synthetic traffic patterns respectively. In addition, all three fault-tolerant DRM schemes show acceptable performance degradation at various link fault rates without any packet loss.
-
Fuyuan XIAO, Teruaki KITASUKA, Masayoshi ARITSUGI
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2012 Volume E95.D Issue 4 Pages
1062-1073
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
We present an economical and fault-tolerant load-balancing strategy (EFTLBS) based on an operator replication mechanism and a load shedding method that fully utilizes network resources to realize continuous, highly available data stream processing over wide area networks without dynamic operator migration. In this paper, we first design an economical operator distribution (EOD) plan based on a bin-packing model under the constraints of each stream's bandwidth and each server's CPU capacity. Next, we devise super-operators (SOs) that load-balance multi-degree operator replicas. Moreover, to improve the fault tolerance of the system, we color the SOs based on a coloring bin-packing (CBP) model that assigns peer operator replicas to different servers. To minimize the effects of input rate bursts on the system, we take advantage of a load shedding method while keeping the QoS guarantees made by the system, based on the SO scheme and the CBP model. Finally, we substantiate the utility of our work through experiments on ns-3.
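The EOD plan above is framed as a bin-packing problem over server resources. As a minimal sketch of that framing, here is a first-fit-decreasing heuristic under a single CPU-capacity constraint; the operator names and loads are invented, the bandwidth constraint is omitted, and the paper's actual packing model may differ:

```python
def first_fit_decreasing(loads, capacity):
    """Place operator CPU loads onto as few servers as possible (FFD
    heuristic); each server's total load must stay within capacity."""
    servers = []      # remaining capacity per server
    placement = {}    # operator -> server index
    for op, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if load <= free:          # first server with enough room
                servers[i] -= load
                placement[op] = i
                break
        else:                         # no server fits: open a new one
            servers.append(capacity - load)
            placement[op] = len(servers) - 1
    return placement, len(servers)

loads = {"map": 0.6, "join": 0.5, "filter": 0.4, "agg": 0.3, "sink": 0.2}
placement, n = first_fit_decreasing(loads, capacity=1.0)
print(n)  # 2 servers suffice for a total load of 2.0
```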
-
Ali MORADI AMANI, Ahmad AFSHAR, Mohammad Bagher MENHAJ
Article type: PAPER
Subject area: Dependable Computing
2012 Volume E95.D Issue 4 Pages
1074-1083
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
In this paper, the problem of control reconfiguration in the presence of actuator failure, while preserving the nominal controller, is addressed. Under actuator failure, the processing of the control signal should be adapted in order to re-achieve the desired performance of the control loop. To do so, a so-called reconfiguration block is inserted into the control loop to reallocate nominal control signals among the remaining healthy actuators. This block can be either a constant mapping or a dynamical system. In both cases, it should be designed so that the states or output of the system are fully recovered. All these situations are completely analysed in this paper using a novel structural approach, leading to theorems that are supported in each section by appropriate simulations.
-
Nobutaka KITO, Shinichi FUJII, Naofumi TAKAGI
Article type: PAPER
Subject area: Dependable Computing
2012 Volume E95.D Issue 4 Pages
1084-1092
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
We propose a C-testable multiple-block carry select adder with respect to the cell fault model, in which full adders and 2:1 multiplexers are considered as cells. By adding an external input and modifying only the least significant position of each block, we obtain a C-testable carry select adder. The adder is testable with a test set of 16 patterns regardless of the size of each block and the number of blocks, which is the minimum test set for the adder. We also show two gate-level implementations of the adder that are testable with test sets of 9 and 7 patterns, respectively, with respect to the single stuck-at fault model.
-
Yoshinobu HIGAMI, Satoshi OHNO, Hironori YAMAOKA, Hiroshi TAKAHASHI, Y ...
Article type: PAPER
Subject area: Dependable Computing
2012 Volume E95.D Issue 4 Pages
1093-1100
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
In this paper, we propose a test generation method for diagnosing transition faults. The proposed method assumes launch-on-capture testing and generates test vectors for given fault pairs using a stuck-at ATPG tool so that the faults can be distinguished. If a given fault pair is indistinguishable, it is identified as such; thus the proposed method achieves complete diagnostic test generation. The conditions for distinguishing a fault pair are carefully considered and transformed into conditions for detecting a stuck-at fault, and some additional logic gates are inserted into the circuit under test (CUT) during the test generation process. Experimental results show that the proposed method can generate test vectors that distinguish fault pairs not distinguished by commercial tools, and can also identify indistinguishable fault pairs.
-
Yasuhisa FUJII, Kazumasa YAMAMOTO, Seiichi NAKAGAWA
Article type: PAPER
Subject area: Speech and Hearing
2012 Volume E95.D Issue 4 Pages
1101-1111
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
This paper presents a novel method for improving the readability of automatic speech recognition (ASR) results for classroom lectures. Because speech in a classroom is spontaneous and contains many ill-formed utterances with various disfluencies, the ASR result should be edited to improve its readability before being presented to users, by applying operations such as removing disfluencies, determining sentence boundaries, inserting punctuation marks, and repairing dropped words. Owing to the presence of many kinds of domain-dependent words and casual speaking styles, even state-of-the-art recognizers can only achieve a 30-50% word error rate for speech in classroom lectures. Therefore, a method for improving the readability of ASR results needs to be robust to recognition errors. Multiple hypotheses can be used instead of the single-best hypothesis to achieve such robustness. However, if the multiple hypotheses are represented by a lattice (or a confusion network), it is difficult to utilize sentence-level knowledge such as chunking and dependency parsing, which is imperative for determining the discourse structure and therefore for improving readability. In this paper, we propose a novel algorithm that infers clean, readable transcripts from spontaneous multiple hypotheses represented by a confusion network while integrating sentence-level knowledge. Automatic and manual evaluations showed that using multiple hypotheses and sentence-level knowledge is effective in improving the readability of ASR results while preserving understandability.
-
Byoung-Ju YUN, Hee-Dong HONG, Ho-Hyoung CHOI
Article type: PAPER
Subject area: Image Processing and Video Processing
2012 Volume E95.D Issue 4 Pages
1112-1119
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
Poor illumination and viewing conditions have negative influences on image quality, especially the contrast of dark and bright regions, so captured and displayed images usually need contrast enhancement. Histogram-based or gamma-correction-based methods are generally used for this. However, these are global contrast enhancement methods, and since the sensitivity of the human eye changes locally according to the position of objects and the illumination in the scene, global methods are limited. A spatially adaptive method is needed to overcome these limitations, which has led to the development of the integrated surround retinex (ISR) and estimation of dominant chromaticity (EDC) methods. However, these methods are based on the gray-world assumption and use a general image formation model, so their color constancy is known to yield poor results, manifested as graying-out, halo artifacts (ringing effects), and color dominance. This paper presents a contrast enhancement method using a modified image formation model in which the image is divided into three components: global illumination, local illumination, and reflectance. After applying a power-constant value to control the contrast, the output image is obtained from the product of the components so as to avoid or minimize color distortion, based on the sRGB color representation. Experimental results show that the proposed method yields better performance than conventional methods.
-
Osamu WATANABE, Takahiro FUKUHARA, Hitoshi KIYA
Article type: PAPER
Subject area: Image Processing and Video Processing
2012 Volume E95.D Issue 4 Pages
1120-1129
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
A method of identifying JPEG 2000 images coded with different parameters, such as code-block sizes, quantization step sizes, and resolution levels, is presented. It produces no false-negative matches regardless of differences in coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes, a feature not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by parsing only the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results reveal the effectiveness of image identification based on the new method.
-
Lei CHEN, Takeshi TAKAKI, Idaku ISHII
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2012 Volume E95.D Issue 4 Pages
1130-1141
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
This study investigates the effect of frame intervals on the accuracy of Lucas-Kanade optical flow estimates for high-frame-rate (HFR) videos, with a view to realizing accurate HFR-video-based optical flow estimation. For 512×512-pixel videos of patterned objects moving at different speeds and captured at 1000 frames per second, the averages and standard deviations of the estimated optical flows were determined as accuracy measures for frame intervals of 1-40 ms. The results showed that the accuracy was highest when the displacement between frames was around 0.6 pixels/frame. This common property indicates that accurate optical flow estimation for HFR videos can be realized by varying the frame interval according to the motion field: a small frame interval for high-speed objects and a large frame interval for low-speed objects.
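The takeaway above, keeping the inter-frame displacement near 0.6 pixels/frame by adapting the frame interval to object speed, implies a simple selection rule. A sketch of that rule, assuming speeds in pixels/second and the 1-40 ms interval range studied (the helper name and rounding policy are illustrative assumptions):

```python
def best_frame_interval(speed_px_per_s, target_disp=0.6,
                        min_ms=1, max_ms=40):
    """Frame interval (ms) that brings the inter-frame displacement
    closest to the target, clamped to the studied 1-40 ms range."""
    ideal_ms = 1000.0 * target_disp / speed_px_per_s
    return min(max(round(ideal_ms), min_ms), max_ms)

print(best_frame_interval(600))  # fast motion -> 1 ms (use every frame)
print(best_frame_interval(30))   # slow motion -> 20 ms (skip frames)
```

At 1000 fps capture, a 20 ms interval simply means estimating flow between every 20th frame, so the rule can be applied per region of the motion field without changing the camera settings.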
-
Ji WANG, Yuanzhi CHENG, Yili FU, Shengjun ZHOU, Shinichi TAMURA
Article type: PAPER
Subject area: Biological Engineering
2012 Volume E95.D Issue 4 Pages
1142-1150
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
We describe a multi-step approach for automatic segmentation of the femoral head and the acetabulum in the hip joint from three-dimensional (3D) CT images. Our segmentation method consists of the following steps: 1) construction of a valley-emphasized image by subtracting valleys from the original images; 2) initial segmentation of the bone regions using conventional techniques, including initial thresholding and binary morphological operations, on the valley-emphasized image; 3) further segmentation of the bone regions using iterative adaptive classification seeded with the initial segmentation result; 4) detection of rough bone boundaries based on the segmented bone regions; 5) 3D reconstruction of the bone surface from the rough bone boundaries obtained in step 4) using a network of triangles; 6) correction of all vertices of the 3D bone surface based on the normal direction of the vertices; 7) adjustment of the bone surface based on the corrected vertices. We evaluated our approach on 35 patient CT data sets. Our experimental results show that our segmentation algorithm is more accurate and more robust against noise than other conventional approaches for automatic segmentation of the femoral head and the acetabulum. The average root-mean-square (RMS) distance from manual reference segmentations created by experienced users was approximately 0.68 mm (the in-plane resolution of the CT data).
-
Dong Kwan KIM, Won-Tae KIM, Seung-Min PARK
Article type: LETTER
Subject area: Software Engineering
2012 Volume E95.D Issue 4 Pages
1151-1154
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
In this letter, we apply dynamic software updating to long-lived applications on DDS middleware while minimizing service interruption and satisfying Quality of Service (QoS) requirements. We dynamically updated applications running on a commercial DDS implementation to demonstrate the applicability of our approach. The results show that our update system does not impose undue performance overhead: all patches could be injected in less than 350 ms, and the maximum CPU usage was less than 17%. In addition, the overhead on application throughput due to dynamic updates ranged from 0 to at most 8%, and the deadline QoS of the application was satisfied during updating.
-
Jin Seok KIM, Kookrae CHO, Dae Hyun YUM, Sung Je HONG, Pil Joong LEE
Article type: LETTER
Subject area: Information Network
2012 Volume E95.D Issue 4 Pages
1155-1158
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
Traditional authentication protocols are based on cryptographic techniques to achieve identity verification. Distance bounding protocols are an enhanced type of authentication protocol built upon both signal traversal time measurement and cryptographic techniques to accomplish distance verification as well as identity verification. A distance bounding protocol is usually designed to defend against the relay attack and the distance fraud attack. As there are applications to which the distance fraud attack is not a serious threat, we propose a streamlined distance bounding protocol that focuses on the relay attack. The proposed protocol is more efficient than previous protocols and has a low false acceptance rate under the relay attack.
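Distance verification in such protocols rests on the fact that a challenge-response signal cannot travel faster than light, so the measured round-trip time of a rapid-bit-exchange round upper-bounds the prover's distance. A minimal sketch of that bound (the processing-time parameter is an illustrative simplification of the prover's response delay):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_upper_bound(rtt_s, processing_s=0.0):
    """Upper-bound the prover's distance from the round-trip time of one
    challenge-response round, after subtracting known processing delay."""
    return C * (rtt_s - processing_s) / 2.0

# A 100 ns round trip with negligible processing bounds the prover
# to within about 15 m of the verifier.
d = distance_upper_bound(100e-9)
print(round(d, 1))  # 15.0
```

A relay attack stretches this round-trip time, which is why timing the exchange defeats it even when the cryptographic exchange itself is relayed faithfully.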
-
Byungsung PARK, Jaeyeong YOO, Hagbae KIM
Article type: LETTER
Subject area: Dependable Computing
2012 Volume E95.D Issue 4 Pages
1159-1161
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
In a large queuing system, the ratio of filled data in the queue and the waiting time from the head of the queue to the service gate are important factors for process efficiency, because they are too large to ignore. However, many research works have assumed these factors to be negligible, following classical queuing theory. Thus, the existing queuing models are not applicable to the design of large-scale systems, such as a product classification center for a home delivery service. In this paper, we propose a tree-queue model for large-scale systems that supports efficient processing better than existing models. We analyze and derive a mean-waiting-time equation related to the ratio of filled data in the queue. In simulations, the proposed model demonstrated improved process efficiency and more realistic system modeling than the compared models for large-scale systems.
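The motivating observation, that head-to-gate travel time is not negligible in a large physical queue, can be illustrated with a toy calculation. This is not the paper's tree-queue equation; the parameters and the simple FIFO model are hypothetical, chosen only to show how the neglected term changes the mean waiting time.

```python
# Illustrative only: a FIFO line where each served item also spends
# `travel_time` moving from the queue head to the service gate. Classical
# models set travel_time = 0; in a large facility it accumulates.

def mean_waiting_time(capacity, fill_ratio, service_time, travel_time):
    """Mean wait seen by a new arrival joining a partially filled queue."""
    n_ahead = int(capacity * fill_ratio)   # items ahead of the new arrival
    per_item = service_time + travel_time  # gate time + head-to-gate travel
    return n_ahead * per_item

# same queue, with and without the travel term (seconds, hypothetical)
print(mean_waiting_time(1000, 0.5, 2.0, 0.0))   # travel ignored
print(mean_waiting_time(1000, 0.5, 2.0, 0.5))   # travel included: 25% more
```

Even a small per-item travel time grows linearly with the fill ratio, which is why a model that keeps this term matters for large-scale systems such as a delivery classification center.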
-
Jeong Bong SEO, Dae-Won KIM
Article type: LETTER
Subject area: Artificial Intelligence, Data Mining
2012 Volume E95.D Issue 4 Pages
1162-1165
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
Despite the benefits of the Gustafson-Kessel (GK) clustering algorithm, it becomes computationally inefficient when applied to high-dimensional data. In this letter, a parallel implementation of the GK algorithm on the GPU with CUDA is proposed. Using an optimized matrix multiplication algorithm with fast access to shared memory, the CUDA version achieved a maximum 240-fold speedup over the single-CPU version.
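The computation that dominates GK clustering, and that the letter offloads to the GPU as matrix multiplications, is the cluster-specific Mahalanobis-type distance with an adaptive covariance norm. A 2-D pure-Python sketch of that distance follows; the CUDA version batches the same matrix products in shared memory. The point and covariance values are illustrative.

```python
# Gustafson-Kessel distance sketch for n = 2 dimensions:
#   d^2 = (x - v)^T A (x - v),  A = (rho * det(C))^(1/n) * C^(-1)
# Pure Python 2x2 inverse/determinant keep the sketch self-contained.

def gk_distance_sq(x, v, cov, rho=1.0):
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]   # C^(-1)
    scale = (rho * det) ** 0.5                         # det^(1/2) for n = 2
    dx = [x[0] - v[0], x[1] - v[1]]
    # quadratic form dx^T (scale * inv) dx
    y0 = inv[0][0] * dx[0] + inv[0][1] * dx[1]
    y1 = inv[1][0] * dx[0] + inv[1][1] * dx[1]
    return scale * (dx[0] * y0 + dx[1] * y1)

# with an identity covariance this reduces to squared Euclidean distance
print(gk_distance_sq([3.0, 4.0], [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))
```

In high dimensions this quadratic form must be evaluated for every point-cluster pair each iteration, which is exactly the dense linear algebra that benefits from a shared-memory matrix-multiplication kernel.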
-
Zhuo YANG, Sei-ichiro KAMATA
Article type: LETTER
Subject area: Image Processing and Video Processing
2012 Volume E95.D Issue 4 Pages
1166-1169
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
Hypercomplex polar Fourier analysis treats a signal as a vector field and generalizes conventional polar Fourier analysis. It can handle signals represented by hypercomplex numbers, such as color images. Hypercomplex polar Fourier analysis is reversible, which means the image can be reconstructed from its coefficients. The coefficients also have a rotation-invariance property that can be used for feature extraction. However, a fast algorithm is needed to increase computation speed, especially for image processing applications such as real-time systems and resource-limited platforms. This paper presents a fast hypercomplex polar Fourier analysis based on symmetry properties and mathematical properties of the trigonometric functions. The proposed fast hypercomplex polar Fourier analysis computes symmetric points simultaneously, which significantly reduces the computation time.
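The symmetry idea can be sketched generically: one cosine evaluation serves several sample points at once, since cos(pi - t) = -cos(t), cos(pi + t) = -cos(t), and cos(2*pi - t) = cos(t). This illustrative table builder (not the paper's algorithm) quarters the trigonometric calls in a polar transform's inner loop.

```python
# Build cos(2*pi*k/n) for k = 0..n-1 with roughly n/4 math.cos calls,
# filling four symmetric table slots per evaluation.
import math

def cos_table(n):
    table = [0.0] * n
    for k in range(n // 4 + 1):
        c = math.cos(2 * math.pi * k / n)   # one trig call ...
        table[k] = c                        # ... fills four symmetric slots
        table[(n // 2 - k) % n] = -c        # cos(pi - t) = -cos(t)
        table[(n // 2 + k) % n] = -c        # cos(pi + t) = -cos(t)
        table[(n - k) % n] = c              # cos(2*pi - t) = cos(t)
    return table

t = cos_table(8)
print([round(v, 6) for v in t])
```

The same reflections apply to the sine factors, so computing the symmetric points simultaneously removes most of the per-coefficient trigonometric cost, which is the source of the reported speedup.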
-
André CAVALCANTE, Allan Kardec BARROS, Yoshinori TAKEUCHI, Nobo ...
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2012 Volume E95.D Issue 4 Pages
1170-1173
Published: April 01, 2012
Released on J-STAGE: April 01, 2012
JOURNAL
FREE ACCESS
In this letter, a new approach to segmenting depth-of-field (DoF) images is proposed. The methodology is based on a two-stage model of a visual neuron. The first stage is retinal filtering by means of a luminance-normalizing non-linearity. The second stage is V1-like filtering using filters estimated by independent component analysis (ICA). The segmented image is generated from the response activity of the neuron, measured in terms of kurtosis. Results demonstrate that the model can discriminate image parts at different levels of depth-of-field. Comparisons with other methodologies and limitations of the proposed methodology are also presented.
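The kurtosis measure used to score neuron responses can be sketched directly: sharp, in-focus structure drives a V1-like filter sparsely (a few large responses), which yields high kurtosis, while blurred regions give flatter response distributions. The response vectors below are hypothetical stand-ins for filter outputs.

```python
# Sample excess kurtosis: E[(x - mu)^4] / sigma^4 - 3.
# High kurtosis = sparse (peaky) responses, typical of in-focus detail.

def kurtosis(values):
    n = len(values)
    mu = sum(values) / n
    var = sum((v - mu) ** 2 for v in values) / n
    m4 = sum((v - mu) ** 4 for v in values) / n
    return m4 / (var ** 2) - 3.0

sparse = [0, 0, 0, 0, 10, 0, 0, 0]   # one strong response: in-focus edge
flat = [1, -1, 1, -1, 1, -1, 1, -1]  # uniform responses: blurred region
print(kurtosis(sparse) > kurtosis(flat))  # prints "True"
```

Thresholding this statistic per region is one simple way such a response map could separate in-focus from out-of-focus parts of a DoF image.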