IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E92.D, Issue 10
Special Section on New Technologies and their Applications of the Internet
• Katsuyuki YAMAZAKI
2009 Volume E92.D Issue 10 Pages 1825
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
• Ping WANG, Fuqiang LIU
Article type: PAPER
Subject area: Wireless Network
2009 Volume E92.D Issue 10 Pages 1826-1835
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
To support mobile multicasting in wireless networks, we present a new active multicasting mechanism that exploits the state characteristics of the multicast agent. In this mechanism, a multicast agent merely tracks the location of roaming hosts while it is not forwarding multicast packets. Upon receiving multicast packets, the multicast agent adjusts its service range to strike an appropriate balance between routing efficiency and the overhead of multicast tree reconstruction. Many unnecessary tree reconstructions are therefore eliminated during periods when no multicast packets are transferred, and the multicast delivery path is near-optimal because of the limited service range of the multicast agent in the active state. To demonstrate the effectiveness of the proposed scheme, we develop an analytical model to evaluate its signaling overhead. Our performance analysis shows that the proposed scheme significantly reduces system overhead while keeping multicast routing near-optimal. Another important contribution is the novel analytical approach to evaluating the performance of mobile multicast routing protocols.
• Khalid MAHMOOD, Xiaodong LU, Yuji HORIKOSHI, Kinji MORI
Article type: PAPER
Subject area: Wireless Network
2009 Volume E92.D Issue 10 Pages 1836-1846
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Location Based Services (LBS) are expected to become one of the major drivers of ubiquitous services owing to the recent inception of GPS-enabled mobile devices, the development of the Web 2.0 paradigm, and the emergence of 3G broadband networks. With this vision in mind, a Community Context-attribute-oriented Collaborative Information Environment (CCCIE) based Autonomous Decentralized Community System (ADCS) is proposed to enable the provision of services to specific users in a specific place at a specific time, considering various context attributes. This paper presents an autonomous community construction technology that flexibly shares a service discovered by one member with the others, in order to improve timeliness and reduce network cost. To meet the crucial goals of real-time, context-aware community construction (provision of services and service information to users with common interests) and of defining a flexible service area in the highly dynamic operating environment of ADCS, the proposed progressive ripple based service discovery technique introduces the novel idea of a slow, steadily advancing search followed by a swift boundary-confining mechanism, while service area construction shares the discovered service among members in the defined area to further improve timeliness and reduce network cost. Analysis and empirical results verify the effectiveness of the proposed technique.
Article type: PAPER
Subject area: Wireless Network
2009 Volume E92.D Issue 10 Pages 1847-1856
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Switching a communication path from one Access Point (AP) to another in inter-domain WLANs is a critical challenge for delay-sensitive applications such as Voice over IP (VoIP), because communication quality is likely to deteriorate during handover (HO). Maintaining VoIP quality during HO requires solving many problems. In particular, in bi-directional communication such as VoIP, an AP becomes a bottleneck as the number of VoIP calls increases. As a result, packets queued in the AP buffer may experience large queuing delays or packet losses due to the growing queue length or buffer overflow, thereby degrading VoIP quality on the Mobile Node (MN) side. To avoid this degradation, MNs need to execute HO appropriately and autonomously in response to changes in the wireless network condition, i.e., the deterioration of wireless link quality and the congestion state at the AP. In this paper, we propose an HO decision strategy that considers frame retries, AP queue length, and the transmission rate at an MN in order to maintain VoIP quality during HO. Through simulation experiments, we then show that our proposed method can maintain VoIP quality during HO by properly detecting the wireless network condition.
• Hyunku JEONG, Seungryoul MAENG, Youngsu CHAE
Article type: PAPER
Subject area: Wireless Network
2009 Volume E92.D Issue 10 Pages 1857-1866
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
With the increasing deployment of mobile devices and the advent of broadband wireless access systems such as WiBro, mWiMAX, and HSDPA, an efficient IP mobility management protocol has become one of the most important technical issues for the successful deployment of broadband wireless data networking services. The IETF has proposed Mobile IPv6 as the basic mobility management protocol for IPv6 networks. To enhance the performance of the basic MIPv6, researchers have been actively working on the HMIPv6 and FMIPv6 protocols. In this paper, we propose a new mobility management protocol, HIMIPv6 (Highly Integrated MIPv6), which tightly integrates the hierarchical mobility management of HMIPv6 and the proactive handover support of FMIPv6 to enhance handover performance, especially in cellular networking environments with highly frequent handovers. We have performed an extensive simulation study using ns-2, and the results show that the proposed HIMIPv6 outperforms FMIPv6 and HMIPv6: it causes no packet loss and no consequent service interruption during IP handover.
• Yuzo TAENAKA, Shigeru KASHIHARA, Kazuya TSUKAMOTO, Suguru YAMAGUCHI, Y ...
Article type: PAPER
Subject area: Wireless Network
2009 Volume E92.D Issue 10 Pages 1867-1876
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
• Mio SUZUKI, Hiroaki HAZEYAMA, Daisuke MIYAMOTO, Shinsuke MIWA, Youki K ...
Article type: PAPER
Subject area: Network Architecture and Testbed
2009 Volume E92.D Issue 10 Pages 1877-1887
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Building an experimental network within a testbed has been a tiresome process for experimenters, due to the complexity of physical resource assignment and the configuration overhead. Moreover, the process cannot be carried over across testbeds, because the syntax of a configuration file varies depending on the specific hardware and software. Re-configuring an experimental topology for each testbed wastes time; at worst, an experimenter may be unable to complete his/her experiments within the limited lease time of a testbed. In this paper, we propose AnyBed, an experimental network-building system. The conceptual idea of AnyBed is that if experimental network topologies can be made portable across any kind of testbed, building an experimental network on a testbed can be expedited while the experiments themselves are managed by each testbed's support tools. To realize this concept, AnyBed divides an experimental network configuration into logical and physical network topologies. By mapping these two topologies, AnyBed can build the intended logical network topology on any PC cluster. We have evaluated the AnyBed implementation using two distinct clusters. The evaluation shows that a BGP topology with 150 nodes can be constructed on a large-scale testbed in less than 113 seconds.
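The separation of logical and physical topologies can be sketched as follows. The node names, host names, and round-robin placement policy here are illustrative assumptions, not AnyBed's actual mapping algorithm.

```python
# Hypothetical sketch of the core separation behind AnyBed: a logical
# experiment topology is described once, then mapped onto whatever
# physical cluster happens to be available.

def map_topology(logical_nodes, physical_hosts):
    """Round-robin assignment of logical nodes to physical hosts."""
    if not physical_hosts:
        raise ValueError("no physical hosts available")
    return {node: physical_hosts[i % len(physical_hosts)]
            for i, node in enumerate(logical_nodes)}

# The same 4-node logical topology can be placed on any cluster size.
placement = map_topology(["as1", "as2", "as3", "as4"], ["pc01", "pc02"])
```

Because the logical description never mentions physical hosts, moving an experiment to another cluster only changes the second argument.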
• Sergio CARRILHO, Hiroshi ESAKI
Article type: PAPER
Subject area: Network Architecture and Testbed
2009 Volume E92.D Issue 10 Pages 1888-1896
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Access to information is taken for granted in urban areas covered by a robust communication infrastructure. Nevertheless, most areas in the world are not covered by such infrastructures. We propose a DTN publish/subscribe system called Hikari, which exploits node mobility to distribute messages without relying on a robust infrastructure. The field of Disruption/Delay Tolerant Networks (DTN) focuses on providing connectivity between locations separated by networks with disruptions and delays. The Hikari system does not use node identifiers for message forwarding, thus eliminating the routing complexity associated with many DTN forwarding schemes. Hikari uses information on nodes' paths, advertised by special nodes in the system or predicted by the system itself, to optimize the message dissemination process. We used the Paris subway system, because of its complexity, to validate Hikari and analyze its performance. We have shown that Hikari achieves a superior delivery rate while keeping the number of redundant messages in the system low, which is ideal when devices with limited resources are used for message dissemination.
• Sho FUJITA, Keiichi SHIMA, Yojiro UO, Hiroshi ESAKI
Article type: PAPER
Subject area: Network Architecture and Testbed
2009 Volume E92.D Issue 10 Pages 1897-1904
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
We present a decentralized VPN service that can be built over generalized mobile ad-hoc networks (Generalized MANETs), whose topologies can be represented as a time-varying directed multigraph. We address wireless ad-hoc networks and overlay ad-hoc networks as instances of Generalized MANETs. We first propose an architecture for operating on various kinds of networks through a single set of operations. We then design and implement a decentralized VPN service on the proposed architecture. Through the development and operation of a prototype system, we found that the proposed architecture makes the VPN service applicable to each instance of Generalized MANETs, and that the VPN service allows unmodified applications to operate on these networks.
• Ronggang BAI, Yusheng JI, Zhiting LIN, Qinghua WANG, Xiaofang ZHOU, Yu ...
Article type: PAPER
Subject area: Network Architecture and Testbed
2009 Volume E92.D Issue 10 Pages 1905-1914
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Unlike other networks, wireless sensor networks have rather predictable data traffic loads and directions, and the relationships between nodes are cooperative rather than competitive. These features allow a protocol stack to be designed in a cross-layer interactive way instead of as a hierarchical structure. The proposed cross-layer protocol, CLWSN, optimizes channel allocation in the MAC layer using information from the routing tables, reducing the conflicting set and improving throughput. Simulations revealed that it outperforms SMAC and MINA in terms of delay and energy consumption.
• Pi-Chung WANG
Article type: PAPER
Subject area: QoS and Quality Management
2009 Volume E92.D Issue 10 Pages 1915-1922
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Packet classification categorizes incoming packets into multiple forwarding classes based on pre-defined filters. This categorization makes information accessible for quality of service or security handling in the network. In this paper, we propose a scheme which combines the Aggregate Bit Vector algorithm and the Pruned Tuple Space Search algorithm to improve the performance of packet classification in terms of speed and storage. We also present the procedures of incremental update. Our scheme is evaluated with filter databases of varying sizes and characteristics. The experimental results demonstrate that our scheme is feasible and scalable.
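The Pruned Tuple Space Search half of this combination can be illustrated with a toy sketch: filters are grouped by their prefix-length pair ("tuple"), and each tuple is probed with a single hash lookup. The two-field filters and bit-string prefixes are simplifying assumptions of this sketch, and the Aggregate Bit Vector part is not shown.

```python
# Toy Tuple Space Search: filters keyed by (src_len, dst_len) tuples,
# each tuple probed with one exact-match hash lookup.

def build_tuple_space(filters):
    """filters: [(src_prefix, dst_prefix, action)] with bit-string prefixes."""
    space = {}
    for sp, dp, action in filters:
        space.setdefault((len(sp), len(dp)), {})[(sp, dp)] = action
    return space

def classify(src, dst, space):
    """Probe every tuple; return the actions of all matching filters."""
    results = []
    for (sl, dl), table in space.items():
        action = table.get((src[:sl], dst[:dl]))  # hash lookup per tuple
        if action is not None:
            results.append(action)
    return results

space = build_tuple_space([("10", "0", "A"), ("101", "01", "B"), ("0", "", "C")])
```

Pruning (not shown) would skip tuples that cannot contain a match for the given packet, reducing the number of probes.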
• Ryota KAWASHIMA, Yusheng JI, Katsumi MARUYAMA
Article type: PAPER
Subject area: QoS and Quality Management
2009 Volume E92.D Issue 10 Pages 1923-1933
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Networking technologies have been evolving rapidly, and network applications are now expected to support flexible composition of upper-layer network services such as security, QoS, and personal firewalls. We propose a multi-platform framework called FreeNA that extends existing applications by incorporating such services based on user definitions. This extension requires no modification of users' systems, which makes FreeNA valuable for experimental system usage. We implemented FreeNA on both Linux and Microsoft Windows operating systems, and evaluated its functionality and performance. In this paper, we describe the design and implementation of FreeNA, including details of how to insert network services into existing applications and how to create services in a multi-platform environment. We also give an example implementation of a service with SSL, a functional comparison with relevant systems, and our performance evaluation results. The results show that FreeNA offers finer configurability, composability, and usability than similar systems. We also show that the throughput degradation caused by transparent service insertion is at most 2% compared with inserting such services directly into applications.
• Jun NISHIOKA, Satoru YAMANO
Article type: PAPER
Subject area: QoS and Quality Management
2009 Volume E92.D Issue 10 Pages 1934-1944
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
• Wook KIM, Heungwoo NAM, Sunshin AN
Article type: LETTER
Subject area: QoS and Quality Management
2009 Volume E92.D Issue 10 Pages 1945-1948
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
IEEE 802.15.4 is a new standard uniquely designed for low-rate wireless personal area networks (LR-WPANs). It targets ultra-low complexity, cost, and power for low-data-rate wireless connectivity. However, one of the main problems of this new standard is its insufficient and inefficient media access control (MAC) for priority data. This paper introduces an extended contention access period (XCAP) concept for priority packets, as well as a traffic-adaptive contention differentiation scheme utilizing the XCAP (TACDX). TACDX selects an appropriate transmission policy according to the traffic conditions and the type of packet. TACDX not only enhances the transmission of priority packets but also achieves high energy efficiency for the overall network. The proposed TACDX is verified with simulations that measure its performance.
• Yuki SAKAI, Masato UCHIDA, Masato TSURU, Yuji OIE
Article type: LETTER
Subject area: QoS and Quality Management
2009 Volume E92.D Issue 10 Pages 1949-1952
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
A basic and inevitable problem in estimating flow duration distribution arises from “censoring” (i.e., cutting off) the observed flow duration because of a finite measurement period. We extended the Kaplan-Meier method, which is used in the survival analysis field, and applied it to recover information on the flow duration distribution that was lost due to censoring. We show that the flow duration distribution from a short period of actual traffic data with censoring that was estimated using a Kaplan-Meier-based method can approximate well the flow duration distribution calculated from a sufficiently long period of actual traffic data.
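The idea above can be sketched with a minimal Kaplan-Meier estimator. The data layout (parallel lists of observed durations and completion flags, where a censored flow is one cut off by the end of the measurement period) is an assumption of this sketch, not the authors' exact formulation.

```python
# Minimal Kaplan-Meier sketch for censored flow durations.
# observed[i] is False when flow i was still running at the end of the
# measurement period (right-censored), True when it actually ended.

def kaplan_meier(durations, observed):
    """Return [(t, S(t))]: survival probability after each real termination."""
    events = sorted(zip(durations, observed))
    n_at_risk = len(events)
    survival = 1.0
    curve = []
    for t, complete in events:
        if complete:                              # an actual flow termination
            survival *= (n_at_risk - 1) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= 1                            # censored flows leave the risk set
    return curve

curve = kaplan_meier([1, 2, 3, 4], [True, True, False, True])
```

The censored flow at t=3 contributes no drop in S(t) but still shrinks the risk set, which is exactly the information a naive histogram of truncated durations would discard.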
• Shuzhuang ZHANG, Hao LUO, Binxing FANG, Xiaochun YUN
Article type: PAPER
Subject area: DRM and Security
2009 Volume E92.D Issue 10 Pages 1953-1960
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Scanning packet payloads at high speed has become a crucial task in modern network management due to its wide variety of applications in network security and application-specific services. Traditionally, deterministic finite automata (DFAs) are used to perform this operation in linear time. However, the memory requirements of DFAs are prohibitively high for the patterns used in practical packet scanning, especially when many patterns are compiled into a single DFA. Existing solutions to this memory blow-up trade memory requirements against the number of memory accesses per input character. In this paper we propose a novel method that drastically reduces the memory requirements of DFAs while still maintaining high matching speed and providing worst-case guarantees. We remove duplicate transitions between states by dividing the DFA states into a number of groups and letting each group of states share a merged transition table. We also propose an efficient algorithm for transition sharing between states. Its high efficiency in time and space makes our approach well suited to frequently updated DFAs. We performed several experiments on real-world rule sets. Overall, for all rule sets evaluated, our approach offers the best memory versus run-time trade-offs.
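The transition-sharing idea can be shown with a toy sketch: states keep only the transitions that differ from a shared merged table. The grouping policy below (a single group with a majority-vote merged table) is an illustrative assumption, not the paper's grouping algorithm.

```python
# Toy transition sharing: one merged table per group, plus per-state
# exception lists holding only the transitions that differ from it.
from collections import Counter

def compress(dfa):
    """dfa: {state: {symbol: next_state}} -> (shared_table, exceptions)."""
    symbols = {s for trans in dfa.values() for s in trans}
    shared = {}
    for sym in symbols:
        targets = Counter(trans[sym] for trans in dfa.values())
        shared[sym] = targets.most_common(1)[0][0]   # majority target
    exceptions = {
        state: {s: t for s, t in trans.items() if t != shared[s]}
        for state, trans in dfa.items()
    }
    return shared, exceptions

def step(state, sym, shared, exceptions):
    """One DFA transition using the compressed representation."""
    return exceptions[state].get(sym, shared[sym])

dfa = {0: {"a": 1, "b": 0}, 1: {"a": 1, "b": 2}, 2: {"a": 1, "b": 0}}
shared, exceptions = compress(dfa)
```

In this tiny example the six original transitions shrink to a two-entry shared table plus one exception, while `step` still runs in constant time per input character.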
• Akira YAMADA, Ayumu KUBOTA, Yutaka MIYAKE, Kazuo HASHIMOTO
Article type: PAPER
Subject area: DRM and Security
2009 Volume E92.D Issue 10 Pages 1961-1970
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Using Web-based content management systems such as blogs, an end user can easily publish User Generated Content (UGC). Although publishing UGC is easy, controlling access to it is a difficult problem for end users. Currently, most blog sites offer no access control mechanism, and even when one is available, it cannot control users who do not have an account at the site, let alone accesses to content hosted by other UGC sites. In this paper, we propose a new access control architecture for UGC, in which third-party entities can offer an access control mechanism to users independently of the UGC hosting sites. With this architecture, a user can control access to his content even when it is spread over many different UGC sites, regardless of whether those sites have access control mechanisms of their own. The key idea for separating the access control mechanism from UGC sites is to apply cryptographic access control, and we implemented this idea in such a way that it requires no modification to UGC sites or Web browsers. Our prototype implementation shows that the proposed access control architecture can be easily deployed in the current Web-based communication environment and works quite well with popular blog sites.
• Sunghyun KIM, Heejo LEE
Article type: PAPER
Subject area: DRM and Security
2009 Volume E92.D Issue 10 Pages 1971-1978
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Network intrusion detection systems rely on a signature-based detection engine. When under attack or during heavy traffic, the detection engine needs to decide quickly whether a packet or a sequence of packets is normal or malicious. However, if packets carry heavy payloads or the system holds a great number of attack patterns, the high cost of payload inspection severely diminishes detection performance. It is therefore better to avoid unnecessary payload scans by first checking the protocol fields in the packet header, before executing the heavy operations of payload inspection. When payload inspection is necessary, it is better to compare a minimum number of attack patterns. In this paper, we propose new methods to classify attack signatures and build pre-computed multi-pattern groups. Based on IDS rule analysis, we group the signatures of attack rules using a multi-dimensional classification method adapted to a simplified address flow. The proposed methods reduce unnecessary payload scans and produce lightweight pattern groups to be checked. While the performance improvement depends on the networking environment, experimental results with the DARPA data set and university traffic show that the proposed methods outperform the most recent Snort by up to 33%.
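The header-first filtering idea can be sketched as follows. The rule fields (protocol, destination port, payload pattern) are a simplification for illustration, not Snort's actual rule syntax or the paper's multi-dimensional classification.

```python
# Sketch of pre-computed pattern groups: signatures are grouped by
# header fields, so a packet's payload is scanned only against the
# patterns whose header constraints it already satisfies.

def build_groups(rules):
    """rules: [(proto, dst_port, pattern)] -> {(proto, dst_port): [patterns]}"""
    groups = {}
    for proto, port, pattern in rules:
        groups.setdefault((proto, port), []).append(pattern)
    return groups

def match(packet, groups):
    """Cheap header lookup first; payload scan only within one group."""
    patterns = groups.get((packet["proto"], packet["dst_port"]), [])
    return [p for p in patterns if p in packet["payload"]]

groups = build_groups([
    ("tcp", 80, "cmd.exe"),
    ("tcp", 80, "/etc/passwd"),
    ("udp", 53, "version.bind"),
])
```

A packet that matches no group's header key skips payload inspection entirely, which is where the claimed savings come from.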
• Ilsun YOU, Kouichi SAKURAI, Yoshiaki HORI
Article type: LETTER
Subject area: DRM and Security
2009 Volume E92.D Issue 10 Pages 1979-1982
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Recently, Kempf and Koodli proposed a security protocol for Fast Mobile IPv6 (FMIPv6). By means of the SEcure Neighbor Discovery (SEND) protocol, it achieves secure distribution of a handover key and has consequently become a security standard for FMIPv6. However, it is still vulnerable to redirection attacks. In addition, because of the SEND protocol, it suffers from denial-of-service attacks and expensive computational cost. In this paper, we present a security protocol that enhances Kempf and Koodli's protocol with the help of the AAA infrastructure.
• Byung-Rae LEE
Article type: LETTER
Subject area: DRM and Security
2009 Volume E92.D Issue 10 Pages 1983-1985
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Content sharing mechanisms in current DRM systems are based on a domain in which multiple devices hold the same domain key. This can lead to a security weakness, as the failure of one device means revocation of the domain itself. Furthermore, if a device leaves the domain, all the other devices must update their domain key, which also creates an efficiency problem. This paper proposes a new content-sharing scheme based on user-binding DRM that does not use a domain key. The proposed scheme improves on previous domain technology in terms of security and efficiency, as it removes the domain key and allows content sharing only among multiple devices owned by the same user.
• Ho-Joong KIM, Seungryoul MAENG
Article type: PAPER
Subject area: Parallel and Distributed Architecture
2009 Volume E92.D Issue 10 Pages 1986-1998
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
We propose an Edge-write architecture that eagerly propagates updates for requests arriving at the corresponding secondary server, while lazily propagating updates from other secondary servers. Our architecture resolves the consistency problems caused by read/update decoupling in conventional lazy-update-propagation systems. It also improves overall scalability by alleviating the performance bottleneck at the primary server, in exchange for an increased but bounded response time. Such relaxed consistency management enables a read request to choose between reading the replicated data immediately and refreshing it first. We use the age of a local data copy as the freshness factor, so that a secondary server can make freshness-control decisions independently. As a result, our freshness-controlled Edge-write architecture benefits from adjusting the trade-off between response time and data correctness.
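The age-based freshness decision can be sketched as follows, assuming each secondary server records when its local copy was last refreshed. The class, field, and parameter names are illustrative assumptions, not the paper's API.

```python
# Sketch of age-based freshness control at a secondary server: serve
# the local copy while it is young enough, otherwise refresh it from
# the primary. Time is passed in explicitly to keep the sketch testable.

class Replica:
    def __init__(self, fetch_from_primary, max_age=5.0):
        self.fetch = fetch_from_primary   # callable that reads the primary
        self.max_age = max_age            # freshness threshold in seconds
        self.value = None
        self.refreshed_at = float("-inf")

    def read(self, now):
        """Serve the local copy if fresh enough; otherwise refresh first."""
        if now - self.refreshed_at > self.max_age:
            self.value = self.fetch()     # copy too old: refresh from primary
            self.refreshed_at = now
        return self.value                 # fresh enough: answer locally

fetch_count = [0]
def fetch():
    fetch_count[0] += 1
    return fetch_count[0]

replica = Replica(fetch, max_age=5.0)
```

Because the decision needs only the local timestamp, each secondary server can tune `max_age` independently, which is the trade-off knob between response time and data correctness described above.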
• Yusuke DOI, Shirou WAKAYAMA, Satoshi OZAKI
Article type: PAPER
Subject area: Parallel and Distributed Architecture
2009 Volume E92.D Issue 10 Pages 1999-2006
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
To realize huge-scale information services, many Distributed Hash Table (DHT) based systems have been proposed. For example, there are proposals to manage item-level product traceability information with DHTs. In such an application, each of a huge number of item-level ID entries needs to be available on a DHT. To ensure data availability, the soft-state approach has been employed in previous work. However, it does not scale well with the number of entries on a DHT. As we expect 10^10 products in the traceability case, the soft-state approach is unacceptable. In this paper, we propose Distributed-to-Distributed Data Copy (D3C). With D3C, users can reconstruct the data when they detect data loss, or even migrate to another DHT system. We show why it scales well with the number of entries on a DHT. We have confirmed our approach with a prototype. The evaluation shows that our approach fits well on a DHT with a low failure rate and a huge number of data entries.
Regular Section
• Keiichirou KUSAKARI, Yasuo ISOGAI, Masahiko SAKAI, Frédé ...
Article type: PAPER
Subject area: Computation and Computational Models
2009 Volume E92.D Issue 10 Pages 2007-2015
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs) are computational models of functional programs. We recently proposed an extremely powerful method, the static dependency pair method, which is based on the notion of strong computability, in order to prove termination in STRSs. In this paper, we extend the method to HRSs. Since HRSs include λ-abstraction but STRSs do not, we restructure the static dependency pair method to allow λ-abstraction, and show that the static dependency pair method also works well on HRSs without new restrictions.
• Gen NISHIKAWA, Tomoko IZUMI, Fukuhito OOSHITA, Hirotsugu KAKUGAWA, Tos ...
Article type: PAPER
Subject area: Computation and Computational Models
2009 Volume E92.D Issue 10 Pages 2016-2024
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Wireless LANs, which consist of access points and wireless stations, have spread widely in recent years. Routing in wireless LANs faces the problem of each wireless station having to select an access point and a wired path to its destination station. An adaptive routing protocol is desirable for wireless LANs, since communication throughput is dynamically affected by the selections of other wireless stations and by external environmental changes. In this paper, we propose a routing protocol for wireless LANs based on attractor selection. Attractor selection is a biologically inspired approach with high adaptability to dynamic environmental changes. By applying attractor selection, each wireless station can adaptively select an access point and wired path with high throughput despite environmental changes. In addition, we design the protocol with a new technique: the combination of multiple attractor selections. This technique is useful because it enables us to divide a problem into several simpler problems. To the best of our knowledge, our protocol is the first designed around a combination of multiple attractor selections. We show the effectiveness and adaptability of our protocol by simulations.
• Joong Chae NA, Ji Eun KIM, Kunsoo PARK, Dong Kyue KIM
Article type: PAPER
Subject area: Algorithm Theory
2009 Volume E92.D Issue 10 Pages 2025-2033
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Succinct representation is a space-efficient method of representing n discrete objects using space close to the information-theoretic lower bound. In order to directly access the i-th object of succinctly represented data structures in constant time, two fundamental functions, rank and select, are commonly used. In this paper we propose two implementations supporting rank and select in constant time for non-compressed bit strings. One uses $O(n \log\log n / \sqrt{\log n})$ bits of extra space and the other uses $n + O(n \log\log n / \log n)$ bits of extra space in the worst case. The former is a rather theoretical algorithm, while the latter is a practical algorithm that works faster and uses less space in practice.
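The rank/select interface can be illustrated with a deliberately simple (non-succinct) sketch: rank1(i) counts the 1s in bits[0:i], and select1(k) returns the position of the k-th 1 (1-based). The block size and the O(n)-bit cumulative counts are assumptions of this sketch; the paper's structures achieve the same queries with o(n) extra bits.

```python
# Simple block-based rank, linear-scan select, to show the interface.
BLOCK = 8

def build(bits):
    """Precompute cumulative 1-counts at block boundaries."""
    counts = [0]
    for i in range(0, len(bits), BLOCK):
        counts.append(counts[-1] + bits[i:i + BLOCK].count(1))
    return counts

def rank1(bits, counts, i):
    """Number of 1s in bits[0:i]: one table lookup plus an in-block count."""
    b = i // BLOCK
    return counts[b] + bits[b * BLOCK:i].count(1)

def select1(bits, counts, k):
    """Position of the k-th 1 (1-based); real designs sample positions."""
    for i, bit in enumerate(bits):
        k -= bit
        if k == 0:
            return i
    return -1

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
counts = build(bits)
```

Constant-time versions replace the in-block count with precomputed sub-block tables, which is where the extra-space bounds quoted above come from.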
• Anh-Tuan HOANG, Katsuhiro YAMAZAKI, Shigeru OYANAGI
Article type: PAPER
Subject area: VLSI Systems
2009 Volume E92.D Issue 10 Pages 2034-2042
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
The Secure Hash Algorithm 512 (SHA-512), which is used to verify the integrity of a message, involves computational iterations on data. The long computation delay generated by these iterations limits the overall throughput of the system and makes the computation difficult to pipeline. We describe a way to pipeline the computation using fine-grained pipelining with balanced critical paths. In this method, one critical path is broken into two stages by data forwarding, and the other critical path is broken into three stages by computation postponement. The resulting critical paths all have two adder layers with some data movement, and are thus balanced. In addition, the method allows a reduction in registers. The similarity between SHA-384 and SHA-512 is also exploited in a multi-mode design, which can generate a message digest for both versions with the same throughput and only a small increase in hardware size. Experimental results show that our implementation achieved not only the best area-performance ratio (throughput divided by area), but also a higher throughput than almost all related work.
• Iván GARCÍA-MAGARIÑO, Rubén FUENTES-FERNÁ ...
Article type: PAPER
Subject area: Fundamentals of Software and Theory of Programs
2009 Volume E92.D Issue 10 Pages 2043-2052
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Model-Driven Engineering and Domain-Specific Modeling Languages are encouraging an increased use of metamodels for the definition of languages and tools. Although the Meta Object Facility language is the standard for metamodeling, there are alternative metamodeling languages aimed at satisfying specific requirements. In this context, sharing information across different domains and tools requires the ability to translate models not only between modeling languages defined with the same metamodeling language, but also between different metamodeling languages. This paper addresses the latter need by describing a general technique for defining transformations that perform this translation. Two case studies illustrate the application of this process.
• Yangwoo ROH, Jaesub KIM, Kyu Ho PARK
Article type: PAPER
Subject area: Fundamentals of Software and Theory of Programs
2009 Volume E92.D Issue 10 Pages 2053-2063
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Applications usually have their own phases of heap memory usage. The traditional garbage collector fails to match the various application phases because the same heuristic on object behavior is used throughout the entire execution. This paper introduces a phase-adaptive garbage collector that reorganizes the heap layout and adjusts the invocation time of garbage collection according to the phases. The proposed collector identifies phases by detecting the application methods strongly related to phase boundaries. The experimental results show that the proposed phase-adaptive collector successfully recognizes application phases and improves garbage collection time by as much as 41%.
• Hisashi KURASAWA, Atsuhiro TAKASU, Jun ADACHI
Article type: PAPER
Subject area: Contents Technology and Web Information Systems
2009 Volume E92.D Issue 10 Pages 2064-2072
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Although a distributed index on a distributed hash table (DHT) enables efficient document query processing in peer-to-peer information retrieval (P2P IR), such an index is costly to construct and tends to be managed unfairly because of the unbalanced term frequency distribution. We devised a new distributed index, named Huffman-DHT, for P2P IR. The new index uses an algorithm similar to Huffman coding, with a modification to the DHT structure based on the term distribution. In a Huffman-DHT, a frequent term is assigned a short ID and allocated a large portion of the node ID space of the DHT. Through this ID management, the Huffman-DHT balances index registration accesses among peers and reduces load concentration. Huffman-DHT is the first approach to apply concepts from coding theory and the term frequency distribution to load balancing. We evaluated this approach in experiments using a document collection and assessed its load-balancing capabilities in P2P IR. The experimental results indicate that it is most effective when the P2P system consists of about 30,000 nodes and contains many documents. Moreover, we show that a Huffman-DHT can be constructed easily by estimating the probability distribution of term occurrences from a small number of sample documents.
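The ID-assignment idea can be sketched with standard Huffman coding: a frequent term gets a shorter code, and a shorter ID prefix covers a larger fraction (2^-len) of the DHT's ID space, so the load of a popular term spreads over more peers. The term set and counts below are illustrative assumptions; only the use of Huffman-style codes as DHT ID prefixes reflects the abstract.

```python
# Standard Huffman coding over term frequencies; the resulting
# bit-string becomes the term's ID prefix in the DHT ID space.
import heapq

def huffman_codes(freqs):
    """freqs: {term: count} -> {term: bit-string code}."""
    codes = {t: "" for t in freqs}
    heap = [(f, [t]) for t, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, g1 = heapq.heappop(heap)       # two least frequent groups
        f2, g2 = heapq.heappop(heap)
        for t in g1:
            codes[t] = "0" + codes[t]      # prepend a bit to merged groups
        for t in g2:
            codes[t] = "1" + codes[t]
        heapq.heappush(heap, (f1 + f2, g1 + g2))
    return codes

codes = huffman_codes({"the": 8, "dht": 2, "p2p": 1, "net": 1})
```

With these counts, the most frequent term receives a 1-bit prefix and hence half of the ID space, while each rare term is confined to a 1/8 slice.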
• Yi WAN, Takuya ASAKA, Tatsuro TAKAHASHI
Article type: PAPER
Subject area: Networks
2009 Volume E92.D Issue 10 Pages 2073-2081
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
User Generated Content (UGC) VoD services such as YouTube are becoming more and more popular, and their maintenance costs are growing as well. Many P2P solutions have been proposed to reduce server load in such systems, but almost all of them take a single-video approach, which has only a limited effect on systems serving short videos such as UGC. The purpose of this paper is to investigate the potential of an alternative, the multi-video approach. Using a very simple method called collaborative caching, we show that methods taking the multi-video approach are generally more suitable for current UGC VoD systems. We also study the influence of the major design factors through simulations and provide guidelines for efficiently building systems with this method.
• Mi-Young PARK, Sang-Hwa CHUNG, Prasanthi SREEKUMARI
Article type: PAPER
Subject area: Networks
2009 Volume E92.D Issue 10 Pages 2082-2093
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
When TCP operates in multi-hop wireless networks, it suffers severe performance degradation because it reacts to wireless packet losses by unnecessarily decreasing its sending rate. Although previous loss differentiation algorithms (LDAs) can identify some packet losses due to wireless transmission errors as wireless losses, their accuracy is not as high as expected, and these schemes sacrifice the accuracy of congestion loss discrimination by misclassifying congestion losses as wireless losses. In this paper, we propose a new end-to-end loss differentiation scheme with high accuracy in both wireless loss discrimination and congestion loss discrimination. Our scheme estimates the rate of queue usage using information available to TCP. If the estimated queue usage is larger than 50% when a packet is lost, our scheme diagnoses the loss as a congestion loss; otherwise, it diagnoses the loss as a wireless loss. Because the estimated queue usage is highly correlated with congestion, our scheme can more exactly distinguish packet losses related to congestion from those unrelated to it. Through extensive simulations, we compare and evaluate our scheme against previous LDAs in terms of correlation, accuracy, and stability. The results show that our scheme achieves the highest accuracy and that its accuracy is more reliable than that of the other LDAs.
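The decision rule stated in the abstract (queue usage above 50% at loss time implies a congestion loss) can be sketched as follows. The queue-usage estimator here is a deliberate placeholder — a common RTT-based proxy, not necessarily the estimator the paper derives from TCP-visible information — so treat it as an assumption for illustration only.

```python
def estimate_queue_usage(rtt, rtt_min, rtt_max):
    """Rough proxy for bottleneck queue usage from end-to-end RTT samples:
    how far the current RTT sits between the observed minimum (empty queue)
    and maximum (full queue).  Placeholder for the paper's actual estimator."""
    if rtt_max <= rtt_min:
        return 0.0
    return (rtt - rtt_min) / (rtt_max - rtt_min)

def classify_loss(queue_usage, threshold=0.5):
    """Abstract's rule: estimated queue usage above 50% at the moment of a
    packet loss -> congestion loss; otherwise -> wireless loss."""
    return "congestion" if queue_usage > threshold else "wireless"
```

For example, a loss observed while the RTT is near its maximum (queue nearly full) is classified as congestion, while a loss at near-minimum RTT is classified as wireless.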
• Ukrit WATCHAREERUETAI, Tetsuya MATSUMOTO, Noboru OHNISHI, Hiroaki KUDO ...
Article type: PAPER
Subject area: Artificial Intelligence and Cognitive Science
2009 Volume E92.D Issue 10 Pages 2094-2102
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
We propose a learning strategy that accelerates the learning speed of genetic programming (GP), named hierarchical structure GP (HSGP). The HSGP exploits multiple learning nodes (LNs) connected in a hierarchical structure, e.g., a binary tree. Each LN runs a conventional evolutionary process to evolve its own population and sends the evolved population to the connected higher-level LN. A lower-level LN evolves its population with a smaller subset of the training data. The higher-level LN then integrates the evolved populations from the connected lower-level LNs and evolves the integrated population further using a larger subset of the training data. In HSGP, evolutionary processes are executed sequentially from the bottom-level LNs to the top-level LN, which evolves with the entire training data. In the experiments, we apply conventional GPs and HSGPs to evolve image recognition programs for given training images. The results show that hierarchical structure learning can significantly improve the learning speed of GPs: to achieve the same performance, the HSGPs need only 30-40% of the computation cost needed by conventional GPs.
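The data-subset schedule described above — leaves evolve on small subsets, merged populations evolve on larger ones — can be sketched for a one-level binary tree. Everything below is an assumption-laden caricature: individuals are plain sets of "covered" training points with a deterministic greedy mutation, standing in for the GP program trees and genetic operators the paper actually uses; only the hierarchical schedule is illustrated.

```python
def evolve(population, data, generations):
    """Stand-in for one LN's evolutionary run on a data subset.
    Fitness = number of points of `data` an individual covers; the
    'mutation' deterministically adds the smallest uncovered point."""
    def fitness(ind):
        return len(ind & data)
    for _ in range(generations):
        parent = max(population, key=fitness)
        uncovered = sorted(data - parent)
        child = set(parent) | ({uncovered[0]} if uncovered else set())
        # keep the fittest individuals, preserving the population size
        population = sorted(population + [child], key=fitness,
                            reverse=True)[:len(population)]
    return population

def hsgp_two_leaves(full_data, pop_size=4):
    """One-level binary tree: two leaf LNs evolve on halves of the training
    data, then the root LN evolves the merged population on all of it."""
    data = sorted(full_data)
    half = len(data) // 2
    left = evolve([set() for _ in range(pop_size)], set(data[:half]), half)
    right = evolve([set() for _ in range(pop_size)], set(data[half:]), half)
    # root: integrate both evolved populations, evolve on the full set
    return evolve(left + right, set(full_data), len(data))
```

The key point the sketch preserves is that the root starts from populations already adapted to subsets, so it converges on the full data faster than evolving from scratch.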
• Wen-Bing HORNG, Chun-Wen CHEN
Article type: PAPER
Subject area: Pattern Recognition
2009 Volume E92.D Issue 10 Pages 2103-2111
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Boundary-based corner detection has been widely applied in spline curve fitting, automated optical inspection, image segmentation, object recognition, etc. To obtain good results, users usually need to adjust the length of the region of support to resist the zigzags caused by quantization and random noise on digital boundaries. To determine the length of the region of support for corner detection automatically, Teh-Chin and Guru-Dinesh presented adaptive approaches based on local properties of boundary points. However, these local-property-based approaches are sensitive to noise. In this paper, we propose a new approach that finds the optimum length of the region of support for corner detection based on a statistical discriminant criterion. Because our approach is based on a global view of all boundary points rather than the local properties of some points, the experiments show that the determined length of the region of support increases as the noise intensity strengthens. In addition, the corners detected using the optimum length of the region of support are consistent with human experts' judgment, even for noisy boundaries.
• Dipankar DAS, Yoshinori KOBAYASHI, Yoshinori KUNO
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2009 Volume E92.D Issue 10 Pages 2112-2121
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
This paper proposes an integrated approach to the simultaneous detection and localization of multiple object categories using both generative and discriminative models. Our approach first generates a set of hypotheses for each object category using a generative model (pLSA) with a bag of visual words representing each object. Based on the variation of objects within a category, the pLSA model automatically fits an optimal number of topics. The discriminative part then verifies each hypothesis using a multi-class SVM classifier with merging features that combine the spatial shape and appearance of an object. In the post-processing stage, environmental context information, along with the probabilistic output of the SVM classifier, is used to improve the overall performance of the system. Our integrated approach with merging features and context information allows reliable detection and localization of various object categories in the same image. The performance of the proposed framework is evaluated on several standard datasets (MIT-CSAIL, UIUC, TUD, etc.) and the authors' own dataset. In experiments we achieved results superior to some state-of-the-art methods on a number of standard datasets. An extensive experimental evaluation on up to ten diverse object categories over thousands of images demonstrates that our system detects and localizes multiple objects within an image in the presence of cluttered backgrounds, substantial occlusion, and significant scale changes.
• Sutee SUDPRASERT, Asanee KAWTRAKUL, Christian BOITET, Vincent BERMENT
Article type: PAPER
Subject area: Natural Language Processing
2009 Volume E92.D Issue 10 Pages 2122-2136
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
In this paper, we present a new dependency parsing method for languages that have only a very small annotated corpus and for which segmentation and morphological analysis methods producing a unique (automatically disambiguated) result are very unreliable. Our method works on a morphosyntactic lattice factorizing all possible segmentation and part-of-speech tagging results. The quality of the input to syntactic analysis is hence much better than that of an unreliable unique sequence of lemmatized and tagged words. We propose an adaptation of Eisner's algorithm for finding the k-best dependency trees in a morphosyntactic lattice structure encoding multiple results of morphosyntactic analysis. Moreover, we show how to use a Dependency Insertion Grammar to adjust the scores and filter out invalid trees, how to use a language model to rescore the parse trees, and how to extend our parsing model to k-best output. The highest parsing accuracy reported in this paper is 74.32%, a 6.31% improvement over the model taking its input from the unreliable morphosyntactic analysis tools.
Article type: PAPER
Subject area: Biological Engineering
2009 Volume E92.D Issue 10 Pages 2137-2151
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
Teeth segmentation in computed tomography (CT) images is a major and challenging task for various computer-assisted procedures. In this paper, we introduce a hybrid method for the quantification of teeth in CT volumetric datasets, inspired by our previous experience and anatomical knowledge of teeth and jaws. We propose a novel segmentation technique using adaptive thresholding, morphological operations, panoramic re-sampling, and a variational level set algorithm. The proposed method consists of several steps. First, we determine the operation region in the CT slices. Second, bony tissue is separated from other tissues by an adaptive thresholding technique based on 3D pulse-coupled neural networks (PCNN). Third, teeth tissue is distinguished from other bony tissue by employing panorex lines and anatomical knowledge of the teeth in the jaws; here, the panorex lines are estimated using Otsu thresholding and mathematical morphology operators. The method then calculates the orthogonal lines corresponding to the panorex lines and panoramically re-samples the dataset. Separation of the upper and lower jaws and initial segmentation of the teeth are performed using the integral projections of the panoramic dataset, yielding an initial mask for each tooth. Finally, we apply a variational level set to refine the initial teeth boundaries to the final contour. In the last step, a surface rendering algorithm known as marching cubes (MC) is applied for volumetric visualization. The proposed algorithm was evaluated on 30 cases, and the segmented images were compared with manually outlined contours. Using ROC analysis, we compared the performance of the segmentation method with thresholding, watershed, and our previous methods; the proposed method performed best and also has the advantage of higher speed compared to our previous methods.
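One concrete step in the pipeline above is Otsu thresholding, used to estimate the panorex lines. A minimal sketch of Otsu's method on a flat list of 8-bit gray values follows; the adaptive PCNN thresholding, morphology, and level-set stages of the paper are not reproduced.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the gray level that maximizes the between-class
    variance of the foreground/background split.  `pixels` is a flat list
    of integer gray values in [0, levels)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0               # pixel count and gray-sum of class 0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue                 # class 0 still empty
        w1 = total - w0
        if w1 == 0:
            break                    # class 1 empty: no further splits
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal slice (e.g. dense bone against soft tissue), the returned level separates the two intensity clusters, which is exactly what the panorex-line estimation step relies on.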
• Moonju PARK, Jinseok CHAE
Article type: LETTER
Subject area: Dependable Computing
2009 Volume E92.D Issue 10 Pages 2152-2155
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
It is known that the schedulability of a non-preemptive task set with fixed priorities can be determined in pseudo-polynomial time. However, since Rate Monotonic scheduling is not optimal for non-preemptive scheduling, the applicability of existing polynomial-time tests that provide sufficient schedulability conditions, such as Liu and Layland's bound, is limited. This letter proposes a new sufficient condition for non-preemptive fixed-priority scheduling that can be used with any fixed-priority assignment scheme. It is also shown that the proposed schedulability test has a tighter utilization bound than existing test methods.
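For context, the classical Liu-Layland utilization bound mentioned above can be sketched as a sufficient (not necessary) schedulability test. Note this is the well-known preemptive rate-monotonic baseline the letter improves upon, not the letter's new non-preemptive condition, which the abstract does not state.

```python
def liu_layland_bound(n):
    """Classical Liu-Layland utilization bound for n periodic tasks under
    preemptive rate-monotonic scheduling: n * (2**(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def rm_utilization_test(tasks):
    """Sufficient schedulability test: `tasks` is a list of
    (execution_time, period) pairs.  Passing guarantees schedulability;
    failing is inconclusive (the bound is not necessary)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= liu_layland_bound(n)
```

For two tasks the bound is 2(sqrt(2) - 1), roughly 0.828, so a task set with total utilization 0.45 passes while one with utilization 1.0 does not.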
• Ki-Woong SEONG, Eui-Sung JUNG, Hyung-Gyu LIM, Jang-Woo LEE, Min-Woo KI ...
Article type: LETTER
Subject area: Rehabilitation Engineering and Assistive Technology
2009 Volume E92.D Issue 10 Pages 2156-2158
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
In this paper, the vibration characteristics of the stapes driven by an implanted differential floating mass transducer (DFMT) in the human middle ear are analyzed using an electrical model. The electrical model is simulated in PSpice, and the simulated results are compared with experimental results obtained using a fabricated DFMT and human temporal bones.
Article type: LETTER
Subject area: Pattern Recognition
2009 Volume E92.D Issue 10 Pages 2159-2162
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
The ratio of two probability densities is called the importance, and its estimation has recently gathered a great deal of attention since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
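The quantity being estimated — the importance w(x) = p_nu(x) / p_de(x) — can be illustrated with a deliberately crude one-dimensional sketch: fit each density with a single Gaussian by moment matching and take the ratio. This is only a one-component caricature; GM-KLIEP instead models the ratio directly with a Gaussian mixture fitted via a KLIEP-style objective, without estimating the two densities separately.

```python
import math
from statistics import mean, pstdev

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (
        sigma * math.sqrt(2 * math.pi))

def importance(x, numer_samples, denom_samples):
    """Toy density-ratio estimate w(x) = p_nu(x) / p_de(x), fitting each
    density with a single Gaussian by moment matching (NOT the paper's
    GM-KLIEP; samples and estimator here are illustrative assumptions)."""
    mu_n, sd_n = mean(numer_samples), pstdev(numer_samples)
    mu_d, sd_d = mean(denom_samples), pstdev(denom_samples)
    return gaussian_pdf(x, mu_n, sd_n) / gaussian_pdf(x, mu_d, sd_d)
```

Points likely under the numerator density but unlikely under the denominator get importance above 1, which is the weighting used in applications such as covariate shift adaptation.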
• Miki HASEYAMA, Makoto TAKIZAWA, Takashi YAMAMOTO
Article type: LETTER
Subject area: Image Processing and Video Processing
2009 Volume E92.D Issue 10 Pages 2163-2166
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
In this paper, a new video frame interpolation method based on image morphing is proposed for frame rate up-conversion. In this method, image features are extracted by the Scale-Invariant Feature Transform in each frame, and their correspondences between two contiguous frames are then computed separately in the foreground and background regions. Using these correspondences, the proposed method accurately generates interpolation frames and thus achieves frame rate up-conversion.
• Suk Tae SEO, Hye Cheun JEONG, In Keun LEE, Chang Sik SON, Soon Hak KWO ...
Article type: LETTER
Subject area: Image Processing and Video Processing
2009 Volume E92.D Issue 10 Pages 2167-2170
Published: October 01, 2009
Released on J-STAGE: October 01, 2009
JOURNAL FREE ACCESS
An approach to image thresholding based on the plausibility of object and background regions, adopting a co-occurrence matrix and category utility, is presented. The effectiveness of the proposed method is shown through experimental results on several images and comparisons with conventional methods.