IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E95.D , Issue 2
Special Section on Reconfigurable Systems
  • Hideharu AMANO
    2012 Volume E95.D Issue 2 Pages 293
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Download PDF (54K)
  • Masahiro IIDA, Motoki AMAGASAKI, Yasuhiro OKAMOTO, Qian ZHAO, Toshinor ...
    Type: PAPER
    Subject area: Architecture
    2012 Volume E95.D Issue 2 Pages 294-302
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Because FPGAs contain numerous circuit resources, there is a performance gap between FPGAs and ASICs. In this paper, we propose a small-memory logic cell, COGRE, to reduce the FPGA area. Our approach is to investigate the appearance ratio of logic functions in circuit implementations, grouping the logic functions on the basis of NPN-equivalence classes. The results of our investigation show that a small portion of the NPN-equivalence classes covers a large portion of the logic functions used to implement circuits. Further, we found that NPN-equivalence classes with a high appearance ratio can be implemented by using a small number of AND, OR, and NOT gates. On the basis of this analysis, we develop COGRE architectures composed of several NAND gates and programmable inverters. The experimental results show that the logic area of 4-COGRE is smaller than that of 4-LUT and 5-LUT by approximately 35.79% and 54.70%, respectively, and the logic area of 8-COGRE is 75.19% less than that of 8-LUT. Further, the total number of configuration memory bits of 4-COGRE is 8.26% less than that of 4-LUT, and that of 8-COGRE is 68.27% less than that of 8-LUT.
    Download PDF (656K)
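    The NPN-based grouping described in the abstract can be illustrated with a small brute-force sketch (not the paper's implementation): two functions are NPN-equivalent if one can be obtained from the other by permuting and negating inputs and optionally negating the output. The 3-input cell size and all names below are illustrative assumptions.

```python
from itertools import permutations, product

N = 3  # number of inputs; chosen small so all 256 functions can be enumerated

def apply_transform(tt, perm, neg_in, neg_out):
    """Transform a truth table (tuple of 2**N output bits) by negating
    selected inputs, permuting inputs, and optionally negating the output."""
    out = [0] * (1 << N)
    for m in range(1 << N):
        bits = [(m >> i) & 1 for i in range(N)]
        bits = [b ^ n for b, n in zip(bits, neg_in)]   # negate inputs
        bits = [bits[p] for p in perm]                 # permute inputs
        idx = sum(b << i for i, b in enumerate(bits))
        out[idx] = tt[m] ^ neg_out                     # negate output
    return tuple(out)

def npn_canonical(tt):
    """Canonical representative: lexicographically smallest truth table
    over all input permutations, input negations, and output negation."""
    best = None
    for perm in permutations(range(N)):
        for neg_in in product((0, 1), repeat=N):
            for neg_out in (0, 1):
                cand = apply_transform(tt, perm, neg_in, neg_out)
                if best is None or cand < best:
                    best = cand
    return best

# Group all 256 3-input functions into NPN-equivalence classes
classes = {}
for f in range(1 << (1 << N)):
    tt = tuple((f >> m) & 1 for m in range(1 << N))
    classes.setdefault(npn_canonical(tt), []).append(tt)

print(len(classes))  # 14 NPN classes for 3-input functions
```

    Counting how many circuit functions fall into each class (rather than just listing the classes) would give the appearance ratio the abstract refers to.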
  • Kazuki INOUE, Masahiro KOGA, Motoki AMAGASAKI, Masahiro IIDA, Yoshinob ...
    Type: PAPER
    Subject area: Architecture
    2012 Volume E95.D Issue 2 Pages 303-313
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Generally, a programmable LSI such as an FPGA is more difficult to test than an ASIC. There are two major reasons for this. The first is that an automatic test pattern generator (ATPG) cannot be used because of the programmability of the FPGA. The other is that FPGA architectures are very complex. In this paper, we propose a new FPGA architecture that simplifies testing of the device. Our architecture is based on a general island-style FPGA architecture but consists of only a few types of circuit blocks and orderly wire connections. This paper also presents efficient test configurations for the proposed architecture. We evaluated our architecture and test configurations using a prototype chip; the chip was fully tested using our configurations in a short test time. Moreover, our architecture provides performance comparable to a conventional FPGA architecture.
    Download PDF (3389K)
  • Ce LI, Yiping DONG, Takahiro WATANABE
    Type: PAPER
    Subject area: Design Methodology
    2012 Volume E95.D Issue 2 Pages 314-323
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    An FPGA plays an essential role in industrial products due to its speed, stability, and flexibility, but the power consumption of FPGAs used in portable devices is a critical issue. A top-down hierarchical design method is commonly used in both ASIC and FPGA design. However, when plural modules are integrated in an FPGA and some of them might be in sleep mode, current FPGA architectures cannot exploit this effectively. In this paper, a coarse-grained power-gating FPGA architecture is proposed in which the whole area of an FPGA is partitioned into several regions and the power supply is controlled for each region, so that modules in sleep mode can be effectively powered off. We also propose a region-oriented FPGA placement algorithm fitted to the user's hierarchical design, based on VPR [1]. Simulation results show that the proposed method can reduce the power consumption of an FPGA by 38% on average by setting unused modules or regions to sleep mode.
    Download PDF (572K)
  • Masatoshi NAKAMURA, Masato INAGI, Kazuya TANIGAWA, Tetsuo HIRONAKA, Ma ...
    Type: PAPER
    Subject area: Design Methodology
    2012 Volume E95.D Issue 2 Pages 324-334
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we propose a placement and routing method for a new memory-based programmable logic device (MPLD) and confirm its capability by placing and routing benchmark circuits. An MPLD consists of multiple-output look-up tables (MLUTs) that can be used as logic and/or routing elements, whereas field-programmable gate arrays (FPGAs) consist of LUTs (logic elements) and switch blocks (routing elements). MPLDs accommodate logic circuits more efficiently than FPGAs because of their flexibility and area efficiency. However, directly applying the existing placement and routing algorithms of FPGAs to MPLDs overcrowds the placed logic cells and causes a shortage of routing domains between logic cells. Our simulated-annealing-based method considers detailed wire congestion and the nearness between logic cells in its cost function and reserves area for routing. In the experiments, our method reduced wire congestion and successfully placed and routed 27 out of 31 circuits, 13 of which could not be placed or routed using the Versatile Place and Route tool (VPR), a well-known method for FPGAs.
    Download PDF (1709K)
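    A minimal sketch of a congestion-aware simulated-annealing placer in the spirit of the abstract; the grid size, net list, cost weights, and column-crossing congestion estimate are illustrative assumptions, not the authors' cost function.

```python
import math
import random

random.seed(0)
GRID = 6
cells = list(range(8))                        # logic cells to place
nets = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6), (6, 7), (3, 7)]

# initial placement: cell -> (x, y), one cell per site
sites = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                      len(cells))
pos = dict(zip(cells, sites))

def cost(pos):
    wire = 0.0
    usage = {}
    for a, b in nets:
        (xa, ya), (xb, yb) = pos[a], pos[b]
        wire += abs(xa - xb) + abs(ya - yb)   # half-perimeter wirelength
        # crude congestion estimate: nets crossing each grid column
        for x in range(min(xa, xb), max(xa, xb)):
            usage[x] = usage.get(x, 0) + 1
    congestion = sum(u * u for u in usage.values())  # penalize crowding
    return wire + 0.1 * congestion

T = 5.0
cur = cost(pos)
while T > 0.01:
    for _ in range(50):
        a, b = random.sample(cells, 2)
        pos[a], pos[b] = pos[b], pos[a]       # propose a swap
        new = cost(pos)
        if new < cur or random.random() < math.exp((cur - new) / T):
            cur = new                         # accept the move
        else:
            pos[a], pos[b] = pos[b], pos[a]   # revert
    T *= 0.9                                  # cool down
```

    The squared congestion term plays the role of the abstract's wire-congestion awareness: crowded columns are penalized superlinearly, so the annealer spreads cells out and leaves room for routing.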
  • Shouyi YIN, Chongyong YIN, Leibo LIU, Min ZHU, Shaojun WEI
    Type: PAPER
    Subject area: Design Methodology
    2012 Volume E95.D Issue 2 Pages 335-344
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Coarse-grained reconfigurable architecture (CGRA) combines the performance of application-specific integrated circuits (ASICs) with the flexibility of general-purpose processors (GPPs), making it a promising solution for embedded systems. With the increasing complexity of reconfigurable resources (processing elements, routing cells, I/O blocks, etc.), the reconfiguration cost is becoming the performance bottleneck. The major reconfiguration cost comes from the frequent memory read/write operations for transferring the configuration context from main memory to the context buffer. To improve overall performance, it is critical to reduce the amount of configuration context. In this paper, we propose a configuration context reduction method for CGRAs. The proposed method exploits the structural correlation of computation tasks mapped onto the CGRA and reduces the redundancies in the configuration context. Experimental results show that the proposed method can reduce the configuration context size by up to 71% and speed up execution by up to 68%. The proposed method does not depend on any architectural feature and can be applied to CGRAs with arbitrary architectures.
    Download PDF (2345K)
  • Krzysztof JOZWIK, Hiroyuki TOMIYAMA, Shinya HONDA, Hiroaki TAKADA
    Type: PAPER
    Subject area: Design Methodology
    2012 Volume E95.D Issue 2 Pages 345-353
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Modern FPGAs (Field Programmable Gate Arrays), such as the Xilinx Virtex-4, can change their contents dynamically and partially, allowing implementation of concepts such as a HW (hardware) task. Similarly to its software counterpart, a HW task shares time-multiplexed resources with other HW tasks. To support preemptive multitasking in such systems, additional context saving and restoring mechanisms must be built practically from scratch. This paper presents an efficient method for hardware task preemption that is suitable for tasks containing both flip-flops and memory elements. Our solution consists of an offline tool for analyzing and manipulating bitstreams, used at design time, as well as an embedded system framework. The framework contains a DMA-based (Direct Memory Access), instruction-driven reconfiguration/readback controller and a lightweight bus facilitating management of HW tasks. The whole system has been implemented on a Xilinx Virtex-4 FPGA and showed promising results for a variety of HW tasks.
    Download PDF (3723K)
  • Hasitha Muthumala WAIDYASOORIYA, Yosuke OHBAYASHI, Masanori HARIYAMA, ...
    Type: PAPER
    Subject area: Design Methodology
    2012 Volume E95.D Issue 2 Pages 354-363
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Accelerator cores in low-power heterogeneous processors have on-chip local memories to enable parallel data access. The capacities of these local memories are very small, so data must be transferred from the global memory to the local memories many times, and these transfers greatly increase the total processing time. A memory allocation technique that increases data sharing is a good solution to this problem. When using reconfigurable cores, however, the data must be shared among multiple contexts, whereas conventional context partitioning methods only consider how to reuse limited hardware resources in different time slots and do not consider data sharing. This paper proposes a context partitioning method that shares both the hardware resources and the local memory data. According to the experimental results, the proposed method reduces the processing time by more than 87% compared to conventional context partitioning techniques.
    Download PDF (846K)
  • Hiroki NAKAHARA, Tsutomu SASAO, Munehiro MATSUURA
    Type: PAPER
    Subject area: Design Methodology
    2012 Volume E95.D Issue 2 Pages 364-373
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    This paper shows a design method for a regular expression matching circuit based on a decomposed automaton. To implement the circuit, we first convert a regular expression into a non-deterministic finite automaton (NFA). Then, to reduce the number of states, we convert the NFA into a merged-states non-deterministic finite automaton with unbounded string transition (MNFAU) using a greedy algorithm. Next, to realize it with a feasible amount of hardware, we decompose the MNFAU into a deterministic finite automaton (DFA) and an NFA. The DFA part is implemented by an off-chip memory and a simple sequencer, while the NFA part is implemented by a cascade of logic cells. We also show that the MNFAU-based implementation has lower area complexity than the DFA- and NFA-based ones. Experiments using regular expressions from SNORT show that, in terms of embedded memory size per character, the MNFAU is 17.17-148.70 times smaller than DFA methods, and, in terms of the number of LCs (Logic Cells) per character, the MNFAU is 1.56-5.12 times smaller than NFA methods. This paper describes details of the MEMOCODE2010 HW/SW co-design contest, in which we won the first-place award.
    Download PDF (781K)
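    The set-of-states NFA evaluation that underlies such matching circuits can be sketched in software for a tiny regex subset ('.' wildcard and single-character '*'); this is a hedged illustration of NFA simulation, not the MNFAU construction itself.

```python
def match(pattern, text):
    """Backtracking-free NFA-style simulation: track the set of active
    states and advance all of them on each input character."""
    # compile into a list of (char, can_repeat) nodes
    prog = []
    i = 0
    while i < len(pattern):
        if i + 1 < len(pattern) and pattern[i + 1] == '*':
            prog.append((pattern[i], True)); i += 2
        else:
            prog.append((pattern[i], False)); i += 1

    def closure(states):
        # a starred node may be skipped (it matches zero occurrences)
        out = set()
        for s in states:
            out.add(s)
            while s < len(prog) and prog[s][1]:
                s += 1
                out.add(s)
        return out

    states = closure({0})
    for ch in text:
        nxt = set()
        for s in states:
            if s < len(prog):
                c, rep = prog[s]
                if c == ch or c == '.':
                    nxt.add(s if rep else s + 1)
                    if rep:
                        nxt.add(s + 1)   # may also leave the starred node
        states = closure(nxt)
    return len(prog) in states           # accept if the end state is active

print(match("ab*c", "abbbc"))  # True
```

    A hardware NFA cascade keeps one flip-flop per state in parallel, which is why the per-character logic cost quoted in the abstract stays small.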
  • Xinning LIU, Chen MEI, Peng CAO, Min ZHU, Longxing SHI
    Type: PAPER
    Subject area: Design Methodology
    2012 Volume E95.D Issue 2 Pages 374-382
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    This paper proposes a novel sub-architecture to optimize the data flow of REMUS-II (REconfigurable MUltimedia System 2), a dynamically reconfigurable coarse-grained architecture. REMUS-II consists of a µPU (Micro-Processor Unit) and two RPUs (Reconfigurable Processor Units), which speed up control-intensive tasks and data-intensive tasks, respectively. The parallel computing capability and flexibility of REMUS-II make it an excellent candidate for processing multimedia applications, which require a large number of memory accesses. In this paper, we specifically optimize the data flow to deal with these performance-hazardous and energy-hungry memory accesses and to meet the bandwidth requirement of parallel computing. The RPU internal memory can work in multiple modes, such as 2D-access mode and transformation mode, according to different multimedia access patterns. This novel design improves performance by up to 26% compared to traditional on-chip memory. Meanwhile, a block buffer is implemented to optimize the off-chip data flow, reducing off-chip memory accesses by up to 43% compared to direct DDR access. Based on RTL simulation, REMUS-II can achieve 1080p@30fps decoding of H.264 High Profile@Level 4 and MPEG-2 High Level at a 200MHz clock frequency. REMUS-II is implemented in 23.7mm2 of silicon on a TSMC 65nm logic process with a 400MHz maximum working frequency.
    Download PDF (1920K)
  • Weina ZHOU, Lin DAI, Yao ZOU, Xiaoyang ZENG, Jun HAN
    Type: PAPER
    Subject area: Application
    2012 Volume E95.D Issue 2 Pages 383-391
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Face detection has become an independent technology playing an important role in more and more fields, which makes it necessary and urgent for its architecture to be reconfigurable to meet different demands on detection capabilities. This paper proposes a face detection architecture that can be adjusted by the user according to the background, sensor resolution, and the detection accuracy and speed required in different situations. This user-adjustable mode makes reconfiguration simple and efficient and is especially suitable for portable mobile terminals, whose working conditions change frequently. In addition, the architecture can work as an accelerator, constituting a larger and more powerful system when integrated with other functional modules. Experimental results show that the reconfiguration of the architecture is very effective for face detection, and synthesis reports also indicate its low area and power consumption.
    Download PDF (1371K)
  • Jian XIAO, Jinguo ZHANG, Min ZHU, Jun YANG, Longxing SHI
    Type: PAPER
    Subject area: Application
    2012 Volume E95.D Issue 2 Pages 392-402
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    An AdaBoost-based face detection system is proposed on a Coarse-Grained Reconfigurable Architecture (CGRA) named “REMUS-II”. Our work is distinguished from previous ones in three aspects. First, a new hardware/software partitioning method is proposed: the whole face detection system is divided into several parallel tasks implemented on two Reconfigurable Processing Units (RPUs) and one micro-Processor Unit (µPU) according to their relationships, and these tasks communicate with each other through a mailbox mechanism. Second, a strong classifier is treated as the smallest phase of the detection system, and every phase is executed by these tasks in order. A phase of the Haar classifier is dynamically mapped onto a Reconfigurable Cell Array (RCA) only when needed, which is quite different from traditional Field Programmable Gate Array (FPGA) methods in which all the classifiers are fabricated statically. Third, optimized data and configuration-word pre-fetch mechanisms are employed to improve the whole system performance. Implementation results show that our approach can process up to 17 frames per second on VGA-size images at a 200MHz clock rate, with a detection rate of over 95%. Our system consumes 194mW, and the die size of the fabricated chip is 23mm2 using TSMC 65nm standard-cell-based technology. To the best of our knowledge, this work is the first implementation of the cascade Haar classifier algorithm on a dynamically reconfigurable CGRA platform presented in the literature.
    Download PDF (1362K)
  • Shuangqu HUANG, Xiaoyang ZENG, Yun CHEN
    Type: PAPER
    Subject area: Application
    2012 Volume E95.D Issue 2 Pages 403-412
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    In this paper, a programmable and area-efficient decoder architecture supporting two decoding algorithms for Block-LDPC codes is presented. The novel decoder can be configured to decode in either TPMP or TDMP mode according to the Block-LDPC code, essentially combining the advantages of the two decoding algorithms. With a regular and scalable data path, a Reconfigurable Serial Processing Engine (RSPE) is proposed to achieve area efficiency. To verify the proposed architecture, a flexible LDPC decoder fully compliant with IEEE 802.16e applications is implemented in a 130nm 1P8M CMOS technology with a total area of 6.3mm2 and a maximum operating frequency of 250MHz. The chip dissipates 592mW when operating at 250MHz and a 1.2V supply.
    Download PDF (1600K)
  • Jeich MAR, Chi-Cheng KUO, Shin-Ru WU, You-Rong LIN
    Type: PAPER
    Subject area: Application
    2012 Volume E95.D Issue 2 Pages 413-425
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Hierarchical multi-function matrix operation (MFMO) circuit modules are designed using the coordinate rotation digital computer (CORDIC) algorithm to realize computation-intensive matrix operations. The paper emphasizes that the designed hierarchical MFMO circuit modules can be used to develop a power-efficient software-defined radio (SDR) digital beamformer (DBF). Formulas for the processing time of the scalable MFMO circuit modules implemented in a field-programmable gate array (FPGA) are derived to allocate the proper logic resources for hardware reconfiguration. The hierarchical MFMO circuit modules are scalable to the changing number of array branches employed by the SDR DBF, achieving power savings. The efficient reuse of common MFMO circuit modules in the SDR DBF can also lead to energy reduction. Finally, the power dissipation and reconfiguration function of the SDR DBF in its different modes are observed from the experimental results.
    Download PDF (2523K)
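    The CORDIC algorithm named in the abstract computes rotations with shifts and adds only, which is why it suits FPGA implementation. A floating-point sketch follows (real hardware uses fixed-point shift-and-add); the iteration count is an illustrative choice.

```python
import math

ITER = 32
# precomputed micro-rotation angles atan(2^-i) and the constant gain K
ANGLES = [math.atan(2.0 ** -i) for i in range(ITER)]
K = 1.0
for i in range(ITER):
    K /= math.sqrt(1 + 2.0 ** (-2 * i))

def cordic_rotate(theta):
    """Rotate the unit vector (1, 0) by theta using only add/subtract and
    power-of-two scaling; returns (cos(theta), sin(theta)).
    Valid for |theta| < ~1.74 rad without argument reduction."""
    x, y, z = 1.0, 0.0, theta
    for i in range(ITER):
        d = 1.0 if z >= 0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x * K, y * K                       # undo the accumulated gain
```

    The same iteration, run in "vectoring" mode, yields magnitudes and angles, which is how one CORDIC core can serve several of the matrix operations a beamformer needs.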
  • Hisashi HATA, Shuichi ICHIKAWA
    Type: PAPER
    Subject area: Application
    2012 Volume E95.D Issue 2 Pages 426-436
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    True random number generators (TRNGs) are important as a basis for computer security. Though there are some TRNGs composed of analog circuits, the use of digital circuits is desired for applying TRNGs to logic LSIs. Some digital TRNGs utilize jitter in free-running ring oscillators as a source of entropy, which consumes considerable power. Another type of TRNG exploits the metastability of a latch to generate entropy. Although this kind of TRNG has mostly been implemented with full-custom LSI technology, this study presents an implementation based on common FPGA technology. Our TRNG is comprised of logic gates only and can be integrated into any kind of logic LSI. The RS latch in our TRNG is implemented as a hard macro to guarantee the quality of randomness by minimizing the signal skew and load imbalance of internal nodes. To improve the quality and throughput, the outputs of 64-256 latches are XOR'ed. The derived design was verified on a Xilinx Virtex-4 FPGA (XC4VFX20) and passed the NIST statistical test suite without post-processing. Our TRNG with 256 latches occupies 580 slices while achieving 12.5Mbps throughput.
    Download PDF (709K)
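    The XOR combining of latch outputs can be illustrated with a software model: by the piling-up lemma, XOR-ing many independent biased bits drives the bias of the combined bit toward zero. The latch bias value below is an assumed model parameter, not a figure from the paper.

```python
import random

random.seed(1)

def latch_bit(bias=0.6):
    """Model of one metastable RS latch resolving to 1 with some bias."""
    return 1 if random.random() < bias else 0

def trng_bit(n_latches=64):
    """XOR the outputs of n latches into one output bit."""
    b = 0
    for _ in range(n_latches):
        b ^= latch_bit()
    return b

bits = [trng_bit() for _ in range(10000)]
ones = sum(bits) / len(bits)
# for bias e = P(1) - 1/2, XOR of n bits has bias 2**(n-1) * e**n:
# with e = 0.1 and n = 64 the residual bias is astronomically small
```

    This is why the design tolerates individual latches that are far from unbiased, as long as enough independent latches are combined.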
  • Antoine TROUVE, Kazuaki MURAKAMI
    Type: LETTER
    Subject area: Design Optimisation
    2012 Volume E95.D Issue 2 Pages 437-440
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    This article introduces improvements to previously proposed custom-instruction candidate selection for the automatic ISA customisation problem targeting reconfigurable processors. It introduces new opportunities to prune the search space and a dynamic-programming technique to check the independence between groups. For the same inputs and outputs, the proposed algorithm requires an order of magnitude fewer convexity checks than the related work.
    Download PDF (166K)
Special Section on Architectures, Protocols, and Applications for the Future Internet
  • Motonori NAKAMURA
    2012 Volume E95.D Issue 2 Pages 441
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Download PDF (45K)
  • Chien-Sheng CHEN, Yi-Wen SU, Wen-Hsiung LIU, Ching-Lung CHI
    Type: PAPER
    2012 Volume E95.D Issue 2 Pages 442-450
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    In this paper, a novel and effective two-phase admission control (TPAC) scheme for QoS mobile ad hoc networks is proposed that satisfies real-time traffic requirements. With a limited amount of extra overhead, TPAC avoids network congestion through a simple and precise admission control that blocks most overloading flow requests during the route discovery process. Compared with previous QoS routing schemes such as the QoS-aware routing protocol and CACP, system simulations show that the proposed scheme increases system throughput and reduces both the dropping rate and the end-to-end delay. TPAC is thus an effective protocol for providing QoS guarantees to real-time traffic.
    Download PDF (992K)
  • Yong-Pyo KIM, Keisuke NAKANO, Kazuyuki MIYAKITA, Masakazu SENGOKU, Yon ...
    Type: PAPER
    2012 Volume E95.D Issue 2 Pages 451-461
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Delay Tolerant Networks (DTNs) have emerged to support network connectivity in disruption-prone networks. A variety of routing methods have been proposed to reduce message delivery latency. PROPHET is a probabilistic routing protocol that utilizes the history of encounters and the transitivity of nodes, computed as a contact probability. While contact probability improves the performance of DTN routing, it is just one parameter reflecting the mobility pattern of nodes, and further study on utilizing contact information from mobility patterns remains an important problem. Hence, in this paper, we try to improve DTN routing by using a novel metric other than contact probability as mobility information. We propose a routing protocol that uses the mean residual contact time, which describes the remaining contact period for a given pair of nodes. The simulation results show that using the mean residual contact time can improve the performance of routing protocols for DTNs. In addition, we show in which situations the proposed method provides more efficient data delivery, characterizing these situations using a parameter called the Variation Metric.
    Download PDF (1006K)
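    One plausible reading of the mean-residual-contact-time metric is an empirical estimate of the expected remaining duration of the current contact, given how long it has already lasted. The estimator below is an illustrative assumption, not the authors' exact definition.

```python
def mean_residual_contact_time(contact_durations, elapsed):
    """Estimate E[D - t | D > t]: the expected remaining contact time,
    given observed past contact durations for this node pair and the
    time t already spent in the current contact."""
    remaining = [d - elapsed for d in contact_durations if d > elapsed]
    return sum(remaining) / len(remaining) if remaining else 0.0

# a pair whose past contacts lasted 10, 20, and 30 time units and has
# currently been in contact for 15 units
print(mean_residual_contact_time([10, 20, 30], 15))  # 10.0
```

    A forwarder could prefer the neighbor with the largest expected remaining contact time, since it is more likely to complete the transfer of a large bundle.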
  • Chen CHEN, Xinbo GAO, Xiaoji LI, Qingqi PEI
    Type: PAPER
    2012 Volume E95.D Issue 2 Pages 462-471
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    In this paper, a decentralized concurrent transmission strategy for a shared channel in ad hoc networks is proposed based on game theory. Firstly, a static concurrent transmission game is used to determine the candidates for transmission by a channel quality threshold and to maximize the overall throughput in consideration of channel quality variation. To achieve an NES (Nash Equilibrium Solution), the selfish behavior of nodes attempting to improve their channel gain unilaterally is evaluated. This game thus allows each node, in a distributed manner, to decide whether to transmit concurrently with others depending on the NES. Secondly, as there are always some nodes with lower channel gain than the NES, defined as hunger nodes in this paper, a hunger suppression scheme is proposed that adjusts the price function with interference reservation and forward relay to fairly give hunger nodes transmission opportunities. Finally, inspired by stock trading, a dynamic concurrent transmission threshold determination scheme is implemented to make the static game practical. Numerical results show that the proposed scheme is feasible for increasing concurrent transmission opportunities for active nodes while greatly reducing the number of hunger nodes with the least increase of the threshold by interference reservation. The results also show the good network goodput of the proposed model.
    Download PDF (611K)
  • Chun-Liang LEE, Guan-Yu LIN, Yaw-Chung CHEN
    Type: PAPER
    2012 Volume E95.D Issue 2 Pages 472-479
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Packet classification is essential for supporting advanced network services such as firewalls, quality of service (QoS), virtual private networks (VPNs), and policy-based routing. The rules that routers use to classify packets are called packet filters. If two or more filters overlap, a conflict occurs and leads to ambiguity in packet classification. This study proposes an algorithm that can efficiently detect and resolve filter conflicts using tuple-based search. The time complexity of the proposed algorithm is O(nW+s) and the space complexity is O(nW), where n is the number of filters, W is the number of bits in a header field, and s is the number of conflicts. This study uses synthetic filter databases generated by ClassBench to evaluate the proposed algorithm. Simulation results show that the proposed algorithm achieves better performance than existing conflict detection algorithms in both time and space, particularly for databases with large numbers of conflicts.
    Download PDF (485K)
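    The notion of a filter conflict can be illustrated for prefix-based filters: two filters conflict when they overlap in every header field but neither fully contains the other, so some packets match both with no clear winner. The pairwise sketch below is illustrative only; it is not the paper's O(nW+s) tuple-based algorithm.

```python
def prefixes_overlap(p, q):
    """Two bit-string prefixes overlap iff one is a prefix of the other."""
    n = min(len(p), len(q))
    return p[:n] == q[:n]

def conflict(f, g):
    """Filters are tuples of per-field bit-string prefixes.  They conflict
    when they overlap in every field but neither contains the other
    (a shorter prefix contains a longer one it overlaps)."""
    if not all(prefixes_overlap(p, q) for p, q in zip(f, g)):
        return False                           # disjoint in some field
    f_contains_g = all(len(p) <= len(q) for p, q in zip(f, g))
    g_contains_f = all(len(q) <= len(p) for p, q in zip(f, g))
    return not (f_contains_g or g_contains_f)

# source-prefix / destination-prefix filters (bit strings)
f1 = ("00", "1")     # src 00*, dst 1*
f2 = ("0", "10")     # src 0*,  dst 10*
f3 = ("11", "0")
assert conflict(f1, f2)        # partial overlap in both fields -> conflict
assert not conflict(f1, f3)    # disjoint source prefixes -> no conflict
```

    A resolver would typically add a higher-priority filter covering the intersection of the conflicting pair, removing the ambiguity.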
  • Kien NGUYEN, Ulrich MEIS, Yusheng JI
    Type: PAPER
    2012 Volume E95.D Issue 2 Pages 480-489
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Wireless sensor network MAC protocols switch radios off periodically, employing the so-called duty cycle mechanism, in order to conserve battery power that would otherwise be wasted by energy-costly idle listening. To minimize the various negative side effects of the original scheme, especially on latency and throughput, various improvements have been proposed. In this paper, we introduce a new MAC protocol called MAC2 (Multi-hop Adaptive with packet Concatenation-MAC), which combines three promising techniques in one protocol: first, the idea of forwarding packets over multiple hops within one operational cycle, as initially introduced in RMAC; second, an adaptive method that adjusts the listening period according to traffic load, minimizing idle listening; and third, a packet concatenation scheme that not only increases throughput but also reduces the power consumption that would otherwise be incurred by additional control packets. Furthermore, MAC2 incorporates the idea of scheduling data transmissions with minimum latency, thereby performing packet concatenation together with the multi-hop transmission mechanism in an efficient way. We evaluated MAC2 using the prominent network simulator ns-2, and the results show that our protocol can outperform DW-MAC, a state-of-the-art protocol, in both energy efficiency and throughput.
    Download PDF (383K)
  • Saber ZRELLI, Nobuo OKABE, Yoichi SHINODA
    Type: PAPER
    2012 Volume E95.D Issue 2 Pages 490-502
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    The wireless medium is a key technology for enabling ubiquitous and continuous network connectivity. It is becoming more and more important in our daily life, especially with the increasing adoption of networking technologies in many fields such as medical care and transportation systems. Although most wireless technologies nowadays provide satisfactory bandwidth and higher speeds, several of these technologies still lack improvements with regard to handoff performance. In this paper, we focus on wireless network technologies that rely on the Extensible Authentication Protocol (EAP) for mutual authentication between the station and the access network. Such technologies include local area wireless networks (IEEE 802.11) as well as broadband wireless networks (IEEE 802.16). We present a new EAP authentication method, based on a three-party authentication scheme, namely Kerberos, that considerably shortens handoff delays. Compared to other methods, the proposed method has the advantage of not requiring any changes to the access points, making it readily deployable at reasonable cost.
    Download PDF (810K)
  • Souheil BEN AYED, Fumio TERAOKA
    Type: PAPER
    2012 Volume E95.D Issue 2 Pages 503-513
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    The evolution of the Internet, the growth of Internet users, and newly enabled technological capabilities place new requirements on the Future Internet. Many feature improvements and challenges arise in building a better Internet, including securing the roaming of data and services over multiple administrative domains. In this research, we propose a multi-domain access control infrastructure to authenticate and authorize roaming users through the use of the Diameter protocol and EAP. The Diameter protocol is an AAA protocol that solves the problems of previous AAA protocols such as RADIUS. The Diameter EAP Application is one of the Diameter applications, extending the Diameter Base Protocol to support authentication using EAP. The contributions of this paper are: 1) the first implementation of the Diameter EAP Application, called DiamEAP, capable of practical authentication and authorization services in a multi-domain environment; 2) an extensible design capable of adding any new EAP method as a loadable plugin without modifying the main part; and 3) provision of an EAP-TLS plugin, one of the most secure EAP methods. The basic performance of the DiamEAP server was evaluated and tested in a real multi-domain environment where 200 users attempted to access the network using the EAP-TLS method during an event lasting 4 days. As evaluation results, the processing time of DiamEAP using the EAP-TLS plugin is about 20ms for authenticating 10 requests, while that for 400 requests/second is about 1.9 seconds. Evaluation and operation results show that DiamEAP is scalable and stable, with the ability to handle more than 600 authentication requests per second without any crashes. DiamEAP is supported by the AAA working group of the WIDE Project.
    Download PDF (1468K)
  • Othman M. M. OTHMAN, Koji OKAMURA
    Type: PAPER
    2012 Volume E95.D Issue 2 Pages 514-522
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we propose a new technology called Content Anycasting and present its design and evaluation. Content Anycasting shows how to utilize the capabilities of flow-based networks such as OpenFlow, one of the candidate future Internet technologies, to give the future Internet new opportunities that are currently not available. Content Anycasting aims to provide more flexible and dynamic redirection of contents. This would be very useful for extending a content server's capacity by enabling it to serve more clients, and for improving the responsiveness of P2P networks by reducing the time needed to join them. The method relies on three key ideas: content-based networking, decision making by the network in a manner similar to anycast, and the participation of user clients in providing the service. This is achieved through the use of flow-based actions in the flow-based network together with some modifications to the content server and client.
    Download PDF (1605K)
  • Kazumine OGURA, Yohei NEMOTO, Zhou SU, Jiro KATTO
    Type: PAPER
    2012 Volume E95.D Issue 2 Pages 523-531
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    This paper focuses on RTT-fairness of multiple TCP flows over the Internet, and proposes a new TCP congestion control named “HRF (Hybrid RTT-Fair)-TCP”. Today, a serious problem is that flows with smaller RTTs utilize more bandwidth than others when multiple flows with different RTT values compete in the same network. This means that a user with a longer RTT may not be able to obtain sufficient bandwidth with current methods. This RTT-fairness issue has been discussed in many TCP papers. An example is the CR (Constant Rate) algorithm, which achieves RTT-fairness by multiplying the window increment of TCP-Reno by the square of the RTT value. However, the method halves its window size when a packet loss is detected, just as TCP-Reno does, which degrades its efficiency in certain networks. On the other hand, recently proposed TCP versions essentially require throughput efficiency and TCP-friendliness with TCP-Reno. Therefore, we try to keep these advantages in our TCP design in addition to RTT-fairness. In this paper, we build intuitive analytical models in which resource utilization is separated into two cases: utilization of the bottleneck link capacity and utilization of the buffer space at the bottleneck link router. These models take into account three characteristic window-increment algorithms (Reno, Constant Rate, and Constant Increase) applied when a sender receives an acknowledgement successfully. Their validity is confirmed by both simulations and implementations. From these analyses, we propose HRF-TCP, which switches between two modes according to observed RTT values and achieves RTT fairness. Experiments validate the proposed method: HRF-TCP outperforms conventional methods in RTT-fairness, efficiency, and friendliness with TCP-Reno.
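    The RTT bias that motivates this line of work, and the CR remedy of scaling the window increment by RTT squared, can be illustrated with a toy loss-free model (this is an illustrative sketch, not the paper's analytical model; the constant `c` and the initial window are arbitrary):

```python
# Toy contrast of TCP-Reno vs. the Constant Rate (CR) algorithm: CR
# scales the per-RTT window increment by RTT^2, so the growth of the
# sending rate (cwnd/RTT) per second becomes independent of RTT.

def reno_per_rtt(cwnd, rtt):
    return 1.0                    # Reno congestion avoidance: +1 segment per RTT

def cr_per_rtt(cwnd, rtt, c=100.0):
    return c * rtt * rtt          # CR: yields rate growth of c segments/s^2

def grow_for(seconds, rtt, per_rtt_increment):
    """Simulate loss-free congestion avoidance; return the final cwnd."""
    cwnd = 10.0
    for _ in range(int(seconds / rtt)):
        cwnd += per_rtt_increment(cwnd, rtt)
    return cwnd

def rate_growth(rtt, per_rtt_increment, seconds=10.0, cwnd0=10.0):
    """Increase in sending rate (cwnd/rtt) over the interval."""
    return grow_for(seconds, rtt, per_rtt_increment) / rtt - cwnd0 / rtt

# Reno favors the 10 ms flow over the 100 ms flow by a factor of ~100;
# CR gives both flows identical rate growth.
reno_gap = rate_growth(0.01, reno_per_rtt) / rate_growth(0.1, reno_per_rtt)
cr_gap = rate_growth(0.01, cr_per_rtt) / rate_growth(0.1, cr_per_rtt)
```

    The abstract's remaining point, that CR still halves its window on loss like Reno and therefore loses efficiency in some networks, is exactly what HRF-TCP's mode switching is designed to address.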
    Download PDF (1178K)
  • Hiroshi YAMAMOTO, Shohei UCHIYAMA, Maki YAMAMOTO, Katsuichi NAKAMURA, ...
    Type: PAPER
    2012 Volume E95.D Issue 2 Pages 532-539
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Observing wildlife has become important for obtaining not only knowledge of biological behaviors but also of interactions with human beings, in terms of geoenvironmental investigation and assessment. A sensor network is a suitable and powerful tool for monitoring and observing wildlife in the field. To observe seabirds, a sensor network was deployed on Awashima Island, Japan. A sensor platform is useful for early and quick deployment in the field. Atlas, a server-client type sensor platform, is used with several sensors, i.e., infrared sensors, thermometers within a nest, and a sound sensor. The experimental results and the first outcomes of the observation are reported. In particular, an infrared sensor detected the departures and returns of seabirds and showed that they are affected by sunrise and sunset. The infrared sensor data also revealed a chick's practice flights before flying south. These facts had not been clearly established by human observation, which demonstrates the usefulness of sensor networking for ecological observation.
    Download PDF (608K)
  • Hiroshi YAMAMOTO, Yoshinori ISHII, Katsuyuki YAMAZAKI
    Type: LETTER
    2012 Volume E95.D Issue 2 Pages 540-541
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we report the development of a snowblower support system that can safely navigate snowblowers, even during a whiteout, by combining a highly accurate GPS system, so-called RTK-GPS, with a unique and highly accurate map of roadsides and obstacles on roads. The new techniques emphasized in this paper are methods for detecting the accurate geographical positions of roadsides and obstacles by analyzing 3D laser-scanned data, which has recently become available. The experiment has shown that the map created by these methods, together with RTK-GPS, can navigate snowblowers adequately, whereby a secure and pleasant social environment can be achieved in the snowy areas of Japan. In addition, the proposed methods are expected to be useful for other applications, such as the quick development of highly accurate road maps and the safe navigation of wheelchairs.
    Download PDF (353K)
  • Masayoshi SHIMAMURA, Takeshi IKENAGA, Masato TSURU
    Type: LETTER
    2012 Volume E95.D Issue 2 Pages 542-545
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    The explosive growth of Internet usage has caused problems for the current Internet in terms of traffic congestion within networks and performance degradation of end-to-end flows. Therefore, a reconsideration of the current Internet has begun and is being actively discussed worldwide, with the goals of efficient sharing of limited network resources (i.e., link bandwidth) and improved performance. To directly address the inefficiency of TCP's congestion mitigation, which operates solely on an end-to-end basis, in this paper we propose an adaptive split-connection scheme on advanced relay nodes; this scheme dynamically splits end-to-end TCP connections on the basis of the congestion status of output links. Through simulation evaluations, we examine the effectiveness and potential of the proposed scheme.
    Download PDF (356K)
  • Hikaru OOKURA, Hiroshi YAMAMOTO, Katsuyuki YAMAZAKI
    Type: LETTER
    2012 Volume E95.D Issue 2 Pages 546-548
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we propose a new method of observing walking traces, which records people's indoor movement for life-logging. The new techniques emphasized in this paper are a method to detect locations where the walking direction changes, by analyzing azimuth orientations measured by the orientation sensor of an Android mobile device, and a method to determine walking traces by map matching against a vector map. The experimental evaluation has shown that the proposed method can determine the correct paths of walking traces.
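    The turn-detection step can be sketched with a toy example: flag the samples where the azimuth (in degrees, with wraparound at 360) jumps by more than a threshold. This is an illustrative sketch under assumed parameters, not the authors' algorithm, which additionally performs map matching.

```python
# Toy sketch of turn-point detection from an azimuth series: report the
# indices where the heading changes by at least `threshold` degrees,
# handling the 359->0 wraparound correctly.

def turn_points(azimuths, threshold=45.0):
    turns = []
    for i in range(1, len(azimuths)):
        # Signed angular difference in (-180, 180].
        delta = (azimuths[i] - azimuths[i - 1] + 180.0) % 360.0 - 180.0
        if abs(delta) >= threshold:
            turns.append(i)
    return turns

# Walk north (with noise across the 0/360 wraparound), turn east at
# index 4, then turn south at index 8.
trace = [0, 1, 359, 2, 90, 91, 89, 90, 180, 181]
idx = turn_points(trace)
```

    The wraparound arithmetic matters: without it, the noisy 1° → 359° step would be misread as a 358° turn.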
    Download PDF (205K)
Regular Section
  • Ming-Der SHIEH, Shih-Hao FANG, Shing-Chung TANG, Der-Wei YANG
    Type: PAPER
    Subject area: Computer System
    2012 Volume E95.D Issue 2 Pages 549-557
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Partially parallel decoding architectures are widely used in the design of low-density parity-check (LDPC) decoders, especially for quasi-cyclic (QC) LDPC codes. To comply with the code structure of parity-check matrices of QC-LDPC codes, many small memory blocks are conventionally employed in this architecture. The total memory area usually dominates the area requirement of LDPC decoders. This paper proposes a low-complexity memory access architecture that merges small memory blocks into memory groups to relax the effect of peripherals in small memory blocks. A simple but efficient algorithm is also presented to handle the additional delay elements introduced in the memory merging method. Experimental results on a rate-1/2 parity-check matrix defined in the IEEE 802.16e standard show that the LDPC decoder designed using the proposed memory access architecture has the lowest area complexity among related studies. Compared to a design with the same specifications, the decoder implemented using the proposed architecture requires 33% fewer gates and is more power-efficient. The proposed memory access architecture is thus suitable for the design of low-complexity LDPC decoders.
    Download PDF (753K)
  • Wei-Neng WANG, Kai NI, Jian-She MA, Zong-Chao WANG, Yi ZHAO, Long-Fa P ...
    Type: PAPER
    Subject area: Computer System
    2012 Volume E95.D Issue 2 Pages 558-564
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Wear leveling is a critical factor that significantly impacts the lifetime and performance of flash storage systems. To extend lifespan and reduce memory requirements, this paper proposes an efficient wear-leveling scheme for huge-capacity flash storage systems based on selective replacement, which neither substantially increases overhead nor modifies the Flash Translation Layer (FTL). Experimental results show that our design levels the wear of different physical blocks with limited system overhead compared with previous algorithms.
    Download PDF (1269K)
  • Hyun-il LIM, Taisook HAN
    Type: PAPER
    Subject area: Software System
    2012 Volume E95.D Issue 2 Pages 565-576
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    This paper presents a method for comparing Java programs and detecting clones by analyzing program stack flows. A stack flow denotes the operational behavior of a program, describing the individual instructions and stack movements performed for specific operations. We analyze stack flows by simulating the operand stack movements during execution of a Java program. Two Java programs are compared by matching similar pairs of stack flows between them. The proposed method was evaluated experimentally and compared with earlier approaches to comparing Java programs: the Tamada, k-gram, and stack-pattern-based methods. Performance was evaluated on real-world Java programs in several categories collected from the Internet. The experimental results show that the proposed method is more effective than earlier methods at comparing Java programs and detecting clones.
    Download PDF (433K)
  • Shi-Cho CHA, Hsiang-Meng CHANG
    Type: PAPER
    Subject area: Information Network
    2012 Volume E95.D Issue 2 Pages 577-587
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Federated identity and access management (FIAM) systems enable a user to access services provided by various organizations seamlessly. In FIAM systems, service providers normally stipulate that their users present assertions issued by allied parties to use their services, and determine user privileges based on the attributes in those assertions. However, the integrity of the attributes is important under certain circumstances, and in such cases all released assertions should reflect modifications made to user attributes. Although conventional certificate revocation technologies, such as CRL or OCSP, could be adopted to revoke an assertion and request that the corresponding user obtain a new one, re-issuing an entirely new assertion when only one attribute, such as user location or other environmental information, has changed would be inefficient. Therefore, this work presents a self-adaptive framework to achieve consistency in federated identity and access management systems (SAFIAM). In SAFIAM, an identity provider (IdP), which authenticates users and provides user attributes, monitors access probabilities according to user attributes. The IdP can then adopt the most efficient means of ensuring the data integrity of attributes based on the related access probabilities. As Internet-based services with various access probabilities for their user attributes emerge daily, the proposed self-adaptive framework contributes significantly to streamlining the use of FIAM systems.
    Download PDF (546K)
  • Yosuke TODO, Yuki OZAWA, Toshihiro OHIGASHI, Masakatu MORII
    Type: PAPER
    Subject area: Information Network
    2012 Volume E95.D Issue 2 Pages 588-595
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we propose two new falsification attacks against Wi-Fi Protected Access Temporal Key Integrity Protocol (WPA-TKIP). A previous realistic attack succeeds only for a network in which both the access point (AP) and the client support the IEEE 802.11e QoS features, and it has an execution time of 12-15 min, in which it recovers a message integrity code (MIC) key from an ARP packet. Our first attack reduces the execution time for recovering a MIC key to 7-8 min. Our second attack expands the range of targets that can be attacked. This attack focuses on a new vulnerability of QoS packet processing, which removes the condition that the AP must support IEEE 802.11e. In addition, we discovered another vulnerability by which our attack succeeds when the client's chipset supports IEEE 802.11e, even if the client disables this standard through the OS. We demonstrate that chipsets developed by several vendors have the same vulnerability.
    Download PDF (644K)
  • Yoshitatsu MATSUDA, Kazunori YAMAGUCHI
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2012 Volume E95.D Issue 2 Pages 596-603
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    In order to implement multidimensional scaling (MDS) efficiently, we propose a new method named “global mapping analysis” (GMA), which applies stochastic approximation to the minimization of MDS criteria. GMA can solve MDS more efficiently in both the linear case (classical MDS) and the non-linear case (e.g., ALSCAL), provided that the MDS criteria are polynomial. GMA separates the polynomial criteria into local factors and global ones. Because the global factors need to be calculated only once per iteration, GMA is of linear order in the number of objects. Numerical experiments on artificial data verify the efficiency of GMA. It is also shown that GMA can discover various interesting structures in massive document collections.
    Download PDF (341K)
  • Kanji TANAKA, Tomomi NAGASAKA
    Type: PAPER
    Subject area: Pattern Recognition
    2012 Volume E95.D Issue 2 Pages 604-613
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Obtaining a compact representation of a large feature map built by mapper robots is a critical issue in recent mobile robotics. This “map compression” problem is explored in this paper from the novel perspective of dictionary-based data compression techniques. The primary contribution of the paper is a dictionary-based map compression approach. A map compression system is presented that employs RANSAC map matching and sparse coding as building blocks. The effectiveness of the proposed techniques is investigated in terms of map compression ratio, compression speed, and the retrieval performance of compressed/decompressed maps, as well as applications to Kolmogorov complexity.
    Download PDF (1318K)
  • Graham NEUBIG, Masato MIMURA, Shinsuke MORI, Tatsuya KAWAHARA
    Type: PAPER
    Subject area: Speech and Hearing
    2012 Volume E95.D Issue 2 Pages 614-625
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    We propose a novel scheme to learn a language model (LM) for automatic speech recognition (ASR) directly from continuous speech. In the proposed method, we first generate phoneme lattices using an acoustic model with no linguistic constraints, then perform training over these phoneme lattices, simultaneously learning both lexical units and an LM. As a statistical framework for this learning problem, we use non-parametric Bayesian statistics, which make it possible to balance the learned model's complexity (such as the size of the learned vocabulary) and expressive power, and provide a principled learning algorithm through the use of Gibbs sampling. Implementation is performed using weighted finite state transducers (WFSTs), which allow for the simple handling of lattice input. Experimental results on natural, adult-directed speech demonstrate that LMs built using only continuous speech are able to significantly reduce ASR phoneme error rates. The proposed technique of joint Bayesian learning of lexical units and an LM over lattices is shown to significantly contribute to this improvement.
    Download PDF (484K)
  • Norimichi UKITA, Kunihito TERASHITA, Masatsugu KIDODE
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2012 Volume E95.D Issue 2 Pages 626-635
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    We propose a method for calibrating the topology of distributed pan-tilt cameras (i.e. the structure of routes among and within FOVs) and its probabilistic model. To observe as many objects as possible for as long as possible, pan-tilt control is an important issue in automatic calibration as well as in tracking. In a calibration period, each camera should be controlled towards an object that goes through an unreliable route whose topology is not calibrated yet. This camera control allows us to efficiently establish the topology model. After the topology model is established, the camera should be directed towards the route with the biggest possibility of object observation. We propose a camera control framework based on the mixture of the reliability of the estimated routes and the probability of object observation. This framework is applicable both to camera calibration and object tracking by adjusting weight variables. Experiments demonstrate the efficiency of our camera control scheme for establishing the camera topology model and tracking objects as long as possible.
    Download PDF (1397K)
  • Qingyi GU, Takeshi TAKAKI, Idaku ISHII
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2012 Volume E95.D Issue 2 Pages 636-645
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    We describe a cell-based connected component labeling algorithm that calculates the 0th- and 1st-order moment features as attributes of the labeled regions. These can be used to indicate region sizes and positions for multi-object extraction. Based on the additivity of moment features, the cell-based labeling algorithm labels divided cells of a certain size while scanning the image only once, obtaining the moment features of the labeled regions with remarkably reduced computational complexity and memory consumption. Our algorithm is a simple one-time-scan, cell-based labeling algorithm, which makes it suitable for hardware and parallel implementation. We also compared it with conventional labeling algorithms; the experimental results showed that our algorithm is faster than conventional raster-scan labeling algorithms.
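    The additivity property at the heart of this approach, that the 0th and 1st moments of a merged region are just the sums of the parts' moments, can be sketched with a one-pass union-find labeling of a binary image. This is an illustrative sketch of the principle, not the authors' cell-based hardware algorithm.

```python
# One-pass 4-connected labeling that accumulates (m00, m10, m01) per
# region; merging two partial regions only sums their moment triples.

def label_with_moments(img):
    """img: 2D list of 0/1; returns {root_label: (m00, m10, m01)}."""
    h, w = len(img), len(img[0])
    parent, moments = {}, {}
    labels = [[None] * w for _ in range(h)]

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra
            # Additivity: the merged region's moments are the sums.
            moments[ra] = tuple(p + q for p, q in zip(moments[ra], moments[rb]))
            del moments[rb]

    next_label = 0
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            neighbors = []
            if x > 0 and labels[y][x - 1] is not None:
                neighbors.append(labels[y][x - 1])
            if y > 0 and labels[y - 1][x] is not None:
                neighbors.append(labels[y - 1][x])
            if neighbors:
                lab = neighbors[0]
            else:
                lab = next_label
                next_label += 1
                parent[lab] = lab
                moments[lab] = (0, 0, 0)
            labels[y][x] = lab
            r = find(lab)
            m00, m10, m01 = moments[r]
            moments[r] = (m00 + 1, m10 + x, m01 + y)   # add this pixel
            for nb in neighbors[1:]:
                union(lab, nb)
    return moments

# A U-shaped component: area 5, sum of x = 5, sum of y = 3.
m = label_with_moments([[1, 0, 1],
                        [1, 1, 1]])
```

    From each triple, the region's size is `m00` and its centroid is `(m10/m00, m01/m00)`, which is exactly the size/position attribute the abstract uses for multi-object extraction.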
    Download PDF (785K)
  • Xinyue ZHAO, Yutaka SATOH, Hidenori TAKAUJI, Shun'ichi KANEKO
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2012 Volume E95.D Issue 2 Pages 646-657
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    This paper presents a novel method for robust object tracking in video sequences using a hybrid feature-based observation model in a particle filtering framework. An ideal observation model should have both a high ability to accurately distinguish objects from the background and high reliability in identifying the detected objects. Traditional features are better at solving the former problem but weak at the latter. To overcome this, we adopt a robust and dynamic feature called Grayscale Arranging Pairs (GAP), which has high discriminative ability even under conditions of severe illumination variation and dynamic background elements. Together with the GAP feature, we also adopt the color histogram feature in order to take advantage of traditional features in resolving the first problem. At the same time, an efficient and simple integration method is used to combine the GAP feature with color information. Comparative experiments demonstrate that object tracking with our integrated features performs well even when objects move across complex backgrounds.
    Download PDF (17829K)
  • Ju Hwan LEE, Sung Yun PARK, Sung Jae KIM, Sung Min KIM
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2012 Volume E95.D Issue 2 Pages 658-667
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    The purpose of this study is to propose an advanced phase-based optical flow method with improved tracking accuracy for motion flow. The proposed method is mainly based on adaptive bilateral filtering (ABF) and Gabor-based spatial filtering. ABF preserves the maximum boundary information of the original image, while the spatial filtering accurately computes local variations. Our method tracks the optical flow in three stages. First, the input images are filtered using ABF and a spatial filter to remove noise while preserving the maximum contour information. The component velocities are then computed based on the phase gradient of each pixel. Second, irregular pixels are eliminated if their phase differences are not linear over the image frames. Last, the overall velocity is derived by integrating the component velocities of each pixel. To evaluate the tracking accuracy of the proposed method, we examined its performance on synthetic and realistic images for which the ground truth data were known. We observed that the proposed technique offers higher accuracy than existing optical flow methods.
    Download PDF (3357K)
  • Muhammad Rasyid AQMAR, Koichi SHINODA, Sadaoki FURUI
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2012 Volume E95.D Issue 2 Pages 668-676
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Variations in walking speed have a strong impact on gait-based person identification. We propose a method that is robust against walking-speed variations. It is based on a combination of cubic higher-order local auto-correlation (CHLAC), gait silhouette-based principal component analysis (GSP), and a statistical framework using hidden Markov models (HMMs). The CHLAC features capture the within-phase spatio-temporal characteristics of each individual, the GSP features retain more shape/phase information for better gait sequence alignment, and the HMMs classify the ID of each gait even when walking speed changes nonlinearly. We compared the performance of our method with other conventional methods using five different databases, SOTON, USF-NIST, CMU-MoBo, TokyoTech A and TokyoTech B. The proposed method was equal to or better than the others when the speed did not change greatly, and it was significantly better when the speed varied across and within a gait sequence.
    Download PDF (1971K)
  • Jihoon SON, Hyunsik CHOI, Yon Dohn CHUNG
    Type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2012 Volume E95.D Issue 2 Pages 677-680
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    MapReduce is a parallel processing framework for large-scale data. In the reduce phase, MapReduce employs a hash scheme to distribute data sharing the same key across cluster nodes. However, this approach is not robust against skewed data distributions. In this paper, we propose a skew-tolerant key distribution method for MapReduce. The proposed method assigns keys to cluster nodes so as to balance their workloads. We implemented the proposed method on Hadoop. Through experiments, we evaluate its performance in comparison with the conventional method.
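    Load-aware key assignment of this kind can be sketched with a greedy heuristic: process keys in decreasing order of load and always place the next key on the currently least-loaded node. This is an illustrative sketch (a longest-processing-time-style heuristic), not necessarily the paper's exact algorithm.

```python
import heapq

# Greedy skew-tolerant key assignment: unlike hash partitioning, a hot
# key cannot be co-located with other heavy keys by accident.

def assign_keys(key_loads, n_nodes):
    """key_loads: {key: record count}; returns {key: node_id}."""
    heap = [(0, node) for node in range(n_nodes)]   # (current load, node)
    heapq.heapify(heap)
    assignment = {}
    for key, load in sorted(key_loads.items(), key=lambda kv: -kv[1]):
        cur, node = heapq.heappop(heap)             # least-loaded node
        assignment[key] = node
        heapq.heappush(heap, (cur + load, node))
    return assignment

# One hot key ("a") and five light keys on two nodes: the hot key gets a
# node to itself, and the light keys share the other one.
loads = {"a": 90, "b": 10, "c": 10, "d": 10, "e": 10, "f": 10}
plan = assign_keys(loads, 2)
```

    A plain hash partitioner could map several of these keys onto the hot key's node, producing the straggler reducer that the proposed method is designed to avoid.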
    Download PDF (302K)
  • Yeo-Chan YOON, Myung-Gil JANG, Hyun-Ki KIM, So-Young PARK
    Type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2012 Volume E95.D Issue 2 Pages 681-685
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    In this paper, we propose a duplicate document detection model recognizing both partial duplicates and near duplicates. The proposed model can detect partial duplicates as well as exact duplicates by splitting a large document into many small sentence fingerprints. Furthermore, the proposed model can detect even near duplicates, the result of trivial revisions, by filtering the common words and reordering the word sequence.
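    The sentence-fingerprint idea, filtering common words and reordering the remaining ones so that trivial revisions hash identically, can be sketched as follows. The stopword list and hashing choice here are toy assumptions for illustration, not the paper's actual configuration.

```python
import hashlib

# Sentence-level fingerprints: drop common words, sort the rest, hash.
# Documents sharing any fingerprint contain (near-)duplicate sentences.

STOP = {"a", "an", "the", "is", "are", "of", "and", "to", "in"}

def fingerprints(text):
    prints = set()
    for sentence in text.lower().split("."):
        # Sorting makes word-order changes hash to the same value.
        words = sorted(w for w in sentence.split() if w and w not in STOP)
        if words:
            prints.add(hashlib.md5(" ".join(words).encode()).hexdigest())
    return prints

def overlap(a, b):
    """Jaccard overlap of the two documents' fingerprint sets."""
    fa, fb = fingerprints(a), fingerprints(b)
    return len(fa & fb) / max(len(fa | fb), 1)

doc1 = "The quick fox jumps over the lazy dog. Fingerprints detect duplicates."
doc2 = "Over the lazy dog the quick fox jumps. Something completely different here."
score = overlap(doc1, doc2)
```

    Because matching happens per sentence, a large document that copies only a few sentences still produces shared fingerprints, which is how partial duplicates are caught alongside exact and near duplicates.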
    Download PDF (218K)
  • Mingwu ZHANG, Fagen LI, Tsuyoshi TAKAGI
    Type: LETTER
    Subject area: Information Network
    2012 Volume E95.D Issue 2 Pages 686-689
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    A secret broadcasting scheme deals with the secure transmission of a message so that more than one privileged receiver can decrypt it. Jeong et al. proposed an efficient secret broadcast scheme using binding encryption to obtain the security properties of IND-CPA semantic security and decryption consistency. Thereafter, Wu et al. showed that Jeong et al.'s scheme achieves consistency only under a relatively weak condition and is also inefficient, and they constructed a more efficient scheme to improve the security. In this letter, we demonstrate that Wu et al.'s scheme likewise achieves only weak decryption consistency and cannot achieve decryption consistency if an adversary has the ability to tamper with the ciphertext. We also present an improved and more efficient secret broadcast scheme to remedy this weakness. The proposed scheme achieves decryption consistency and IND-CCA security, which protects against stronger adversaries and allows us to broadcast a digital message securely.
    Download PDF (85K)
  • Yaping HUANG, Siwei LUO, Shengchun WANG
    Type: LETTER
    Subject area: Pattern Recognition
    2012 Volume E95.D Issue 2 Pages 690-693
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    Railway inspection is important in railway maintenance. There are several tasks in railway inspection, e.g., defect detection and bolt detection. For these inspection tasks, detection of the rail surface is a fundamental and key issue: in order to detect rail defects and missing bolts, one must know the exact location of the rail surface. To deal with this problem, we propose an efficient Rail Surface Detection (RSD) algorithm that combines boundary and region information in a uniform formulation. Moreover, we re-evaluate the rail location by introducing top-down information, namely a bolt-location prior. The experimental results show that the proposed algorithm can detect the rail surface efficiently.
    Download PDF (4457K)
  • Yonggang HUANG, Jun ZHANG, Yongwang ZHAO, Dianfu MA
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2012 Volume E95.D Issue 2 Pages 694-698
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    We propose a novel re-ranking method for content-based medical image retrieval based on the idea of pseudo-relevance feedback (PRF). Since the highest ranked images in original retrieval results are not always relevant, a naive PRF based re-ranking approach is not capable of producing a satisfactory result. We employ a two-step approach to address this issue. In step 1, a Pearson's correlation coefficient based similarity update method is used to re-rank the high ranked images. In step 2, after estimating a relevance probability for each of the highest ranked images, a fuzzy SVM ensemble based approach is adopted to re-rank the images. The experiments demonstrate that the proposed method outperforms two other re-ranking methods.
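    Step 1's similarity update based on Pearson's correlation coefficient can be sketched by correlating the query's feature vector with each retrieved image's vector and re-sorting accordingly. The feature vectors and the plain sort below are toy assumptions for illustration; the paper's step 2 (the fuzzy SVM ensemble) is omitted.

```python
import math

# Re-rank retrieved images by the Pearson correlation between their
# feature vectors and the query's feature vector.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def rerank(query_vec, results):
    """results: list of (image_id, feature_vec); most correlated first."""
    return sorted(results, key=lambda r: pearson(query_vec, r[1]), reverse=True)

q = [1.0, 2.0, 3.0, 4.0]
ranked = rerank(q, [("b", [4.0, 3.0, 2.0, 1.0]),    # anti-correlated
                    ("a", [1.1, 2.2, 2.9, 4.3])])   # strongly correlated
```

    Unlike raw Euclidean distance, the correlation is invariant to affine scaling of the feature vector, which is one common reason to prefer it for similarity updates.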
    Download PDF (570K)
  • Chenbo SHI, Guijin WANG, Xiaokang PEI, Bei HE, Xinggang LIN
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2012 Volume E95.D Issue 2 Pages 699-702
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    This paper addresses stereo matching in scenarios with smooth regions and obviously slanted planes. We explore the flexible handling of color disparity, spatial relations, and the reliability of matching pixels in support windows. Building on these key ingredients, we present a robust stereo matching algorithm using local plane fitting with a Confidence-based Support Window (CSW). For each CSW, only pixels with high confidence are employed to estimate the optimal disparity plane. Because RANSAC has been shown to be robust in suppressing disturbances caused by outliers, we employ it to solve the local plane fitting problem. Compared with state-of-the-art local methods in the computer vision community, our approach achieves better performance and time efficiency on the Middlebury benchmark.
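    The local plane fitting step can be sketched as RANSAC over a disparity plane model d = a·x + b·y + c: repeatedly fit a plane to three random pixels and keep the plane with the most inliers. This is an illustrative sketch with assumed tolerances, not the paper's CSW-weighted implementation.

```python
import random

# RANSAC fit of a disparity plane d = a*x + b*y + c to window pixels,
# robust to gross matching outliers.

def fit_plane_ransac(points, iters=200, tol=0.5, seed=1):
    """points: list of (x, y, d); returns (a, b, c) with most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1, d1), (x2, y2, d2), (x3, y3, d3) = rng.sample(points, 3)
        det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
        if abs(det) < 1e-9:
            continue  # degenerate (collinear) sample
        a = ((d2 - d1) * (y3 - y1) - (d3 - d1) * (y2 - y1)) / det
        b = ((x2 - x1) * (d3 - d1) - (x3 - x1) * (d2 - d1)) / det
        c = d1 - a * x1 - b * y1
        inliers = sum(1 for x, y, d in points
                      if abs(a * x + b * y + c - d) < tol)
        if inliers > best_inliers:
            best, best_inliers = (a, b, c), inliers
    return best

# Slanted plane d = 0.5x + 0.25y + 2, plus two gross outliers.
pts = [(x, y, 0.5 * x + 0.25 * y + 2) for x in range(5) for y in range(5)]
pts += [(1, 1, 30.0), (3, 2, -20.0)]
a, b, c = fit_plane_ransac(pts)
```

    A least-squares fit over the same window would be pulled far off by the two outliers; RANSAC recovers the slanted plane, which is exactly why it suits windows that straddle unreliable matches.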
    Download PDF (659K)
  • Zhenfeng SHI, Dan LE, Liyang YU, Xiamu NIU
    Type: LETTER
    Subject area: Computer Graphics
    2012 Volume E95.D Issue 2 Pages 703-706
    Published: February 01, 2012
    Released: February 01, 2012
    JOURNALS FREE ACCESS
    3D mesh segmentation has become an important research field in computer graphics during the past few decades. Many geometry-based and semantics-oriented approaches to 3D mesh segmentation have been presented. However, only a few algorithms based on Markov Random Fields (MRF) have been presented for 3D object segmentation. In this letter, we define mesh segmentation as a labeling problem. Inspired by the capability of MRFs to combine the geometric and topological information of a 3D mesh, we propose a novel 3D mesh segmentation model based on MRFs and Graph Cuts. Experimental results show that our MRF-based scheme achieves effective segmentation.
    Download PDF (665K)