IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E94.D , Issue 8
Showing 1-23 articles out of 23 articles from the selected issue
Regular Section
  • Junqi ZHANG, Lina NI, Jing YAO, Wei WANG, Zheng TANG
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2011 Volume E94.D Issue 8 Pages 1527-1538
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    Kennedy proposed the bare bones particle swarm (BBPS), which eliminates the velocity formula and replaces it with a Gaussian sampling strategy that requires no parameter tuning. However, a delicate balance between exploitation and exploration is the key to the success of an optimizer. This paper first analyzes the sampling distribution in BBPS and, based on this analysis, proposes an adaptive BBPS inspired by the cloud model (ACM-BBPS). The cloud model adaptively produces a different standard deviation of the Gaussian sampling for each particle according to the evolutionary state of the swarm, which provides an adaptive balance between exploitation and exploration on different objective functions. Meanwhile, the diversity of the swarm is further enhanced by the randomness of the cloud model itself. Experimental results show that the proposed ACM-BBPS achieves faster convergence and more accurate solutions than five other contenders on twenty-five unimodal, basic multimodal, extended multimodal and hybrid composition benchmark functions. The diversity enhancement due to the randomness of the cloud model itself is also illustrated.
    Download PDF (2097K)
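As a point of reference for the sampling strategy described above, the following is a minimal sketch of one bare-bones PSO update step, in which each particle's new position is drawn from a Gaussian centred at the midpoint of its personal best and the global best, with standard deviation equal to their distance. The adaptive, cloud-model-driven standard deviation of ACM-BBPS is not reproduced here.

```python
import random

def bbps_step(positions, pbest, gbest):
    """One bare-bones PSO update: each particle samples its new position,
    dimension by dimension, from a Gaussian centred at the midpoint of its
    personal best and the global best, with standard deviation
    |pbest - gbest| (no velocity term, no tunable parameters)."""
    new_positions = []
    for i, x in enumerate(positions):
        dims = []
        for d in range(len(x)):
            mu = 0.5 * (pbest[i][d] + gbest[d])
            sigma = abs(pbest[i][d] - gbest[d])
            dims.append(random.gauss(mu, sigma))
        new_positions.append(dims)
    return new_positions
```

Note that a particle whose personal best coincides with the global best has zero sampling spread, which is one motivation for adapting the standard deviation as the paper does.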
  • Jiongyao YE, Yu WAN, Takahiro WATANABE
    Type: PAPER
    Subject area: Computer System
    2011 Volume E94.D Issue 8 Pages 1539-1546
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    Modern microprocessors employ caches to bridge the great speed gap between main memory and the central processing unit, but these caches consume a larger and larger proportion of the total power. In fact, many values in a processor rarely need the full-bit dynamic range supported by the cache; such narrow-width values account for a large portion of cache accesses and storage. In view of these observations, this paper proposes an Adaptive Various-width Data Cache (AVDC), which exploits the prevalence of narrow-width values stored in the cache to reduce its power consumption. In AVDC, the data storage unit consists of three sub-arrays that store data of different widths. When the high-order sub-arrays are not used, they are shut down through a modified high-bit SRAM cell to save both dynamic and static power. The main advantages of AVDC are: 1) both dynamic and static power consumption are reduced; 2) low power consumption is achieved by modifying only the data storage unit, with little additional hardware; 3) the redundancy of narrow-width values is exploited rather than value compression, so cache access latency does not increase. Experimental results using SPEC 2000 benchmarks show that AVDC reduces power consumption by 34.83% for dynamic power and by 42.87% for static power on average, compared with a cache without AVDC.
    Download PDF (761K)
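To illustrate the notion of a narrow-width value that AVDC exploits, the sketch below classifies a signed integer by the narrowest bit-width that can hold it; the widths (8, 16, 32) are hypothetical sub-array sizes for illustration, not the paper's actual design parameters.

```python
def required_width(value, widths=(8, 16, 32)):
    """Return the narrowest width (in bits) from `widths` that can hold
    `value` as a two's-complement field. A cache like AVDC could use such
    a classification to gate off the unused high-order sub-arrays."""
    for w in widths:
        mask = (1 << w) - 1
        low = value & mask          # keep the low w bits
        if low >= (1 << (w - 1)):   # sign-extend the w-bit field
            low -= (1 << w)
        if low == value:            # round-trips => value fits in w bits
            return w
    raise ValueError("value wider than the largest available width")
```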
  • Je-Hoon LEE, Young-Jun SONG, Sang-Choon KIM
    Type: PAPER
    Subject area: Computer System
    2011 Volume E94.D Issue 8 Pages 1547-1556
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    This paper presents a self-timed SRAM system employing a new memory segmentation technique that divides the memory cell array into multiple regions based on latency rather than on the size of the memory cell array. This is the main difference between the proposed segmentation technique and the conventional method, and it provides a more efficient way to reduce memory access time. We also propose an architecture of dummy cells and a completion signal generator for the handshaking protocol. We synthesized an 8MB SRAM system consisting of sixteen 512KB memory blocks using the Hynix 0.35-µm CMOS process. Our implementation shows 15% higher performance than the other systems, and the implementation results show a trade-off between area overhead and performance as the number of memory segments varies.
    Download PDF (1894K)
  • Ming-Der SHIEH, Yung-Kuei LU
    Type: PAPER
    Subject area: Computer System
    2011 Volume E94.D Issue 8 Pages 1557-1564
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    A low-complexity Reed-Solomon (RS) decoder design based on the modified Euclidean (ME) algorithm proposed by Truong is presented in this paper. Low complexity is achieved by reformulating Truong's ME algorithm using the proposed polynomial manipulation scheme so that a more compact polynomial representation can be derived. Together with the developed folding scheme and simplified boundary cell, the resulting design effectively reduces the hardware complexity while meeting the throughput requirements of optical communication systems. Experimental results demonstrate that the developed RS(255, 239) decoder, implemented in the TSMC 0.18µm process, can operate at up to 425MHz and achieve a throughput rate of 3.4Gbps with a total gate count of 11,759. Compared to related works, the proposed decoder has the lowest area requirement and the smallest area-time complexity.
    Download PDF (691K)
  • Zhao LEI, Hui XU, Daisuke IKEBUCHI, Tetsuya SUNATA, Mitaro NAMIKI, Hid ...
    Type: PAPER
    Subject area: Computer System
    2011 Volume E94.D Issue 8 Pages 1565-1574
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    This paper presents a leakage-efficient instruction TLB (Translation Lookaside Buffer) design for embedded processors. The key observation is that when programs enter a physical page, the following instructions tend to be fetched from the same page for a rather long time. Thus, by employing a small storage component which holds the recent address-translation information, the TLB access frequency can be drastically decreased, and the instruction TLB can be turned into the low-leakage mode with the dual voltage supply technique. Based on such a design philosophy, three leakage control policies are proposed to maximize the leakage reduction efficiency. Evaluation results with eight MiBench programs show that the proposed design can reduce the leakage power of the instruction TLB by 50% on average, with only 0.01% performance degradation.
    Download PDF (1363K)
  • Woosung JUNG, Eunjoo LEE, Chisu WU
    Type: PAPER
    Subject area: Software Engineering
    2011 Volume E94.D Issue 8 Pages 1575-1589
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    Change history across project revisions provides helpful information for handling bugs. Existing studies on bug prediction mainly focus on resulting bug patterns rather than on change patterns. When a code hunk is copied into several files, the set of original and copied hunks often needs to be maintained consistently. We assume it is a normal state when all of the hunks survive or die in a specific revision; when a partial change occurs on only some of the duplicated hunks, they are regarded as suspicious. Based on these assumptions, suspicious cases can be predicted and the project's developers can be alerted. In this paper, we propose a practical approach to detecting various change smells based on revision history and code hunk tracking. Change smells are suspicious change patterns that can lead to potential bugs, such as the partial death of hunks, a missed refactoring or fix, or a backward or late change. To detect these change smells, three kinds of hunks - added, deleted, and modified - are tracked and analyzed by an automated tool. Several visualized graphs for each type are suggested to improve the applicability of the proposed technique. We also conducted experiments on large-scale open-source projects; the case study results show the applicability of the proposed approach.
    Download PDF (3846K)
  • Shigeaki TAGASHIRA, Yutaka KAMINISHI, Yutaka ARAKAWA, Teruaki KITASUKA ...
    Type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2011 Volume E94.D Issue 8 Pages 1590-1601
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    Data caching is widely known as an effective power-saving technique in which mobile devices use local caches instead of the original data placed on a server, in order to reduce the power consumed by network accesses. In such data caching, a cache invalidation mechanism is important to prevent devices from unintentionally accessing invalid data. In this paper, we propose a broadcast-based protocol for cache invalidation in a location-aware system. The proposed protocol is designed to reduce the access time required to obtain the necessary invalidation reports over broadcast media and to avoid client-side sleep fragmentation while retrieving the reports. In the proposed protocol, a Bloom filter is used as the data structure of an invalidation report, in order to probabilistically check the invalidation of caches. Furthermore, we propose three broadcast scheduling methods intended to achieve flexible broadcasting structured by the Bloom filter: the fragmentation avoidance scheduling method (FASM), the metrics balancing scheduling method (MBSM), and the minimizing access time scheduling method (MASM). The broadcast schedule is arranged for consecutive accesses to geographically neighboring invalidation reports. The effectiveness of the proposed methods is evaluated by simulation. The results indicate that the MBSM and MASM produce high-performance schedules: compared to the FASM, the MBSM reduces the access time by 34% while the number of fragmentations in the resulting schedule increases by 40%, and the MASM reduces the access time by 40% along with an 85% increase in the number of fragmentations.
    Download PDF (1549K)
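A minimal Bloom filter, of the kind the protocol uses for invalidation reports, can be sketched as follows: a client tests whether one of its cached items appears in the report, accepting a small false-positive rate in exchange for a compact report. The sizes m and k and the SHA-256-based hashing are illustrative choices, not those of the paper.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: membership tests may yield false positives
    (an item reported present that was never added) but never false
    negatives, which is what makes it safe for cache invalidation --
    a cache entry is only ever invalidated unnecessarily, never kept
    when it should have been dropped."""

    def __init__(self, m=256, k=3):
        self.m, self.k = m, k
        self.bits = 0                       # m-bit array packed in an int

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def may_contain(self, item):
        return all((self.bits >> pos) & 1 for pos in self._positions(item))
```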
  • Takahiro ARIYOSHI, Satoshi FUJITA
    Type: PAPER
    Subject area: Information Network
    2011 Volume E94.D Issue 8 Pages 1602-1609
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    In this paper, we study the efficient processing of conjunctive queries in Peer-to-Peer systems based on Distributed Hash Tables (P2P DHTs, for short). The basic idea of our approach is to cache the search results for queries submitted in the past and to use them to improve the performance of subsequent query processing. More concretely, we propose adopting Bloom filters as the implementation of such a result cache, rather than the list of items used in many conventional schemes. With this approach, the cache size for each conjunctive query becomes as small as the size of a single file index. The performance of the proposed scheme is evaluated by simulation. The results indicate that the proposed scheme is particularly effective when the size of available memory in each peer is bounded by a small value; when the number of peers is 100, it reduces the amount of data transmitted by previous schemes by 75%.
    Download PDF (1043K)
  • Katsuyuki HAGIWARA
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2011 Volume E94.D Issue 8 Pages 1610-1619
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    In this paper, we consider a nonparametric regression problem using a learning machine defined by a weighted sum of fixed basis functions, where the number of basis functions, or equivalently the number of weights, is equal to the number of training data. For this learning machine, we propose a training scheme based on orthogonalization and thresholding. Under the scheme, the vectors of basis function outputs are orthogonalized and the coefficients of the orthogonalized vectors are estimated instead of the weights. A coefficient is set to zero if it is less than a predetermined threshold level assigned component-wise to each coefficient. We then obtain the resulting weight vector by transforming the thresholded coefficients. In this training scheme, we propose asymptotically reasonable threshold levels to distinguish contributing components from unnecessary ones. To see how this works in a simple case, we derive an upper bound for the generalization error of the training scheme with the given threshold levels. It tells us that the increase in the generalization error is O(log n/n) when there is a sparse representation of a target function in an orthogonal domain. In implementing the training scheme, eigen-decomposition or the Gram-Schmidt procedure is employed for orthogonalization, and the corresponding training methods are referred to as OHTED and OHTGS, respectively. Furthermore, modified versions of OHTED and OHTGS, called OHTED2 and OHTGS2 respectively, are proposed to reduce estimation bias. On real benchmark datasets, OHTED2 and OHTGS2 are found to exhibit relatively good generalization performance. In addition, OHTGS2 is found to obtain a sparse representation of the target function in terms of the basis functions.
    Download PDF (186K)
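The Gram-Schmidt variant of the scheme (OHTGS) can be illustrated roughly as follows: orthogonalize the basis output vectors, estimate each coefficient by projecting the targets onto the orthonormalized directions, and zero out coefficients below a threshold. The single fixed threshold tau here is a placeholder; the paper assigns asymptotically derived levels component-wise.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthogonalize_and_threshold(basis_outputs, y, tau):
    """Sketch of the orthogonalize-and-threshold idea: Gram-Schmidt the
    basis output vectors, project y onto each orthonormal direction to get
    a coefficient, zero coefficients with magnitude below tau, and return
    the thresholded coefficients plus the fitted values they reproduce."""
    Q = []
    for b in basis_outputs:              # classical Gram-Schmidt
        v = list(b)
        for q in Q:
            c = dot(v, q)
            v = [vi - c * qi for vi, qi in zip(v, q)]
        n = math.sqrt(dot(v, v))
        Q.append([vi / n for vi in v])
    coeffs = [dot(q, y) for q in Q]
    coeffs = [c if abs(c) > tau else 0.0 for c in coeffs]  # hard threshold
    fit = [sum(c * q[i] for c, q in zip(coeffs, Q)) for i in range(len(y))]
    return coeffs, fit
```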
  • Harksu KIM, Dongtaek KIM, Jaeeung LEE, Youngho CHAI
    Type: PAPER
    Subject area: Human-computer Interaction
    2011 Volume E94.D Issue 8 Pages 1620-1627
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    This paper presents a grid-based, real-time surface modeling algorithm that can generate a precise 3D model by considering the user's intention during the course of spatial input. In order to create a model that corresponds to the user's input data, plausible candidate wand traversal patterns over grid edges are defined by considering the sequential and directional characteristics of the wand input. The continuity of the connected polygonal surfaces, including octree space partitioning, is guaranteed without an extra crack-patching algorithm or pre-defined patterns. The proposed system is shown to be a suitable and effective surface generation tool for a spatial sketching system; such unusual input intentions of a 3D spatial sketching system cannot be handled by the conventional Marching Cubes algorithm.
    Download PDF (1127K)
  • Kyungkoo JUN
    Type: PAPER
    Subject area: Human-computer Interaction
    2011 Volume E94.D Issue 8 Pages 1628-1635
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    This paper presents the development of a sound-specific vibration interface and the results of evaluating it by playing three commercial games. The proposed interface addresses the pitfalls of existing frequency-based vibration interfaces such as vibrating headsets, mice, and joysticks. Those interfaces can degrade the user experience by generating incessant vibrations, because they vibrate in response to any sound in certain frequency bands; the proposed interface, which responds only to target sounds, improves the user experience. The hardware and software parts of the interface are described: the structure and implementation of a wrist pad that delivers vibration are discussed, and we explain a sound-matching algorithm that extracts sound characteristics and a GUI-based pattern editor that helps users design vibration patterns. The performance evaluation shows that the success ratio of the sound matching is over 90% at a volume of 20dB and that the delay time is around 400msec. In a survey of user experience, users rated the interface as more than four times as effective at improving the realism of game play as playing without any vibration interface, and twice as effective as the frequency-based ones.
    Download PDF (1140K)
  • Zhengming MA, Jing CHEN, Shuaibin LIAN
    Type: PAPER
    Subject area: Pattern Recognition
    2011 Volume E94.D Issue 8 Pages 1636-1640
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    Locally linear embedding (LLE) is a well-known method for nonlinear dimensionality reduction. The mathematical proof and experimental results presented in this paper show that the neighborhood sizes in LLE must be smaller than the dimension of the input data space; otherwise, LLE degenerates from a nonlinear method for dimensionality reduction into a linear one. Furthermore, when the neighborhood sizes are larger than the dimension of the input data space, the solutions to LLE are not unique. In these cases, the addition of some regularization is often proposed. The experimental results presented in this paper show that this regularization is not robust: regularization parameters that are too large or too small cannot unwrap the S-curve, and although a moderate regularization parameter can unwrap it, the relative distances in the input data are distorted in the unwrapping. Therefore, in order to let LLE fully exploit its advantage in nonlinear dimensionality reduction and to avoid multiple solutions, the best approach is to ensure that the neighborhood sizes are smaller than the dimension of the input data space.
    Download PDF (249K)
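The degeneracy discussed above can be seen directly in the local Gram matrix that LLE inverts to obtain its reconstruction weights: with k neighbors in a d-dimensional input space, the matrix has rank at most d, so it is singular whenever k > d and the weights are not unique. A small numeric sketch (k = 3 neighbors in d = 2 dimensions):

```python
def lle_local_gram(x, neighbors):
    """Local Gram matrix C[i][j] = (x - n_i) . (x - n_j) used to solve for
    the LLE reconstruction weights of point x. Its rank is at most the
    input dimension d, so with k > d neighbors it is singular."""
    diffs = [[xi - ni for xi, ni in zip(x, n)] for n in neighbors]
    return [[sum(a * b for a, b in zip(di, dj)) for dj in diffs]
            for di in diffs]

def det3(m):
    """Determinant of a 3x3 matrix (zero here signals singularity)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
```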
  • Chan-Hee HAN, Si-Woong LEE, Hamid GHOLAMHOSSEINI, Yun-Ho KO
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2011 Volume E94.D Issue 8 Pages 1641-1652
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    In this paper, side information refinement methods for a Wyner-Ziv video codec are presented. In the proposed method, each block of a Wyner-Ziv frame is assigned to one of a predefined number of groups, and these groups are interleaved for coding. The side information for the first group is generated by motion-compensated temporal interpolation using the adjacent key frames only. The side information for the remaining groups is then gradually refined using the already decoded signal of the current Wyner-Ziv frame. Based on this concept, two progressive side information refinement methods are proposed: the band-wise side information refinement (BW-SIR) method, which is based on transform-domain interleaving, and the field-wise side information refinement (FW-SIR) method, which is based on pixel-domain interleaving. Simulation results show that the proposed methods improve the quality of the side information and the rate-distortion performance compared to conventional side information refinement methods.
    Download PDF (3118K)
  • Xiaocong JIN, Jun SUN, Yiqing HUANG, Jia SU, Takeshi IKENAGA
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2011 Volume E94.D Issue 8 Pages 1653-1662
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    Different encoding modes for variable block sizes are available in the H.264/AVC standard in order to offer better coding quality. However, this also incurs a huge computation time due to the exhaustive check of all modes. In this paper, a fast spatial DIRECT mode decision method is proposed for the H.264/AVC profiles that support B frame encoding (main profile, high profile, etc.). Statistical analysis of multiple video sequences is carried out, and a strong relationship in mode selection and rate-distortion (RD) cost between the current DIRECT macroblock (MB) and the co-located MBs is observed. By checking the mode condition, a predicted RD cost threshold, and a dynamic parameter update model, the complex mode decision process can be terminated at an early stage, even for small QP cases. Simulation results demonstrate that the proposed method achieves much better performance than the original exhaustive rate-distortion optimization (RDO) based mode decision algorithm, reducing encoding time by up to 56.8% for the IBPBP picture group and by up to 67.8% for the IBBPBBP picture group, while incurring only a negligible bit increment and quality degradation.
    Download PDF (1471K)
  • Zhuo YANG, Sei-ichiro KAMATA
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2011 Volume E94.D Issue 8 Pages 1663-1670
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    The Fourier transform is a significant tool in image processing and pattern recognition. By introducing hypercomplex numbers, the hypercomplex Fourier transform treats a signal as a vector field and generalizes the conventional Fourier transform. Inspired by this, hypercomplex polar Fourier analysis, which extends conventional polar Fourier analysis, is proposed in this paper. The proposed method can handle signals represented by hypercomplex numbers, such as color images. Hypercomplex polar Fourier analysis is reversible, which means it can be used to reconstruct images, and the hypercomplex polar Fourier descriptor has a rotation invariance property that can be used for feature extraction. Due to the noncommutativity of quaternion multiplication, both left-side and right-side hypercomplex polar Fourier analysis are discussed, and their relationship is established in this paper. Experimental results on image reconstruction, rotation invariance, a color plate test, and image retrieval are given to illustrate the usefulness of the proposed method as an image analysis tool.
    Download PDF (1430K)
  • Luis Ricardo SAPAICO, Hamid LAGA, Masayuki NAKAJIMA
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2011 Volume E94.D Issue 8 Pages 1671-1682
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    We propose a system that, using video information, segments the mouth region from a face image and then detects the protrusion of the tongue from inside the oral cavity. Initially, under the assumption that the mouth is closed, we detect both mouth corners. We use a set of specifically oriented Gabor filters to enhance the horizontal features corresponding to the shadow between the upper and lower lips. After applying the Hough line detector, the extremes of the detected line are regarded as the mouth corners; the detection rate for mouth corner localization is 85.33%. These points are then input to a mouth appearance model which fits a mouth contour to the image, and by segmenting its bounding box we obtain a mouth template. Next, considering the symmetric nature of the mouth, we divide the template into right and left halves, so our system makes use of three templates. We track the mouth in the following frames using normalized correlation for mouth template matching. Changes in the mouth region are directly reflected in the correlation value, i.e., the appearance of the tongue at the surface of the mouth causes the correlation coefficient to decrease over time; these coefficients are used for detecting tongue protrusion. The right and left tongue protrusion positions are detected by analyzing similarity changes between the right and left half-mouth templates and the currently tracked ones. Detection rates under the default parameters of our system are 90.20% for tongue protrusion regardless of position, and 84.78% for the right and left tongue protrusion positions. Our results demonstrate the feasibility of real-time tongue protrusion detection in vision-based systems and motivate further investigation of this new modality in human-computer communication.
    Download PDF (1300K)
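The template-matching step described above is ordinary normalized cross-correlation; as a sketch of the cue, the score below is 1.0 for an unchanged appearance and drops as the patch diverges from the template. Patches here are flattened grey-level lists for simplicity, whereas the real system operates on tracked image windows.

```python
import math

def ncc(template, patch):
    """Normalized cross-correlation between two equal-size patches, each a
    flat list of grey levels. Returns 1.0 for identical appearance (up to
    brightness/contrast); a sustained drop in this score over time is the
    cue used to flag the tongue appearing in the mouth region."""
    mt = sum(template) / len(template)
    mp = sum(patch) / len(patch)
    num = sum((t - mt) * (p - mp) for t, p in zip(template, patch))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    dp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    return num / (dt * dp)
```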
  • Tetsuji OGAWA, Kazuya UEKI, Tetsunori KOBAYASHI
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2011 Volume E94.D Issue 8 Pages 1683-1689
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    We propose a novel method of supervised feature projection called class-distance-based discriminant analysis (CDDA), which is suitable for automatic age estimation (AAE) from facial images. Most methods of supervised feature projection, e.g., Fisher discriminant analysis (FDA) and local Fisher discriminant analysis (LFDA), focus only on whether two samples belong to the same class (i.e., the same age in AAE) or not. However, even when an AAE system errs, i.e., the estimated age is not consistent with the correct age, smaller errors are better. To capture this characteristic of AAE, CDDA determines between-class separability according to the class distance (i.e., the difference in ages): two samples with similar ages are constrained to be close, and those with widely spaced ages are constrained to be far apart. Furthermore, we propose an extension of CDDA called local CDDA (LCDDA), which aims at handling multimodality in the samples. Experimental results reveal that CDDA and LCDDA can extract more discriminative features than FDA and LFDA.
    Download PDF (395K)
  • Yu WANG, Jien KATO
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2011 Volume E94.D Issue 8 Pages 1690-1699
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    In this paper, we address the pedestrian detection task in outdoor scenes. Because of the complexity of such scenes, commonly used gradient-feature-based detectors do not work well on them. We propose using sparse 3D depth information as an additional cue for the detection task, in order to achieve a fast improvement in performance. Our method uses a probabilistic model to integrate image-feature-based classification with sparse depth estimation. Benefiting from the depth estimates, we map the prior distribution of humans' actual heights onto the image and probabilistically update the image-feature-based classification result. We make two contributions in this paper: 1) a simplified graphical model that can efficiently integrate the depth cue into detection, and 2) a sparse depth estimation method that provides fast and reliable estimates of depth information. An experiment shows that our method provides a promising enhancement over the baseline detector with minimal additional time.
    Download PDF (1894K)
  • Chang LIU, Guijin WANG, Wenxin NING, Xinggang LIN
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2011 Volume E94.D Issue 8 Pages 1700-1707
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    A novel approach for detecting anomalies in visual surveillance systems is proposed in this paper. It is composed of three parts: (a) a dense motion field and motion statistics method, (b) motion-directional PCA for feature dimensionality reduction, and (c) an improved one-class SVM for one-class classification. Experiments demonstrate the effectiveness of the proposed algorithm in detecting abnormal events in surveillance video while keeping a low false alarm rate. Our scheme works well in complicated situations that common tracking or detection modules cannot handle.
    Download PDF (932K)
  • Seung Jun BAEK, Daehee KIM, Seong-Jun OH, Jong-Arm JUN
    Type: LETTER
    Subject area: Information Network
    2011 Volume E94.D Issue 8 Pages 1708-1711
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    We consider a queuing model with applications to electric vehicle (EV) charging systems in smart grids. We adopt a scheme where an Electric Service Company (ESCo) broadcasts a one bit signal to EVs, possibly indicating ‘on-peak’ periods during which electricity cost is high. EVs randomly suspend/resume charging based on the signal. To model the dynamics of EVs we propose an M/M/∞ queue with random interruptions, and analyze the dynamics using time-scale decomposition. There exists a trade-off: one may postpone charging activity to ‘off-peak’ periods during which electricity cost is cheaper, however this incurs extra delay in completion of charging. Using our model we characterize achievable trade-offs between the mean cost and delay perceived by users. Next we consider a scenario where EVs respond to the signal based on the individual loads. Simulation results show that peak electricity demand can be reduced if EVs carrying higher loads are less sensitive to the signal.
    Download PDF (256K)
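As intuition for the model above, the following naive time-stepped simulation tracks the number of EVs in the system when, during on-peak periods signalled by the ESCo, each EV independently suspends charging with some probability. All parameter values and the on-peak schedule are assumed for illustration, and this crude discretization stands in for, rather than reproduces, the paper's time-scale-decomposition analysis.

```python
import random

def simulate_ev(lam, mu, p_suspend, on_peak, horizon, dt=0.01, seed=1):
    """Time-stepped sketch of an M/M/infinity queue with random
    interruptions: EVs arrive at rate lam, each charges at rate mu, and
    while on_peak(t) is true each EV independently suspends (makes no
    charging progress) with probability p_suspend in each step. Returns
    the time-averaged number of EVs in the system, split by peak state."""
    rng = random.Random(seed)
    n = 0                                            # EVs in the system
    stats = {True: [0.0, 0.0], False: [0.0, 0.0]}    # peak? -> [time, n*dt]
    t = 0.0
    while t < horizon:
        peak = on_peak(t)
        if rng.random() < lam * dt:                  # arrival (first order in dt)
            n += 1
        # EVs actually charging in this step
        active = sum(rng.random() > p_suspend for _ in range(n)) if peak else n
        if active and rng.random() < mu * active * dt:
            n -= 1                                   # one charging completion
        stats[peak][0] += dt
        stats[peak][1] += n * dt
        t += dt
    return {k: v[1] / v[0] for k, v in stats.items() if v[0] > 0}
```

Comparing the two averages for different p_suspend values exposes the cost-delay trade-off the letter characterizes: heavier suspension shifts charging out of peak periods at the price of longer completion times.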
  • Jinho AHN
    Type: LETTER
    Subject area: Dependable Computing
    2011 Volume E94.D Issue 8 Pages 1712-1715
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    Sender-based message logging (SBML) with checkpointing has a well-known beneficial feature: it lowers the high failure-free overhead of synchronous logging by logging volatilely in the sender's memory. This feature has encouraged its application in many distributed systems as a low-cost, transparent rollback recovery technique. However, the original SBML recovery algorithm may fail to progress in some transient communication error cases. This paper proposes a consistent recovery algorithm that solves this problem by piggybacking small log information for unstable received messages on each acknowledgement message that returns the receive sequence number assigned to a message by its receiver. Our algorithm also enables messages that are scheduled to be sent, but delayed because of preceding unstable messages, to be transmitted much earlier than in the existing algorithms.
    Download PDF (863K)
  • Hiroki SAITO, Takashi WATANABE
    Type: LETTER
    Subject area: Rehabilitation Engineering and Assistive Technology
    2011 Volume E94.D Issue 8 Pages 1716-1720
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    The aim of this study is to realize a simplified gait analysis system using wearable sensors. In this paper, a joint angle measurement method that uses a Kalman filter to correct gyroscope signals with accelerometer signals was examined in measuring hip, knee, and ankle joint angles with a wireless wearable sensor system, in which the sensors were attached to the body without exact positioning. The lower limb joint angles of three healthy subjects were measured during gait with the developed sensor system and a 3D motion measurement system in order to evaluate the measurement accuracy. Then, 10m walking measurements were performed at different walking speeds with a healthy subject in order to assess the usefulness of the system as a simplified gait analysis system. The joint angles were measured with reasonable accuracy, and the system showed joint angle changes with walking speed that were similar to those in a previous report. It will be necessary to examine the influence of sensor attachment position and method for more stable measurement, and also to study other parameters for gait evaluation.
    Download PDF (457K)
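The sensor fusion described above can be sketched as a one-dimensional Kalman filter that integrates the gyroscope rate in the prediction step and corrects the resulting drift with the accelerometer-derived angle in the update step. The noise variances q and r below are assumed values, not those of the paper, and the real system estimates three joint angles per leg.

```python
def kalman_angle(gyro_rates, accel_angles, dt=0.01, q=0.01, r=0.1):
    """1-D Kalman filter sketch: predict the joint angle by integrating
    the gyroscope angular rate, then correct it with the (noisy but
    drift-free) accelerometer-derived angle. q is the assumed process
    noise variance, r the assumed measurement noise variance."""
    angle, p = accel_angles[0], 1.0
    out = []
    for w, z in zip(gyro_rates, accel_angles):
        # predict: integrate the gyro rate over one step
        angle += w * dt
        p += q
        # update: blend in the accelerometer angle via the Kalman gain
        k = p / (p + r)
        angle += k * (z - angle)
        p *= (1.0 - k)
        out.append(angle)
    return out
```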
  • Chang LIU, Guijin WANG, Chunxiao LIU, Xinggang LIN
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2011 Volume E94.D Issue 8 Pages 1721-1724
    Published: August 01, 2011
    Released: August 01, 2011
    JOURNALS FREE ACCESS
    Boosting over weak classifiers is widely used in pedestrian detection. Because the number of candidate weak classifiers is large, researchers usually sample the weak classifiers before training; this sampling makes it harder for the boosting process to reach the fixed target. In this paper, we propose a partial-derivative-guided weak classifier mining method that can be used in conjunction with a boosting algorithm. The mining method reduces the performance degradation caused by sampling, with the same effect as testing more weak classifiers while using acceptable time. Experiments demonstrate that our algorithm runs faster than the algorithm of [1] in both training and testing, without any decrease in performance. The proposed algorithm is easily extended to any other boosting algorithm that uses a window-scanning style and HOG-like features.
    Download PDF (364K)