-
Chien-Liang CHEN, Suey WANG, Hsu-Chun YEN
Article type: PAPER
Subject area: Algorithm Theory
2009 Volume E92.D Issue 3 Pages 377-388
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
Communication-free Petri nets provide a net semantics for Basic Parallel Processes, which form a subclass of Milner's Calculus of Communicating Systems (CCS), a process calculus for the description and algebraic manipulation of concurrent communicating systems. It is known that the reachability problem for communication-free Petri nets is NP-complete. Lacking a synchronization mechanism, the expressive power of communication-free Petri nets is somewhat limited. It is therefore important to see whether the power of communication-free Petri nets can be enhanced without sacrificing their analytical capabilities. As a first step towards this line of research, our main concern in this paper is to investigate, from the decidability/complexity viewpoint, the reachability problem for a number of variants of communication-free Petri nets, including communication-free Petri nets augmented with ‘static priorities,’ ‘dynamic priorities,’ ‘states,’ ‘inhibitor arcs,’ and ‘timing constraints.’
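As background for the net class discussed above: in a communication-free Petri net every transition has exactly one input place, from which it consumes a single token. A minimal sketch of this firing rule, with an illustrative net of our own (not taken from the paper):

```python
# Hypothetical sketch: firing semantics of a communication-free Petri net,
# where every transition has exactly one input place consuming one token.

def fire(marking, transition):
    """Fire a transition if enabled; return the new marking, else None."""
    src, outputs = transition  # single input place; dict place -> tokens produced
    if marking.get(src, 0) < 1:
        return None  # not enabled: the one input place has no token
    new = dict(marking)
    new[src] -= 1
    for place, tokens in outputs.items():
        new[place] = new.get(place, 0) + tokens
    return new

# Example: t1 consumes a token from p1 and puts one token each in p2 and p3.
t1 = ("p1", {"p2": 1, "p3": 1})
m = fire({"p1": 1}, t1)
print(m)  # {'p1': 0, 'p2': 1, 'p3': 1}
```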
-
Cherng CHIN, Tien-Hsiung WENG, Lih-Hsing HSU, Shang-Chia CHIOU
Article type: PAPER
Subject area: Algorithm Theory
2009 Volume E92.D Issue 3 Pages 389-400
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
Let u and v be any two distinct vertices of an undirected graph G, which is k-connected. For 1 ≤ w ≤ k, a w-container C(u, v) of a k-connected graph G is a set of w disjoint paths joining u and v. A w-container C(u, v) of G is a w*-container if it contains all the vertices of G. A graph G is w*-connected if there exists a w*-container between any two distinct vertices. Let κ(G) be the connectivity of G. A graph G is super spanning connected if G is i*-connected for 1 ≤ i ≤ κ(G). In this paper, we prove that the n-dimensional burnt pancake graph Bn is super spanning connected if and only if n ≠ 2.
-
Takaaki GOTO, Kenji RUISE, Takeo YAKU, Kensei TSUCHIDA
Article type: PAPER
Subject area: Software Engineering
2009 Volume E92.D Issue 3 Pages 401-412
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
In software design and development, program diagrams are often used for effective visualization, and many kinds of program diagrams have been proposed and used. To process such diagrams automatically and efficiently, their structure needs to be formalized. We aim to construct a diagram processing system with an efficient parser for our program diagram notation Hichart. In this paper, we give a precedence graph grammar for Hichart that admits linear-time parsing. We also describe a parsing method and a processing system, incorporating the Hichart graphical editor, that are based on the precedence graph grammar.
-
Deok-Hwan KIM
Article type: PAPER
Subject area: Contents Technology and Web Information Systems
2009 Volume E92.D Issue 3 Pages 413-421
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for a desired image. Moreover, even though new images are profitable to service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper we propose a feature-based collaborative filtering (FBCF) method that reflects the user's most recent preferences by representing his or her purchase sequence in the visual feature space. The proposed approach represents the images purchased in the past as feature clusters in a multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between users' feature clusters. Various experiments using real image data demonstrate that the proposed approach provides higher-quality recommendations and better performance than typical collaborative filtering and content-based filtering techniques.
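The neighbor-selection step described above can be pictured with a toy inter-cluster distance. The stand-in below (average distance between cluster centroids) is our own simplification for illustration, not the paper's actual distance function:

```python
# Illustrative sketch: users are represented by clusters of purchased-image
# feature vectors; neighbors are ranked by an inter-cluster distance.
import math

def centroid(points):
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def inter_cluster_distance(clusters_a, clusters_b):
    """Average centroid-to-centroid distance between two users' clusters."""
    total, count = 0.0, 0
    for ca in clusters_a:
        for cb in clusters_b:
            total += math.dist(centroid(ca), centroid(cb))
            count += 1
    return total / count

user1 = [[(0.0, 0.0), (0.2, 0.0)]]           # one cluster of two images
user2 = [[(3.0, 4.0)]]                       # one single-image cluster
print(round(inter_cluster_distance(user1, user2), 2))  # 4.94
```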
-
Shu-Ling SHIEH, I-En LIAO, Kuo-Feng HWANG, Heng-Yu CHEN
Article type: PAPER
Subject area: Data Mining
2009 Volume E92.D Issue 3 Pages 422-432
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
This paper proposes an efficient self-organizing map algorithm based on reference points and filters. A strategy called Reference Point SOM (RPSOM) is proposed to improve SOM execution time by means of filtering with two thresholds, T1 and T2. One threshold, T1, defines the search boundary used to find the Best-Matching Unit (BMU) with respect to input vectors; the other, T2, is the search boundary within which the BMU finds its neighbors. Compared to the algorithm proposed by Su et al. [16], the proposed algorithm reduces the time complexity of finding the initial neurons from O(n²) to O(n). RPSOM dramatically reduces the running time, especially on large data sets. From the experimental results, we find that it is better to construct a good initial map and then use unsupervised learning to make small subsequent adjustments.
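One plausible reading of the T1 filter above, sketched in Python; the neuron layout and the exact threshold semantics here are our assumptions, not the authors' implementation:

```python
# Hedged sketch of the filtering idea: only neurons whose distance to a
# reference point is within T1 of the input's distance to that point are
# scanned for the BMU. By the triangle inequality, neurons outside this
# band are at least T1 away from the input, so the filter is sound when
# T1 bounds the search radius.
import math

def bmu_with_filter(neurons, x, ref, t1):
    """Index of the best-matching unit among neurons passing the T1 filter."""
    dx = math.dist(x, ref)
    best, best_d = None, float("inf")
    for i, w in enumerate(neurons):
        if abs(math.dist(w, ref) - dx) > t1:
            continue  # filtered out without a full distance comparison
        d = math.dist(w, x)
        if d < best_d:
            best, best_d = i, d
    return best

neurons = [(0.0, 0.0), (1.0, 1.0), (10.0, 10.0)]
print(bmu_with_filter(neurons, (0.9, 1.1), ref=(0.0, 0.0), t1=2.0))  # 1
```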
-
Kentaroh KATOH, Kazuteru NAMBA, Hideo ITO
Article type: PAPER
Subject area: Dependable Computing
2009 Volume E92.D Issue 3 Pages 433-442
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
This paper proposes a scan design for delay-fault testability of dual circuits. In normal operation mode, each proposed scan flip-flop operates as a master-slave flip-flop. In test mode, the proposed scan design performs the scan operation using two scan paths: a master scan path consisting of the master latches and a slave scan path consisting of the slave latches. In the proposed scan design, arbitrary two-pattern tests can be applied to the flip-flops of dual circuits; it therefore achieves complete fault coverage for robust and non-robust testable delay-fault testing. Unlike the enhanced scan design, it requires no extra latch, so the area overhead is low. Evaluation shows that the test application time of the proposed scan design is 58.0% of that of the enhanced scan design, and its area overhead is 13.0% lower than that of the enhanced scan design. In addition, in testing of single circuits, it achieves complete fault coverage for robust and non-robust testable delay-fault testing while requiring a smaller test data volume than the enhanced scan design.
-
Hirokazu OZAKI, Atsushi KARA, Zixue CHENG
Article type: PAPER
Subject area: Dependable Computing
2009 Volume E92.D Issue 3 Pages 443-450
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
In this paper we investigate the reliability of general shared protection systems, i.e., M for N (M:N), which can typically be applied to various telecommunication network devices. We focus on the reliability perceived by an end user of one of the N units, assuming that any failed unit is instantly replaced by one of the M spare units (if available). We describe the effectiveness of such a protection system in a quantitative manner. The mathematical analysis gives a closed-form solution for the availability and recursive algorithms for computing the MTTFF (Mean Time to First Failure) and the MTTF (Mean Time to Failure) perceived by an arbitrary end user. We also show that, under a certain condition, the probability distribution of the TTFF (Time to First Failure) can be approximated by a simple exponential distribution. The analysis provides useful information for the analysis and design not only of telecommunication network devices but also of other shared protection systems that are subject to service level agreements (SLAs) involving user-perceived reliability measures.
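The availability of an M:N group can be illustrated numerically. The sketch below is a deliberately simplified birth-death model (per-unit failure rate `lam` while in service, one repair facility with rate `mu`, instant switchover to a spare); it is our own toy model, not the paper's closed-form solution:

```python
# Simplified birth-death chain on k = number of failed units in an M:N
# shared-protection group. Detailed balance gives the stationary weights.

def availability(m, n, lam, mu):
    """Steady-state probability that all N users are served (k <= M failed)."""
    pi = [1.0]                                # unnormalized P(k units failed)
    for k in range(1, m + n + 1):
        in_service = min(n, m + n - (k - 1))  # units at risk in state k-1
        pi.append(pi[-1] * in_service * lam / mu)
    return sum(pi[: m + 1]) / sum(pi)

# One spare protecting four units, repairs 100x faster than failures:
print(availability(m=1, n=4, lam=0.001, mu=0.1))
```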
-
Yukiko YAMAUCHI, Sayaka KAMEI, Fukuhito OOSHITA, Yoshiaki KATAYAMA, Hi ...
Article type: PAPER
Subject area: Distributed Cooperation and Agents
2009 Volume E92.D Issue 3 Pages 451-459
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
A desired property of large distributed systems is self-adaptability to the faults that occur more frequently as the system grows. Self-stabilizing protocols provide autonomous recovery from any finite number of transient faults. Fault-containing self-stabilizing protocols promise not only self-stabilization but also containment of faults (quick recovery and small effect) when the number of faults is small. However, existing composition techniques for self-stabilizing protocols (e.g., fair composition) cannot preserve the fault-containment property when composing fault-containing self-stabilizing protocols. In this paper, we present the Recovery Waiting Fault-containing Composition (RWFC) framework, which composes multiple fault-containing self-stabilizing protocols while preserving the fault-containment property of the source protocols.
-
Prakasith KAYASITH, Thanaruk THEERAMUNKONG
Article type: PAPER
Subject area: Speech and Hearing
2009 Volume E92.D Issue 3 Pages 460-468
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
It is a tedious and subjective task to measure the severity of dysarthria by manually evaluating a speaker's speech with the available standard assessment methods based on human perception. This paper presents an automated approach to assessing the speech quality of a dysarthric speaker with cerebral palsy. Considering two complementary factors, speech consistency and speech distinction, a speech quality indicator called the speech clarity index (Ψ) is proposed as a measure of the speaker's ability to produce a consistent speech signal for a given word and distinguishable speech signals for different words. As an application, it can be used to assess speech quality and forecast the speech recognition rate for an individual dysarthric speaker before exhaustively implementing an automatic speech recognition system for that speaker. The effectiveness of Ψ as a predictor of the speech recognition rate is evaluated by rank-order inconsistency, correlation coefficient, and root-mean-square difference, comparing its predicted recognition rates with those predicted by the standard methods, the articulatory and intelligibility tests, on two recognition systems (HMM- and ANN-based). The results show that Ψ is a promising indicator for predicting the recognition rate of dysarthric speech. All experiments were conducted on a speech corpus comprising data from eight normal speakers and eight dysarthric speakers.
-
Kenta NIWA, Takanori NISHINO, Kazuya TAKEDA
Article type: PAPER
Subject area: Speech and Hearing
2009 Volume E92.D Issue 3 Pages 469-476
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
A sound field reproduction method is proposed that uses blind source separation and head-related transfer functions. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After the locations and signals of the virtual sources are estimated, the spatial sound at a selected point is constructed by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field produced by six sound sources was captured with 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between the natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposition algorithm as well as of the virtual source representation is confirmed.
-
Chiori HORI, Bing ZHAO, Stephan VOGEL, Alex WAIBEL, Hideki KASHIOKA, S ...
Article type: PAPER
Subject area: Speech and Hearing
2009 Volume E92.D Issue 3 Pages 477-488
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
The performance of speech translation systems combining automatic speech recognition (ASR) and machine translation (MT) is degraded by redundant and irrelevant information caused by speaker disfluency and recognition errors. This paper proposes a new approach to translating speech recognition results through speech consolidation, which removes ASR errors and disfluencies and extracts meaningful phrases. The consolidation approach is spun off from speech summarization by word extraction from the ASR 1-best hypothesis. We extended the approach to confusion networks (CNs), tested its performance on TED speech, and confirmed that the consolidation results preserve more meaningful phrases than the original ASR results. We then applied the consolidation technique to speech translation: to test the performance of consolidation-based speech translation, Chinese broadcast news (BN) speech from RT04 was recognized, consolidated, and then translated. Consolidation-based translation results cannot be compared directly with gold standards in which all words in the speech are translated, because consolidation-based translations are partial translations. We therefore propose a new evaluation framework for partial translation that compares them with the most similar set of words extracted from a word network created by merging gradual summarizations of the gold standard translation. The performance of consolidation-based MT was evaluated using BLEU, and we also propose Information Preservation Accuracy (IPAccy) and Meaning Preservation Accuracy (MPAccy) to evaluate consolidation and consolidation-based MT. We confirmed that consolidation contributes to the performance of speech translation.
-
Takashi NOSE, Makoto TACHIBANA, Takao KOBAYASHI
Article type: PAPER
Subject area: Speech and Hearing
2009 Volume E92.D Issue 3 Pages 489-497
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
This paper presents methods for controlling the intensity of emotional expressions and speaking styles of an arbitrary speaker's synthetic speech using a small amount of his or her speech data in HMM-based speech synthesis. Model adaptation approaches are introduced into the style control technique based on the multiple-regression hidden semi-Markov model (MRHSMM). Two approaches are proposed for training a target speaker's MRHSMMs. The first is MRHSMM-based model adaptation, in which a pretrained MRHSMM is adapted to the target speaker's model; for this purpose, we formulate the MLLR adaptation algorithm for the MRHSMM. The second utilizes simultaneous adaptation of speaker and style from an average-voice model to obtain the target speaker's style-dependent HSMMs, which are used to initialize the MRHSMM. Results of a subjective evaluation using adaptation data of 50 sentences per style show that the proposed methods outperform conventional speaker-dependent model training when using the same amount of speech data from the target speaker.
-
Hamed AKBARI, Yukio KOSUGI, Kazuyuki KOJIMA
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2009 Volume E92.D Issue 3 Pages 498-505
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
In laparoscopic surgery, the lack of tactile sensation and 3D visual feedback make it difficult to identify the position of a blood vessel intraoperatively. An unintentional partial tear or complete rupture of a blood vessel may result in a serious complication; moreover, if the surgeon cannot manage this situation, open surgery will be necessary. Differentiation of arteries from veins and other structures and the ability to independently detect them has a variety of applications in surgical procedures involving the head, neck, lung, heart, abdomen, and extremities. We have used the artery's pulsatile movement to detect and differentiate arteries from veins. The algorithm for change detection in this study uses edge detection for unsupervised image registration. Changed regions are identified by subtracting the systolic and diastolic images. As a post-processing step, region properties, including color average, area, major and minor axis lengths, perimeter, and solidity, are used as inputs of the LVQ (Learning Vector Quantization) network. The output results in two object classes: arteries and non-artery regions. After post-processing, arteries can be detected in the laparoscopic field. The registration method used here is evaluated in comparison with other linear and nonlinear elastic methods. The performance of this method is evaluated for the detection of arteries in several laparoscopic surgeries on an animal model and on eleven human patients. The performance evaluation criteria are based on false negative and false positive rates. This algorithm is able to detect artery regions, even in cases where the arteries are obscured by other tissues.
-
Keiji YASUDA, Hirofumi YAMAMOTO, Eiichiro SUMITA
Article type: PAPER
Subject area: Natural Language Processing
2009 Volume E92.D Issue 3 Pages 506-511
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
Statistical language model training requires corpora matched to the target domain. However, training corpora sometimes include both sentences matched to the target domain and unmatched ones. In such a case, training set selection is effective both for reducing model size and for improving model performance. In this paper, a training set selection method for statistical language model training is described. The method provides two advantages: it improves language model performance, and it reduces the computational load of language modeling. The method has four steps: 1) sentence clustering is applied to all available corpora; 2) a language model is trained on each cluster; 3) perplexity on the development set is calculated using each of these language models; 4) for the final language model training, we use the clusters whose language models yield low perplexities. The experimental results indicate that a language model trained on the data selected by our method gives lower perplexity on an open test set than a language model trained on all available corpora.
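The four steps above can be sketched with stand-in components. Here the clusters are given, and the language model is a unigram model with add-one smoothing, which is far simpler than a real setup; the data and the selection threshold are illustrative assumptions:

```python
# Minimal sketch of perplexity-based training-set selection.
import math

def train_unigram(sentences):
    """Step 2: train an add-one-smoothed unigram LM on one cluster."""
    counts, total = {}, 0
    for s in sentences:
        for w in s.split():
            counts[w] = counts.get(w, 0) + 1
            total += 1
    vocab = len(counts) + 1  # +1 for unseen words
    return lambda w: (counts.get(w, 0) + 1) / (total + vocab)

def perplexity(model, sentences):
    """Step 3: perplexity of the model on a development set."""
    logp, n = 0.0, 0
    for s in sentences:
        for w in s.split():
            logp += math.log(model(w))
            n += 1
    return math.exp(-logp / n)

clusters = [["the cat sat", "the dog sat"], ["stock prices fell sharply"]]
dev = ["the cat sat"]
ppls = [perplexity(train_unigram(c), dev) for c in clusters]
# Step 4: keep clusters whose perplexity is near the best one.
selected = [c for c, p in zip(clusters, ppls) if p <= min(ppls) * 1.5]
print(len(selected))  # 1
```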
-
Hai VU, Tomio ECHIGO, Ryusuke SAGAWA, Keiko YAGI, Masatsugu SHIBA, Kaz ...
Article type: PAPER
Subject area: Biological Engineering
2009 Volume E92.D Issue 3 Pages 512-528
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
Interpretation by physicians of capsule endoscopy image sequences captured over periods of 7-8 hours usually requires 45 to 120 minutes of extreme concentration. This paper describes a novel method to reduce diagnostic time by automatically controlling the display frame rate. Unlike existing techniques, this method displays the original images with no skipping of frames. The sequence can be played at a high frame rate in stable regions to save time; in regions with rough changes, the speed is decreased so that suspicious findings can be ascertained more conveniently. To realize such a system, cue information about the disparity of consecutive frames, including color similarity and motion displacements, is extracted. A decision tree uses these features to classify the states of the image acquisitions, and for each classified state the delay time between frames is calculated by parametric functions. A scheme that selects the optimal parameter set determined from assessments by physicians is deployed. Experiments involved clinical evaluations of the effectiveness of this method compared to a standard view using an existing system. Results from logged-action-based analysis show that, compared with the existing system, the proposed method reduced diagnostic time to around 32.5 ± minutes per full sequence while the number of abnormalities found was similar. Physicians also needed less effort because of the system's efficient operability. These results should convince physicians that they can safely use this method and obtain reduced diagnostic times.
-
Fausto LUCENA, Allan Kardec BARROS, Yoshinori TAKEUCHI, Noboru OHNISHI
Article type: PAPER
Subject area: Biological Engineering
2009 Volume E92.D Issue 3 Pages 529-537
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
Heart rate variability (HRV) is a measure based on the time positions of the electrocardiogram (ECG) R-waves. There is an ongoing discussion about whether the HRV pattern can be obtained from blood pressure (BP). In this paper, we propose a method for estimating HRV from a BP signal based on the heart instantaneous frequency (HIF) algorithm, and we carry out experiments to evaluate BP as an alternative to ECG for calculating HRV. Based on the hypothesis that ECG and BP have the same harmonic behavior, we model an alternative HRV signal using a nonlinear algorithm, HIF, which tracks the instantaneous frequency through a rough fundamental frequency estimated from the power spectral density (PSD). A novelty of this work is the use of the fundamental frequency, instead of wave peaks, as the parameter for estimating and quantifying beat-to-beat heart rate variability from BP waveforms. To verify how well the HRV signals estimated from BP using HIF correlate with the gold standard, i.e., HRV derived from ECG, we use a traditional algorithm based on QRS detection followed by thresholding to localize the R-wave time peaks. The results show the following: 1) the spectral error caused by misestimation of the R-peak times is demonstrated by an increase in the high-frequency bands followed by a loss of the time-domain pattern; 2) HIF is robust against noise and nuisances; and 3) using statistical methods and nonlinear analysis, no difference was observed between HIF estimates derived from BP and HRV derived from ECG.
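For background, the gold-standard HRV series the paper compares against is built from ECG R-peak times, with variability measures such as SDNN computed on the resulting RR intervals. A small sketch with made-up peak times:

```python
# RR intervals are the successive differences of R-peak times; SDNN is
# their standard deviation, a common time-domain HRV measure. The peak
# times below are illustrative values, not real ECG data.
import statistics

def rr_intervals(r_peak_times_ms):
    """Successive differences of R-peak times give the RR-interval series."""
    return [b - a for a, b in zip(r_peak_times_ms, r_peak_times_ms[1:])]

peaks = [0, 800, 1620, 2410, 3250]   # R-peak times in milliseconds
rr = rr_intervals(peaks)
print(rr)                            # [800, 820, 790, 840]
sdnn = statistics.stdev(rr)          # SDNN of the RR series
```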
-
Shijun LIN, Li SU, Haibo SU, Depeng JIN, Lieguang ZENG
Article type: LETTER
Subject area: VLSI Systems
2009 Volume E92.D Issue 3 Pages 538-540
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
Based on the traffic-predictability characteristic of Networks-on-Chip (NoC), we propose a pre-allocation based flow control scheme to improve NoC performance. In this scheme, routes are pre-allocated and the injection rates of all routes are regulated at the traffic sources according to the average available bandwidth in the links. This decreases the number of packets in the network, which reduces the congestion probability and improves communication performance. Simulation results show that, compared with the switch-to-switch flow control scheme, this scheme greatly increases throughput and cuts down average latency with little area and energy overhead.
-
Yang-Sae MOON, Jinho KIM
Article type: LETTER
Subject area: Data Mining
2009 Volume E92.D Issue 3 Pages 541-544
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
Lower-dimensional transformations in similar sequence matching show different performance characteristics depending on the type of time-series data. In this paper we propose a hybrid approach that exploits multiple transformations at a time in a single hybrid index. This approach has the advantages of obtaining the combined effect of multiple transformations while reducing index maintenance overhead. To this end, we first propose a new notion of hybrid lower-dimensional transformation that extracts various features using different transformations. We next define the hybrid distance to compute the distance between hybrid-transformed points. We then formally prove that the hybrid approach performs similar sequence matching correctly. We also present index building and similar sequence matching algorithms based on the hybrid transformation and distance. Experimental results show that our hybrid approach outperforms the single-transformation-based approach.
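A speculative illustration of the hybrid idea, using PAA and truncated-DFT features as example component transforms; the choice of transforms and of the combining rule is our assumption, not necessarily the paper's:

```python
# Each sequence is mapped by several lower-dimensional transforms, and the
# hybrid distance combines the per-transform distances. Taking the maximum
# preserves the lower-bounding property whenever every component distance
# lower-bounds the true distance.
import math

def paa(seq, segments):
    """Piecewise aggregate approximation (assumes len(seq) % segments == 0)."""
    n = len(seq)
    step = n // segments
    return [sum(seq[i * step:(i + 1) * step]) / step for i in range(segments)]

def first_dft_coeffs(seq, k):
    """Magnitudes of the first k DFT coefficients (a second transform)."""
    n = len(seq)
    out = []
    for f in range(k):
        re = sum(x * math.cos(2 * math.pi * f * i / n) for i, x in enumerate(seq))
        im = -sum(x * math.sin(2 * math.pi * f * i / n) for i, x in enumerate(seq))
        out.append(math.hypot(re, im) / math.sqrt(n))
    return out

def hybrid_distance(s, t, segments=2, k=2):
    d_paa = math.dist(paa(s, segments), paa(t, segments))
    d_dft = math.dist(first_dft_coeffs(s, k), first_dft_coeffs(t, k))
    return max(d_paa, d_dft)

print(hybrid_distance([1, 2, 3, 4], [1, 2, 3, 4]))  # 0.0
```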
-
Yoonjeong KIM, SeongYong OHM, Kang YI
Article type: LETTER
Subject area: Application Information Security
2009 Volume E92.D Issue 3 Pages 545-547
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
In this letter, we propose a privacy-preserving authentication protocol based on the RSA cryptosystem for an RFID environment. To both overcome the resource restrictions of tags and strengthen security, our protocol uses only modular exponentiation with exponent three on the RFID tag side, applied to a message padded with random bits whose length is greater than one-sixth of the whole message length.
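A toy sketch of the tag-side operation: with public exponent three, encryption is a single modular exponentiation, and the random pad defeats the low-exponent cube-root attack. The modulus size and padding layout below are illustrative assumptions only, not the protocol's actual parameters:

```python
# Tag-side RSA with e = 3: one pow() call. The pad's top bit is forced so
# the cube always wraps around the modulus.
import secrets

def tag_encrypt(msg_int, n, pad_bits):
    pad = secrets.randbits(pad_bits) | (1 << (pad_bits - 1))  # force top bit
    padded = (pad << msg_int.bit_length()) | msg_int          # pad || message
    return pow(padded, 3, n)                                  # one exponentiation

n = 1013 * 1019                     # toy 20-bit modulus, illustration only
c = tag_encrypt(42, n, pad_bits=8)
print(0 <= c < n)                   # True

# Why pad: an unpadded small cube never wraps, so a cube root leaks it.
assert round(pow(42, 3, n) ** (1 / 3)) == 42
```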
-
Hyung Chan KIM, Angelos KEROMYTIS
Article type: LETTER
Subject area: Application Information Security
2009 Volume E92.D Issue 3 Pages 548-551
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
Although software attack detection via dynamic taint analysis (DTA) supports high coverage of program execution, it prohibitively degrades the performance of the monitored program. This letter explores the possibility of collaborative dynamic taint analysis among the members of an application community (AC): instead of fully monitoring every request at every instance of the AC, each member applies DTA to some fraction of the incoming requests, thereby lightening the burden of heavyweight monitoring. Our experimental results using a test AC based on the Apache web server show that speedy detection of worm outbreaks is feasible with application communities of medium size (i.e., 250-500 members).
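A back-of-envelope model of why medium-size communities suffice: if each member independently taint-checks a fraction p of its requests, a probe that reaches every member is detected somewhere with probability 1 - (1 - p)^members. The parameter values are illustrative, not the letter's measurements:

```python
# Detection probability across an application community when each of
# `members` instances samples a fraction p of requests for DTA.

def detection_probability(p, members):
    return 1 - (1 - p) ** members

print(round(detection_probability(0.01, 250), 3))  # 0.919
print(round(detection_probability(0.01, 500), 3))  # 0.993
```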
-
Gwanggil JEON, Min Young JUNG, Jechang JEONG, Sung Han PARK, Il Hong S ...
Article type: LETTER
Subject area: Image Processing and Video Processing
2009 Volume E92.D Issue 3 Pages 552-554
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
In this letter, a low-cost weighted interpolation scheme (WIS) for deinterlacing within a single frame is discussed. Three weight measurements are introduced within the operation window, building on the LCID algorithm, to reduce false decisions. The WIS algorithm has a simple weight-evaluating structure with low complexity, which makes it easy to implement in hardware. Experimental results demonstrate that the WIS algorithm performs better than previous techniques.
-
Jingjing ZHONG, Siwei LUO, Qi ZOU
Article type: LETTER
Subject area: Image Processing and Video Processing
2009 Volume E92.D Issue 3 Pages 555-558
Published: March 01, 2009
Released on J-STAGE: March 01, 2009
Boundary detection is one of the most studied problems in computer vision. It is the foundation of contour grouping and directly affects the performance of grouping algorithms. In this paper we propose a novel boundary detection algorithm for contour grouping: a selective-attention-guided coarse-to-fine scale pyramid model. Our algorithm evaluates each edge instead of each pixel location, which differs from other approaches and is well suited to contour grouping. Selective attention focuses on whole salient objects instead of local details and gives a global spatial prior for the existence of object boundaries. The evolution of edges from the coarsest scale to the finest scale reflects their importance and energy. The combination of these two cues produces the most salient boundaries. We show applications to boundary detection on natural images, test our approach on the Berkeley dataset, and use it for contour grouping. The results obtained are promising.