IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E92.D , Issue 2
Showing 1-35 articles out of 35 articles from the selected issue
Special Section on Foundations of Computer Science
• Hirotsugu KAKUGAWA
2009 Volume E92.D Issue 2 Pages 107
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
• Toshimitsu MASUZAWA
Type: INVITED PAPER
2009 Volume E92.D Issue 2 Pages 108-115
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
A self-stabilizing protocol is a protocol that achieves its intended behavior regardless of the initial configuration (i.e., global state). Thus, a self-stabilizing protocol is adaptive to any number and any type of topology changes of networks: after the last topology change occurs, the protocol starts to converge to its intended behavior. This advantage makes self-stabilizing protocols extremely attractive for designing highly dependable distributed systems on dynamic networks. While conventional self-stabilizing protocols require that the networks remain static during convergence to the intended behaviors, some recent works undertook the challenge of realizing self-stabilization in dynamic networks with frequent topology changes. This paper introduces some of the challenges as a new direction of research in self-stabilization.
• Hirotatsu KOBAYASHI, Tomomi MATSUI
Type: PAPER
2009 Volume E92.D Issue 2 Pages 116-119
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
This paper deals with a strategic issue in the stable marriage model with complete preference lists (i.e., a preference list of an agent is a permutation of all the members of the opposite sex). Given complete preference lists of n men over n women, and a marriage µ, we consider the problem of finding preference lists of n women over n men such that the men-proposing deferred acceptance algorithm (Gale-Shapley algorithm) applied to the lists produces µ. We show a simple necessary and sufficient condition for the existence of such a set of preference lists of women over men. Our condition directly gives an O(n²) time algorithm for finding a set of preference lists, if one exists.
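As background for the inverse problem studied here, the men-proposing deferred acceptance (Gale-Shapley) algorithm that the paper inverts can be sketched as follows; the dictionary-based representation is an illustrative choice, not the paper's notation.

```python
def gale_shapley(men_prefs, women_prefs):
    """Men-proposing deferred acceptance. Preference lists map each
    agent to an ordered list of the opposite side (best first)."""
    # rank[w][m] = position of m in w's list, for O(1) comparisons
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}   # next woman each man proposes to
    engaged = {}                              # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])           # w trades up; old partner is free again
            engaged[w] = m
        else:
            free.append(m)                    # w rejects m
    return {m: w for w, m in engaged.items()}
```

Running it on a two-by-two instance where both men rank the same woman first shows the deferred acceptance behavior: the woman keeps her preferred proposer and the other man moves on.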
• Hitoshi YAMASAKI, Takayoshi SHOUDAI
Type: PAPER
2009 Volume E92.D Issue 2 Pages 120-129
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
A graph is an interval graph if and only if each vertex in the graph can be associated with an interval on the real line such that any two vertices are adjacent in the graph exactly when the corresponding intervals have a nonempty intersection. A number of interesting applications for interval graphs have been found in the literature. In order to find structural features common to structural data which can be represented by intervals, this paper proposes new interval graph structured patterns, called linear interval graph patterns, and a polynomial time algorithm for finding a minimally generalized linear interval graph pattern explaining a given finite set of interval graphs.
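The defining property quoted above can be made concrete with a small sketch that builds an interval graph's edge set from an interval model by pairwise intersection tests; this quadratic construction is only an illustration of the definition, not the paper's pattern-finding algorithm.

```python
def interval_graph_edges(intervals):
    """Given {vertex: (lo, hi)} closed intervals, return the edge set of
    the corresponding interval graph: u ~ v iff the intervals of u and v
    have a nonempty intersection."""
    verts = sorted(intervals)
    edges = set()
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            lo1, hi1 = intervals[u]
            lo2, hi2 = intervals[v]
            if max(lo1, lo2) <= min(hi1, hi2):  # nonempty intersection
                edges.add((u, v))
    return edges
```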
• Naoyuki KAMIYAMA, Yuuki KIYONARI, Eiji MIYANO, Shuichi MIYAZAKI, Katsu ...
Type: PAPER
2009 Volume E92.D Issue 2 Pages 130-140
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
This paper introduces a new timetabling problem arising in universities, called interview timetabling. In this problem, some constant number, say three, of referees are assigned to each of 2n graduate students. Our task is to construct a presentation timetable of these 2n students using n timeslots and two rooms, so that two students evaluated by the same referee are assigned to different timeslots. The optimization goal is to minimize the total number of movements of all referees between the two rooms. This problem arises from real-world interview timetabling at Kyoto University. We propose two restricted models of this problem and investigate their time complexities.
• Hirotoshi HONMA, Saki HONMA, Shigeru MASUYAMA
Type: PAPER
2009 Volume E92.D Issue 2 Pages 141-148
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
The spanning tree problem is to find a tree that connects all the vertices of a given graph G. This problem has many applications, such as electric power systems, computer network design and circuit analysis. Klein and Stein demonstrated that a spanning tree can be found in O(log n) time with O(n + m) processors on the CRCW PRAM. In general, it is known that more efficient parallel algorithms can be developed by restricting the class of graphs. Circular permutation graphs properly contain the set of permutation graphs as a subclass and were first introduced by Rotem and Urrutia, who provided an O(n^2.376) time recognition algorithm. Circular permutation graphs and their models find several applications in VLSI layout. In this paper, we propose an optimal parallel algorithm for constructing a spanning tree on circular permutation graphs. It runs in O(log n) time with O(n/log n) processors on the EREW PRAM.
• Ryoso HAMANE, Toshiya ITOH, Kouhei TOMITA
Type: PAPER
2009 Volume E92.D Issue 2 Pages 149-157
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
When a store sells items to customers, the store wishes to determine the prices of the items to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. So it is hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and also assume that each item i ∈ V has the production cost di and each customer ej ∈ E has the valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at the price ri, the profit for the item i is pi = ri - di. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan, et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of “loss-leader,” and showed that the seller can get more total profit when pi < 0 is allowed than when it is not. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratios.
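The objective in this model can be illustrated with a small sketch that evaluates the total profit of a given price assignment, assuming the standard single-minded rule that a customer buys the whole bundle exactly when its total price does not exceed the valuation; the function name and that purchasing rule are illustrative assumptions, not taken from the paper.

```python
def total_profit(prices, costs, customers):
    """Total profit of a price assignment in a single-minded item
    pricing instance. prices, costs: {item: value}; customers: list of
    (valuation, bundle) pairs. A customer buys its whole bundle e_j iff
    v_j >= sum of prices over e_j; the store then earns p_i = r_i - d_i
    per item. Negative margins (loss leaders, r_i < d_i) are allowed."""
    profit = 0
    for valuation, bundle in customers:
        bundle_price = sum(prices[i] for i in bundle)
        if valuation >= bundle_price:
            profit += sum(prices[i] - costs[i] for i in bundle)
    return profit
```

In the example below, item 'y' is priced below its cost, yet it attracts a bundle purchase that makes the overall assignment profitable, which is the loss-leader effect the abstract mentions.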
• Hiroshi SAKAMOTO, Shirou MARUYAMA, Takuya KIDA, Shinichi SHIMOZONO
Type: PAPER
2009 Volume E92.D Issue 2 Pages 158-165
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
A space-efficient approximation algorithm for the grammar-based compression problem, which asks to find, for a given string, a smallest context-free grammar deriving the string, is presented. For an input of length n and an optimum CFG size g, the algorithm consumes only O(g log g) space and O(n log* n) time to achieve an O((log* n) log n) approximation ratio to the optimum compression, where log* n is the maximum number of logarithms satisfying log log … log n > 1. This ratio is thus regarded as almost O(log n), which is the currently best approximation ratio. While g depends on the string, it is known that g = Ω(log n) and g = O(n/log_k n) for strings over a k-letter alphabet [12].
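The iterated logarithm appearing in the time and approximation bounds can be sketched directly from the definition in the abstract (base-2 logarithms assumed):

```python
import math

def log_star(n):
    """log* n as defined in the abstract: the maximum number of times
    log2 can be applied to n with the result still exceeding 1."""
    k = 0
    while math.log2(n) > 1:
        n = math.log2(n)
        k += 1
    return k
```

The function grows extremely slowly, which is why O((log* n) log n) is regarded as almost O(log n): even for n = 2^16 it is only 3.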
• Sung Kwon KIM, Jung-Sik CHO, Soo-Cheol KIM
Type: PAPER
2009 Volume E92.D Issue 2 Pages 166-171
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
Let T be a node-weighted tree with n nodes, and let π(u, v) denote the path between two nodes u and v in T. We address two problems: (i) Path Maximum Query: preprocess T so that, for a query pair of nodes u and v, the maximum weight on π(u, v) can be found quickly. (ii) Path Maximum Sum Query: preprocess T so that, for a query pair of nodes u and v, the maximum weight sum subpath of π(u, v) can be found quickly. For both problems, we present solutions with O(1) query time and O(n log n) preprocessing time.
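For contrast with the O(1)-query structure described above, a naive per-query baseline for Path Maximum Query is easy to state: find the u-v path by BFS with parent pointers and scan its weights. This sketch takes O(n) per query with no preprocessing; the adjacency-dictionary representation is an illustrative assumption.

```python
from collections import deque

def path_maximum(adj, weight, u, v):
    """Maximum node weight on the unique u-v path of a tree given as an
    adjacency dict {node: [neighbors]} with weights {node: w}.
    Naive O(n)-per-query baseline for Path Maximum Query."""
    parent = {u: None}
    q = deque([u])
    while q:                         # BFS from u until v is reached
        x = q.popleft()
        if x == v:
            break
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                q.append(y)
    best = weight[v]
    while v is not None:             # walk the parent chain back to u
        best = max(best, weight[v])
        v = parent[v]
    return best
```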
• Hirotaka KATO, Satoshi MATSUMOTO, Tetsuhiro MIYAHARA
Type: PAPER
2009 Volume E92.D Issue 2 Pages 172-180
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
An elementary formal system, EFS for short, is a kind of logic program over strings, and is regarded as a set of rules to generate a language. For an EFS Γ, the language L(Γ) denotes the set of all strings generated by Γ. We consider a new form of EFS, called a restricted two-clause EFS, and denote by $r{\cal EF\!S}$ the set of all restricted two-clause EFSs. Then we study the learnability of $r{\cal EF\!S}$ in the exact learning model. The class $r{\cal EF\!S}$ contains the class of regular patterns, which is extensively studied in Learning Theory. Let Γ* be a target EFS in $r{\cal EF\!S}$ to be learned. In the exact learning model, an oracle for superset queries answers “yes” for an input EFS Γ in $r{\cal EF\!S}$ if L(Γ) is a superset of L(Γ*), and outputs a string in L(Γ*) - L(Γ) otherwise. An oracle for membership queries answers “yes” for an input string w if w is included in L(Γ*), and answers “no” otherwise. We show that any EFS in $r{\cal EF\!S}$ is exactly identifiable in polynomial time using membership and superset queries. Moreover, for other types of queries, we show that there exists no polynomial time learning algorithm for $r{\cal EF\!S}$ using those queries. This result indicates the hardness of learning the class $r{\cal EF\!S}$ in the exact learning model in general.
• Ryoji TAKAMI, Yusuke SUZUKI, Tomoyuki UCHIDA, Takayoshi SHOUDAI
Type: PAPER
2009 Volume E92.D Issue 2 Pages 181-190
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
Two-Terminal Series Parallel (TTSP, for short) graphs are used as data models in applications for electric networks and scheduling problems. We propose a TTSP term graph, which is a TTSP graph having structured variables, that is, a graph pattern over a TTSP graph. Let ${\cal TG_{TTSP}}$ be the set of all TTSP term graphs whose variable labels are mutually distinct. For a TTSP term graph g in ${\cal TG_{TTSP}}$, the TTSP graph language of g, denoted by L(g), is the set of all TTSP graphs obtained from g by substituting arbitrary TTSP graphs for all variables in g. Firstly, when a TTSP graph G and a TTSP term graph g are given as inputs, we present a polynomial time matching algorithm which decides whether or not L(g) contains G. The minimal language problem for the class ${\cal L_{TTSP}}=\{L(g) \mid g\in {\cal TG_{TTSP}}\}$ is, given a set S of TTSP graphs, to find a TTSP term graph g in ${\cal TG_{TTSP}}$ such that L(g) is minimal among all TTSP graph languages which contain all TTSP graphs in S. Secondly, we give a polynomial time algorithm for solving the minimal language problem for ${\cal L_{TTSP}}$. Finally, we show that ${\cal L_{TTSP}}$ is polynomial time inductively inferable from positive data.
• Seiichiro TANI, Masaki NAKANISHI, Shigeru YAMASHITA
Type: PAPER
2009 Volume E92.D Issue 2 Pages 191-199
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
This paper describes a general quantum lower-bound technique for the communication complexity of a function that depends on the inputs given to two parties connected via paths, which may be shared with other parties, on a network of any topology. The technique can also be employed to obtain a lower bound on the quantum communication complexity of some functions that depend on the inputs distributed over all parties on the network. As a typical application, we apply our technique to the distinctness problem of deciding whether there is a pair of parties with identical inputs on a k-party ring; almost matching upper bounds are also given.
• Yoshiaki TAKATA, Hiroyuki SEKI
Type: PAPER
2009 Volume E92.D Issue 2 Pages 200-210
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
Trust negotiation is an authorization technique based on digital credentials, in which both a user and a server gradually establish trust in each other by repeatedly exchanging their credentials. A trust negotiation strategy is a function that answers a set of credentials to disclose to the other party, depending on policies and the set of already disclosed credentials. The disclosure tree strategy (DTS), proposed by Yu et al., is one of the strategies that satisfy preferable properties. A simple implementation of DTS requires exponential time and space; however, neither an efficient algorithm nor a lower bound on its complexity was known. In this paper, we investigate the computational complexity of DTS. We formulate subproblems of DTS as problems on derivation trees of a context-free grammar (CFG), and analyze their computational complexity using concepts from CFGs. As a result, we show that two subproblems of DTS, EVL and MSET, are NP-complete and NP-hard, respectively, while both are solvable in polynomial time if we modify EVL not to require non-redundancy and MSET not to answer any subset useless for leading the negotiation to success.
• Yoshiharu ASAKURA, Yukikazu NAKAMOTO
Type: PAPER
2009 Volume E92.D Issue 2 Pages 211-219
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
Role-based access control (RBAC) is widely used as an access control mechanism in various computer systems. Since an organization's lines of authority influence the authorized privileges of jobs, roles also form a hierarchical structure. A role graph is a model that represents role hierarchies and is suitable for the runtime phase of RBAC deployment. However, since a role graph cannot take various forms for given roles and cannot handle abstraction of roles well, it is not suitable for the design phase of RBAC deployment. Hence, an extended role graph, which can take a more flexible form than a role graph, is proposed. The extended role graph improves diversity and clarifies abstraction of roles, making it suitable for the design phase. An equivalent transformation algorithm (ETA) for transforming an extended role graph into an equivalent role graph is also proposed. Using the ETA, system administrators can efficiently deploy RBAC by using an extended role graph in the design phase and a standard role graph in the runtime phase.
• Haruhiko SATO, Masahito KURIHARA, Sarah WINKLER, Aart MIDDELDORP
Type: PAPER
2009 Volume E92.D Issue 2 Pages 220-234
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
In equational theorem proving, convergent term rewriting systems play a crucial role. In order to compute convergent term rewriting systems, the standard completion procedure (KB) was proposed by Knuth and Bendix and has been improved in various ways. The multi-completion system MKB developed by Kurihara and Kondo accepts a set of reduction orders as input in addition to equations, and efficiently simulates parallel processes, each of which executes the KB procedure with one of the given orderings. Wehrman and Stump also developed a new variant of completion, constraint-based completion, in which reduction orders need not be given, owing to the use of modern automated termination checkers. The constraint-based procedures simulate the execution of parallel KB processes in a sequential way, but naive search algorithms sometimes cause serious inefficiency when the number of potential reduction orders is large. In this paper, we present a new procedure, called a constraint-based multi-completion procedure MKBcs, which augments constraint-based completion with the multi-completion framework to suppress the combinatorial explosion by sharing inferences among the processes. The existing constraint-based system Slothrop, which basically employs best-first search, is more efficient when its built-in heuristics for process selection are appropriate, but when they are not, our system is more efficient. Therefore, both systems have their role to play.
• Keiichirou KUSAKARI, Masahiko SAKAI
Type: PAPER
2009 Volume E92.D Issue 2 Pages 235-247
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
A static dependency pair method, proposed by us, can effectively prove termination of simply-typed term rewriting systems (STRSs). The theoretical basis is given by the notion of strong computability. This method analyzes a static recursive structure based on definition dependency. By solving suitable constraints generated by the analysis result, we can prove the termination. Since this method is not applicable to every system, we proposed a class, namely, plain function-passing, as a restriction. In this paper, we first propose the class of safe function-passing, which relaxes the restriction by plain function-passing. To solve constraints, we often use the notion of reduction pairs, which is designed from a reduction order by the argument filtering method. Next, we improve the argument filtering method for STRSs. Our argument filtering method does not destroy type structure unlike the existing method for STRSs. Hence, our method can effectively apply reduction orders which make use of type information. To reduce constraints, the notion of usable rules is proposed. Finally, we enhance the effectiveness of reducing constraints by incorporating argument filtering into usable rules for STRSs.
• Akio FUJIYOSHI
Type: PAPER
2009 Volume E92.D Issue 2 Pages 248-254
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
In this paper, we study deterministic linear pushdown tree automata (deterministic L-PDTAs) and some variations. Since recognition of an input tree by a deterministic L-PDTA can be done in linear time, deterministic L-PDTAs are applicable to many kinds of applications. A strict hierarchy is shown among the classes of tree languages defined by a variety of deterministic L-PDTAs. It is also shown that deterministic L-PDTAs are weakly equivalent to nondeterministic L-PDTAs.
• Naonori TANIMOTO, Katsunobu IMAI, Chuzo IWAMOTO, Kenichi MORITA
Type: LETTER
2009 Volume E92.D Issue 2 Pages 255-257
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
A number-conserving cellular automaton (NCCA) is a cellular automaton in which all states of cells are represented by integers and the total sum of the states in its configuration is conserved throughout its computing process. In contrast to normal cellular automata, there are infinitely many assignments of states for NCCAs with a constant number of states. For von Neumann neighbor (radius one) NCCAs with rotation-symmetry, a local function can be represented by the summation of four binary functions. In this paper, we show that the minimum size of the state set of a rotation-symmetric von Neumann neighbor NCCA is 5 by using this representation.
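The conservation property is easy to see in one dimension: elementary CA rule 184 (the classic "traffic" rule) is number-conserving, and the sketch below checks that the count of 1s is preserved across steps. This 1-D example only illustrates the conservation condition; the letter itself treats two-dimensional von Neumann neighborhoods.

```python
def step_rule184(config):
    """One synchronous step of elementary CA rule 184 on a cyclic
    binary configuration: a 1 (a car) moves one cell to the right iff
    the cell ahead is empty. Rule 184 is a classic one-dimensional
    number-conserving CA."""
    n = len(config)
    nxt = []
    for i in range(n):
        left, center, right = config[(i - 1) % n], config[i], config[(i + 1) % n]
        # Cell stays 1 if its car cannot move (cell ahead occupied);
        # cell becomes 1 if the car behind it moves in.
        nxt.append(1 if (center == 1 and right == 1) or
                        (center == 0 and left == 1) else 0)
    return nxt
```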
Regular Section
• Yu WU, Taisuke IZUMI, Fukuhito OOSHITA, Hirotsugu KAKUGAWA, Toshimitsu ...
Type: PAPER
Subject area: Computation and Computational Models
2009 Volume E92.D Issue 2 Pages 258-268
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
Resource search is a fundamental problem in large-scale and highly dynamic Peer-to-Peer (P2P) systems. Unstructured search approaches are widely used because of their flexibility and robustness; however, such approaches incur high communication cost. Index-dissemination-based search is an efficient kind of unstructured search approach. We investigate such approaches with the aim of minimizing the system communication cost. Based on a dynamic system model in which peers continuously leave and join, we solve two problems: how to efficiently disseminate and maintain a given number of indices, and how to determine the optimal number of indices for each resource object of a given popularity. Finally, we propose an optimized index dissemination scheme which is fully decentralized and self-adaptive. A remarkable advantage is that the scheme incurs no additional communication cost to achieve the self-adaptive feature.
• Kazuteru NAMBA, Hideo ITO
Type: PAPER
Subject area: Dependable Computing
2009 Volume E92.D Issue 2 Pages 269-282
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
This paper proposes a method providing efficient test compression. The proposed method is for robust testable path delay fault testing with scan design facilitating two-pattern testing. In the proposed method, test data are interleaved before test compression using statistical coding. This paper also presents test architecture for two-pattern testing using the proposed method. The proposed method is experimentally evaluated from several viewpoints such as compression rates, test application time and area overhead. For robust testable path delay fault testing on 11 out of 20 ISCAS89 benchmark circuits, the proposed method provides better compression rates than the existing methods such as Huffman coding, run-length coding, Golomb coding, frequency-directed run-length (FDR) coding and variable-length input Huffman coding (VIHC).
• Muhammad Anwaar MANZAR, Tanweer Ahmad CHEEMA, Abdul JALIL, Ijaz Mansoo ...
Type: PAPER
Subject area: Pattern Recognition
2009 Volume E92.D Issue 2 Pages 283-297
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
Image matching is an important area of research in the fields of artificial intelligence, machine vision and visual navigation. This paper presents a new image matching scheme suitable for visual navigation. In this scheme, gray scale images are sliced and quantized to form sub-band binary images. The information in the binary images is then signaturized to form a vector space, and the signatures are sorted by significance. The sorted signatures are then normalized, transforming the represented pictorial features of the image into a rotation- and scale-invariant form. For image matching, the two vector spaces of both images are compared in the transformed domain. This comparison yields efficient results directly in the spatial domain of the image, avoiding the need for an inverse image transformation. Compared with conventional correlation, it also avoids the extensive square-error calculations over the whole image. In fact, it directly guides the solution to converge towards the estimate given by adaptive prediction, for high-speed performance on aerial video sequences. A four-dimensional solution population scheme is also presented, with a matching confidence factor that helps terminate the iterations once the essential matching conditions have been achieved. The proposed scheme gives robust and fast results for normal, scaled and rotated templates. A speed comparison with older techniques shows the computational viability of the new technique and its much weaker dependence on image size. The method also shows noise immunity at 30 dB AWGN and under impulsive noise.
• I Gusti Bagus Baskara NUGRAHA, Hiroyoshi MORITA
Type: PAPER
Subject area: Image Processing and Video Processing
2009 Volume E92.D Issue 2 Pages 298-309
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
Delivering video streaming service over the Internet faces several challenges, two of which are the heterogeneity of network capacity and the variability of video data rate. The capacity of network segments is constrained, while the rate of video data to be transmitted is highly variable in order to achieve near-constant image quality. Therefore, to send variable bit rate (VBR) video data over a capacity-constrained network, its bit rate should be adjusted. In this paper, a system to adjust the transmission bit rate of VBR MPEG video data called Transcoding-after-Smoothing (TaS), a combination of bit rate transcoding and a bit rate smoothing algorithm, is proposed. The system smoothes out the transmission rate of video data while also transcoding some video frames when necessary in order to keep the transmission bit rate below a certain threshold value. Two kinds of TaS methods are proposed. One method has no transcoding preference, while the other uses a frame type preference in which an intra-coded frame is the last to be transcoded. These methods are implemented in our video server, where VBR video data is accessed by a client. Our experimental results show that the first TaS method yields a significant reduction in the number of transcoded frames compared with the second TaS method and conventional frame-level transcoding. However, the second method performs better in minimizing the quality distortion.
• Mohamed GHONEIM, Norimichi TSUMURA, Toshiya NAKAGUCHI, Takashi YAHAGI, ...
Type: PAPER
Subject area: Image Processing and Video Processing
2009 Volume E92.D Issue 2 Pages 310-318
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
The block based motion estimation technique is adopted by various video coding standards to reduce the temporal redundancy in video sequences. The core of that technique is the search algorithm implemented to find the location of the best matched block. The full search algorithm is the most straightforward and optimal, but computationally demanding, search algorithm. Consequently, many fast and suboptimal search algorithms have been proposed; reducing the number of locations searched is the usual approach to decreasing the computational load of full search. In this paper, a hybrid of an adaptive search algorithm and the full search algorithm is proposed. The adaptive search algorithm benefits from the correlation among spatially and temporally adjacent blocks. At the same time, a feature-domain matching criterion is used to reduce the complexity resulting from applying the conventional pixel-based criteria. It is shown that the proposed algorithm produces good quality performance and requires less computational time compared with popular block matching algorithms.
• Achyut SAPKOTA, Kazuo OHMI
Type: PAPER
Subject area: Image Recognition, Computer Vision
2009 Volume E92.D Issue 2 Pages 319-326
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
Particle Image Velocimetry (PIV) is a widely used tool for the measurement of different kinematic properties of fluid flow. In this measurement technique, a pulsed laser light sheet is used to illuminate a flow field seeded with tracer particles, and at each instance of illumination, the positions of the particles are recorded on digital CCD cameras. The resulting two camera frames can then be processed by various techniques to obtain the velocity vectors. One such technique involves tracking individual particles so as to identify the displacement of every particle present in the flow field. The displacement of an individual particle, divided by the known time interval, gives the velocity information. The accuracy as well as the efficiency of such measurement systems depends on the reliability of the algorithms used to track those particles. In the present work, a cellular neural network based algorithm is proposed. A performance test has been carried out using standard flow images. The algorithm performs well in comparison to existing algorithms in terms of reliability, accuracy and processing time.
• Young-Chang KIM, Jae-Woo CHANG
Type: LETTER
Subject area: Database
2009 Volume E92.D Issue 2 Pages 327-331
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
The use of historical trajectories of moving objects has greatly increased in various road-network applications. For instance, similar patterns of moving-object trajectories are very useful for designing the transportation network of a new city. In this paper, we define a spatio-temporal similarity measure based on road network distance rather than Euclidean distance. We also propose a new similar trajectory search algorithm based on this spatio-temporal measure, using an efficient pruning mechanism. Finally, we show the efficiency of our algorithm in terms of both retrieval accuracy and retrieval efficiency.
• Moonju PARK
Type: LETTER
Subject area: Dependable Computing
2009 Volume E92.D Issue 2 Pages 332-335
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
This letter analyzes quantum-based scheduling of real-time tasks when each task is allowed to have a different quantum size. It is shown that generalized quantum-based scheduling dominates preemption threshold scheduling in the sense that if tasks are schedulable by preemption threshold scheduling then the tasks must be schedulable by generalized quantum-based scheduling, but the converse does not hold. To determine the schedulability of tasks in quantum-based scheduling, a method to calculate the worst case response time is also presented.
• Kentaroh KATOH, Kazuteru NAMBA, Hideo ITO
Type: LETTER
Subject area: Dependable Computing
2009 Volume E92.D Issue 2 Pages 336-341
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
This paper presents a scan design for delay fault testability of 2-rail logic circuits. The flip flops used in the scan design are based on master-slave ones. The proposed scan design provides complete fault coverage in delay fault testing of 2-rail logic circuits. In two-pattern testing with the proposed scan design, initial vectors are set using the set-reset operation, and the scan-in operation for initial vectors is not required. Hence, the test application time is reduced to about half that of the enhanced scan design. Because the additional function is only the set-reset operation of the slave latch, the area overhead is small. The evaluation shows that the differences in the area overhead of the proposed scan design from those of the standard scan design and the enhanced scan design are 2.1 and -14.5 percent on average, respectively.
• Takashi WATANABE, Kenji KUROSAWA, Makoto YOSHIZAWA
Type: LETTER
Subject area: Rehabilitation Engineering and Assistive Technology
2009 Volume E92.D Issue 2 Pages 342-345
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
A Feedback Error Learning (FEL) scheme was found to be applicable to joint angle control by Functional Electrical Stimulation (FES) in our previous study. However, the FEL-FES controller had a problem in learning the inverse dynamics model (IDM) in some cases. In this paper, methods of applying FEL to FES control were examined in controlling 1-DOF movement of the wrist joint by stimulating 2 muscles, through computer simulation under several control conditions with several subject models. The problems in applying FEL to the FES controller were suggested to lie in restricting the stimulation intensity to positive values between the minimum and maximum intensities, and in cases of very small IDM output values. Learning of the IDM was greatly improved by taking the IDM output range into account and setting a minimum ANN output value when calculating the ANN connection weight changes.
• Trung-Nghia VU, Syng-Yup OHN
Type: LETTER
Subject area: Pattern Recognition
2009 Volume E92.D Issue 2 Pages 346-348
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
We propose a new filter method for feature selection on SELDI-TOF mass spectrum datasets. In the method, a new relevance index is defined to represent the goodness of a feature by considering the distribution of samples based on counts. The relevance index can be used to obtain feature sets for classification. Our method can be applied to mass spectrum datasets with extremely high dimensions and can process clinical datasets of practical size in acceptable calculation time, since it is based on simple counting of samples. The new method was applied to three public mass spectrum datasets and showed results better than or comparable to those of conventional filter methods.
• Deok-Su NA, Myung-Jin BAE
Type: LETTER
Subject area: Speech and Hearing
2009 Volume E92.D Issue 2 Pages 349-352
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
Break prediction is an important step in text-to-speech systems, as break indices (BIs) strongly influence how prosodic phrase boundaries are represented. Accurate prediction is difficult, however, since BIs are often chosen according to the meaning of a sentence or the reading style of the speaker. In Japanese, the prediction of an accentual phrase boundary (APB) and a major phrase boundary (MPB) is particularly difficult. This paper therefore presents a method to compensate for APB and MPB prediction errors. First, we define a subtle BI, for which it is hard to decide clearly between an APB and an MPB, as a variable break (VB), and an explicit BI as a fixed break (FB). VBs are identified using a classification and regression tree, and multiple prosodic targets for pitch and duration are then generated. Finally, unit selection is conducted using the multiple prosodic targets. Experimental results show that the proposed method improves the naturalness of synthesized speech.
• Min-Woo PARK, Jong-Tae PARK, Gwang-Hoon PARK, Doug-Young SUH
Type: LETTER
Subject area: Image Processing and Video Processing
2009 Volume E92.D Issue 2 Pages 353-356
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
This letter introduces a cost-effective chrominance compensation scheme. The proposed method is applied to both the ‘INTER 16 × 16’ and ‘SKIP’ modes, and only in anchor P-pictures. In simulations under the JVT common test conditions, the proposed method obtains average BD-PSNR gains of 0.14 dB for U and 0.13 dB for V while maintaining almost the same BD-PSNR for Y. In the low bit-rate range, average BD-PSNR gains of 0.14 dB, 0.49 dB, and 0.53 dB are observed for Y, U, and V, respectively. The additional computational complexity is marginal because anchor P-pictures make up only a small fraction of the coded video sequence, yet the proposed method significantly improves the coding efficiency of the color components.
• Yingjun TANG, De XU, Guanghua GU, Shuoyan LIU
Type: LETTER
Subject area: Image Recognition, Computer Vision
2009 Volume E92.D Issue 2 Pages 357-360
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
We present a novel model, named Category Constraint-Latent Dirichlet Allocation (CC-LDA), to learn and recognize natural scene categories. Previous work had to resort to an additional classifier after obtaining the image topic representation. Our model incorporates category information into topic inference, so every category is represented by its own topic simplex and topic size, which is consistent with human cognitive habits. The distinguishing feature of our model is that it can perform discrimination without an additional classifier, at the same time as it obtains the topic representation. We investigate classification performance on tasks with varying sets of scene categories. The experiments demonstrate that our model achieves better performance with less training data.
• Ning LI, De XU
Type: LETTER
Subject area: Image Recognition, Computer Vision
2009 Volume E92.D Issue 2 Pages 361-364
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
This letter proposes a neurobiological approach to action recognition. In this approach, actions are represented by a visual-neuron feature (VNF) based on a quantitative model of object representation in the primate visual cortex. A supervised classification technique is then used to classify the actions. The proposed VNF is invariant to affine translation and scaling of moving objects while maintaining action specificity; moreover, it is robust to deformation of the actors. Experiments on publicly available action datasets demonstrate that the proposed approach outperforms conventional action recognition models based on computer-vision features.
• JiYing WU, QiuQi RUAN, Gaoyun AN
Type: LETTER
Subject area: Image Recognition, Computer Vision
2009 Volume E92.D Issue 2 Pages 365-368
Published: February 01, 2009
Released: February 01, 2009
JOURNALS FREE ACCESS
A novel bimodal method for face recognition under low lighting conditions is proposed. It fuses an enhanced gray-level image and an illumination-invariant geometric image at the feature level. To further improve recognition performance under large variations in attributes such as pose and expression, discriminant features are extracted from the source images using a wavelet-transform-based method. The features are adaptively fused to reconstruct the final face sample, and Fisher linear discriminant analysis (FLD) is then used to generate a supervised discriminant space for classification. Experiments show that the bimodal method outperforms conventional methods under complex conditions.
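The adaptive feature-level fusion step can be sketched as follows, assuming a simple energy-based weighting; the letter's actual wavelet-based extraction and fusion rule may differ, and all names here are illustrative.

```python
import numpy as np

def fuse_features(f_gray, f_geom, eps=1e-8):
    """Hypothetical adaptive feature-level fusion of two face representations.

    f_gray: feature vector from the enhanced gray-level image
    f_geom: feature vector from the illumination-invariant geometric image
    Each modality is weighted in proportion to its feature energy, so the
    more informative representation dominates the fused sample.
    """
    e1 = np.sum(f_gray ** 2)
    e2 = np.sum(f_geom ** 2)
    w1 = e1 / (e1 + e2 + eps)
    w2 = e2 / (e1 + e2 + eps)
    return w1 * f_gray + w2 * f_geom
```

The fused vector would then be projected into the FLD discriminant space and classified, e.g. by nearest class mean.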