A 34-inch inorganic EL panel for high-definition TV with a simple structure has been successfully developed by combining color-conversion materials with a BaAl2S4:Eu blue phosphor. The blue phosphor devices achieve a luminance higher than 2,300 cd/m2 at an applied AC voltage of threshold voltage plus 60 V and an operating frequency of 120 Hz, and a maximum efficacy of 2.5 lm/W. The panels fabricated using these technologies achieved a peak luminance of over 400 cd/m2, a color gamut larger than 95% of the NTSC color area in CIE (x, y), and a wide viewing angle.
This paper presents a dynamic range expansion technique for CMOS image sensors using dual charge storage in a pixel and multiple exposures. Each pixel contains two photodiodes, PD1 and PD2, whose sensitivities can be set independently through their accumulation times. The difference in charge accumulation time between the two photodiodes can be controlled over a wide range to expand the dynamic range of the sensor, and the dynamic range is flexibly adjustable because the accumulation time for the PD2 signal is programmable. The multiple-exposure technique used in the sensor reduces motion blur in the synthesized wide dynamic range image when capturing fast-moving objects. It also reduces the signal-to-noise ratio dip at the switching point from the PD1 signal to the PD2 signal in the synthesized image. To reduce the readout time, a comparator-controlled selective readout of the PD1 and PD2 signals has been tested. The synthesis of the captured images into a wide dynamic range image is also described. A wide dynamic range image sensor with 320x240 pixels has been implemented and tested. We find that four exposures per frame for the short-accumulation-time signals are sufficient to reduce motion blur in the synthesized image, and that the signal-to-noise ratio dip at the PD1-to-PD2 switching point is reduced by 6 dB using four short-time exposures.
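The switching step of such a synthesis can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the saturation level, accumulation times, and margin are assumed values, and real sensors add noise and nonlinearity.

```python
# Hypothetical sketch of wide-dynamic-range synthesis from a long-accumulation
# signal (PD1) and a short-accumulation signal (PD2); the full-well level,
# accumulation times, and switching margin below are illustrative assumptions.

FULL_WELL = 1000.0   # saturation level of the pixel output (assumed)
T_LONG = 16.0        # PD1 accumulation time (arbitrary units)
T_SHORT = 1.0        # PD2 accumulation time

def capture(irradiance):
    """Simulate the two signals for one pixel."""
    pd1 = min(irradiance * T_LONG, FULL_WELL)  # long exposure, may saturate
    pd2 = irradiance * T_SHORT                 # short exposure, rarely saturates
    return pd1, pd2

def synthesize(pd1, pd2, margin=0.95):
    """Use PD1 where it is unsaturated; otherwise switch to the scaled PD2."""
    if pd1 < margin * FULL_WELL:
        return pd1
    return pd2 * (T_LONG / T_SHORT)  # rescale to the PD1 exposure

dark = synthesize(*capture(10.0))     # below saturation: PD1 used directly
bright = synthesize(*capture(200.0))  # PD1 saturates: scaled PD2 extends range
```

The SNR dip the abstract mentions arises exactly at the switching point, where the synthesized value jumps from the low-noise PD1 signal to the amplified PD2 signal.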
Three formal verification approaches targeting C-language-based hardware designs, which are the central verification technologies for C-based hardware design flows, are presented. The first approach is to statically analyze C design descriptions for inconsistent or inadequate usages, such as out-of-bounds array accesses, uses of variables before initialization, and deadlocks. It is based on local analysis of the descriptions and is hence applicable to large design descriptions. The key issue for this approach is how to reason about the various dependencies among statements as precisely and as quickly as possible. The second approach is to model check C design descriptions. Since straightforward model checking does not work well for large descriptions, automatic abstractions or reductions of descriptions and their refinements are integrated with the model checking methods so that reasonably large designs can be processed. By concentrating on particular types of properties, design sizes can be reduced substantially, and as a result real-life designs can be model checked. The last approach is to check equivalence between two C design descriptions. It is based on symbolic simulation of the design descriptions. Since large design descriptions can contain huge numbers of execution paths, various techniques to reduce the number of execution paths to be examined are incorporated. All of the presented methods use dependence analysis on data, control, and other relations as their basic analysis technique. System dependence graphs for programming languages are extended to deal with C-based hardware designs that also have structural hierarchy. With these techniques, reasonably large design descriptions can be checked.
In order to cope with the increasing leakage power and device variability in VLSIs, the required granularity of control in both the space domain and the time domain is decreasing. This paper presents several recent fine-grain voltage engineering techniques for low-power VLSI circuit design. The space-domain techniques include fine-grain power supply voltage control using 3D-structured on-chip buck converters with a maximum power efficiency of 71.3% in 0.35-µm CMOS, and fine-grain body bias control to reduce the power supply voltage in 90-nm CMOS. The time-domain techniques include accelerators for power supply voltage hopping with a 5-ns transition time in 0.18-µm CMOS, a power supply noise canceller achieving a 32% power supply noise reduction in 90-nm CMOS, and backgate bias accelerators for fast wake-up with a 1.5-V change of backgate voltage in 35 ns in 90-nm CMOS.
Scheduling, an important step in high-level synthesis, is essentially a search process in the solution space. Due to the vastness of the solution space and the complexity of the imposed constraints, it is usually difficult to explore the solution space efficiently. In this paper, we present a random-walk-based perturbation method to explore the schedule space. The method works by limiting the search to a specifically defined sub-solution space (SSS), in which schedules can be found in polynomial time. The SSS is then repeatedly perturbed using an N-dimensional random walk so that better schedules can be found in the new SSS. To improve the search efficiency, a guided perturbation strategy is presented that leads the random walk toward promising directions. Experiments on well-known benchmarks show that, by controlling the number of perturbations, our method conveniently trades off schedule quality against runtime. Within reasonable runtime, the proposed method finds schedules of better quality than existing methods.
In behavioral synthesis for resource-shared architectures, multiplexers are inserted between registers and functional units as a result of binding when necessary. Multiplexer optimization during binding is important for the performance, area, and power of the synthesized circuit. In this paper, we propose a binding algorithm to reduce the total number of multiplexer ports. Unlike most previous works, in which binding is performed by a constructive algorithm, our approach is based on iterative improvement. It starts from an initial functional unit binding and an initial register binding, and both are iteratively modified by local improvements based on tabu search. The binding in each iteration is feasible, so the actual total number of multiplexer ports can be optimized directly. A smart neighborhood that accounts for the effect of connection sharing is used in the proposed method for effective reduction of multiplexer ports. Additionally, a massive modification of the binding is performed at regular intervals to achieve a further reduction of multiplexer ports and further robustness against the initial binding. Experimental results show that our approach can reduce the total number of multiplexer ports by 30% on average compared to a traditional binding algorithm, with computation times of several seconds to a few minutes. Robustness evaluations also show that our approach barely depends on the initial binding.
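The tabu-search scheme underlying this kind of iterative improvement can be sketched on a toy problem. The bit-string objective below stands in for the multiplexer-port count; the paper's actual neighborhood over functional-unit and register bindings is far more elaborate.

```python
# Generic tabu-search skeleton on a toy problem: match a hidden bit string
# using single-bit-flip moves.  The target, tenure, and iteration budget are
# illustrative stand-ins for the binding-optimization setting.

TARGET = (1, 0, 1, 1, 0, 0, 1, 0)

def cost(sol):
    """Number of positions that differ from the hidden target."""
    return sum(a != b for a, b in zip(sol, TARGET))

def tabu_search(start, iters=100, tenure=3):
    current, best = list(start), list(start)
    tabu = []  # positions flipped recently may not be flipped again
    for _ in range(iters):
        # choose the best non-tabu single-bit-flip neighbor
        moves = [i for i in range(len(current)) if i not in tabu]
        i = min(moves, key=lambda j: cost(current[:j] + [1 - current[j]] + current[j + 1:]))
        current[i] = 1 - current[i]
        tabu.append(i)
        if len(tabu) > tenure:
            tabu.pop(0)  # expire the oldest tabu move
        if cost(current) < cost(best):
            best = list(current)
    return best

best = tabu_search([0] * 8)
```

The tabu list is what lets the search accept non-improving moves without immediately undoing them, which is the property the abstract relies on to escape local optima of the port count.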
Power dissipated by data communications on an LSI depends not only on the binding and floorplan of functional units and registers but also on how the data communications are executed. Data communications depend on the binding, and the binding depends on the schedule of operations. Therefore, it is important to obtain the schedule that leads to the binding and floorplan minimizing the power dissipated by data communication. In this paper, a schedule exploration method is presented that searches for the schedule achieving the minimum energy dissipation of data communications.
This paper proposes a behavioral synthesis system for asynchronous circuits with bundled-data implementation. The proposed system is based on a behavioral synthesis method for synchronous circuits and extends its operation scheduling and control synthesis for bundled-data implementation. The system synthesizes an RTL model and a simulation model from a behavioral description specified in a restricted C language, a resource library, and a set of design constraints. This paper shows the effectiveness of the proposed system in terms of area and latency through comparisons among bundled-data implementations synthesized by the proposed system, their synchronous counterparts, and bundled-data implementations synthesized directly by a behavioral synthesis method for synchronous circuits.
For functional formal verification, model checking of assertions has attracted attention. In SystemVerilog, assertions may include “local variables”, which are used to store and refer to data values locally within assertions. For model checking, a finite automaton called a “checker” is generated. In the previous approach to checker generation by Long and Seawright, the checker introduces new state variables for each local variable, and the number of introduced state variables per local variable is linear in the size of the given assertion. In this paper, we present a checker generation algorithm that reduces the number of introduced state variables; in particular, our algorithm requires only one such variable per local variable. We also present experimental results on bounded model checking comparing our algorithm with the previous work by Long and Seawright.
A GIDL (gate-induced drain leakage) current model for advanced MOSFETs is proposed and implemented in HiSIM2, a complete surface-potential-based MOSFET model. The model considers two tunneling mechanisms: band-to-band tunneling and trap-assisted tunneling. In total, seven model parameters are introduced. Simulation results for NFETs and PFETs reproduce measurements for any device size without binning of the model parameters. The influence of the GIDL current is investigated in circuits that are sensitive to changes in stored charge caused by the GIDL current.
Synchronous design methodology is widely used for today's digital circuits. However, it is difficult to reuse a synchronous module highly optimized for a specific clock frequency in other systems with different global clocks, because the logic depth between FFs must be tailored to the clock frequency. In this paper, we focus on asynchronous design, in which each module works at its best performance, and apply it to an IEEE-754-standard single-precision floating-point divider. Our divider can be built into a system with an arbitrary clock frequency while retaining its peak performance and its area and power efficiency. This paper also reports an implementation and performance evaluation of the proposed divider on a Xilinx Virtex-4 FPGA. The evaluation results show that our divider achieves smaller area and lower power consumption than synchronous dividers with comparable throughput.
In this paper, we propose a partially parallel irregular LDPC decoder for the IEEE 802.11n standard, targeting high-throughput applications. The proposed decoder has several merits: (i) it is designed around a novel delta-value-based message passing algorithm that improves decoding throughput by removing redundant computation; (ii) techniques such as binary sorting, parallel column operation, and high-performance pipelining further speed up the message-passing procedure. Synthesis results in TSMC 0.18-µm CMOS technology demonstrate that, for the (648, 324) irregular LDPC code, our decoder achieves an 8x increase in throughput, reaching 418 Mbps at a frequency of 200 MHz.
We developed a new open source multi-core processor simulator, SimCell, from scratch. SimCell is modeled on the Cell Broadband Engine. In this paper, we describe the advantages of the functional-level simulator SimCell. Measurements confirm that SimCell achieves a practical simulation speed. We also present a cycle-accurate version of SimCell called SimCell/CA (CA stands for cycle accurate); the gap in execution cycles between SimCell/CA and the IBM simulator is 0.8% on average. Through a real case study using SimCell, we demonstrate its usefulness for processor architecture research.
For compiler developers, one big issue is how to describe the specification of the compiler's intermediate representation (IR), which consists of various entities such as symbol tables, syntax trees, and analysis information. As the IR is the central data structure of a compiler, a precise specification of it is strongly desired. However, formalizing an actual IR is not easy, since it tends to be large, has complex interdependencies between its entities, and depends on a specific implementation language. In this paper, as a first step toward solving this problem, we propose a new data model for IR, called IIR. The goal of IIR is to describe the specification of an IR declaratively, without depending on its concrete implementation details. The main idea is to model all entities of the IR as relations with explicit identifiers. This allows us to develop an IR model almost verbatim from an actual IR and to describe its specification using the full expressiveness of conventional logic languages. The specification is inherently executable and can be used to check the validity of the IR at compile time. As a practical case study, we formalized the IR of our production compiler in IIR and developed a type system for it in Prolog. Experimental results on size and performance are reported.
Separation logic is an extension of Hoare logic for verifying imperative programs with pointers and mutable data structures. Although several verifiers for separation logic have been implemented, none of them has itself been verified. In this paper, we present a verifier for a fragment of separation logic that is itself verified inside the Coq proof assistant. This verifier is implemented as a Coq tactic, by reflection, to verify separation logic triples. Thanks to Coq's extraction facility to OCaml, we can also derive a certified, stand-alone, and efficient verifier for separation logic.
Multiple sequence alignment (MSA) is a useful tool in bioinformatics. Although many MSA algorithms have been developed, there is still room for improvement in accuracy and speed. We have developed an MSA program, PRIME, whose crucial feature is a group-to-group sequence alignment algorithm with a piecewise linear gap cost. We have shown that PRIME is one of the most accurate MSA programs currently available; however, it is slower than other leading MSA programs. To improve computational performance, we incorporate new anchoring and grouping heuristics into PRIME. The anchoring method locates well-conserved regions in a given MSA as anchor points to reduce the region of the DP matrix to be examined, while the grouping method detects conserved subfamily alignments, specified by a phylogenetic tree, in a given MSA to reduce the number of iterative refinement steps. The results of BAliBASE 3.0 and PREFAB 4 benchmark tests indicate that these heuristics reduce the computational time of PRIME by more than 60%, while the average alignment accuracy measures decrease by at most 2%. Additionally, we evaluated the effectiveness of an iterative refinement algorithm based on maximal expected accuracy (MEA). Our experiments reveal that when many sequences are aligned, the MEA-based algorithm significantly improves alignment accuracy compared with the standard version of PRIME, at the expense of a considerable increase in computation time.
Let T be a set of hidden strings and S be a set of their concatenations. We address the problem of inferring T from S. Any formalization of the problem as an optimization problem is computationally hard: it is NP-complete even to determine whether there exists a T smaller than S, and it is also NP-complete to partition just two strings into the smallest common collection of substrings. In this paper, we devise a new algorithm that infers T by finding common substrings in S and splitting them. The algorithm is scalable and runs in O(L) time regardless of the cardinality of S, where L is the sum of the lengths of all strings in S. In computational experiments, 40,000 random concatenations of randomly generated strings were successfully decomposed, and the effectiveness of our method on this problem was compared with that of multiple sequence alignment programs. We also present the results of a preliminary experiment on the transcriptome of Homo sapiens and describe problems arising in applications where real large-scale cDNA sequences are analyzed.
We study the predecessor and control problems for Boolean networks (BNs). The predecessor problem is to determine whether there exists a global state that transits to a given global state in a given BN, and the control problem is to find a sequence of 0-1 vectors for the control nodes of a given BN that leads the BN to a desired global state. The predecessor problem is useful both for the control problem and for analyzing the landscape of basins of attraction in BNs. In this paper, we focus on BNs of bounded indegree and show hardness results on the computational complexity of the predecessor and control problems. We also present simple algorithms for the predecessor problem that are much faster than the naive exhaustive search-based algorithm. Furthermore, we show results on the distribution of predecessors, which lead to an improved algorithm for the control problem for BNs of bounded indegree.
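The naive exhaustive baseline that the paper's algorithms improve upon can be sketched concretely. The three-node network and its update functions below are hypothetical examples, not taken from the paper.

```python
# Naive predecessor search for a tiny Boolean network.  The update functions
# of this 3-node example are illustrative; the paper's algorithms exploit
# bounded indegree to beat this exhaustive enumeration.

from itertools import product

def step(state):
    """One synchronous update of the example BN."""
    x1, x2, x3 = state
    return (x2 and x3,   # f1 = x2 AND x3
            1 - x1,      # f2 = NOT x1
            x1 or x2)    # f3 = x1 OR x2

def predecessors(target, n=3):
    """All global states that transit to `target` in one synchronous step."""
    return [s for s in product((0, 1), repeat=n) if step(s) == target]

# A state whose predecessor list is empty is a "garden-of-Eden" state.
preds = predecessors((0, 0, 1))
```

The cost is 2^n update evaluations per query, which is exactly the exponential blow-up that motivates the faster algorithms for bounded-indegree networks.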
In this paper, we propose a fully pipelined multishift QR algorithm to compute all the eigenvalues of a symmetric tridiagonal matrix on parallel machines. Existing approaches to parallelizing the tridiagonal QR algorithm, such as the conventional multishift QR algorithm and the deferred shift QR algorithm, suffer from either inefficient processor utilization or deteriorated convergence. In contrast, our algorithm achieves both efficient processor utilization and improved convergence at the same time by adopting a new shifting strategy. Numerical experiments on a shared-memory parallel machine (Fujitsu PrimePower HPC2500) with 32 processors show that our algorithm is up to 1.9 times faster than the conventional multishift algorithm and up to 1.7 times faster than the deferred shift algorithm.
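For orientation, the sequential, unshifted QR iteration that multishift and deferred-shift variants accelerate can be sketched in a few lines. This is the textbook baseline only (linear convergence, no pipelining); the matrix is an arbitrary small example.

```python
# Textbook unshifted QR iteration for a symmetric tridiagonal matrix: each
# step computes A = QR and replaces A by RQ, a similarity transform, so the
# diagonal converges to the eigenvalues.  Shifting strategies -- the subject
# of the paper -- make this converge far faster; none is applied here.

import numpy as np

def qr_eigenvalues(T, iters=300):
    A = T.copy()
    for _ in range(iters):
        Q, R = np.linalg.qr(A)  # A_k = Q R
        A = R @ Q               # A_{k+1} = R Q is similar to A_k
    return np.sort(np.diag(A))  # diagonal -> eigenvalues

n = 4
T = (np.diag([4.0, 3.0, 2.0, 1.0])
     + np.diag([0.3] * (n - 1), 1)
     + np.diag([0.3] * (n - 1), -1))
eigs = qr_eigenvalues(T)
```

Because each unshifted step only damps the off-diagonals by ratios of neighboring eigenvalues, a good shift (and, in the paper, many shifts applied in a pipeline) is what makes the method practical at scale.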
Data cube construction is a commonly used operation in data warehouses. Since both the volume of data stored and analyzed in a data warehouse and the amount of computation involved in data cube construction are very large, incremental maintenance of the data cube is highly effective. In this paper, we employ an extendible multidimensional array model to maintain data cubes. Such an array enables incremental cube maintenance without relocating any previously dumped data, while computing the data cube efficiently by exploiting the fast random-access capability of arrays. We first present our data cube scheme and the related maintenance methods, and then present the corresponding physical implementation scheme. We developed a prototype system based on this physical implementation scheme and performed evaluation experiments on it.
Dynamic simulations are essential for understanding how biochemical networks generate properties that are robust to environmental stresses or genetic changes. However, typical dynamic modeling and analysis yield only local properties for a particular choice of plausible kinetic parameter values, because it is hard to measure the exact values in vivo. Global and robust analyses are needed that consider how changes in parameter values affect the results. A typical solution is to systematically analyze the dynamic behaviors over a large parameter space by searching all plausible parameter values without bias. However, a random search needs an enormous number of trials to obtain such parameter values, while ordinary evolutionary searches obtain plausible parameters swiftly but are biased. To overcome these problems, we propose a two-phase search method consisting of a random search followed by an evolutionary search, to effectively explore all possible solution vectors of kinetic parameters satisfying the target dynamics. We demonstrate that the proposed method enables an unbiased and high-speed parameter search for dynamic models of biochemical networks through applications to several benchmark functions and to the E. coli heat shock response model.
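The two-phase structure can be sketched on a benchmark function. Everything here is a stand-in: the sphere objective replaces the fit-to-target-dynamics error, and the mutation hill climb is a deliberately minimal evolutionary phase.

```python
# Minimal two-phase (random + evolutionary) parameter search on a benchmark
# function.  Population handling, objective, and bounds are illustrative
# stand-ins for the biochemical-network setting of the paper.

import random

def sphere(x):
    """Benchmark objective: low values play the role of 'plausible' parameters."""
    return sum(v * v for v in x)

def two_phase_search(dim=5, n_random=200, n_evolve=500, seed=0):
    rng = random.Random(seed)
    # Phase 1: unbiased random sampling over the parameter space.
    samples = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_random)]
    best = min(samples, key=sphere)
    # Phase 2: evolutionary refinement (Gaussian-mutation hill climbing).
    for _ in range(n_evolve):
        child = [v + rng.gauss(0, 0.3) for v in best]
        if sphere(child) < sphere(best):
            best = child
    return best

best = two_phase_search()
```

The division of labor mirrors the abstract: the random phase supplies coverage without bias, and the evolutionary phase supplies speed once promising regions are found.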
Comparative analyses of enzymatic reactions provide important information on both evolution and potential pharmacological targets. Previously, we focused on the structural formulae of compounds and proposed a method to calculate enzymatic similarities based on these formulae. With that method, however, it is difficult to measure the reaction similarity when the formulae of the compounds constituting each reaction are completely different. The present study extracts the substructures that change within chemical compounds using the RPAIR data in KEGG. Two approaches were applied to measure the similarity between the extracted substructures: a fingerprint-based approach using the MACCS key and the Tanimoto/Jaccard coefficient, and a Topological Fragment Spectra-based approach that does not require any predefined list of substructures. We evaluated whether these similarity measures can detect similarity between enzymatic reactions. Using one of the measures, metabolic pathways in Escherichia coli were aligned to confirm the effectiveness of the method.
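The Tanimoto/Jaccard coefficient on fingerprints is simple enough to state directly. The bit positions below are generic placeholders, not actual MACCS key indices.

```python
# Tanimoto/Jaccard coefficient on substructure fingerprints, represented as
# sets of "on" bit positions.  The bit indices are illustrative placeholders,
# not actual MACCS keys.

def tanimoto(fp_a, fp_b):
    """|A ∩ B| / |A ∪ B| for two sets of set bits."""
    a, b = set(fp_a), set(fp_b)
    if not a | b:
        return 0.0  # two empty fingerprints: define similarity as 0
    return len(a & b) / len(a | b)

# Two substructures sharing 2 of 4 distinct features score 0.5.
sim = tanimoto({3, 17, 42}, {3, 17, 99})
```

Because the coefficient depends only on shared and total features, it behaves identically whether the fingerprint comes from a predefined key set such as MACCS or from enumerated fragments.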
Background: Glycans, or sugar chains, are one of the three types of chain (DNA, protein, and glycan) that constitute living organisms; they are often called “the third chain of the living organism”. Based on the SWISS-PROT database, about half of all proteins are estimated to be glycosylated. Glycosylation is one of the most important post-translational modifications, affecting many critical functions of proteins, including cellular communication and their tertiary structure. To computationally predict N-glycosylation and O-glycosylation sites, we developed three kinds of support vector machine (SVM) models, which utilize local information, general protein information, and/or subcellular localization, taking into account the binding specificity of glycosyltransferases and the characteristic subcellular localization of glycoproteins. Results: In our computational experiments, the model integrating all three kinds of information achieved about 90% accuracy in predicting both N-glycosylation and O-glycosylation sites. Moreover, when our model was applied to a protein whose glycosylation sites had not previously been identified, we showed that the predicted glycosylation sites were structurally reasonable. Conclusions: We developed a comprehensive and effective computational method for detecting glycosylation sites that is applicable at a genome-wide level.
The present paper discusses a head-needed strategy and its decidable classes for higher-order rewrite systems (HRSs), an extension of the head-needed strategy for term rewriting systems (TRSs). We discuss strongly sequential and NV-sequential classes having the following three properties, which are mandatory for practical use: (1) the strategy reducing a head-needed redex is head normalizing, (2) whether a redex is head-needed is decidable, and (3) whether an HRS belongs to the class is decidable. The main difficulty in realizing (1) is caused by the β-reductions induced by the higher-order reductions. Since β-reduction changes the structure of higher-order terms, the definition of descendants for HRSs becomes complicated. To overcome this difficulty, we introduce a function, PV, to follow occurrences moved by β-reductions. We present a concrete definition of descendants for HRSs using PV and then show property (1) for orthogonal systems. We also show properties (2) and (3) using tree automata techniques, a ground tree transducer (GTT), and the recognizability of redexes.
Several information organization, access, and filtering systems can benefit from different kinds of document representations than those used in traditional Information Retrieval (IR). Topic Detection and Tracking (TDT) is an example of such a domain. In this paper we demonstrate that traditional methods for term weighting do not capture topical information, which leads to inadequate document representations for TDT applications. We present various hypotheses regarding the factors that can help improve the document representation for Story Link Detection (SLD), a core task of TDT, and test these hypotheses on various TDT collections. From our experiments and analysis, we find that to obtain a faithful representation of documents in the TDT domain, we need not only to capture a term's importance in the traditional IR sense but also to evaluate its topical behavior. Along with defining this behavior, we propose a novel measure that captures a term's importance at the collection level as well as its discriminating power for topics. This new measure leads to a much better document representation, as reflected by significant improvements in the results.
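The traditional IR baseline the paper argues against, tf-idf weighting with cosine similarity for link detection, can be sketched in a few lines. The toy documents are illustrative only, and the paper's proposed topical measure is deliberately not reproduced here.

```python
# tf-idf weighting + cosine similarity, the traditional-IR baseline for
# story link detection.  Documents below are toy examples.

import math
from collections import Counter

docs = [
    "quake hits city rescue teams dig".split(),
    "rescue teams search quake survivors".split(),
    "stock market rises in heavy trading".split(),
]

df = Counter(t for d in docs for t in set(d))  # document frequency per term
N = len(docs)

def tfidf(doc):
    tf = Counter(doc)
    return {t: tf[t] * math.log(N / df[t]) for t in tf}

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    return dot / (norm(u) * norm(v)) if dot else 0.0

# Stories about the same event share heavily weighted terms.
same_topic = cosine(tfidf(docs[0]), tfidf(docs[1]))
off_topic = cosine(tfidf(docs[0]), tfidf(docs[2]))
```

The paper's point is that idf measures collection-level rarity only; two terms with identical idf can differ greatly in how well they discriminate topics, which is what the proposed measure adds.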
We present a semi-supervised object extraction technique for natural image matting. First, we present a novel unsupervised graph-spectral algorithm for extracting homogeneous regions in an image. We then derive a semi-supervised scheme from this unsupervised algorithm. In our method, it is sufficient for users to draw strokes in only one of the object and background regions. The semi-supervised optimization problem is solved with an iterative method in which memberships are propagated from the strokes to their surroundings. We suggest a guideline for stroke placement by exploiting the same iterative solution process in the unsupervised algorithm. We project the color vectors with linear discriminant analysis to improve color discriminability and speed up the convergence of the iterative method. The performance of the proposed method is examined on several images, and the results are compared with other methods and ground-truth mattes.
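The membership-propagation idea can be sketched on a toy graph. The chain graph, uniform affinities, and damping factor below are assumptions for illustration; the paper operates on image pixels with color-based affinities.

```python
# Membership propagation from a user stroke on a toy chain graph.  The
# iteration f <- alpha*W*f + (1-alpha)*y has the closed-form fixed point
# solved below.  Graph, weights, and alpha are illustrative assumptions.

import numpy as np

n, alpha = 6, 0.8
W = np.zeros((n, n))
for i in range(n - 1):  # chain: node i is adjacent to node i+1
    W[i, i + 1] = W[i + 1, i] = 1.0
W = W / W.sum(axis=1, keepdims=True)  # row-normalize the affinities

y = np.zeros(n)
y[0] = 1.0  # user stroke on node 0 marks "object"

# Fixed point of f = alpha*W*f + (1-alpha)*y:
f = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * W, y)
# Membership decays with graph distance from the stroke.
```

Thresholding f then separates object from background even though only one region was stroked, which is the property the abstract highlights.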
Measuring a bidirectional reflectance distribution function (BRDF) takes a long time because the target object must be illuminated from all incident angles and the reflected light must be measured at all reflection angles. In this paper, we introduce a rapid BRDF measurement system using an ellipsoidal mirror and a projector. Since the system changes incident angles without any mechanical drive, dense BRDFs can be measured rapidly. Moreover, we show that the S/N ratio of the measured BRDF can be significantly increased by multiplexed illumination based on the Hadamard matrix.
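The multiplexing step can be illustrated with an S-matrix, the 0/1 analogue of a Hadamard matrix (illumination cannot be negative). Sizes and reflectance values are toy assumptions.

```python
# Hadamard-style multiplexed illumination sketch: instead of turning on one
# light direction at a time, light several per measurement following the rows
# of an S-matrix, then recover the individual responses by a linear solve.
# The order-3 matrix and reflectance values are illustrative.

import numpy as np

# Cyclic S-matrix of order 3: each row is one illumination pattern.
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

x_true = np.array([2.0, 5.0, 1.0])  # per-direction responses (unknown)
y = S @ x_true                      # one sensor reading per pattern
x_hat = np.linalg.solve(S, y)       # demultiplex
```

Each reading integrates light from several directions at once, so sensor noise is averaged down when the readings are demultiplexed, which is the S/N gain the abstract reports.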
In the present paper, we propose an automatic snapping method that aligns fuzzy objects in a multi-resolution grid system in order to improve the efficiency of sketch-based CAD systems. The sketch-based CAD system that we have previously realized successfully identifies sketch drawings as primitive geometrical curve objects by treating the sketches as fuzzy objects, the fuzziness of which is associated with the roughness of the drawing manner. However, when the system aligns the identified objects with a grid system, difficulties in the grid resolution setting arise because the identified objects often consist of both fine and coarse portions and thus require different grid resolution settings for proper alignment. Meanwhile, the resolution problem with respect to cursor point snapping has been solved by multi-resolution fuzzy grid snapping (MFGS), which realizes automatic selection of the snapping resolution by treating the cursor as a fuzzy point, the fuzziness of which is associated with the roughness of the pointing manner of the user. The present paper proposes a method to apply MFGS to fuzzy objects in order to resolve the difficulties involved in the setting of the snapping resolution of the sketch-based CAD system. Experimental results show that users can align identified objects to an appropriate resolution through MFGS by controlling the roughness of the drawing manner.
This paper proposes a discriminative named entity recognition (NER) method from automatic speech recognition (ASR) results. The proposed method uses the confidence of the ASR result as a feature that represents whether each word has been correctly recognized. Consequently, it provides robust NER for the noisy input caused by ASR errors. The NER model is trained using ASR results and reference transcriptions with named entity (NE) annotation. Experimental results using support vector machines (SVMs) and speech data from Japanese newspaper articles show that the proposed method outperformed a simple application of text-based NER to the ASR results, especially in terms of improving precision.
The lecture is one of the most valuable genres of audiovisual data. Though spoken document processing is a promising technology for utilizing lectures in various ways, it is difficult to evaluate, because the evaluation requires subjective judgment and/or the verification of large quantities of evaluation data. In this paper, a test collection for the evaluation of spoken lecture retrieval is reported. The test collection consists of target spoken documents comprising about 2,700 lectures (604 hours) taken from the Corpus of Spontaneous Japanese (CSJ), 39 retrieval queries, the relevant passages in the target documents for each query, and automatic transcriptions of the target speech data. This paper also reports the retrieval performance on the constructed test collection obtained by applying a standard spoken document retrieval (SDR) method, which serves as a baseline for forthcoming SDR studies using the test collection.
This paper describes a query-by-humming (QbH) music information retrieval (MIR) system based on a novel tonal feature and statistical modeling. Most QbH-MIR systems use a pitch extraction method to obtain tonal features of the input humming; in such systems, pitch extraction errors inevitably occur and degrade performance. In the proposed system, a cross-correlation function between two logarithmic frequency spectra is calculated as the tonal feature instead of the difference between two successive pitch frequencies, and probabilistic models are prepared for all tone intervals existing in the database. The similarity scores between an input humming and the musical pieces in the database are calculated using these probabilistic models. The advantages of this system are that it obtains more appropriate tonal features than the pitch-based method and that, thanks to its statistical approach, it is robust against inaccurate humming by the user. In our experiments, the top-1 retrieval accuracy of the proposed method was 86.8%, more than 10 points higher than that of the conventional single-pitch method. Moreover, several integration methods were applied to the proposed method under several conditions; the majority decision method showed the highest accuracy, yielding a 5% reduction in retrieval error.
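The intuition behind the tonal feature can be sketched as follows: on a logarithmic frequency axis, a pitch interval appears as a shift, so the interval can be read off as the peak lag of a cross-correlation without ever committing to a hard pitch estimate. The synthetic Gaussian-peak spectra below are illustrative only.

```python
# Interval estimation via cross-correlation of two log-frequency spectra.
# Spectra are synthetic Gaussian peaks; bin positions are illustrative.

import math

def log_spectrum(center, n_bins=64, width=2.0):
    """Toy log-frequency spectrum: a Gaussian peak at bin `center`."""
    return [math.exp(-((k - center) ** 2) / (2 * width ** 2))
            for k in range(n_bins)]

def best_lag(s1, s2, max_lag=12):
    """Lag (in log-frequency bins) maximizing the cross-correlation."""
    def xcorr(lag):
        return sum(s1[i] * s2[i + lag]
                   for i in range(len(s1)) if 0 <= i + lag < len(s2))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

# A note 4 log-frequency bins above another yields a peak at lag +4,
# even if both spectra are noisy or contain harmonics.
lag = best_lag(log_spectrum(30), log_spectrum(34))
```

Because the whole spectrum votes on the lag, a single spurious peak does not flip the estimate the way an octave error flips a hard pitch track.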
The CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) idea is widely used as a HIP (Human Interactive Proof) for distinguishing between humans and computer programs. These are automated tests that humans can pass but that current computer programs cannot handle; breaking a CAPTCHA generally involves solving a difficult artificial intelligence problem. There is demand for new technologies that are stronger against automatic attacks by machines without making the tests too hard for humans to pass. In this paper, we propose a concept called the Oblivious CAPTCHA as a fifth-factor technology for a practical CAPTCHA. The Oblivious CAPTCHA uses tasks such as identifying the English alphabetic characters in a set of mixed alphabetic and non-alphabetic characters, or counting the English alphabetic characters in a string. In experiments we found that the Oblivious CAPTCHA was easy for users, because human beings can recognize images of alphabetic characters quickly and accurately, but difficult for computers, because OCR techniques tend to misrecognize non-alphabetic characters as alphabetic ones. This shows that our approach is practical. We also describe novel algorithms, usable with many existing CAPTCHAs, for widening the skill gap between humans and computers.
Although human activities on the World Wide Web are increasing rapidly with the advent of many online services and applications, we still need to appraise how things such as merchandise in a store or pictures in a museum receive attention in the real world. To measure people's attention in the physical world, we propose SPAL, a Sensor of Physical-world Attention using Laser scanning. Using a laser scanner is challenging because it captures only the front-side contour of any detected object in the measurement area. Unlike cameras, a laser scanner poses no privacy problem because it does not recognize or record individuals. SPAL incorporates several important factors when calculating people's attention, i.e., lingering time, the direction people are facing, and the distance to a target object. To obtain this information, we develop three processing modules that extract it from the raw data measured by a laser scanner, and we define two attention metrics and two measurement models to compute people's attention. To validate the proposed system, we implemented a prototype of SPAL and conducted experiments in a real-world environment. The results show that the proposed system is a good candidate for determining people's attention.
Covariate shift is a situation in supervised learning where training and test inputs follow different distributions even though the functional relation remains unchanged. A common approach to compensating for the bias caused by covariate shift is to reweight the loss function according to the importance, which is the ratio of test and training densities. We propose a novel method that allows us to directly estimate the importance from samples without going through the hard task of density estimation. An advantage of the proposed method is that the computation time is nearly independent of the number of test input samples, which is highly beneficial in recent applications with large numbers of unlabeled samples. We demonstrate through experiments that the proposed method is computationally more efficient than existing approaches with comparable accuracy. We also describe a promising result for large-scale covariate shift adaptation in a natural language processing task.
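The reweighting step described above can be sketched in a few lines. In this minimal sketch the importance w(x) = p_test(x)/p_train(x) is simply assumed to be given; the paper's actual contribution is estimating it directly from samples without density estimation. All names and the toy data are assumptions for illustration.

```python
# Minimal sketch of importance-weighted empirical risk under covariate shift.
# importance(x) stands in for the density ratio p_test(x) / p_train(x).

def importance_weighted_loss(xs, ys, predict, loss, importance):
    """Average loss over training samples, reweighted by the importance."""
    total = 0.0
    for x, y in zip(xs, ys):
        total += importance(x) * loss(predict(x), y)
    return total / len(xs)

# Toy example: squared loss for a constant predictor, with a toy importance
# that upweights the region the test distribution emphasizes.
xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 2.0]
predict = lambda x: 1.0
sq = lambda p, y: (p - y) ** 2
w = lambda x: 1.0 if x < 2.0 else 2.0
print(importance_weighted_loss(xs, ys, predict, sq, w))  # → 1.0
```

Training with this weighted risk makes the empirical objective an unbiased estimate of the test-distribution risk, which is the standard correction for covariate shift.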
Information Retrieval (IR) test collections are growing larger, and relevance data constructed through pooling are suspected of becoming more and more incomplete and biased. Several studies have used IR evaluation metrics specifically designed to handle this problem, but most of them have only examined the metrics under incomplete but unbiased conditions, using random samples of the original relevance data. This paper examines nine metrics in more realistic settings, by reducing the number of pooled systems and the number of pooled documents. Even though previous studies have shown that metrics based on a condensed list, obtained by removing all unjudged documents from the original ranked list, are effective for handling very incomplete but unbiased relevance data, we show that these results do not hold when the relevance data are biased towards particular systems or towards the top of the pools. More specifically, we show that the condensed-list versions of Average Precision, Q-measure and normalised Discounted Cumulative Gain, which we denote as AP', Q' and nDCG', are not necessarily superior to the original metrics for handling biases. Nevertheless, AP' and Q' are generally superior to bpref, Rank-Biased Precision and its condensed-list version even in the presence of biases.
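To make the condensed-list idea concrete, the sketch below computes AP' by removing all unjudged documents from the ranked list before computing Average Precision. The data layout (a dict mapping judged documents to relevance) and names are illustrative assumptions.

```python
# Hypothetical sketch of a condensed-list metric: AP' is Average Precision
# computed after dropping every unjudged document from the ranked list.

def condensed_ap(ranking, judgments, num_relevant):
    """AP over the condensed list; judgments maps judged docs to True/False."""
    condensed = [d for d in ranking if d in judgments]  # remove unjudged docs
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(condensed, start=1):
        if judgments[doc]:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / num_relevant if num_relevant else 0.0

# d3 is unjudged, so it is dropped before precision is computed at each rank.
ranking = ["d1", "d3", "d2", "d4"]
judgments = {"d1": True, "d2": False, "d4": True}
print(condensed_ap(ranking, judgments, num_relevant=2))  # → 0.8333...
```

The removal step is exactly what makes such metrics behave well under random (unbiased) incompleteness, and also what makes them vulnerable when the missing judgments are biased toward particular systems.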
Distributional similarity has been widely used to capture the semantic relatedness of words in many NLP tasks. However, parameters such as the similarity measure must be manually tuned to make distributional similarity work effectively. To address this problem, we propose a novel approach to synonym identification based on supervised learning and distributional features, which correspond to the commonality of individual context types shared by word pairs. This approach also enables integration with pattern-based features. In our experiments, we built and compared eight synonym classifiers and observed a drastic performance increase of over 60% in F1 measure compared to conventional similarity-based classification. The proposed distributional features classify synonyms better than conventional common features, while the pattern-based features turned out to be almost redundant.
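One way to picture "distributional features" is one feature per context type, valued by how strongly that context is shared by the word pair. The sketch below uses the minimum of the two co-occurrence counts as the commonality score; that choice, the context-type names, and the toy counts are all plain assumptions standing in for whatever weighting the paper actually uses.

```python
# Illustrative sketch: a feature vector over context types, where each value
# reflects the commonality of that context shared by a word pair.

def distributional_features(ctx_a, ctx_b, context_types):
    """One feature per context type; min of the two counts as commonality."""
    return [min(ctx_a.get(c, 0), ctx_b.get(c, 0)) for c in context_types]

# Toy co-occurrence counts for two words over three context types.
context_types = ["eat_obj", "drink_obj", "mod_red"]
apple = {"eat_obj": 5, "mod_red": 3}
pear = {"eat_obj": 2, "drink_obj": 1}
print(distributional_features(apple, pear, context_types))  # → [2, 0, 0]
```

A supervised classifier trained on such vectors can learn which context types matter for synonymy, instead of requiring a hand-tuned similarity measure to collapse them into a single score.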
The Grid has increasingly gathered the attention and interest of scientists and researchers as a building-block technology for computational infrastructure. Because of the recent increasing demands on the Grid, its utilization has been explored in various scientific research areas. In reality, however, the Grid is not well utilized in today's practical research areas that treat security-sensitive data, especially in biomedical research. This is partly due to a lack of know-how about how access control can be achieved in actual applications, despite the maturity of security technologies for authentication and authorization. From this perspective, this paper presents a Grid-aware access control mechanism leveraging MyProxy, GSI, PERMIS and XSLT, which we have built into a clinical database for Parkinson's research and diagnosis. In particular, we focus on how these technologies are used to satisfy the access control requirements derived from the clinical database. We also show that the proposed access control mechanism can be operated with low-cost administration and acceptable overhead.
Recently, P2P networks have been evolving rapidly, yet efficient authentication of P2P network nodes remains a difficult task. In this paper, we propose an authentication method called the Hash-based Distributed Authentication Method (HDAM), which realizes a decentralized, efficient mutual authentication mechanism for each pair of nodes in a P2P network. It performs distributed management of public keys using a Web of Trust and a Distributed Hash Table. The scheme markedly reduces both the memory size requirement and the overhead of communication data sent by the nodes. Simulation results show that HDAM can reduce the required memory size by up to 95%. Furthermore, the results show that HDAM is more scalable than the conventional method: the communication overhead of HDAM is O(log p).
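The DHT side of such a scheme can be sketched as a hash ring: each node stores the public keys whose hashed identifiers fall closest to its own hashed identifier, so any key can be located without a central server. The ring size, hash choice, and names below are illustrative assumptions, not HDAM's actual design.

```python
# Hypothetical sketch: locating the node responsible for a public key on a
# hash ring, the lookup structure underlying a Distributed Hash Table.
import hashlib

RING = 2 ** 16  # illustrative ring size

def ring_id(name):
    """Hash a name onto the ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % RING

def responsible_node(key_owner, nodes):
    """First node whose ring ID is at or after the key's ring ID."""
    k = ring_id(key_owner)
    ids = sorted((ring_id(n), n) for n in nodes)
    for nid, n in ids:
        if nid >= k:
            return n
    return ids[0][1]  # wrap around the ring

nodes = ["nodeA", "nodeB", "nodeC"]
print(responsible_node("alice-pubkey", nodes))
```

Because every node can compute the same mapping locally, each node only needs to store its own share of the keys, which is where the memory reduction reported above comes from; a structured DHT resolves such lookups in O(log p) hops.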
NGN (Next Generation Network) refers to a network developed to suit an environment in which wired communications, wireless communications, and broadcasting converge. Most telecommunication providers are planning or already conducting a migration of their networks to NGN. One of the most important issues in constructing an NGN is providing end-to-end QoS (Quality of Service). This paper proposes a QoS-oriented construction strategy for carriers beginning to evolve their networks to NGN, illustrated by an example of a nationwide NGN in Korea. In this strategy, we first develop converged services and define the services to be provided through the NGN. Next, we define our own standards for service performance metrics, network performance metrics, and quality-of-service policy from the transport-stratum viewpoint. We then design and construct the NGN. Finally, we verify the end-to-end performance objectives by comparing the predefined metrics with measurement data collected on the NGN, and derive improvements. This paper analyzes the traffic characteristics of voice and video telephony data statistically; it should be noted that voice and video are real-time data and reflect the absolute need for QoS in the network.
A large number of embedded computers, such as network appliances and sensors, have rapidly spread into home and office environments in the last few years. These embedded computers have enough CPU power to execute software components that control hardware. Managing distributed components together can enhance human activity and turn the real world into a “Smart Space.” We call such a collaboration of components a “federated service” or “application.” In this paper, we develop and evaluate a novel middleware named uBlocks, which enables users to build and manage applications. Unlike other middleware for building distributed applications, uBlocks is distinguished by two major features. The first is a flexible communication mechanism named RT/Dragon, which enables the connection of heterogeneous components. The second is the universal modeling of various distributed components, which supports application building by multiple users in parallel. Additionally, to enable building applications in a simple way, we provide various user interfaces (UIs) for multi-modal visualization: a 2D/3D user interface and a web interface. These features reduce the cost of building and managing distributed applications for the user. This research demonstrates that the idea of users building applications themselves is practical and effective.
Regional disparity has become a major problem in both China and Japan. Governments are emphasizing the development of rural areas (in Japan, depopulated regions) to decrease the economic gap between these areas and major urban centers. This paper compares a successful case in each country and identifies a sustainable development approach to establishing local autonomy for developing business and/or industry. The success factors in these two cases were analyzed, and several implications and insights were clarified. Local specialties should be taken into account, and adopting a suitable strategy while establishing autonomy for the local people is a realistic and effective approach to developing the socio-economy of regional areas and to narrowing the disparity. Throughout such development, a wise leader or a team of leaders is essential, and this leader (or leaders) must also maintain open communication with the local community.
In recent years, a communications system, the Body Area Network, which uses the human body as a transmission path, has attracted attention, and there is increasing expectation that it will be used more widely. However, several points about the mechanism of signal transmission through the human body remain to be clarified, and there has been little research into the interaction between electromagnetic waves and the human body. Therefore, we used the Finite Difference Time Domain (FDTD) method to calculate the E-field distributions around simple and realistic models of the whole human body in free space with a wearable device. Moreover, E-field calculations were carried out with the body in different positions. Our results show that a simple homogeneous whole-body model is valid for the E-field calculation, and that the dominant component of the E-field is normal to the body/air interface in all the positions that the human body assumes in daily life. Furthermore, when the human body is shunted to earth ground, the E-field distribution was shown to differ little from that when the body is floating in free space. It can be concluded that these results provide useful information for improving the design of wearable devices.