On-chip interconnects are becoming a major power consumer in scaled VLSI designs. Consequently, bus power reduction has become an effective approach to total power reduction in chip multiprocessors and systems-on-a-chip that require long interconnects for their buses. In this paper, we advocate the use of bus serialization to reduce bus power consumption. Bus serialization decreases the number of wires and increases the pitch between them. The wider pitch decreases the coupling capacitance of the wires and consequently reduces bus power consumption. Evaluation results indicate that our technique can reduce bus power consumption by 30% in a 45nm technology process.
This paper presents a type-theoretical framework for the verification of compiler optimizations. Although today's compiler optimizations are fairly advanced, there is still no appropriate theoretical framework for verifying them. To establish a generalized theoretical framework, we introduce assignment types for variables, which represent how the value of each variable is calculated in a program. In this paper, we first introduce our type system. Second, we prove the soundness of our type system, which implies that two given programs are equivalent if their return values have equal types. Soundness ensures that many structure-preserving optimizations preserve program semantics. Furthermore, by extending the notion of type equality to order relations, we redefine several optimizations and prove that they, too, preserve program semantics.
In this paper, we present an algorithm for LTL (linear temporal logic) model checking of LL-GG-TRS with regular tree valuations. The class LL-GG-TRS is defined as a subclass of term rewriting systems and extends the class of pushdown systems (PDS) in the sense that the pushdown stack of a PDS is extended to a tree structure. With this extension, we can model recursive programs with exception handling.
We formally verify the correctness of Transition System Reduction (TSR), an algorithm used in model checkers for temporal logics. Formalizing TSR as a function, we formulate and prove its correctness within the proof assistant PVS. We show how to use a well-ordering on a certain set in a termination proof for the loop-based TSR algorithm. We further detail TSR's partial-correctness proof. The formal framework for these proofs is part of our research toward a rigorous verification environment for reactive systems.
We present a DNA sequence analysis system for the cellular slime mold Dictyostelium discoideum. The upstream sequences may include cis-elements that are involved in the temporal and spatial regulation of transcription. Our goal is to identify these cis-elements with statistical methods. For this purpose, we have developed a distributed system. Its main components are an alignment program based on dynamic programming, which estimates candidates for cis-elements in upstream sequences, and a statistical analyzer, which checks whether the candidate elements have a given statistical property. The system supports SOAP, and the components can be deployed and collaborate on the web. In this paper, we mainly discuss the system architecture and evaluate its efficiency.
Making effective use of the cache is key to good performance. In this paper, Array-Based Cache-conscious trees (ABC trees for short) are proposed to realize good performance for not only search operations but also update operations. The logical structure and manipulation of an ABC tree are similar to those of a B+-tree. The array for an ABC tree is initially allocated as if the tree were complete. This allows the tree to keep its core in contiguous memory and to reduce the number of pointers in it. As a result, the key capacity of a node increases, and the cache is used effectively. We also present an enhancement of ABC trees that can increase the capacity of an ABC tree with overflow nodes, and we describe how to decide, for performance, whether to create an overflow node when a node overflows. Experimental studies show that ABC trees give good performance under certain conditions.
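The pointer-reduction idea behind pre-allocating a complete tree can be sketched as follows. This is a binary-tree simplification for illustration only; the actual ABC tree is B+-tree-like with multi-key nodes, and all names here are hypothetical.

```python
# Sketch of an implicit complete-tree layout in one contiguous array.
# Because the array is pre-allocated for a complete tree, child
# positions are computed from indices and no child pointers are stored,
# which is what frees node space for more keys and improves cache use.

class ImplicitTree:
    def __init__(self, height):
        # Pre-allocate space for a complete binary tree of this height.
        self.slots = [None] * (2 ** (height + 1) - 1)

    @staticmethod
    def left(i):
        return 2 * i + 1

    @staticmethod
    def right(i):
        return 2 * i + 2

    def search(self, key):
        i = 0
        while i < len(self.slots) and self.slots[i] is not None:
            if key == self.slots[i]:
                return i
            i = self.left(i) if key < self.slots[i] else self.right(i)
        return -1

t = ImplicitTree(height=2)
t.slots[0], t.slots[1], t.slots[2] = 20, 10, 30
assert t.search(30) == 2   # found at computed index, no pointer chase
assert t.search(99) == -1  # falls off an empty slot
```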
A database service provider (DSP) offers an Internet service for maintaining data so that users can access their data any time and anywhere via the Internet. The DSP model involves several challenges, including the issue of data confidentiality. In this paper we propose a Usage Control (UCON) model and architecture that can be enforced to support data confidentiality in the DSP model. Usage Control (UCON) is a unified model of access control that has recently been introduced as a next-generation access control model. The basic idea of our UCON model for DSPs is the separation of the control domain in a DSP into two parts: a database provider domain and a database user domain. In the database provider domain, the access control system controls access by users to database services. In the database user domain, the access control system controls access by other users to a user's database. Through this separation, we can define an access control policy for each domain independently.
A scalable load distribution method for view divergence control of statistically defined data freshness in replicated database systems is proposed. This method enables a number of mobile and fixed client nodes to continue their processing even when they lose connectivity to the network or do not have sufficient bandwidth to meet application requirements, which is very likely to happen with mobile client nodes. This is achieved by renewing the copies of data in client nodes while they maintain network connectivity, so that those copies remain sufficiently fresh to meet application requirements while connectivity is lost. The load distribution for view divergence control is achieved by determining multiple sets of replicas from which client nodes retrieve the values of data through read transactions. Client nodes calculate the value of data reflecting updates that have already reached one or more elements of the determined set of replicas. We show that our method reduces the load of processing read transactions to less than about 1/40 of that of the original method while improving data freshness to about 2/5 of the maximum update delay in a large-scale network.
We propose a cellular automaton model of tumor tissue consisting of a square lattice, tumor cells, cytotoxic T lymphocytes (CTLs) and cytokine. Since CTLs circulate in vivo and migrate into tumor tissue, the square lattice is open and CTLs move through it. By running the cellular automaton model, we obtained the following results: the growth of a mass of tumor cells surrounded by CTLs; the rejection of tumor cells by CTLs; and an approximate equilibrium between tumor cells and CTLs. Analysis of the results indicates that the attachment between tumor cells and CTLs is important for the rejection of tumor cells by CTLs.
A new logic-based mobile agent framework named Maglog is proposed in this paper. In Maglog, a concept called “field” is introduced. By means of this concept, the following functions are realized: (1) agent migration, which is a function that enables agents to migrate between computers, (2) inter-agent communication, which is indirect communication with other agents through the field, (3) adaptation, which is a function that enables agents to execute programs stored in the field. We have implemented Maglog in a Java environment. The program of an agent, which is a set of Prolog clauses, is translated into Java source code by our Maglog translator, and is then compiled into Java classes by a Java compiler. The effectiveness of Maglog is confirmed through descriptions of two applications: a distributed e-learning system and a scheduling arrangement system.
The word and unification problems for term rewriting systems (TRSs) are among the most important problems, and their decision algorithms have various useful applications in computer science. Algorithms that decide joinability for TRSs are often used to obtain algorithms that decide these problems. In this paper, we first show that the joinability problem is undecidable for linear semi-constructor TRSs. Here, a semi-constructor TRS is a TRS in which all defined symbols appearing in the right-hand side of each rewrite rule occur only in its ground subterms. Next, we show that this problem is decidable both for confluent semi-constructor TRSs and for confluent semi-monadic TRSs. This result implies that the word problem is decidable for these classes, and it will be used to show that unification is decidable for confluent semi-constructor TRSs in our forthcoming paper.
An important issue in implementing high-level programming languages by translation into C is how to support high-level language features not available in C. Java's exception handling is one such feature, and translating it into portable C, which has only C-style control structures, involves some challenges. Previous studies have proposed ways of translating the Java-style try-catch construct into C. In this paper, we propose a new scheme for implementing it in an efficient and portable manner, using our parallel language OPA, an extended Java language, and its translator. In our scheme, we avoid the troublesome setjmp/longjmp routines for non-local jumps; instead, we check for occurrences of exceptions using functions' return values. Java also has the try-finally construct, mainly used for cleanup, which cannot be translated directly into C-style control structures. To implement it, we developed a new scheme with integer values corresponding to continuation targets. Compared with other techniques, ours has advantages in both runtime overhead and generated code size. For these two features, we measured the performance of our scheme on several benchmark programs and compared it with those of other schemes.
We offer a pattern-matching algorithm based on incomplete regular expression (IRE, for short) types. IRE types extend the regular expression types introduced by Hosoya and Pierce in the programming language XDuce. Pattern matching for IRE-typed expressions provides a capability to uniformly access “context” parts of XML document trees within the framework of pattern-matching theory; we do not rely on external facilities (or notations) such as XPath. To describe our pattern-matching algorithm, we adopt a rule-based approach; that is, we present the algorithm as a set of a few simple transformation rules. These rules simulate a transition in non-deterministic top-down tree automata (though we do not deal with tree automata explicitly), while also accumulating bindings for pattern variables. Our pattern-matching algorithm is sound and complete: it enumerates all correct solutions and no incorrect ones. We give rigorous proofs of these properties. A small but non-trivial example illustrates the expressiveness of our framework.
The SC language system was developed to provide a transformation-based language extension scheme for SC languages (extended/plain C languages with an S-expression-based syntax). Using this system, many flexible extensions to the C language can be implemented by means of transformation rules over S-expressions at low cost, mainly because of the preexisting Common Lisp capabilities for manipulating S-expressions. This paper presents the LW-SC (Lightweight-SC) language as an important application of this system, featuring nested functions (i.e., functions defined inside other functions). A function can manipulate its caller's local variables (or local variables of its indirect callers) by indirectly calling a nested function of its callers. Thus, many high-level services with “stack walk” can be easily and elegantly implemented by using LW-SC as an intermediate language. Moreover, such services can be implemented efficiently because we designed and implemented LW-SC to provide “lightweight” nested functions by aggressively reducing the costs of creating and maintaining nested functions. The GNU C compiler also provides nested functions as an extension to C, but our sophisticated translator to standard C is more portable and efficient for occasional “stack walk.”
Chip multiprocessors (CMPs), which recently became available with the advance of LSI technology, can outperform current superscalar processors by exploiting thread-level parallelism (TLP). However, the effectiveness of CMPs unfortunately depends greatly on their applications. In particular, they have so far not brought any significant benefit to non-numerical programs. This study explores what techniques are required to extract large amounts of TLP in non-numerical programs. We focus particularly on three techniques: thread partitioning with various control structure levels, speculative thread execution, and speculative register communication. We evaluate these techniques by examining the upper bound of the TLP, using trace-driven simulations. Our results are as follows. First, little TLP can be extracted without both of the speculations in any of the partitioning levels. Second, with the speculations, available TLP is still limited in conventional function-level and loop-level partitioning. However, it increases considerably with basic block-level partitioning. Finally, in basic block-level partitioning, focusing on control-equivalence instead of post-domination can significantly reduce the compile time, with a modest degradation of TLP.
The amount of task-level parallelism (TLP) in a runtime workload is useful information for determining the efficient usage of multiprocessors. This paper presents mechanisms for dynamically estimating the amount of TLP in runtime workloads. Modifications are made to the operating system (OS) to collect information about processor utilization and task activities, from which the TLP can be calculated. By effectively utilizing the time stamp counter (TSC) hardware, the task activities can be monitored with fine time resolution, which enables the TLP to be estimated with fine granularity. We implemented the mechanisms on a recent version of Linux. Evaluation results indicate that the mechanisms can estimate the TLP accurately for various workloads. The overheads imposed by the mechanisms are small.
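A common utilization-based estimator of TLP, the average number of concurrently runnable tasks over non-idle time, can be sketched as follows. The paper's exact estimator and its TSC-based fine-grained sampling are not reproduced here; this is only the underlying arithmetic.

```python
# Utilization-based TLP: the average number of concurrently runnable
# tasks, measured only over non-idle time.  Equivalently,
#     TLP = sum(i * c_i for i >= 1) / (1 - c_0),
# where c_i is the fraction of samples with exactly i runnable tasks.
# Averaging the non-zero samples computes the same quantity.

def estimate_tlp(concurrency_samples):
    """concurrency_samples: number of runnable tasks at each sample."""
    busy = [c for c in concurrency_samples if c > 0]
    if not busy:
        return 0.0  # workload was entirely idle
    return sum(busy) / len(busy)

# Half the samples see 2 runnable tasks, the rest are idle: TLP = 2.0
assert estimate_tlp([2, 0, 2, 0, 2, 0]) == 2.0
```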
When large matrix problems are treated, the locality of storage references is very important. Higher locality is usually attained by means of block algorithms. This paper introduces an implementation of the block Householder transformation based on the block reflector (Schreiber, 1988), or “GGT” representation, rather than on methods using the “WYT”, compact “WYT”, or “YTYT” representations (Bischof, 1993, etc.). This version of the block Householder transformation can be regarded as the most natural extension of the original non-blocked Householder transformation, with the matrix elements of the algorithm changed from numbers to small matrices. Thus, an algorithm that uses the non-blocked Householder transformation can be converted into the corresponding block algorithm in the most natural manner. To demonstrate the implementation of the Householder method based on the block reflector described in this paper, block tridiagonalization of a dense real symmetric matrix is carried out to calculate the required number of eigenpairs, following the idea of the two-step reduction method (Bischof, 1996, etc.).
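The correspondence the abstract draws, from a vector reflector to a block reflector, can be written compactly. These are the standard forms from the literature; the notation here is our own.

```latex
% Non-blocked Householder reflector for a vector v:
H \;=\; I - 2\,\frac{v v^{T}}{v^{T} v}

% Block reflector (Schreiber, 1988): v becomes a matrix U with
% orthonormal columns (U^{T} U = I_k), so the same formula applies
% with numbers replaced by small matrices:
G \;=\; I - 2\,U U^{T}

% By contrast, the WY-type representations aggregate k ordinary
% reflectors:  Q = I + W Y^{T} (WY),  Q = I - Y T Y^{T} (compact WY).
```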
We have been developing a volumetric computing graphics cluster system in the form of a PC cluster with its rendering and calculation performance enhanced by graphics boards and dedicated devices. The goal of the system is to perform real-time simulation for a practical surgical simulator. A space-partition scheme for parallel processing inevitably requires data communications for both simulation and visualization. We studied the effects of both types of communications through experiments. On the basis of the results, we discuss a performance model and propose a performance metric for time-restricted processing. To provide an example, we evaluated our VGCluster system by using the proposed metric. The metric shows the effect of sustaining scalability by using a dedicated image-composition device.
High-speed signed-digit (SD) architectures for weighted-to-residue (WTOR) and residue-to-weighted (RTOW) number conversions with the moduli set (2^n, 2^n-1, 2^n+1) are proposed. The complexity of the conversion is greatly reduced by using compact forms for the multiplicative inverses and the properties of modular arithmetic. The simple relationships between WTOR and RTOW number conversions result in simpler hardware requirements for the converters. The primary advantages of our method are that the conversions use only the modulo-m signed-digit adder (MSDA) and that the constructions are simple. We also investigate modular arithmetic between binary and SD number representations using circuit designs and simulation, and the results show the importance of SD architectures for WTOR and RTOW number conversions. Compared to other converters, our methods are fast, and their execution times are independent of the word length. We also propose a high-speed method for converting an SD number to a binary number representation.
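The number-theoretic core of WTOR/RTOW conversion for this moduli set can be checked with a reference implementation. The sketch below applies the Chinese Remainder Theorem directly and says nothing about the paper's signed-digit hardware realization; function names are our own.

```python
# Reference (software) model of conversion between weighted binary and
# residue representation for the moduli set (2^n, 2^n - 1, 2^n + 1).
# The moduli are pairwise coprime, so the CRT reconstructs x uniquely
# within the dynamic range M = 2^n * (2^{2n} - 1).

def to_residues(x, n):
    m = (2**n, 2**n - 1, 2**n + 1)
    return tuple(x % mi for mi in m)

def from_residues(r, n):
    m = (2**n, 2**n - 1, 2**n + 1)
    M = m[0] * m[1] * m[2]
    x = 0
    for ri, mi in zip(r, m):
        Mi = M // mi
        # pow(Mi, -1, mi) is the multiplicative inverse of Mi mod mi
        x += ri * Mi * pow(Mi, -1, mi)
    return x % M

n = 4
assert from_residues(to_residues(1000, n), n) == 1000  # round trip
```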
Test compression/decompression schemes using variable-length coding, e.g., Huffman coding, efficiently reduce the test application time and the size of the storage on an LSI tester. In this paper, we propose a model of an adaptive decompressor for variable-length coding and discuss its properties. By using a buffer, the decompressor can operate at any input and output speed without a synchronizing feedback mechanism between the ATE and the decompressor; i.e., the proposed decompressor model can adapt to any test environment. Moreover, we propose a method for reducing the size of the buffer embedded in the decompressor. Since the buffer size depends on the order in which test vectors are input, reordering the test vectors can reduce the buffer size. The proposed algorithm is based on the fluctuation of buffered data for each test vector. Experimental results show a case in which the ordering algorithm reduced the size of the buffer by 97%.
XML is widely used to represent and exchange data on the Internet. However, XML documents from different sources may convey nearly or exactly the same information yet differ in structure. In previous work, we proposed LAX (Leaf-clustering-based Approximate XML join algorithm), in which two XML document trees are divided into independent subtrees and their approximate similarity is determined by a tree similarity degree based on the mean of the similarity degrees of matched subtrees. Our previous experimental results showed that LAX, compared with the tree edit distance, is more efficient and more effective for measuring the approximate similarity between XML documents. However, because the tree edit distance is extremely time-consuming, those experiments compared LAX with the tree edit distance only on bibliography data of very small size. Moreover, in LAX, the output is oriented to pairs of documents whose tree similarity degree exceeds the threshold. Therefore, when LAX is applied to fragments divided from large XML documents, the hit subtree selected from the output pair of fragment documents with a large tree similarity degree might not be the proper one to integrate. In this paper, we propose SLAX (Subtree-class Leaf-clustering-based Approximate XML join algorithm) for integrating the fragments divided from large XML documents by using the maximum match value at subtree classes. We also conduct further experiments to evaluate SLAX against LAX using both real large bibliography data and bioinformatics data. The experimental results show that SLAX is more effective than LAX for integrating both large bibliography and bioinformatics data at subtree classes.
Identity-based encryption (IBE) schemes have been flourishing since the very beginning of this century. In IBE, proving the security of a scheme in the sense of IND-ID-CCA2 is widely believed to be sufficient to claim that the scheme is also secure in the senses of both SS-ID-CCA2 and NM-ID-CCA2. The justification for this belief is the relations among indistinguishability (IND), semantic security (SS) and non-malleability (NM). However, these relations have been proved only for conventional public key encryption (PKE) schemes in previous works. The fact is that IBE and PKE differ in one especially important respect: only in IBE can adversaries perform a particular attack, namely the chosen-identity attack. In this paper we show that security proved in the sense of IND-ID-CCA2 is validly sufficient for implying security in any other sense in IBE. That is, the security notion IND-ID-CCA2 captures the essence of security for all IBE schemes. To show this, we first formally define the notions of security for IBE, and then determine the relations among IND, SS and NM in IBE, along with rigorous proofs. All of these results take the chosen-identity attack into consideration.
Side channel attacks are a serious menace to embedded devices with cryptographic applications, which are utilized in sensor and ad hoc networks. In this paper, we discuss how side channel attacks can be applied against message authentication codes, even if the countermeasures are taken to protect the underlying block cipher. In particular, we show that EMAC, OMAC, and PMAC are vulnerable to our attacks. We also point out that our attacks can be applied against RMAC, TMAC, and XCBC. Based on simple power analysis, we show that several key bits can be extracted, and based on differential power analysis, we present a selective forgery against these MACs. Our results suggest that protecting block ciphers against side channel attacks is insufficient, and countermeasures are needed for MACs as well.
In the execution of a signature on a smart card, side-channel attacks such as simple power analysis (SPA) have become a serious threat 12). SPA-resistant approaches include the fixed-procedure method and the indistinguishable method. The indistinguishable method conceals all branch instructions by using indistinguishable addition formulae, but it may reveal the Hamming weight when an addition chain with unfixed Hamming weight is used. For hyper-elliptic curves, no indistinguishable method has been proposed yet. In this paper, we give indistinguishable addition formulae for hyper-elliptic curves. We also give algorithms that output a fixed-Hamming-weight representation for the indistinguishable addition formulae and that work with or without a computation table; these resolve the above-mentioned problem of the indistinguishable method and can also be applied to elliptic-curve scalar multiplication.
Semiotic analysis is often used to describe the inter-relationship of structure, function and behavior of artifacts as a means for designing various computerized tools for machine diagnosis and operating procedures. In this study, a graphical method called Multilevel Flow Models (MFM) is applied to support maintenance work on a commercially available Micro Gas Turbine System (MGTS), describing and handling the relationships between goals and functions that exist among the various parameters of the MGTS, including signals, alarms and faults. A new three-step method comprising alarm validation, fault-condition checkup and fault identification is proposed for fault diagnosis based on MFM. Trial software has been developed using Visual C++ and Excel for monitoring and diagnosing the MGTS based on the proposed fault diagnosis method. It was tested on several typical actual fault cases, showing that the proposed method is efficient for monitoring the running state of the MGTS and for diagnosing the real reason for fault messages from the operation software provided by the vendor of the Micro Gas Turbine.
Word Sense Disambiguation (WSD) is the task of choosing the right sense of a polysemous word in a given context. It is essential for many natural language processing applications such as human-computer communication, machine translation, and information retrieval. In recent years, much attention has been paid to improving the performance of WSD systems by combining classifiers. In (Kittler, Hatef, Duin, and Matas 1998), six combination rules, including product, sum, max, min, median, and majority voting, were derived under a number of strong assumptions that are unrealistic in many situations, especially in text-related applications. This paper considers a framework of combination strategies based on different representations of context in WSD that also yields these combination rules, but without the unrealistic assumptions mentioned above. Experiments on four words (interest, line, hard, serve) from the DSO dataset showed high accuracies with the median and min combination rules.
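The six Kittler-style combination rules can be sketched as follows, applied to per-classifier posterior estimates P(sense | context). The sense labels and probabilities below are illustrative only; this says nothing about the paper's context representations.

```python
# Classifier combination rules (product, sum, max, min, median, and
# majority voting) over per-classifier posterior estimates, as in
# Kittler et al. (1998).  Each classifier contributes one dict mapping
# sense -> estimated posterior probability.

import math
from statistics import median

def combine(posteriors, rule):
    """posteriors: list of dicts, one per classifier, sense -> P."""
    senses = posteriors[0].keys()
    ops = {
        "product": math.prod,
        "sum":     sum,
        "max":     max,
        "min":     min,
        "median":  median,
    }
    if rule == "vote":
        # majority voting over each classifier's top-ranked sense
        votes = [max(p, key=p.get) for p in posteriors]
        return max(set(votes), key=votes.count)
    scores = {s: ops[rule]([p[s] for p in posteriors]) for s in senses}
    return max(scores, key=scores.get)

p = [{"finance": 0.6, "attention": 0.4},
     {"finance": 0.3, "attention": 0.7},
     {"finance": 0.7, "attention": 0.3}]
assert combine(p, "vote") == "finance"     # 2 of 3 classifiers agree
assert combine(p, "product") == "finance"  # 0.126 vs 0.084
```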
Going one step beyond feedback querying in integrating users into the search process, navigation is a more recent approach to finding images in a large image collection using content-based information. Rather than using queries or entering a feedback-querying process, which would be heavy in terms of both human-computer interaction and computer processing time, navigation on a pre-computed data structure is easier and smoother for the user. In particular, we found Galois lattices to be convenient structures for this purpose. However, while properties extracted from images are usually real-valued data, a navigation structure most of the time has to deal with binary links from an image (or a group of images) to another. A trivial way to obtain a binary relationship from real-valued data is to apply a threshold, but this not only loses information but also tends to create sparse areas in the lattice. In this paper, we propose a technique to incrementally build a Galois lattice from real-valued properties by taking the existing structure into account, thus limiting the size of the lattice by avoiding the creation of sparse nodes. Experiments showed that this technique produces a navigation structure of better quality, making the search process faster and more efficient and thus improving the user's experience.
Detecting skin regions in an image is one of the areas of interest in computer vision and graphics. It can be a primary step in several applications such as advanced human-computer interaction, biometric authentication, or contextual image retrieval. Various studies have attempted to classify image regions into the two groups of skin and non-skin areas. Most of them focus on theoretical aspects, such as the effect of different color spaces or classifiers on large datasets. Here we take a practical look at the problem: a real, final system faces limitations on speed, memory, and even the training dataset under different conditions. After a short overview of some common previous methods, a novel skin model based on LVQ neural networks is proposed. A study of homogenization methods then yields a more accurate choice for this step. Experiments show that this scheme can preserve the advantages of several different methods at once: it achieves a comparable accuracy of 87.92% at acceptable speed while covering a wide variety of skin colors with a limited amount of training data.
Real-time streaming services are attracting attention. However, an adversary can compromise the safety of these services through data tampering, spoofing, and repudiation. In this paper we propose a real-time Stream Authentication scheme for Video streams called SAVe. Each packet in the stream is authenticated in a way that copes with the packet loss seen in UDP-based streaming. The amount of redundancy allocated to each frame is also adjusted according to the frame's importance, to account for the special characteristics of video such as differences in importance of, and dependencies between, frames. Since temporal and spatial compression techniques are adopted for video stream encoding, SAVe is efficient at making important frames robust to packet loss. The simulation results show that the authentication rate is on average approximately equivalent to that of previously proposed schemes, while an improvement of 50% in the playing rate over those schemes can be seen when the packet loss rate is 20%.
This paper examines the problem of evaluating systems that aim at finding one highly relevant document with high precision. Such a task is important for modern search environments such as the Web, where recall is unimportant and/or unmeasurable. Reciprocal Rank is a metric designed for finding one relevant document with high precision, but it can only handle binary relevance. We therefore introduce a new metric called O-measure for high-precision, high-relevance search, and show (a) how the task of finding one relevant document differs from that of finding as many relevant documents as possible, in both binary and graded relevance settings; and (b) how the task of finding one highly relevant document differs from that of finding any one relevant document. We use four test collections and the corresponding sets of formal runs from the NTCIR-3 Crosslingual Information Retrieval track to compare the tasks and metrics in terms of resemblance, stability and discrimination power in system ranking.
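For binary relevance, Reciprocal Rank is simply the inverse rank of the first relevant document retrieved; a minimal sketch follows. O-measure's graded-relevance formula is the paper's contribution and is not reproduced here.

```python
# Reciprocal Rank for binary relevance: 1 / (rank of the first relevant
# document), or 0 if no relevant document is retrieved.  Note that all
# relevant documents after the first are ignored, which is why the
# metric suits the "find one relevant document" task.

def reciprocal_rank(ranked_relevance):
    """ranked_relevance: list of 0/1 relevance flags in ranked order."""
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

assert reciprocal_rank([0, 0, 1, 1]) == 1.0 / 3  # first hit at rank 3
assert reciprocal_rank([0, 0, 0]) == 0.0         # no relevant document
```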
Light reflected from an object to a camera is a mixture of specular and diffuse reflection. This has important implications for many computer vision tasks, such as image matching and understanding. Many applications, for example digital content production, photorealistic image synthesis, and motion analysis, may require the diffuse and specular reflections to be separated. We present an approach for separating the diffuse and specular components of object surface reflection. The approach is based on the well-known dichromatic reflection model; however, it separates reflections from reflectance fields constructed for every point on a 3-D object surface. Our method avoids having to segment the image into several uniformly colored areas, and can thus separate reflection from an object surface with a complicated texture. We analyzed the properties of the reflectance field constructed from the original frames and showed how to separate the reflection components for each 3-D point. Experiments on real scenes showed that our method was successful.
Rendering high-quality animations from 3D models is a computationally expensive and challenging task. Recently, distributed computing technologies have been deployed for real-time and effective animation rendering. However, specialized programming skills and specific hardware or software are required to perform animation rendering with distributed computing technology. A distributed computing technology that enables efficient and easy execution of animation rendering is therefore required. In this paper, we propose a method for distributed animation rendering based on Grid computing. The proposed method does not impose any special distributed computing technology on users, and it facilitates effective implementation of distributed animation rendering. In this research, we also developed a scheduler that automatically distributes the animation rendering task to the resources in a Grid computing environment. We constructed a prototype based on the proposed method and confirmed its effectiveness through several animation rendering experiments.
Effective target width (We) in Fitts' law is widely used for evaluating one-directional pointing tasks. However, concrete methods of calculating We have not been standardized. This paper concentrates on resolving this problem. A specially designed and controlled experiment is described. The results reveal that mapping all the abscissa data into one unified relative coordinate system for the calculation models human-computer interfaces better than dividing the data into two groups according to the corresponding target sides and mapping them into two separate coordinate systems.
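One widely used way of computing We is MacKenzie's 4.133 × SD rule; the sketch below assumes, as the paper's preferred method does, that endpoints from all trials are first mapped into one relative coordinate system. The function names are our own.

```python
# Effective target width from selection endpoints: We = 4.133 * SD of
# the endpoint coordinates along the task axis (4.133 = 2 * 2.066, the
# z-range covering ~96% of hits for a normal distribution).  The
# effective index of difficulty then uses We in place of the nominal
# target width W.

import math
from statistics import pstdev

def effective_width(endpoints):
    """endpoints: selection coordinates relative to the target center,
    all mapped into one unified relative coordinate system."""
    return 4.133 * pstdev(endpoints)

def effective_id(amplitude, endpoints):
    """Shannon formulation of the effective index of difficulty."""
    return math.log2(amplitude / effective_width(endpoints) + 1)

pts = [-3.0, -1.0, 0.0, 1.0, 3.0]   # SD = 2.0
assert abs(effective_width(pts) - 4.133 * 2.0) < 1e-9
```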
This paper proposes a method for gathering researchers' homepages (or entry pages) by applying new, simple, and effective page group models that exploit the mutual relations between the structure and content of a page group, aiming at narrowing down the candidates with a very high recall. First, 12 property-based keyword lists that correspond to researchers' common properties are created, each classified as either organization-related or other. Next, several page group models (PGMs) are introduced that take into consideration the link structure and URL hierarchy. Although the application of PGMs generally introduces considerable noise, modified PGMs with two original techniques are introduced to reduce this noise. Then, based on the PGMs, the keywords are propagated to a potential entry page from its surrounding pages, composing a virtual entry page. Finally, the virtual entry pages whose scores reach a threshold are selected. The effectiveness of the method is shown by comparing it to a single-page-based method through experiments using a 100GB web data set and a manually created sample data set.
Recently, techniques developed in the fields of computer graphics and virtual reality have been applied to many environments, and as a result, measuring the 3D shapes of real objects has become increasingly important. However, few methods have been proposed to measure the 3D shape of transparent objects such as glass and acrylic. In this paper, we introduce three methods that estimate the surface shape of transparent objects by using polarization analysis. The first method determines the surface shape of a transparent object by using knowledge established in the research field of thermodynamics. The second method determines the surface shape of a transparent object by using knowledge established in the research field of differential geometry. The third method gives an initial value of the surface shape and then determines the true surface shape of a transparent object by iterative computation. At the end of the paper, we discuss the advantages and disadvantages of these three methods.
Program analysis techniques have improved steadily over the past several decades, and these techniques have made algorithms and secret data contained in programs susceptible to discovery. Obfuscation is a technique to protect against this threat. Obfuscation schemes that encode variables have the potential to hide both algorithms and secret data. We define five types of attack that can be launched against existing obfuscation schemes: data-dependency attacks, dynamic attacks, instruction-guessing attacks, numerical attacks, and brute-force attacks. We then propose an obfuscation scheme that encodes the variables in a code using an affine transformation. Our scheme is more secure than that of Sato et al. because it can protect against data-dependency attacks, instruction-guessing attacks, and numerical attacks. We describe the implementation of our scheme as an obfuscation tool for C/C++ code.
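To illustrate the general idea of affine variable encoding (this is our own minimal sketch; the paper's actual transformation, coefficient choices, and C/C++ rewriting rules are not given in the abstract), a secret variable x can be stored only as enc(x) = A·x + B, with program operations rewritten to act on the encoded value so the plain value never appears in memory:

```python
# Minimal sketch of affine variable encoding (illustrative, not the
# paper's exact scheme): x is stored only as A*x + B.
A, B = 7, 13          # secret coefficients fixed at obfuscation time (A != 0)

def enc(x):
    return A * x + B

def dec(e):
    return (e - B) // A

# Original code:  x = 10; x = x + 5; x = x * 3
e = enc(10)           # x = 10, stored as 7*10 + 13 = 83
e = e + A * 5         # x += 5  -> add A*5 to the encoding
e = 3 * e - 2 * B     # x *= 3  -> scale the encoding and re-center B
print(dec(e))         # recovers x = 45 without ever storing it in the clear
```

The multiplication rule follows from enc(3x) = 21x + 13 = 3·(7x + 13) − 2·13; each source-level operation is replaced by an equivalent operation on encodings, which is what defeats simple numerical inspection of the running program.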
Redundant coding is a basic method of improving the reliability of detection and survivability after image processing. It embeds watermarks repeatedly in every frame or region, so that errors can be reduced by accumulating frames or regions during watermark detection. Redundant coding, however, is not always effective after image processing, because when image processing removes watermarks from specific frames or regions, the watermark signal may be attenuated by the accumulation procedure. We therefore propose a detection method that prevents attenuation of the watermark signal by accumulating a subset of the regions so that the accumulated region has a minimal bit-error rate, which is estimated from the region. Experimental evaluations using actual motion pictures have revealed that the new method can improve watermark survivability after MPEG encoding by an average of 15.7% and can be widely applied in correlation-based watermarking.
This paper proposes a framework based on a new architecture that allows distributed smartcards to interact with one another as well as with application programs on their hosts. Since these interactions are handled in a distribution-transparent manner through message-dispatching agents deployed on each host, the smartcards can autonomously conduct distributed protocols without turning to off-card application programs. The proposed framework thus reduces the complexity of application programs and makes it easier to develop smartcard-based services that offer a high level of functionality. The feasibility of the framework is evaluated and confirmed by implementing a smartcard-based optimistic fair trading protocol for electronic vouchers on this framework.
The scale-free (SF) structures that commonly appear in many complex networks are a hot topic in the social, biological, and information sciences. The self-organized mechanisms that generate these structures are expected to be useful for efficient communication and robust connectivity in socio-technological infrastructures. This paper is the first review of geographical SF network models. We discuss the essential generation mechanisms for inducing structures with power-law behavior, and consider the properties of planarity and link length. The distributed design of geographical SF networks without crossing and long-range links, which cause interference and dissipation problems, is very important for many applications such as communications, power grids, and sensor systems.
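One family of generation mechanisms such a review covers is geographically constrained preferential attachment. The sketch below is our own illustration under assumed parameters (the grid size, seed, and degree/distance weighting are not from the paper): new nodes attach preferentially to high-degree nodes, discounted by Euclidean distance, which favors nearby hubs and shortens link lengths.

```python
# Illustrative geographical preferential-attachment sketch (assumed
# parameters): attachment weight ~ degree / distance.
import random

random.seed(1)

def geo_pa_network(n, m=2):
    pos = [(random.random(), random.random()) for _ in range(n)]
    degree = [0] * n
    edges = []
    for new in range(m, n):                  # nodes 0..m-1 act as the seed
        weights = []
        for old in range(new):
            d = ((pos[new][0] - pos[old][0]) ** 2 +
                 (pos[new][1] - pos[old][1]) ** 2) ** 0.5
            weights.append((degree[old] + 1) / (d + 1e-6))  # degree/distance bias
        targets = set()
        while len(targets) < min(m, new):    # m distinct attachment targets
            targets.add(random.choices(range(new), weights)[0])
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
    return edges, degree

edges, degree = geo_pa_network(200)
print(len(edges), max(degree))
```

Purely degree-driven attachment yields power-law degree distributions but arbitrarily long links; the distance discount is the geographical ingredient that trades some hub growth for shorter, less interference-prone links.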
Routing changes of the interior gateway protocol (IGP), especially unexpected ones, can significantly affect the connectivity of a network. Although such changes can occur quite frequently in a network, most operators hardly notice them because of a lack of effective tools. In this paper, we introduce Rtanaly, a system to (i) detect IGP routing changes in real time and instantly alert operators to the detected changes, (ii) quantify routing changes over the long term to provide operators with a general view of the routing stability of a network, (iii) estimate the impact of routing changes, and (iv) help operators troubleshoot in response to unexpected changes. Rtanaly has the following features: (i) it supports all three widely deployed IGPs (OSPFv2, OSPFv3, and IS-IS), (ii) it uses a completely passive approach, (iii) it visually displays the measurement results, and (iv) it is accessible through the web. We present the results of measurements that we have performed with Rtanaly as well as some observed pathological behavior to show its effectiveness. We have released the first version of Rtanaly as free software and its distribution is based on a BSD-style license.
The specification of access control policies for large, multi-organization applications is difficult and error-prone. Sophisticated policies are needed for fine-grained control of access to large numbers of entities, resulting in many policies specified by different security administrators. Techniques such as role-based access control (RBAC) have been proposed to group policies and provide a framework for inheriting policies based on role hierarchies. RBAC does not, however, prevent inconsistencies and conflicts from arising in the policy specifications, which can lead to information leaks or prevent required access. This paper proposes an approach using free variable tableaux to detect conflicts and redundant policies resulting from the combination of various types of authorization and constraint policies. This approach uses static analysis to enable complete detection of modality and static constraint policy conflicts.
A variety of access technologies have been deployed over the last few years, and mobile nodes now have multiple network interfaces. In addition, mobile communications have been an active research area for several years. We have developed a link aggregation system with Mobile IP. This system, called SHAKE, is intended to improve the throughput and reliability of wireless communication by making it possible to use multiple Internet-linked mobile nodes simultaneously and to disperse the traffic between the mobile nodes and a correspondent node. This paper describes a collaboration mechanism that provides packet-based and flow-based traffic distribution control based on user preferences and the cooperating nodes' states. Evaluations of a prototype implementation show that the mechanism is advantageous for a cooperative communication system like SHAKE. The results of experiments reveal that flow-based traffic distribution is effective when the delay jitters of the nodes differ significantly.
Unlinkability, the property that prevents an adversary from recognizing whether outputs are from the same user, is an important concept in RFID. Although hash-based schemes can provide unlinkability by using a low-cost hash function, existing schemes are not scalable, since the server needs O(N) hash calculations for every ID matching, where N is the number of RFID devices. Our solution is the K-steps ID matching scheme, which reduces the number of hash calculations on the server to O(log N). In this paper, we explain the protocol, describe a test implementation, and discuss the application of this scheme to practical RFID systems. We also compare the scheme with other hash-based schemes from various viewpoints.
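The abstract does not spell out the K-steps protocol, so the following is our own simplified sketch of the same general idea with a binary key tree: the tag sends one hash per tree level, and the server descends the tree by testing only the two children at each level, giving O(log N) hash computations instead of a linear scan over all N tags.

```python
# Simplified sketch of tree-based ID matching (our illustration, not
# the exact K-steps protocol): O(log N) server-side hash calculations.
import hashlib

def h(secret, nonce):
    return hashlib.sha256(f"{secret}:{nonce}".encode()).hexdigest()

DEPTH = 3                                   # supports N = 2**DEPTH = 8 tags
# server-side secret tree: (level, index) -> per-node secret
secrets = {(d, i): f"k-{d}-{i}"
           for d in range(DEPTH) for i in range(2 ** (d + 1))}

def tag_respond(tag_id, nonce):
    """The tag emits one keyed hash per level along its path in the tree."""
    out, idx = [], 0
    for d in range(DEPTH):
        idx = idx * 2 + ((tag_id >> (DEPTH - 1 - d)) & 1)
        out.append(h(secrets[(d, idx)], nonce))
    return out

def server_identify(responses, nonce):
    """Descend the tree: at most 2 hash trials per level, DEPTH levels."""
    idx = 0
    for d, resp in enumerate(responses):
        for bit in (0, 1):                  # test left and right child only
            child = idx * 2 + bit
            if h(secrets[(d, child)], nonce) == resp:
                idx = child
                break
        else:
            return None                     # no match at this level
    return idx                              # recovered tag ID
```

With a fresh nonce per query the responses look unlinkable to an outsider, while the server does at most 2·log2(N) hash calculations per identification rather than N.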
Due to the infrastructure-less, dynamic, and broadcast nature of radio transmissions, communications in mobile ad-hoc networks (MANETs) are susceptible to malicious traffic analysis. After performing traffic analysis, an attacker conducts an intensive attack (i.e., a target-oriented attack) against a target node identified by the traffic analysis. Because they degrade both throughput and routing security, traffic analysis and the subsequent target-oriented attack are serious problems in MANETs. Position information of routing nodes is very sensitive data in MANETs, where even nodes that do not know each other establish a network temporarily, so it is desirable that position information be kept secret. These problems are especially prominent in position-based routing protocols for MANETs. We therefore propose a new position-based routing protocol that keeps routing nodes anonymous and thereby prevents traffic analysis. The proposed scheme uses a time-variant temporary identifier, Temp ID, which is computed from the time and position of a node and is used to keep the node anonymous. Only the position of the destination node is required for route discovery, and the Temp ID is used for establishing a route for sending data. A receiver dynamic-handshake scheme is designed to determine the next hop on demand by using the Temp ID. The level of anonymity and the performance of this scheme were evaluated. The evaluation results show that the proposed scheme ensures the anonymity of both routes and nodes and is robust against target-oriented and other attacks. Moreover, this scheme does not depend on node density as long as the nodes are connected in the network.
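A time-variant identifier derived from time and position could be sketched as follows. This is a hedged illustration only: the hash function, the node secret, and the time-slot and grid-cell quantization are our assumptions, as the abstract states only that the Temp ID is computed from a node's time and position.

```python
# Hedged sketch of a Temp-ID-style pseudonym (inputs and quantization
# are assumptions): the ID changes each time slot, hiding stable identity.
import hashlib

SLOT = 60  # seconds per time slot (assumed granularity)

def temp_id(node_secret, x, y, t):
    slot = int(t // SLOT)                        # quantize time into slots
    gx, gy = int(x // 100), int(y // 100)        # quantize position to a grid cell
    msg = f"{node_secret}:{slot}:{gx}:{gy}".encode()
    return hashlib.sha256(msg).hexdigest()[:16]  # short, unlinkable pseudonym

# Same slot and cell -> same Temp ID; a later slot -> a fresh Temp ID.
a = temp_id("s3cret", 250.0, 910.0, 1000.0)
b = temp_id("s3cret", 260.0, 905.0, 1010.0)      # same slot, same grid cell
c = temp_id("s3cret", 250.0, 910.0, 1090.0)      # next slot, same position
print(a == b, a == c)
```

Because the identifier is recomputable by the node itself in each slot but carries no stable bits, an eavesdropper cannot link packets across slots to build the long-term traffic profile that a target-oriented attack requires.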