IPSJ Online Transactions
Online ISSN : 1882-6660
ISSN-L : 1882-6660
Volume 4
Displaying 1-20 of 20 articles from this issue
  • Momchil Marinov, Koichi Miyazaki, Junji Mawaribuchi
    Article type: Regular Papers
    Subject area: Regular Paper
    2011 Volume 4 Pages 1-8
    Published: 2011
    Released on J-STAGE: January 29, 2011
    JOURNAL FREE ACCESS
    In this research, we investigate both the limitations of fitting the model to the market price and the accuracy of the lattice approximation in a Lattice Construction method for Deterministic Volatility Models (DVMs), using simple statistical tests. Li (2000/2001) proposed the Lattice Construction method, which can fit the market price flexibly using appropriate DVMs. However, this method carries an implicit approximation error caused by recombining, which affects DVM lattices but not the Black-Scholes lattice model. As a novel approach, we propose a verification methodology that distinguishes model limitations from inaccuracies of the lattice approximation for DVMs. The approximation issue is difficult to analyze for DVMs because, unlike the Black-Scholes model, they have no closed-form solutions. Our statistical tests use the distribution of option model prices generated by Monte Carlo simulation, which involves no recombining approximation. This property of the Monte Carlo method is used to capture the influence of the recombining approximation in DVMs. In this way, we can verify whether an estimated model is limited in fitting the market price, whether the lattice approximation is insufficiently accurate to reproduce it, or both.
    Download PDF (411K)
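    The verification idea, comparing a recombining-lattice price against a Monte Carlo price distribution that involves no recombining approximation, can be illustrated with a minimal sketch. The CEV-style local volatility, the parameter values, and the hypothetical lattice price below are illustrative assumptions, not the paper's calibrated DVMs or statistical tests.

```python
import numpy as np

# Toy deterministic (local) volatility: a CEV-style function of the spot.
# This is a stand-in, not one of the DVMs calibrated in the paper.
def local_vol(s, sigma0=0.3, gamma=0.7):
    return sigma0 * s ** (gamma - 1.0)

def mc_call_price(s0, k, r, t, n_steps=200, n_paths=100_000, seed=0):
    """Euler-scheme Monte Carlo price of a European call under a DVM.
    No recombining approximation is involved, so the resulting price
    distribution can serve as a reference for a lattice price."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s = np.full(n_paths, s0, dtype=float)
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        vol = local_vol(s)
        s *= np.exp((r - 0.5 * vol ** 2) * dt + vol * np.sqrt(dt) * z)
    payoff = np.exp(-r * t) * np.maximum(s - k, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)

# Simple z-test: is a lattice price consistent with the MC distribution?
price, se = mc_call_price(s0=100, k=100, r=0.01, t=1.0)
lattice_price = 12.10  # hypothetical output of a recombining DVM lattice
z = (lattice_price - price) / se
print(f"MC price {price:.4f} +/- {se:.4f}, z-score of lattice price: {z:.2f}")
```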
  • Joseph M. Pasia, Hernán Aguirre, Kiyoshi Tanaka
    Article type: Regular Papers
    Subject area: Regular Paper
    2011 Volume 4 Pages 9-22
    Published: 2011
    Released on J-STAGE: January 29, 2011
    JOURNAL FREE ACCESS
    Path relinking is a population-based heuristic that explores trajectories in decision space between two elite solutions. It has been successfully used as a key component of several multi-objective optimizers, especially for solving bi-objective problems. Its unique characteristic of searching in both the objective and decision spaces makes it interesting to study its behavior in many-objective optimization. In this paper, we focus on the behavior of pure path relinking, propose several variants that differ in their strategies for selecting solutions, and analyze their performance on several many-objective NK-landscape instances. In general, the results show that path relinking becomes more effective at improving the convergence of the algorithm as the number of objectives increases. We also show that the selection strategy associated with path relinking plays an important role in emphasizing either convergence or spread. This study provides useful insights for practitioners on how to exploit path relinking to enhance multi-objective evolutionary algorithms for complex combinatorial optimization problems.
    Download PDF (1352K)
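    A minimal sketch of a single path-relinking walk between two bit-string solutions, assuming a toy two-objective evaluation function as a stand-in for the NK-landscapes used in the paper; the greedy, scalarized selection step is just one of many strategies such a study could consider.

```python
import random

def path_relink(start, guide, evaluate):
    """Walk from `start` toward `guide` in decision space, flipping one
    differing bit per step and keeping the best intermediate solution.
    `evaluate` maps a bit list to a tuple of objective values (assumed
    maximization); here it is a placeholder, not an actual NK-landscape."""
    current = list(start)
    best, best_val = list(current), evaluate(current)
    diff = [i for i, (a, b) in enumerate(zip(current, guide)) if a != b]
    while diff:
        # Greedy step: flip the differing bit with the best objective sum.
        i = max(diff, key=lambda j: sum(
            evaluate(current[:j] + [guide[j]] + current[j + 1:])))
        current[i] = guide[i]
        diff.remove(i)
        val = evaluate(current)
        if sum(val) > sum(best_val):   # scalarized comparison for simplicity
            best, best_val = list(current), val
    return best, best_val

# Toy 2-objective evaluation on 8-bit strings (purely illustrative).
evaluate = lambda x: (sum(x), sum(1 - b for b in x[:4]) + sum(x[4:]))
a = [random.randint(0, 1) for _ in range(8)]
b = [random.randint(0, 1) for _ in range(8)]
print(path_relink(a, b, evaluate))
```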
  • Kazuyuki Hara, Kentaro Katahira, Kazuo Okanoya, Masato Okada
    Article type: Regular Papers
    Subject area: Regular Paper
    2011 Volume 4 Pages 23-32
    Published: 2011
    Released on J-STAGE: January 29, 2011
    JOURNAL FREE ACCESS
    Node-perturbation learning (NP-learning) is a statistical gradient descent algorithm that estimates the gradient of an objective function by applying small perturbations to the outputs of the network. It can be applied to problems where the objective function is not explicitly formulated, including reinforcement learning. In this paper, we show that node-perturbation learning can be formulated as on-line learning in a linear perceptron with noise, which lets us derive differential equations for the order parameters and the generalization error in the same way as in the statistical-mechanical analysis of learning in a linear perceptron. From the analytical results, we show that cross-talk noise, which originates in the errors of the other outputs, increases the generalization error as the number of outputs increases.
    Download PDF (455K)
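    The estimator behind node-perturbation learning (correlate a random output perturbation with the resulting change in squared error to approximate the gradient) can be sketched for a multi-output linear perceptron as follows. The teacher/student setup, dimensions, and learning rate are illustrative assumptions, not the paper's analytical setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 50, 3
w_teacher = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)
w = np.zeros((n_out, n_in))   # student weights

eta, sigma = 0.05, 1e-3       # learning rate and perturbation amplitude
for step in range(20_000):
    x = rng.standard_normal(n_in) / np.sqrt(n_in)
    target = w_teacher @ x
    y = w @ x
    e0 = np.sum((y - target) ** 2)
    # Perturb all outputs with small noise and observe the error change;
    # correlating the noise with that change estimates the gradient, and
    # each output's estimate picks up cross-talk from the other outputs.
    xi = sigma * rng.standard_normal(n_out)
    e1 = np.sum((y + xi - target) ** 2)
    grad_est = (e1 - e0) / sigma ** 2 * xi   # scalar feedback times noise
    w -= eta * np.outer(grad_est, x)

print("mean squared weight error:", np.mean((w - w_teacher) ** 2))
```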
  • Son Truong Nguyen, Shigeru Oyanagi
    Article type: Interconnection Network
    Subject area: Regular Paper
    2011 Volume 4 Pages 33-42
    Published: 2011
    Released on J-STAGE: March 04, 2011
    JOURNAL FREE ACCESS
    Network-on-Chip (NoC) is becoming a popular solution for communication on Systems-on-Chip. A router, the major NoC component responsible for handling communication, has an architecture that significantly impacts NoC performance. In this paper, we propose a low-latency router architecture based on virtual output queuing (VOQ). By using VOQ buffers and speculatively performing switch allocation and switch traversal in parallel, the pipeline of a packet transfer can be reduced to a single stage. This paper also proposes a multiple-VOQ architecture, in which each input port maintains multiple queues per output channel to improve router throughput. We have implemented the proposed router on an FPGA and evaluated it in terms of communication latency, throughput, and hardware cost. The experimental results show that in a 4 × 4 two-dimensional mesh network, the proposed multiple-VOQ router reduces communication latency by 25% and area cost by 15.6% compared to the look-ahead speculative virtual channel router.
    Download PDF (679K)
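    Virtual output queuing itself is easy to convey in software: each input port keeps one queue per output, so a packet blocked toward a busy output never stalls packets bound for other outputs. The toy allocator below is a deliberately simplified stand-in for the speculative hardware allocation pipeline evaluated in the paper.

```python
from collections import deque

class VOQInputPort:
    """One input port holding a dedicated queue per output channel
    (a software model only; the paper's design is an FPGA pipeline)."""
    def __init__(self, n_outputs):
        self.voq = [deque() for _ in range(n_outputs)]

    def enqueue(self, packet, out_port):
        self.voq[out_port].append(packet)

    def requests(self):
        # Output channels this port bids for in the current cycle.
        return [o for o, q in enumerate(self.voq) if q]

def allocate(ports):
    """Trivial separable switch allocation: grant each input at most one
    output and each output at most one input. Real allocators arbitrate
    more fairly, and the paper's router performs allocation speculatively
    in parallel with switch traversal."""
    taken, grants = set(), {}
    for i, port in enumerate(ports):
        for o in port.requests():
            if o not in taken:
                grants[i] = o
                taken.add(o)
                break
    return grants  # input index -> granted output

ports = [VOQInputPort(4) for _ in range(4)]
ports[0].enqueue("pkt-A", 2)
ports[0].enqueue("pkt-B", 3)   # queued separately: no head-of-line blocking
ports[1].enqueue("pkt-C", 2)   # loses output 2 this cycle, waits in its VOQ
print(allocate(ports))         # {0: 2}
```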
  • Yuji Kosuga, Miyuki Hanaoka, Kenji Kono
    Article type: Security
    Subject area: Regular Paper
    2011 Volume 4 Pages 43-56
    Published: 2011
    Released on J-STAGE: March 04, 2011
    JOURNAL FREE ACCESS
    An SQL injection attack is one of the most serious security threats to web applications. It allows an attacker to access the underlying database and execute arbitrary commands, which may lead to the disclosure of sensitive information. The primary way to prevent SQL injection attacks is to sanitize user-supplied input; however, this is usually performed manually by developers and is therefore a laborious and error-prone task. Although security tools assist developers in verifying the security of their web applications, they often generate a number of false positives/negatives. In this paper, we present a technique called Sania, which performs efficient and precise penetration testing by dynamically generating effective attacks through the investigation of SQL queries. Since Sania is designed to be used in the development phase of web applications, it can intercept SQL queries. By analyzing them, Sania automatically generates precise attacks and assesses security according to the context of the potentially vulnerable slots in the SQL queries. We evaluated our technique using real-world web applications and found it to be efficient: Sania generated more accurate attacks and fewer false positives than popular web application vulnerability scanners. We also found previously unknown vulnerabilities in a commercial product that was about to be released and in open-source web applications.
    Download PDF (563K)
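    One intuition behind precise injection testing is that an attack succeeds when user input changes the structure of the SQL query rather than staying inside a literal. The crude token-skeleton comparison below illustrates that idea only; it is not Sania's actual query analysis.

```python
import re

# String literals, words, and common punctuation of a simplified SQL dialect.
TOKEN = re.compile(r"'(?:[^'\\]|\\.)*'|\w+|[(),=;*]|--")

def shape(query: str):
    """Token-type skeleton of a query: string literals collapse to STR."""
    return ["STR" if t.startswith("'") else t.upper()
            for t in TOKEN.findall(query)]

def looks_injectable(template, benign_value, attack_value):
    """Flag a slot as (crudely) vulnerable when the attack payload changes
    the token skeleton relative to a benign value."""
    return shape(template % benign_value) != shape(template % attack_value)

tmpl = "SELECT * FROM users WHERE name = '%s'"
print(looks_injectable(tmpl, "alice", "x' OR '1'='1"))  # True: structure changed
print(looks_injectable(tmpl, "alice", "bob"))           # False: still one literal
```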
  • Toshinori Kojima, Masato Asahara, Kenji Kono, Ai Hayakawa
    Article type: Distributed Computing
    Subject area: Regular Paper
    2011 Volume 4 Pages 57-72
    Published: 2011
    Released on J-STAGE: March 04, 2011
    JOURNAL FREE ACCESS
    Network coordinates (NCs) enable efficient and accurate estimation of network latency by mapping the geographical relationships among all nodes to a Euclidean space. Many researchers have proposed NC-based strategies to reduce the lookup latency of distributed hash tables (DHTs). However, these strategies are limited in how much they can improve lookup latency: the nearest node to which a query should be forwarded is not always within a node's consideration scope. This is because conventional latency-improvement strategies assign node IDs independently of the underlying physical network and therefore still allow detour routing. In this paper, we propose an NC-based method for constructing a topology-aware DHT by Proximity Identifier Selection (PIS/NC). PIS/NC constructs the logical ID space of a DHT from the Euclidean space built by the NCs; a node ID corresponds to the node's network coordinate. As a result, a node's consideration scope always contains the nearest node, so we can expect a large reduction in lookup latency. Unlike the conventional PIS strategy, which suffers unavoidable issues due to uneven ID distribution, PIS/NC moderates these issues with a simple optimization provided by a PIS/NC stabilizer. The stabilizer locally detects an uneven distribution of node IDs and then recalculates some IDs so that the unevenness is moderated. As case studies, this paper presents Canary and Harpsichord, which are PIS/NC-based versions of CAN and Chord, respectively. Simulation results show that PIS/NC-based DHTs improve lookup latency: in an environment using the Transit-Stub model, where SAT-Match and DHash++ reduce the median lookup latency of CAN and Chord by only 19% and 9%, respectively, Canary and Harpsichord reduce it by 40% and 35%. We also verify that the PIS/NC stabilizer moderates the non-uniform distribution of node IDs.
    Download PDF (1838K)
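    The core of PIS/NC, making a node's logical ID its network coordinate so that greedy forwarding in ID space is also greedy in latency space, can be sketched for a CAN-like overlay in the spirit of Canary. The random coordinates and the 4-nearest-neighbor overlay below are illustrative assumptions, not the paper's protocol.

```python
import math, random

def dist(a, b):
    return math.dist(a, b)

def greedy_route(coords, neighbors, src, target):
    """Greedy forwarding when node IDs *are* network coordinates: each hop
    picks the neighbor closest (in NC space, hence roughly in latency) to
    the target coordinate, stopping at a local minimum."""
    path, cur = [src], src
    while dist(coords[cur], target) > min(
            (dist(coords[n], target) for n in neighbors[cur]),
            default=float("inf")):
        cur = min(neighbors[cur], key=lambda n: dist(coords[n], target))
        path.append(cur)
    return path

random.seed(1)
coords = {i: (random.random(), random.random()) for i in range(32)}
# Toy overlay: each node links to its 4 nearest nodes in coordinate space.
neighbors = {i: sorted((j for j in coords if j != i),
                       key=lambda j: dist(coords[i], coords[j]))[:4]
             for i in coords}
print(greedy_route(coords, neighbors, src=0, target=coords[17]))
```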
  • Hajime Fujita, Yutaka Ishikawa
    Article type: Cluster Computing
    Subject area: Regular Paper
    2011 Volume 4 Pages 73-83
    Published: 2011
    Released on J-STAGE: March 18, 2011
    JOURNAL FREE ACCESS
    In this paper we propose DTS (Distributed TCP Splicing), a new mechanism for performing content-aware TCP connection switching in a broadcast-based single-IP-address cluster. The broadcast-based design enables each cluster node to continue providing services to clients even when other nodes in the cluster fail. Each connection request from a client is first distributed within the cluster using consistent hashing, in order to share the request-inspection workload; the connection is then transferred to an appropriate node according to the content of the request. DTS is implemented as a Linux kernel module and does not require any modification to the main kernel code, server applications, or client applications. With an 8-node server configuration, a DTS cluster with multiple request inspectors achieves about 3.4 times higher connection throughput than a single-inspector configuration. A SPECweb 2005 Support benchmark on a four-node cluster shows that DTS reduces the total amount of disk access through locality-aware request distribution and almost halves the number of file downloads that fail to meet the speed requirement.
    Download PDF (830K)
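    The first step of DTS, spreading request inspection across cluster nodes with consistent hashing, can be sketched with a standard hash ring. The virtual-node count and the MD5 hash are conventional choices assumed here, not details taken from the paper.

```python
import bisect, hashlib

class ConsistentHashRing:
    """Minimal consistent-hashing ring with virtual nodes, one standard way
    to spread request inspection over cluster nodes (a sketch, not DTS)."""
    def __init__(self, nodes, vnodes=64):
        self.ring = sorted(
            (self._h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def lookup(self, key):
        # First ring position clockwise from the key's hash.
        i = bisect.bisect(self.keys, self._h(key)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing([f"node{i}" for i in range(8)])
# Each incoming connection hashes (e.g., by source endpoint) to one inspector.
print(ring.lookup("192.0.2.7:51432"))
```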
  • Son Truong Nguyen, Shigeru Oyanagi
    Article type: Interconnection Network
    Subject area: Regular Paper
    2011 Volume 4 Pages 84-93
    Published: 2011
    Released on J-STAGE: March 18, 2011
    JOURNAL FREE ACCESS
    Designing high-throughput, low-latency on-chip networks with reasonable area overhead is becoming a major technical challenge. This paper proposes a router architecture with on-the-fly virtual channel (VC) allocation for high-performance on-chip networks. By performing VC allocation based on the result of switch allocation, the dependency between VC allocation and switch traversal is removed, and these stages can be performed concurrently in a non-speculative fashion. In this manner, the pipeline of a packet transfer can be shortened without an area penalty. The proposed architecture has been implemented on an FPGA and evaluated in terms of network latency, throughput, and area overhead. The experimental results show that the proposed router with on-the-fly VC allocation reduces network latency by 40.9% and improves throughput by 47.6% compared to the conventional VC router. In comparison with the look-ahead speculative router, it improves throughput by 8.8% while reducing the area of the control logic by 16.7%.
    Download PDF (483K)
  • Yoshihisa Abe, Hiroshi Yamada, Kenji Kono
    Article type: Process Scheduling
    Subject area: Regular Paper
    2011 Volume 4 Pages 94-113
    Published: 2011
    Released on J-STAGE: March 18, 2011
    JOURNAL FREE ACCESS
    Idle resources can be exploited not only to run important local tasks such as data backup and virus checking, but also to contribute to society by participating in distributed computing projects. When executing background processes to utilize such valuable idle resources, we need to control them explicitly to avoid degrading foreground performance; otherwise, users will be discouraged from exploiting idle resources. In this paper, we show that resource contention between foreground and background processes can be detected, and background process execution properly controlled, entirely at the user level, without modifications to the underlying operating system or user applications. We infer resource contention from changes in the approximated resource shares of background processes, deriving those shares with dynamically enabled probes. Our approach also accounts for different resource types and can handle multiple background processes with varied resource needs. Our experiments show that our system keeps the increase in foreground execution time caused by background processes below 16.9%, and often much lower.
    Download PDF (2902K)
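    A crude user-level analogue of the approach, inferring contention from a drop in a background process's observed CPU share and suspending it in response, might look like the following polling loop. It assumes the third-party psutil package and is only a stand-in: the paper derives resource shares with dynamically enabled probes, handles multiple resource types, and does not poll like this.

```python
import time
import psutil  # third-party; the paper instruments with dynamic probes instead

def monitor(pid, expected_share=0.9, interval=1.0):
    """Suspend a background process when its observed CPU share falls well
    below what an uncontended run would get, then resume it periodically
    to probe whether the contention has ended."""
    proc = psutil.Process(pid)
    proc.cpu_percent(None)                     # prime the measurement window
    while proc.is_running():
        time.sleep(interval)
        if proc.status() == psutil.STATUS_STOPPED:
            proc.resume()                      # probe again next interval
            continue
        share = proc.cpu_percent(None) / 100.0
        if share < 0.5 * expected_share:       # foreground likely competing
            proc.suspend()
```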
  • Sho Suzuki, Keiichirou Kusakari, Frédéric Blanqui
    Article type: Regular Papers
    Subject area: Regular Paper
    2011 Volume 4 Pages 114-125
    Published: 2011
    Released on J-STAGE: March 30, 2011
    JOURNAL FREE ACCESS
    The static dependency pair method is a method for proving the termination of higher-order rewrite systems à la Nipkow. It combines the dependency pair method introduced for first-order rewrite systems with the notion of strong computability introduced for typed λ-calculi. Argument filterings and usable rules are two important methods of the dependency pair framework used by current state-of-the-art first-order automated termination provers. In this paper, we extend the class of higher-order systems on which the static dependency pair method can be applied. Then, we extend argument filterings and usable rules to higher-order rewriting, hence providing the basis for a powerful automated termination prover for higher-order rewrite systems.
    Download PDF (330K)
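    For the first-order case, an argument filtering π maps each function symbol either to the list of argument positions to keep or to a single position to collapse to. Below is a minimal sketch of applying such a filtering to terms; the paper's contribution is extending this machinery (and usable rules) to higher-order rewriting.

```python
# Terms are ("f", [args]) for function applications, plain strings for variables.
def apply_filter(term, pi):
    """Apply an argument filtering: pi[f] is either a list of argument
    positions to keep, or a single int meaning `collapse f to that argument`.
    Symbols absent from pi keep all their arguments."""
    if isinstance(term, str):              # variable
        return term
    f, args = term
    spec = pi.get(f, list(range(len(args))))
    if isinstance(spec, int):              # collapsing filter
        return apply_filter(args[spec], pi)
    return (f, [apply_filter(args[i], pi) for i in spec])

# pi keeps only the first argument of `plus` and collapses `s` to its argument.
pi = {"plus": [0], "s": 0}
t = ("plus", [("s", ["x"]), ("plus", ["y", "z"])])
print(apply_filter(t, pi))   # ('plus', ['x'])
```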
  • Shuji Yamada, Jinko Kanno, Miki Miyauchi
    Article type: Regular Papers
    Subject area: Regular Paper
    2011 Volume 4 Pages 126-133
    Published: 2011
    Released on J-STAGE: March 31, 2011
    JOURNAL FREE ACCESS
    This article provides a mathematical formula for determining the optimal sizes of two differently sized spheres so as to maximize the packing density when random loose packing is employed in containers of various shapes. The formula was evaluated with numerous computer simulations involving over a million spheres.
    Download PDF (9969K)
  • Hiroki Toyokawa, Kinji Kimura, Yusaku Yamamoto, Masami Takata, Akira A ...
    Article type: Auto-tuning
    Subject area: Regular Paper
    2011 Volume 4 Pages 134-146
    Published: 2011
    Released on J-STAGE: May 18, 2011
    JOURNAL FREE ACCESS
    An auto-tuning technique is devised for fast pre/postprocessing in the singular value decomposition of dense square matrices with the Dongarra or Bischof-Murata algorithms. The computation speed of these two algorithms varies depending on a parameter and on the specifications of the computer. By dividing the algorithms into several parts and modeling each of them, we can estimate their computation times accurately, which enables us to choose the optimal parameter and the faster algorithm prior to execution. Consequently, the pre/postprocessing, and hence the singular value decomposition of dense square matrices, is performed faster. Numerical experiments show the effectiveness of the proposed auto-tuning function, which has been incorporated into the published I-SVD library.
    Download PDF (1309K)
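    The general auto-tuning pattern, timing each algorithm on small inputs, fitting a cost model, and selecting the predicted-fastest variant before the real run, can be sketched as follows. The cubic model shape and the timing numbers are illustrative assumptions, not the paper's models of the Dongarra and Bischof-Murata algorithms.

```python
import numpy as np

def fit_cubic_model(sizes, times):
    """Least-squares fit of t(n) ~ a*n^3 + b*n^2 + c*n, a common shape for
    dense-matrix kernels (a stand-in for the paper's per-part models)."""
    A = np.vstack([np.array(sizes, dtype=float) ** p for p in (3, 2, 1)]).T
    coef, *_ = np.linalg.lstsq(A, np.array(times), rcond=None)
    return lambda n: coef @ np.array([n ** 3, n ** 2, n], dtype=float)

# Hypothetical measurements of two algorithms on small test matrices.
sizes = [200, 400, 600, 800]
t_dongarra = [0.011, 0.072, 0.230, 0.540]
t_bischof = [0.015, 0.080, 0.210, 0.440]
models = {"Dongarra": fit_cubic_model(sizes, t_dongarra),
          "Bischof-Murata": fit_cubic_model(sizes, t_bischof)}

n = 4000   # target size: pick the predicted-faster algorithm before running
print(min(models, key=lambda name: models[name](n)))
```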
  • Kobkrit Viriyayudhakorn, Mizuhito Ogawa
    Article type: Regular Papers
    Subject area: Regular Paper
    2011 Volume 4 Pages 147-159
    Published: 2011
    Released on J-STAGE: July 08, 2011
    JOURNAL FREE ACCESS
    Associative search is information retrieval based on the similarity between two different items of text information. This paper reports experiments on associative search over a large number of short documents containing a small set of words. We also show how to extend the set of words with a semantic relation and investigate its effect. As an instance, experiments were performed on 49,767 professional (non-handicapped) Shogi game records, with 1,923 next-move problems for evaluation. The extension of the set of words by pairing under semantic relations, called semantic coupling, is examined to see the effect of enlarging the word space from unigrams to bigrams. Although the search results are not as precise as next-move search, we observe an improvement when filtering the unigram search results with the bigram search, especially in the early phase of Shogi games. This also fits our general feeling that the bigram search detects castle patterns well.
    Download PDF (3294K)
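    Filtering a unigram search result by a bigram search can be sketched with cosine similarity over n-gram count vectors. The toy token sequences below merely stand in for Shogi move records, and the scoring is a generic illustration rather than the paper's exact method.

```python
from collections import Counter
from math import sqrt

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cosine(a, b):
    num = sum(a[k] * b[k] for k in a.keys() & b.keys())
    den = (sqrt(sum(v * v for v in a.values()))
           * sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def search(query, docs, top=3):
    """Rank by unigram similarity, then re-rank the candidates by bigram
    similarity: a rough analogue of refining word-space search with pairs."""
    q1, q2 = ngrams(query, 1), ngrams(query, 2)
    candidates = sorted(docs, key=lambda d: cosine(q1, ngrams(d, 1)),
                        reverse=True)[:top]
    return max(candidates, key=lambda d: cosine(q2, ngrams(d, 2)))

docs = [["P7f", "P3d", "S4h", "G5h"], ["P7f", "S4h", "P3d", "K6h"],
        ["P2f", "P8d", "P2e", "P8e"]]
print(search(["P7f", "P3d", "S4h"], docs))
```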
  • Naoyoshi Aikawa, Tetsuya Sakai, Hayato Yamana
    Article type: Research Papers
    Subject area: Recommended Paper
    2011 Volume 4 Pages 160-168
    Published: 2011
    Released on J-STAGE: July 11, 2011
    JOURNAL FREE ACCESS
    Community question answering (CQA) sites such as Yahoo! Chiebukuro are known to be very useful resources for automatic question answering (QA) systems. However, CQA users often post questions expecting not general truths but rather the opinions of different people. We believe that a QA system should act according to these different question types. We therefore define two question types, based on whether the questioner expects subjective or objective answers, and report on an automatic question classification experiment. We achieve over 80% weighted accuracy using unigram and bigram features learned by Naïve Bayes with smoothing. We also discuss the inter-annotator agreement and its impact on automatic classification accuracy, as well as what kinds of questions tend to be misclassified.
    Download PDF (247K)
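    A multinomial Naïve Bayes classifier with add-one smoothing over unigram and bigram features, the model family the paper reports over 80% weighted accuracy with, can be sketched in a few lines. The toy training data and feature encoding are assumptions for illustration only.

```python
from collections import Counter, defaultdict
from math import log

def features(tokens):
    # Unigrams plus joined bigrams as a flat feature list.
    return tokens + [a + "_" + b for a, b in zip(tokens, tokens[1:])]

class NaiveBayes:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""
    def fit(self, docs, labels):
        self.counts = defaultdict(Counter)
        self.prior = Counter(labels)
        for toks, y in zip(docs, labels):
            self.counts[y].update(features(toks))
        self.vocab = {f for c in self.counts.values() for f in c}
        return self

    def predict(self, tokens):
        def logp(y):
            c, total = self.counts[y], sum(self.counts[y].values())
            return (log(self.prior[y]) +
                    sum(log((c[f] + 1) / (total + len(self.vocab)))
                        for f in features(tokens)))
        return max(self.prior, key=logp)

docs = [["what", "is", "the", "capital"], ["which", "movie", "do", "you", "like"]]
labels = ["objective", "subjective"]
print(NaiveBayes().fit(docs, labels).predict(["what", "movie", "do", "you", "like"]))
```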
  • Young-joo Chung, Masashi Toyoda, Masaru Kitsuregawa
    Article type: Research Papers
    Subject area: Regular Paper
    2011 Volume 4 Pages 169-182
    Published: 2011
    Released on J-STAGE: July 11, 2011
    JOURNAL FREE ACCESS
    Web spamming has emerged to deceive search engines and obtain higher rankings in search result lists, which bring more traffic and profit to web sites. Link farming is one of the major spamming techniques: it creates a large set of densely inter-linked spam pages to deceive link-based ranking algorithms that regard incoming links to a page as endorsements of it. Link farms need to be eliminated when we search, analyze, and mine the Web, but they are also interesting social activities in cyberspace. Our purpose is to understand the dynamics of link farms, such as how much they grow or shrink and how their topics change over time. Such information is helpful for developing new spam detection techniques and for tracking spam sites to observe their topics. In particular, we are interested in where emerging spam sites can be found, which is useful for updating spam classifiers. In this paper, we study the overall size/topic distribution and evolution of link farms in large-scale Japanese web archives spanning three years and containing four million hosts and 83 million links. As far as we know, the overall characteristics of link farms in a time series of web snapshots of this scale have never been explored. We propose a method for extracting link farms and investigate their size distribution and topics, observing the evolution of link farms in terms of size growth and change in topic distribution. By recursively decomposing host graphs into link farms, we found that 4% to 7% of hosts were members of link farms; this implies that quite a number of spam hosts can be removed without content analysis. We also found that the two dominant topics, “Adult” and “Travel”, accounted for over 60% of spam hosts in link farms. The size evolution showed that many link farms persisted for years, but most of them did not grow. The distribution of topics in link farms did not change significantly, but the hosts and keywords related to each topic changed dynamically. These results suggest that we can observe topic changes within each link farm, but we cannot efficiently find emerging spam sites by monitoring link farms alone; to detect newly created spam sites, detecting the sites that generate links to spam sites would be a more effective approach.
    Download PDF (1562K)
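    One simple heuristic in the spirit of link-farm extraction is to take connected components of the mutual-link (reciprocal) host graph and keep the large, dense ones. This is a stand-in assuming the third-party networkx package, not the recursive decomposition method proposed in the paper.

```python
import networkx as nx  # assumption: networkx is available

def link_farm_candidates(edges, min_size=3, min_density=0.5):
    """Keep components of the *mutual-link* graph that are large and dense.
    A crude stand-in for the paper's recursive host-graph decomposition."""
    g = nx.DiGraph(edges)
    mutual = nx.Graph((u, v) for u, v in g.edges if g.has_edge(v, u))
    farms = []
    for comp in nx.connected_components(mutual):
        sub = mutual.subgraph(comp)
        if len(comp) >= min_size and nx.density(sub) >= min_density:
            farms.append(sorted(comp))
    return farms

edges = [("a", "b"), ("b", "a"), ("b", "c"), ("c", "b"), ("a", "c"), ("c", "a"),
         ("a", "news"),            # one-way link out of the farm
         ("x", "y"), ("y", "x")]   # a reciprocal pair, too small to flag
print(link_farm_candidates(edges))  # [['a', 'b', 'c']]
```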
  • Khan Md. Mahfuzus Salam, Tetsuro Nishino, Kazutoshi Sasahara, Miki Tak ...
    Article type: Regular Papers
    Subject area: Regular Paper
    2011 Volume 4 Pages 183-192
    Published: 2011
    Released on J-STAGE: July 28, 2011
    JOURNAL FREE ACCESS
    Songbirds have been actively studied for the complex brain mechanisms of sensor-motor integration involved in song learning. Male Bengalese finches learn to sing by imitating external models. In general, a birdsong, which is a string of sounds, is represented by a sequence of letters called song notes. In this study, we focus on an information-theoretic analysis of these sequential data to explore the complexity and diversity of birdsong and the learning process throughout song development. We designed and developed an analysis tool with many features for analyzing such sequential data. For the experiments, we used thirteen male Bengalese finches, each with a different number of bouts of song data. By applying ethological data mining to these data, we discovered that the finches follow two types of song learning mechanism: practice mode and adopt mode. In addition, we found that song features such as traditional transmission can be visualized by contour surface diagrams of the transition matrix. Furthermore, the families can easily be identified from these diagrams, which is in general a very challenging task. Our results indicate that analysis based on data mining is a versatile technique for exploring new aspects of behavioral science.
    Download PDF (828K)
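    The transition matrices whose contour surfaces the paper visualizes are first-order note-to-note transition probabilities estimated from song sequences. A minimal sketch, with toy one-letter-per-note sequences standing in for real song data:

```python
import numpy as np

def transition_matrix(songs, notes):
    """Row-normalized first-order transition matrix over song notes.
    Visualizing such matrices (e.g., as contour surface diagrams) is what
    allows individuals and family resemblances to be compared."""
    idx = {n: i for i, n in enumerate(notes)}
    m = np.zeros((len(notes), len(notes)))
    for song in songs:
        for a, b in zip(song, song[1:]):
            m[idx[a], idx[b]] += 1
    row = m.sum(axis=1, keepdims=True)
    return np.divide(m, row, out=np.zeros_like(m), where=row > 0)

songs = ["abcab", "abcb", "acab"]   # toy note sequences, one letter per note
notes = sorted(set("".join(songs)))
print(transition_matrix(songs, notes).round(2))
```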
  • Yoshiharu Kojima, Masahiko Sakai, Naoki Nishida, Keiichirou Kusakari, ...
    Article type: Regular Papers
    Subject area: Regular Paper
    2011 Volume 4 Pages 193-216
    Published: 2011
    Released on J-STAGE: October 03, 2011
    JOURNAL FREE ACCESS
    The reachability problem for an initial term, a goal term, and a rewrite system is to decide whether the goal term is reachable from the initial term by the rewrite system. The innermost reachability problem is to decide whether the goal term is reachable from the initial term by innermost reductions of the rewrite system. A context-sensitive term rewriting system (CS-TRS) is a pair of a term rewriting system and a mapping that specifies the arguments of function symbols, thereby determining the rewritable positions of terms. In this paper, we show that both reachability for right-linear right-shallow CS-TRSs and innermost reachability for shallow CS-TRSs are decidable. We prove these claims by presenting algorithms that construct a tree automaton accepting the set of terms reachable from a given term by (innermost) reductions of a given CS-TRS.
    Download PDF (497K)
  • Hisanobu Tomari, Kei Hiraki
    Article type: Low-Power Method
    Subject area: Regular Paper
    2011 Volume 4 Pages 217-227
    Published: 2011
    Released on J-STAGE: October 14, 2011
    JOURNAL FREE ACCESS
    Power consumption has become an important factor in the design of high-performance computer systems. The power consumption of newer systems is now published, but it remains unknown for many older systems, and data for only two or three generations of systems are insufficient for projecting the performance/power of future systems. We measured the performance and power consumption of 70 computer systems from 1989 to 2011, including desktop and laptop personal computers, workstations, handheld devices, and supercomputers. This is the first paper reporting the performance and power consumption of systems over twenty years using a uniform method. The primary benchmark we used was Dhrystone; we also used the NAS Parallel Benchmarks and the SPEC CPU2006 suite. The Dhrystone/power ratio was found to be growing exponentially. The data we obtained indicate that the Dhrystone result correlates closely with CINT2006 of SPEC CPU2006, and that the NAS Parallel Benchmarks and CFP2006 results also correlate. Using the Dhrystone/power trend we obtained, we predict that the Dhrystone/power ratio will reach 2,963 VAX MIPS/Watt in 2018, when exaflops machines are expected to appear.
    Download PDF (527K)
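    Extrapolating an exponential performance/power trend amounts to a log-linear fit over (year, efficiency) points. The data points below are made up for illustration and are not the paper's measurements.

```python
import numpy as np

# Hypothetical (year, VAX MIPS/Watt) points; NOT the paper's measurements.
years = np.array([1992, 1997, 2002, 2007, 2011])
mips_per_watt = np.array([0.9, 4.1, 19.0, 92.0, 310.0])

# Exponential growth is linear in log space: log y = a*year + b.
a, b = np.polyfit(years, np.log(mips_per_watt), 1)
predict = lambda y: np.exp(a * y + b)

print(f"doubling time: {np.log(2) / a:.2f} years")
print(f"extrapolated 2018 efficiency: {predict(2018):,.0f} VAX MIPS/Watt")
```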
  • Nan Dun, Kenjiro Taura, Akinori Yonezawa
    Article type: Distributed System
    Subject area: Regular Paper
    2011 Volume 4 Pages 228-239
    Published: 2011
    Released on J-STAGE: October 14, 2011
    JOURNAL FREE ACCESS
    GMount is a high-performance distributed file system with locality-aware metadata lookups and a small installation effort. GMount organizes compute nodes in a decentralized hierarchical overlay to unify separate local file systems into a globally shared namespace and to achieve locality-aware metadata lookups. GMount offers not only better performance when applications execute with considerable data-access locality, but also the ability to effortlessly and rapidly enable data sharing among clusters, clouds, and supercomputers. This paper presents a performance evaluation of the latest GMount implementation using both micro-benchmarks and real-world data-intensive applications. The experimental results demonstrate that GMount delivers highly scalable metadata and I/O performance when data-access locality is common, and that its performance is sufficient for routine data-intensive computing.
    Download PDF (2592K)
  • Mona Abo-El Dahb, Yao Zhou, Umair Farooq Siddiqi, Yoichi Shiraishi
    Article type: Regular Papers
    Subject area: Regular Paper
    2011 Volume 4 Pages 240-250
    Published: 2011
    Released on J-STAGE: December 05, 2011
    JOURNAL FREE ACCESS
    The conventional steepest descent method in the back-propagation process of an artificial neural network (ANN) is replaced by the Simulated Evolution (SimE) algorithm. The resulting method, called SimE-ANN, is applied to the estimation of landslides. In the experimental results, the errors in the displacement and resistance of the piles with SimE-ANN are 50.2% and 28.0% smaller, respectively, than those of the conventional ANN on average over 10 data sets. However, the experimental results also show the effects of overtraining in SimE-ANN, and the appropriate selection of training data should be investigated in future work.
    Download PDF (1243K)