Interdisciplinary Information Sciences
Online ISSN : 1347-6157
Print ISSN : 1340-9050
ISSN-L : 1340-9050
Volume 15, Issue 1
Displaying 1-12 of 12 articles from this issue
Special Section: GSIS Interdisciplinary Research Project on Value of Information
  • Toshimitsu MIYAMOTO, Jun MAKISHI, Gen KITAGATA, Takuo SUGANUMA, Norio ...
    2009 Volume 15 Issue 1 Pages 1-11
    Published: 2009
    Released on J-STAGE: March 25, 2009
    JOURNAL FREE ACCESS
    In this paper, we propose a scheme to effectively support cooperative work by controlling information flow from real space (RS) to digital space (DS), and from DS to RS, based on the concept of “Symbiotic Computing.” In practice, our scheme controls the availability of a shared workspace in DS according to the situation of tasks and workers in RS, in order to accelerate cooperative work and reinforce the “value placed on information.” Using this scheme, advanced support that improves the quality of intellectual cooperative work can be realized. We apply the proposed scheme to the group learning domain, an educational setting in which group members, a teacher and several students, cooperatively solve given problems. In this domain we show that the proposed scheme, i.e., suitable availability control of the shared workspace in DS, can accelerate cooperative work. Experimental results show that cooperative problem solving by the teacher and students was accelerated, and that learning outcomes improved when availability of the shared workspace was controlled according to the progress of the group learning process. From these results we confirm the effectiveness of our proposal. (A toy sketch of such phase-dependent availability control follows this entry.)
    Download PDF (197K)
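    A hypothetical sketch (our own illustration under assumed phases, not the authors’ implementation) of how availability of the shared workspace in DS might be switched according to the phase of the group-learning task observed in RS:

      # Hypothetical sketch: the phase names and access rules below are
      # assumptions for illustration only, not taken from the paper.
      from enum import Enum

      class Phase(Enum):
          INDIVIDUAL_WORK = 1    # students solve sub-problems on their own
          GROUP_DISCUSSION = 2   # the group integrates partial solutions
          TEACHER_REVIEW = 3     # the teacher checks and comments on the result

      def workspace_availability(phase: Phase) -> dict:
          """Return who may read/write the shared workspace in the given phase."""
          if phase is Phase.GROUP_DISCUSSION:
              return {"students": "read-write", "teacher": "read-write"}
          # outside the discussion phase the workspace is locked for students
          return {"students": "read-only", "teacher": "read-write"}

      print(workspace_availability(Phase.GROUP_DISCUSSION))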
  • Syoichi IWASAKI, Guohui LIU
    2009 Volume 15 Issue 1 Pages 13-23
    Published: 2009
    Released on J-STAGE: March 25, 2009
    JOURNAL FREE ACCESS
    In this tutorial review, we discuss the neural substrate of utility and the reasons why decision-making often deviates from rationality, failing to serve people’s best long-term interests. We point out that there are two valuation systems in the brain, one cognitive and the other affective. These two systems compute values automatically, based on knowledge (cognitive valuation) and on the history of learning (i.e., instrumental conditioning) and evolution (affective valuation). In instrumental conditioning, a signal such as money comes to be associated with a primary reinforcer like food and thus acquires reinforcing power. Utility in economics can be equated with the expectation of this secondary reinforcer, especially monetary reward. The final decision is made under the control of a supervisory control system in the prefrontal cortex. The decision may stray from rationality when the affective valuation system presses its demands too strongly to resist, or when the supervisory controller is not efficient enough to handle the situation because it is temporarily distracted by other tasks, or for other reasons such as age (being too young or too old to be equipped with an efficient supervisory controller) and individual differences in impulsivity.
    Download PDF (322K)
  • Kazuhisa SHINOZAWA
    2009 Volume 15 Issue 1 Pages 25-35
    Published: 2009
    Released on J-STAGE: March 25, 2009
    JOURNAL FREE ACCESS
    The purpose of this paper is to present a conceptual analysis of the value of information from a philosophical point of view. Confining myself to a short philosophical comment, I take the following steps. (1) By referring to Shannon’s classical paper on communication theory, I reconfirm one of the philosophically basic problems about the value of information, namely, the twisted relation between the elimination of meaning and our common-sense understanding of value. (2) To elucidate this relation from a philosophical point of view, I take advantage of Aristotle’s schematic overview, the so-called semantic triangle. The schema proves to embody a systematic and wide-ranging conception. (3) Through an analysis of the Aristotelian semantic triangle, I revaluate his philosophical insight that, in order to have a penetrating view of the value of information, we need to shed more light on the value system from the viewpoint of human time, as contrasted with physical or engineering time. What essentially matters is the relation between human time and Aristotelian common sense. (4) As a tentative conclusion, I propose that focusing attention on human time leads us to an adequate assessment of the crucial divide between quality and quantity in the value of information.
    Download PDF (117K)
  • Hisa MORISUGI, Jane ROMERO, Takayuki MORIGUCHI
    2009 Volume 15 Issue 1 Pages 37-43
    Published: 2009
    Released on J-STAGE: March 25, 2009
    JOURNAL FREE ACCESS
    The problem of making inferences about the ratio of two normal variables occurs in many fields such as bioassay, bioequivalence and ecology. In this paper the theory of the ratio of two normal variables is applied to evaluating the value of time, which is a major component in cost-benefit analysis of transport projects. The subjective value of time is the marginal rate of substitution between travel time and travel cost, i.e., the ratio of two estimated coefficients. This paper explores the construction of confidence intervals for the subjective value of time, applying methods that make inferences about the mean of a ratio of normal variates: direct substitution of Fieller’s pdf and a t-test method. In addition, we also propose a method to minimize the length of the confidence interval. (A standard form of Fieller’s interval is sketched below.)
    Download PDF (122K)
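    For reference, Fieller’s confidence set for a ratio of normal means can be written as follows (the notation is ours, not necessarily the authors’). If $\hat{\mu}_1$ and $\hat{\mu}_2$ estimate the travel-time and travel-cost coefficients with estimated variances $s_{11}$, $s_{22}$ and covariance $s_{12}$, a $100(1-\alpha)\%$ confidence set for the value of time $\rho = \mu_1/\mu_2$ consists of all $\rho$ satisfying
      $(\hat{\mu}_1 - \rho\,\hat{\mu}_2)^2 \le t_{\alpha/2}^2\,(s_{11} - 2\rho\, s_{12} + \rho^2 s_{22}),$
    so the interval endpoints are the roots of the quadratic
      $\rho^2(\hat{\mu}_2^2 - t_{\alpha/2}^2 s_{22}) - 2\rho(\hat{\mu}_1\hat{\mu}_2 - t_{\alpha/2}^2 s_{12}) + (\hat{\mu}_1^2 - t_{\alpha/2}^2 s_{11}) = 0,$
    provided that $\hat{\mu}_2^2 - t_{\alpha/2}^2 s_{22} > 0$ (otherwise the confidence set may be unbounded).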
Special Section: High-Performance Computing
  • Sabine ROLLER, Michael RESCH, Martin GALLE, Wolfgang BEZ
    2009 Volume 15 Issue 1 Pages 45-49
    Published: 2009
    Released on J-STAGE: March 25, 2009
    JOURNAL FREE ACCESS
    This paper provides a comprehensive collection of applications from different fields that all have one thing in common: they are typical representatives of their research fields with a high need for High Performance Computing resources, and they all show high sustained performance on the HPC vector system NEC SX-8. The paper also describes future needs that must be addressed for the next generation of applications.
    Download PDF (292K)
  • Akihiro MUSA, Yoshiei SATO, Ryusuke EGAWA, Hiroyuki TAKIZAWA, Koki OKA ...
    2009 Volume 15 Issue 1 Pages 51-66
    Published: 2009
    Released on J-STAGE: March 25, 2009
    JOURNAL FREE ACCESS
    Thanks to their highly effective memory bandwidth, vector systems can achieve high computational efficiency for computation-intensive scientific applications. However, they have been encountering the memory wall problem, and the effective memory bandwidth rate has decreased, so that the bytes-per-flop rates of recent vector systems have dropped from 4 (SX-7 and SX-8) to 2 (SX-8R) and 2.5 (SX-9). The situation is getting worse as more function units and/or cores are brought onto a single chip, because the pin bandwidth is limited and does not scale. To solve this problem, we propose an on-chip cache, called the vector cache, to maintain the effective memory bandwidth rate of future vector supercomputers. The vector cache employs a bypass mechanism between the main memory and the register files under software control. We evaluate the performance of the vector cache on the NEC SX vector processor architecture at rates of 2 B/FLOP and 1 B/FLOP, to clarify its basic characteristics. For the evaluation, we use the NEC SX-7 simulator extended with the vector cache mechanism. The benchmark programs are two DAXPY-like loops and five leading scientific applications. The results indicate that the vector cache boosts the computational efficiencies of the 2 B/FLOP and 1 B/FLOP systems up to the level of the 4 B/FLOP system. In particular, when cache hit rates exceed 50%, the 2 B/FLOP system can achieve performance comparable to the 4 B/FLOP system. The vector cache with the bypass mechanism can provide data from the main memory and the cache simultaneously. In addition, from the viewpoint of cache design, we investigate the impact of cache associativity on the cache hit rate, and the relationship between cache latency and performance. The results suggest that associativity hardly affects the cache hit rate, and that the effect of cache latency depends on the vector loop lengths of applications. A shorter cache latency improves the performance of applications with shorter loop lengths, even in the case of the 4 B/FLOP system; for loop lengths of 256 or more, the latency can effectively be hidden, and performance is not sensitive to it. Finally, we discuss the effects of selective caching using the bypass mechanism and of loop unrolling on vector cache performance for the scientific applications. Selective caching is effective for efficient use of the limited cache capacity. Loop unrolling also improves performance and has a synergistic effect with caching. However, there are exceptional cases in which loop unrolling worsens the cache hit rate because the working set needed to process the unrolled loops exceeds the cache; in such cases, the increased cache miss rate cancels the gain obtained by unrolling. (A toy bytes-per-flop estimate for a DAXPY-like loop is sketched below.)
    Download PDF (351K)
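    As a rough illustration of the bytes-per-flop argument, consider a toy roofline-style estimate (our own model, not the SX-7 simulator used in the paper): DAXPY, y[i] = a*x[i] + y[i], performs 2 flops per iteration against two 8-byte loads and one 8-byte store, i.e., 12 B/FLOP of memory traffic when nothing is cached, and a cache hit rate h removes the fraction h of that traffic from main memory.

      # Toy roofline-style estimate (hypothetical model): cached accesses are
      # assumed to generate no main-memory traffic at all.
      DAXPY_BYTES_PER_FLOP = 24 / 2   # 12 B/FLOP for y[i] = a*x[i] + y[i]

      def memory_bound_efficiency(machine_b_per_flop, hit_rate,
                                  app_b_per_flop=DAXPY_BYTES_PER_FLOP):
          """Fraction of peak sustainable when limited only by memory bandwidth."""
          demand = (1.0 - hit_rate) * app_b_per_flop   # traffic still going to DRAM
          return 1.0 if demand == 0.0 else min(1.0, machine_b_per_flop / demand)

      for b_per_flop in (4.0, 2.0, 1.0):              # SX-8-like, SX-8R-like, ...
          for hit_rate in (0.0, 0.5, 0.75):
              eff = memory_bound_efficiency(b_per_flop, hit_rate)
              print(f"{b_per_flop:.0f} B/FLOP, hit rate {hit_rate:.0%}: {eff:.2f} of peak")

    Under these assumptions a 2 B/FLOP machine with a 50% hit rate sustains the same fraction of peak on DAXPY as a 4 B/FLOP machine without a cache, consistent with the trend reported in the abstract.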
  • Kentaro SANO, Yoshiaki HATSUDA, Luzhou WANG, Satoru YAMAMOTO
    2009 Volume 15 Issue 1 Pages 67-78
    Published: 2009
    Released on J-STAGE: March 25, 2009
    JOURNAL FREE ACCESS
    This paper evaluates the performance of 2D FDTD computation on our FPGA-based array processor. We have previously proposed the systolic computational-memory architecture for custom computing machines tailored to numerical computations with difference schemes, and implemented an array processor based on this architecture on a single ALTERA Stratix II FPGA. The array processor is composed of a two-dimensional array of programmable PEs connected by a mesh network, so that computations on a grid are performed in parallel. We wrote and executed code for the 2D FDTD computation on the array processor and obtained almost the same results with the FPGA as with an AMD Athlon64 processor. Compared with an AMD Athlon64 processor running at 2.4 GHz, the array processor operating at 106 MHz computed the 2D FDTD problem more than 7 times faster, corresponding to a sustained performance of 16.2 GFlop/s. The high utilization of the adders and multipliers of the array processor indicates that the architecture is also well suited to the FDTD method. (The FDTD stencil involved is sketched below.)
    Download PDF (411K)
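    For background, the stencil being mapped onto the PE array is the standard 2D FDTD update. A minimal NumPy time loop (TMz polarization, normalized units; an illustrative sketch, not the authors’ FPGA code, with arbitrary grid size, Courant number, and source) looks like this:

      import numpy as np

      # Minimal 2D FDTD (TMz) sketch; grid size, Courant number, and the point
      # source are arbitrary choices for illustration.
      nx, ny, nsteps, c = 128, 128, 200, 0.5
      Ez = np.zeros((nx, ny))
      Hx = np.zeros((nx, ny - 1))
      Hy = np.zeros((nx - 1, ny))

      for n in range(nsteps):
          Hx -= c * (Ez[:, 1:] - Ez[:, :-1])           # update magnetic fields
          Hy += c * (Ez[1:, :] - Ez[:-1, :])
          Ez[1:-1, 1:-1] += c * ((Hy[1:, 1:-1] - Hy[:-1, 1:-1])
                                 - (Hx[1:-1, 1:] - Hx[1:-1, :-1]))
          Ez[nx // 2, ny // 2] += np.sin(0.1 * n)      # soft point source

    Each grid point needs only its nearest neighbours, which is what lets the computation map naturally onto a mesh-connected PE array.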
  • Harald KLIMACH, Sabine P. ROLLER, Claus-Dieter MUNZ
    2009 Volume 15 Issue 1 Pages 79-83
    Published: 2009
    Released on J-STAGE: March 25, 2009
    JOURNAL FREE ACCESS
    This paper outlines the distribution of an aeroacoustic application across a heterogeneous supercomputing environment. PACX-MPI, which is used for this distribution, allows the coupling of different architectures without leaving the MPI context in the application itself. This makes the use of a heterogeneous infrastructure very convenient from the application’s point of view.
    Integrated simulation of fluid flow together with its aeroacoustics is a typical multi-scale task with different numerical requirements in the parts involved. These requirements can be spatially separated, since the noise-generating object is generally rather small compared with the region over which sound propagation must be computed. A natural division into a domain of noise generation and a domain of noise propagation therefore arises; the two computational domains are only loosely coupled and have very distinct numerical requirements. We demonstrate how the parallel simulation of a 3D aeroacoustic test case can benefit from the heterogeneity of the infrastructure by mapping each part of the heterogeneous computational domain onto the appropriate architecture. (A schematic MPI-level sketch of such loose coupling follows this entry.)
    Download PDF (154K)
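    Since PACX-MPI preserves the standard MPI interface, the loose two-domain coupling can be pictured with ordinary MPI calls. The following mpi4py sketch is purely illustrative; the rank layout, message size, and one-way coupling direction are our assumptions, not the paper’s setup:

      from mpi4py import MPI
      import numpy as np

      # Illustrative sketch: split the world communicator into a noise-generation
      # part and a noise-propagation part and exchange coupling data every step.
      world = MPI.COMM_WORLD
      is_generation = world.rank < world.size // 2           # e.g. ranks on machine A
      domain = world.Split(color=0 if is_generation else 1, key=world.rank)

      coupling = np.zeros(1024, dtype=np.float64)             # acoustic source terms (assumed size)
      for step in range(100):
          # ... advance the local (generation or propagation) solver here ...
          if domain.rank == 0:                                # one rank per domain handles coupling
              peer = world.size // 2 if is_generation else 0  # matching rank in the other domain
              if is_generation:
                  world.Send(coupling, dest=peer, tag=step)
              else:
                  world.Recv(coupling, source=peer, tag=step)

    With PACX-MPI, the two halves of MPI_COMM_WORLD may live on different machines (e.g., a vector system and a scalar cluster) without any change to such application code.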
  • Fredrik UNGER, Arne BIASTOCH
    2009 Volume 15 Issue 1 Pages 85-90
    Published: 2009
    Released on J-STAGE: March 25, 2009
    JOURNAL FREE ACCESS
    NEMO is a fluid dynamics code used for oceanographic research. Within the TERAFLOP Workbench, in cooperation with the Leibniz-Institut für Meereswissenschaften (IFM-GEOMAR) in Kiel, a performance assessment and improvement campaign was carried out, ranging from MPI communication to memory addressing in the solvers. At the High Performance Computing Center Stuttgart (HLRS), tests were made on a large configuration of SX nodes, running NEMO at 2.1 Teraflop/s. The improved code runs the test case 29% faster on 512 SX-8 CPUs.
    Download PDF (407K)
  • Katharina BENKERT, Bernhard MÜLLER, Michael M. RESCH
    2009 Volume 15 Issue 1 Pages 91-98
    Published: 2009
    Released on J-STAGE: March 25, 2009
    JOURNAL FREE ACCESS
    Many questions in the field of supernova core collapse remain unanswered because of the complex, multi-faceted nature of the problem. A direct computation of the full neutrino radiation hydrodynamics would require a sustained performance of PetaFlop/s and is therefore unfeasible on today’s supercomputers. The modeling required to reduce the computational effort is accompanied by ambiguity about which physical effects are indispensable. As the input parameters also contain a certain amount of uncertainty, parameter studies are necessary. For these reasons, supernova simulations still require TFlop/s of sustained performance, and a careful mapping of the software onto the given hardware is necessary to ensure the maximum possible performance.
    In this paper, we describe the necessary extensions to the partly existing MPI parallelization of the simulation code PROMETHEUS/VERTEX from the Max Planck Institute for Astrophysics in Garching. With a complete distributed-memory parallelization, turn-around times can be decreased substantially. We show for a 15 solar-mass model that efficient usage of up to 32 NEC SX-8 nodes is possible, so that turn-around times can be reduced by a factor of nearly seven.
    Download PDF (458K)
Regular Papers
  • Shinji IIZUKA, Yohji AKAMA, Yutaka AKAZAWA
    2009 Volume 15 Issue 1 Pages 99-113
    Published: 2009
    Released on J-STAGE: March 25, 2009
    JOURNAL FREE ACCESS
    In order to characterize the (a)symmetries of cut-and-project sets, we prove the following: a cut-and-project set whose two projections are injective on the lattice is fixed by an affine transformation if and only if (1) the window, restricted to the projection of the lattice, is fixed by another affine transformation, and (2) both affine transformations induce, via the two projections, the same transformation on the lattice. Using this theorem, we prove that Pisot tilings are asymmetric with respect to any affine transformation. (A restatement in standard cut-and-project notation is given below.)
    Download PDF (361K)
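    In standard cut-and-project notation (the symbols are ours, not necessarily the authors’), the theorem can be read as follows. Let $\Lambda \subset \mathbb{R}^{d+e}$ be a lattice, let $\pi$ and $\pi_{\mathrm{int}}$ be the physical and internal projections, both injective on $\Lambda$, and let $W$ be the window, so that
      $\Sigma(W) = \{\, \pi(x) \mid x \in \Lambda,\ \pi_{\mathrm{int}}(x) \in W \,\}.$
    Then an affine map $f$ satisfies $f(\Sigma(W)) = \Sigma(W)$ if and only if there exist an affine map $g$ with $g\left(W \cap \pi_{\mathrm{int}}(\Lambda)\right) = W \cap \pi_{\mathrm{int}}(\Lambda)$ and a single transformation $h$ of $\Lambda$ with $f \circ \pi = \pi \circ h$ and $g \circ \pi_{\mathrm{int}} = \pi_{\mathrm{int}} \circ h$ on $\Lambda$.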
  • Chih-Peng CHU, Jin-Gu PAN, Fu-Chuan LAI, Chorng-Jian LIU
    2009 Volume 15 Issue 1 Pages 115-124
    Published: 2009
    Released on J-STAGE: March 25, 2009
    JOURNAL FREE ACCESS
    This paper shows that a selective piracy-detection strategy is rational for a monopolist in a reproducible software market. Different detection strategies and the corresponding price/penalty strategies adopted by the monopolist, taking into account detection costs and network externalities, are examined in a simplified two-period model. Under certain circumstances, the software monopolist can earn a higher profit by not detecting piracy (closing one eye) at the beginning but detecting it (opening the other eye) later. In short, detection strategies can be time-inconsistent. Moreover, from the social planner’s perspective, the monopolist’s best detection strategy is not always socially optimal; that is, not enforcing copyright protection may be both privately and socially beneficial. (A toy numerical illustration follows this entry.)
    Download PDF (136K)
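    A purely hypothetical numerical toy (the parameters and functional form are ours, not the authors’ model) of why tolerating piracy first and detecting it later can pay off: tolerated pirates enlarge the period-1 installed base, and the network externality then raises everyone’s period-2 willingness to pay.

      # Hypothetical toy model, not the paper's: all numbers are made up.
      BUYERS, PIRATES = 100, 80
      V_BUYER, V_PIRATE = 1.0, 0.4     # stand-alone valuations (pirates value it less)
      EXTERNALITY = 0.005              # extra willingness to pay per period-1 user
      DETECTION_COST = 10.0

      def total_profit(detect1, detect2):
          users1 = BUYERS + (0 if detect1 else PIRATES)    # detected pirates drop out
          profit1 = V_BUYER * BUYERS - (DETECTION_COST if detect1 else 0)
          boost = EXTERNALITY * users1                      # network externality
          if detect2:
              price2 = V_PIRATE + boost                     # low enough that caught pirates buy
              profit2 = price2 * (BUYERS + PIRATES) - DETECTION_COST
          else:
              price2 = V_BUYER + boost                      # sell to loyal buyers only
              profit2 = price2 * BUYERS
          return profit1 + profit2

      for d1, d2 in [(False, False), (True, True), (False, True)]:
          print(f"detect in period 1: {d1}, period 2: {d2} -> profit {total_profit(d1, d2):.1f}")

    With these made-up numbers, the “close one eye, then open the other” strategy (False, True) yields the highest total profit, mirroring the time-inconsistency described in the abstract.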