International Journal of Networking and Computing
Online ISSN : 2185-2847
Print ISSN : 2185-2839
ISSN-L : 2185-2839
Volume 11, Issue 1
Displaying 1-6 of 6 articles from this issue
Special Issue on Workshop on Advances in Parallel and Distributed Computational Models 2020
  • Susumu Matsumae, Masahiro Shibata
    2021 Volume 11 Issue 1 Pages 1
    Published: 2021
    Released on J-STAGE: January 15, 2021
    JOURNAL OPEN ACCESS
    The 22nd Workshop on Advances in Parallel and Distributed Computational Models (APDCM), held in conjunction with the International Parallel and Distributed Processing Symposium (IPDPS) on May 18 - May 22, 2020, aims to provide a timely forum for the exchange and dissemination of new ideas, techniques, and research in the field of parallel and distributed computational models. The APDCM workshop has a history of attracting participation from reputed researchers worldwide. The program committee encouraged the authors of accepted papers to submit full versions of their manuscripts to the International Journal of Networking and Computing (IJNC) after the workshop. After a thorough reviewing process with extensive discussions, four articles on various topics were selected for publication in the IJNC special issue on APDCM. On behalf of the APDCM workshop, we would like to express our appreciation for the great efforts of the reviewers who reviewed the papers submitted to this special issue. Likewise, we thank all the authors for submitting their excellent manuscripts. We also express our sincere thanks to the editorial board of the International Journal of Networking and Computing, in particular to the Editor-in-Chief, Professor Koji Nakano. This special issue would not have been possible without his support.
  • Anne Benoit, Valentin Le Fèvre, Padma Raghavan, Yves Robert, Hongyang ...
    2021 Volume 11 Issue 1 Pages 2-26
    Published: 2021
    Released on J-STAGE: January 15, 2021
    JOURNAL OPEN ACCESS
    This paper focuses on the resilient scheduling of parallel jobs on high-performance computing (HPC) platforms to minimize the overall completion time, or makespan. We revisit the classical problem while assuming that jobs are subject to failures caused by transient or silent errors, and hence may need to be re-executed each time they fail to complete successfully. This work generalizes the classical framework in which jobs are known offline and do not fail: in that framework, list scheduling that gives priority to the longest jobs is known to be a 3-approximation when schedules are restricted to shelves, and a 2-approximation without this restriction. We show that when jobs can fail, using shelves can be arbitrarily bad, but unrestricted list scheduling remains a 2-approximation. The paper focuses on the design of several heuristics, some list-based and some shelf-based, combined with different priority rules and backfilling strategies. We assess and compare their performance through an extensive set of simulations using both synthetic jobs and log traces from the Mira supercomputer.
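The longest-job-first list-scheduling idea in the abstract above can be illustrated with a small sketch. This is a simplification of my own, not the paper's algorithm: jobs are treated as sequential (each occupies one processor) rather than parallel, and failures are modeled by a hypothetical `retries` input giving the number of failed attempts per job, each costing a full re-execution.

```python
import heapq

def list_schedule(times, p, retries=None):
    """Longest-job-first list scheduling on p identical processors.

    Simplified sketch: retries[i] is a hypothetical, known-in-advance
    count of failed attempts for job i; each failure costs a full
    re-execution of the job before it finally succeeds.
    """
    retries = retries or [0] * len(times)
    order = sorted(range(len(times)), key=lambda i: -times[i])  # longest first
    free = [0.0] * p                  # next-free time of each processor
    heapq.heapify(free)
    makespan = 0.0
    for i in order:
        start = heapq.heappop(free)   # earliest-available processor
        finish = start + times[i] * (retries[i] + 1)
        heapq.heappush(free, finish)
        makespan = max(makespan, finish)
    return makespan
```

With no failures this is the classical greedy rule; adding a failure to the longest job shows how re-executions stretch the makespan.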
  • Gabriel Bathie, Loris Marchal, Yves Robert, Samuel Thibault
    2021 Volume 11 Issue 1 Pages 27-49
    Published: 2021
    Released on J-STAGE: January 15, 2021
    JOURNAL OPEN ACCESS
    This work focuses on dynamic DAG scheduling under memory constraints. We target a shared-memory platform equipped with $p$ parallel processors. The goal is to bound the maximum amount of memory that may be needed by any schedule using $p$ processors to execute the DAG. We refine the classical model, which computes maximum cuts, by introducing two types of memory edges in the DAG: black edges for regular precedence constraints and red edges for actual memory consumption during execution. A valid edge cut cannot include more than $p$ red edges. This limitation had never been taken into account in previous work, and it dramatically changes the complexity of the problem, which was polynomial and becomes NP-hard. We introduce an Integer Linear Program (ILP) to solve it, together with an efficient heuristic based on rounding the rational solution of the ILP. In addition, we propose an exact polynomial algorithm for series-parallel graphs. We further study an extension of the approach in which the scheduler is dynamically constrained to select tasks (among the ready tasks) so that the total memory used does not exceed some threshold. We provide an extensive set of experiments, both with randomly generated graphs and with graphs arising from practical applications, which demonstrate the impact of resource constraints on peak memory usage.
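The memory behavior studied above rests on a standard liveness model: a task's output stays resident from the moment it is produced until its last consumer has executed. The sketch below (my own simplification, not the paper's red/black edge-cut formulation) computes the peak memory of one sequential execution order under that model; `preds` and `out_size` are hypothetical inputs.

```python
def peak_memory(preds, out_size, order):
    """Peak memory of executing a DAG in the given topological order.

    preds[v]    -- list of v's predecessors (whose outputs v consumes)
    out_size[v] -- size of v's output, live until its last consumer runs
    """
    # Count the remaining consumers of each node's output.
    consumers = {v: 0 for v in preds}
    for v in preds:
        for u in preds[v]:
            consumers[u] += 1
    live, peak = 0, 0
    for v in order:
        live += out_size[v]           # allocate v's output
        peak = max(peak, live)
        for u in preds[v]:            # v consumed u's output
            consumers[u] -= 1
            if consumers[u] == 0:     # last consumer done: free it
                live -= out_size[u]
    return peak
```

On a diamond DAG (a feeding b and c, which both feed d) with unit-size outputs, the peak is 3: while c runs, the outputs of a, b, and c are all live.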
  • Gokarna Sharma, Ramachandran Vaidyanathan, Jerry L. Trahan
    2021 Volume 11 Issue 1 Pages 50-77
    Published: 2021
    Released on J-STAGE: January 15, 2021
    JOURNAL OPEN ACCESS
    We consider the distributed setting of N autonomous mobile robots that operate in Look-Compute-Move (LCM) cycles and communicate with other robots using colored lights (the robots-with-lights model). This model assumes obstructed visibility, where a robot cannot see another robot if a third robot is positioned between them on the straight line connecting them. In this paper, we consider robot movements on a grid (the integer plane) of unbounded size. In any given step, a robot positioned at a grid point can move only to an adjacent grid point to its north, south, east, or west. The grid setting naturally discretizes the 2-dimensional plane and finds applications in many real-life robotic systems. The Complete Visibility problem is to reposition the N robots (starting at arbitrary, but distinct, initial positions) so that, on termination, each robot is visible to all others. The objective is to simultaneously minimize (or provide a trade-off between) two fundamental performance metrics: (i) the time to solve Complete Visibility and (ii) the area occupied by the solution. We also consider the number of distinct colors used by each robot's light. We provide the first O(max{D, N})-time algorithm for Complete Visibility in the asynchronous setting, where D is the diameter of the initial configuration. The area occupied by the final configuration is O(N^2); both the time and the area are optimal. The time bound is randomized if no symmetry-breaking mechanism is available to the robots. The number of colors used in our algorithm depends on whether leader election is required: (i) 17 colors if leader election is not required and (ii) 32 colors if it is.
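The obstructed-visibility condition on the grid has a simple computational statement: robot a sees robot b iff no other robot occupies an interior grid point of the segment ab, and those interior points exist exactly when gcd(|dx|, |dy|) > 1. A small sketch of the visibility check (illustrative only, not the paper's repositioning algorithm):

```python
from math import gcd

def visible(a, b, others):
    """True iff no robot in `others` lies strictly between grid points a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    g = gcd(abs(dx), abs(dy))
    if g <= 1:
        return True                      # segment has no interior grid point
    step = (dx // g, dy // g)
    between = {(a[0] + k * step[0], a[1] + k * step[1]) for k in range(1, g)}
    return not (between & set(others))

def complete_visibility(robots):
    """Check the Complete Visibility condition: every pair mutually visible."""
    pts = list(robots)
    return all(visible(p, q, set(pts) - {p, q})
               for i, p in enumerate(pts) for q in pts[i + 1:])
```

Three collinear robots fail the condition (the middle one blocks the endpoints), while three robots in general position satisfy it.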
  • Chung-Hsing Hsu, Neena Imam
    2021 Volume 11 Issue 1 Pages 78-101
    Published: 2021
    Released on J-STAGE: January 15, 2021
    JOURNAL OPEN ACCESS
    High Performance Computing has been a driving force behind important tasks such as scientific discovery and deep learning. It tends to achieve performance through greater concurrency and heterogeneity, where the underlying complexity of richer topologies is managed through software abstraction. In this paper, we present our assessment of NVSHMEM, an experimental programming library that supports the Partitioned Global Address Space programming model for NVIDIA GPU clusters. NVSHMEM offers several concrete advantages. One is that it reduces overheads and software complexity by allowing communication and computation to be interleaved rather than separated into different phases. Another is that it implements the OpenSHMEM specification to provide efficient fine-grained one-sided communication, streamlining away the overheads of tag matching, wildcards, and unexpected messages, which have a compounding effect as concurrency increases. It also offers ease of use by abstracting away the low-level configuration operations required to enable low-overhead communication and direct loads and stores across processes. We evaluated NVSHMEM in terms of usability, functionality, and scalability by running two math kernels, matrix multiplication and a Jacobi solver, and one full application, Horovod, on the 27,648-GPU Summit supercomputer. Our exercise of NVSHMEM at scale contributed to making NVSHMEM more robust and preparing it for production release.
Regular Paper
  • Taichi Nakamura, Yuiko Sakuma, Hiroaki Nishi
    2021 Volume 11 Issue 1 Pages 102-119
    Published: 2021
    Released on J-STAGE: January 15, 2021
    JOURNAL OPEN ACCESS
    Recently, the development of data communication networks and advances in the processing capacity of computers have significantly increased the amount of data that can be applied to a service. These data might lead to future innovations. However, the violation of data privacy has become a problem. For example, images of customers' faces captured with a surveillance camera may lead to new marketing strategies, as the reactions of the customers can be measured from their facial expressions. However, we must consider the privacy of customers when entrusting the images to a third-party data analyst. To solve this privacy problem, researchers have been developing anonymization technologies that preserve privacy by deleting private information from the original data. However, conventional anonymization techniques cannot appropriately anonymize high-dimensional data such as facial images, because they consider neither the complex relationships between dimensions nor the semantic loss of the data. Meanwhile, machine learning has been studied actively; in particular, neural networks (NNs) have developed remarkably since the advent of AlexNet [8]. Machine learning and anonymization share a common underlying idea: abstracting statistical information from a given dataset. Therefore, machine learning technology might enhance the functionality of anonymization techniques. In this study, based on this common idea, we propose a method that applies the results of machine learning to anonymization, named the multi-input k-anonymizer unit (MIKU). Notably, MIKU has two modules, called the S and G maps, for mapping given data; NNs are used in these modules to generate anonymized data that are more natural to humans than directly anonymized data, which are generated only by processing the pixel values of facial images.
    To evaluate MIKU, a direct anonymization method, which uses no NN, is applied to the facial images for comparison. The facial images of CelebA [9] are used for both qualitative and quantitative evaluations. The qualitative evaluation is conducted by analyzing the anonymized facial images obtained using the different methods, and the quantitative evaluation is performed using the Fréchet inception distance (FID) [4]. In the qualitative evaluation, there are cases where the quality of the images generated by the comparison method is low because unnatural edges and blurs appear on the anonymized facial images; MIKU maintains the quality and attributes of the original facial images even in those cases. In the quantitative evaluation, a different result is obtained for k = 2 anonymity; however, for anonymity greater than 2, the facial images anonymized using MIKU have higher quality than those anonymized using the comparison method.
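Direct k-anonymization of the kind MIKU is compared against can be sketched on scalar records: cluster the records and replace each one by its cluster centroid, so that at least k records share every published value. Applied to pixel values, this same averaging is what produces the blurring artifacts the qualitative evaluation describes. A minimal sketch with hypothetical names (`k_anonymize`); output records are returned in sorted order:

```python
def k_anonymize(records, k):
    """Direct k-anonymity sketch: sort, group into clusters of size >= k,
    and replace each record by its cluster's centroid, so every output
    value is shared by at least k records."""
    recs = sorted(records)
    groups = [recs[i:i + k] for i in range(0, len(recs), k)]
    if len(groups) > 1 and len(groups[-1]) < k:   # merge an undersized tail
        groups[-2].extend(groups.pop())
    out = []
    for g in groups:
        centroid = sum(g) / len(g)                # averaging = "blurring"
        out.extend([centroid] * len(g))
    return out
```

For image data each record would be a pixel vector and the centroid a pixel-wise mean; MIKU instead maps the inputs through its learned S and G modules to avoid exactly this loss of naturalness.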