The 20th Workshop on Advances in Parallel and Distributed Computational Models (APDCM), held in conjunction with the International Parallel and Distributed Processing Symposium (IPDPS) on May 21--25, 2018, in Vancouver, Canada, aims to provide a timely forum for the exchange and dissemination of new ideas, techniques, and research in the field of parallel and distributed computational models.
The APDCM workshop has a history of attracting participation from renowned researchers worldwide. The program committee encouraged the authors of accepted papers to submit full versions of their manuscripts to the International Journal of Networking and Computing (IJNC) after the workshop. After a thorough reviewing process, with extensive discussions, five articles on various topics have been selected for publication in the IJNC special issue on APDCM.
On behalf of the APDCM workshop, we would like to express our appreciation for the great efforts of the reviewers who reviewed the papers submitted to this special issue. Likewise, we thank all the authors for submitting their excellent manuscripts. We also express our sincere thanks to the editorial board of the International Journal of Networking and Computing, in particular to the Editor-in-Chief, Professor Koji Nakano. This special issue would not have been possible without his support.
Large-scale platforms currently experience errors from two different sources, namely fail-stop errors (which interrupt the execution) and silent errors (which strike unnoticed and corrupt data). This work combines checkpointing and replication for the reliable execution of linear workflows on platforms subject to these two error types. While checkpointing and replication have each been studied separately, their combination has not yet been investigated, despite its promising potential to minimize the execution time of linear workflows in error-prone environments. In particular, combined checkpointing and replication has not yet been studied in the presence of both fail-stop and silent errors. The combination raises new problems: for each task, we have to decide whether to checkpoint and/or replicate it to ensure its reliable execution. We provide an optimal dynamic programming algorithm of quadratic complexity to solve both problems. This dynamic programming algorithm has been validated through extensive simulations that reveal the conditions under which checkpointing only, replication only, or the combination of both techniques leads to improved performance.
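To give a flavor of this kind of quadratic dynamic program, the sketch below places checkpoints for a linear chain of tasks under fail-stop errors only. It is our simplified illustration, not the paper's algorithm (which also handles replication and silent errors); the exponential-failure cost model, the function names, and the parameter values are all assumptions.

```python
import math

def expected_segment_time(w, C, lam, R):
    """Expected time to execute work w followed by a checkpoint of cost C,
    under exponential fail-stop errors of rate lam with recovery cost R
    (a standard first-order model; an assumption for this sketch)."""
    return math.exp(lam * R) * (1.0 / lam) * math.expm1(lam * (w + C))

def optimal_checkpoints(weights, C, lam, R):
    """Quadratic DP over a linear chain of tasks: dp[i] is the minimum
    expected time to complete tasks 1..i, with a checkpoint taken after
    task i.  choice[i] records where the previous checkpoint was placed."""
    n = len(weights)
    prefix = [0.0] * (n + 1)          # prefix sums of task work
    for i, w in enumerate(weights, 1):
        prefix[i] = prefix[i - 1] + w
    dp = [0.0] + [math.inf] * n       # dp[0] = 0: initial state is saved
    choice = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):            # j = index of the previous checkpoint
            cand = dp[j] + expected_segment_time(prefix[i] - prefix[j], C, lam, R)
            if cand < dp[i]:
                dp[i], choice[i] = cand, j
    return dp[n], choice
```

The double loop over (j, i) pairs is what makes the complexity quadratic in the number of tasks.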
Input/output (I/O) from various sources often contends for scarce bandwidth. For example, checkpoint/restart (CR) protocols can help to ensure application progress in failure-prone environments. However, CR I/O alongside an application's normal, requisite I/O can increase I/O contention and might negatively impact performance. In this work, we consider different aspects (system-level scheduling policies and hardware) that optimize the overall performance of concurrently executing CR-based applications that share I/O resources. We provide a theoretical model and derive a set of necessary constraints to minimize the global waste on a given platform. Our results demonstrate that Young/Daly's optimal checkpoint interval, despite providing a sensible metric for a single, undisturbed application, is not sufficient to optimally address resource contention at scale. We show that by combining optimal checkpointing periods with contention-aware system-level I/O scheduling strategies, we can significantly improve overall application performance and maximize the platform throughput. Finally, we evaluate how specialized hardware, namely burst buffers, may help to mitigate the I/O contention problem. Overall, these results provide critical analysis and direct guidance on how to design efficient, CR-ready, large-scale platforms without a large investment in the I/O subsystem.
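For readers unfamiliar with the Young/Daly interval mentioned above, the following minimal sketch shows the classical first-order model for a single, undisturbed application: the waste per unit time is the checkpoint overhead C/T plus the expected re-execution loss T/(2·MTBF), and it is minimized at T = sqrt(2·C·MTBF). The function names and numeric values are our own illustration.

```python
import math

def daly_period(C, mtbf):
    """Young/Daly first-order optimal checkpoint period: sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * C * mtbf)

def waste(T, C, mtbf):
    """First-order fraction of time lost with period T: paying C every T
    seconds, plus re-executing half a period on average after each failure."""
    return C / T + T / (2.0 * mtbf)

# Example (assumed values): 60 s checkpoints, one failure per day on average.
T_opt = daly_period(60.0, 86400.0)
```

As the paper argues, this single-application optimum says nothing about what happens when many such applications contend for the same I/O subsystem.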
The increasingly crowded wireless spectrum has led to the need for new paradigms in wireless communications, and cognitive radio techniques offer promising solutions to spectrum scarcity. In cognitive radio networks, licensed primary users and unlicensed secondary users are allowed to coexist in the same frequency spectrum. Beamforming and MIMO technology can be used to minimize the interference from the secondary users to the primary users while improving the quality of communications, provided each node is equipped with multiple antennas to form an antenna array. However, equipping each radio node with multiple antennas is not feasible in many applications. In this paper, we consider a radio network with a single antenna at each node. We first propose a cooperative network architecture in the network layer. The architecture consists of cooperative clusters used for distributed beamforming, and a routing backbone of the clusters that avoids interference to the primary users along the relay route. Distributed algorithms are designed for self-formation of the cooperative clusters and the routing backbone. We then propose a computationally efficient secondary-user selection scheme in the link layer for the communications between two cooperative clusters that minimizes the interference to the primary users. The simulation results show that the proposed protocols and algorithms are effective and efficient in terms of time and energy.
Due to worsening machine balance, a lightweight irregular application can utilize only a small fraction of the peak computational capacity of modern processors. The performance of such an application is also unpredictable due to its scattered data accesses. Even though architectural features that reduce the cost of irregular accesses, such as cache hierarchies and hardware prefetchers, are commonly found in modern processors, their design parameters differ widely from one processor to another. Therefore, a performance-improving programming technique still needs extensive tuning to gain maximum benefit on a target processor, and achieving portable performance becomes difficult. This work proposes a block streaming machine model and hypothesizes that an algorithm based on the model has predictable execution time. To enable adoption of this model for irregular applications, we also provide algorithmic transformations that can be used to replace the scattered accesses with streaming accesses in a cost-predictable way. Further, we experimentally demonstrate the usefulness of the model and the transformations for static lightweight irregular computations, such as those performed by a numerical partial differential equation solver, on modern multicore processors.
In this paper, we consider a uniform k-partition problem in a population protocol model. The uniform k-partition problem divides a population into k groups of the same size. For this problem, we give a symmetric protocol with designated initial states under global fairness. The proposed protocol requires 3k-2 states for each agent. Since any protocol for the uniform k-partition problem requires Ω(k) states to indicate a group, the space complexity of the proposed protocol is asymptotically optimal.
Order preserving encryption techniques are regarded as some of the most efficient encryption schemes for securing numeric data in a database. Such schemes are popular because they resolve performance degradation issues, which are significant problems in database encryption. However, in some applications the order itself is sensitive information and should be hidden. Conventional order preserving encryption techniques published so far do not consider this issue. Therefore, in this study, we consider three techniques that protect the order information while maintaining good performance. The three methods hide the data order in such a way that comparison operators can be handled efficiently and performance degradation can be prevented. Our methods work on top of an order preserving encryption scheme and enhance the security of data. Experimental results demonstrate the efficiency and effectiveness of the three proposed methods.
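To illustrate the order-preserving property that these methods build upon (and that, as the paper points out, leaks the order), here is a deliberately naive toy: each plaintext is assigned a strictly increasing random ciphertext, so range and comparison queries run directly on ciphertexts. This sketch is not secure and is not one of the paper's order-hiding methods; all names and parameters are our own.

```python
import random

def keygen(domain_size, max_gap=1000, seed=42):
    """Toy order-preserving 'key': a strictly increasing random mapping
    from plaintexts 0..domain_size-1 to ciphertexts.  For illustration of
    the order-preserving property only -- NOT a secure scheme."""
    rng = random.Random(seed)
    table, c = [], 0
    for _ in range(domain_size):
        c += rng.randrange(1, max_gap)  # strictly positive random gap
        table.append(c)
    return table

def encrypt(table, m):
    """Encrypt plaintext m by table lookup; monotone by construction."""
    return table[m]

table = keygen(100)
```

Because the mapping is monotone, a database can evaluate `<`, `>`, and BETWEEN on ciphertexts without decrypting, which is exactly the performance benefit and the order leak that the paper's three methods address.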