2021 Volume 10 Issue 8 Pages 463-468
Distributed large-scale neural networks are widely used across organizations for complicated image recognition and natural language processing tasks, but keeping the nodes (i.e., computation servers) and their network topology operational incurs a high maintenance cost in a churn environment, where node insertions and deletions occur frequently. To reduce this cost, we proposed the Distributed Skip Mesh List architecture, which provides high stability against node insertion/deletion and automatic node management for distributed large-scale neural networks. In our evaluation, we confirmed that it reduces the maintenance cost (e.g., the number of messages transmitted to manage nodes) while maintaining high stability.
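The abstract does not specify the Distributed Skip Mesh List algorithm itself, but as background, the key property of skip-list-style structures is that an insertion or deletion touches only O(log n) links, which is what keeps per-node maintenance (and hence maintenance messages in a distributed setting) cheap under churn. The sketch below is a minimal, single-machine probabilistic skip list for illustration only; the class and method names are our own and do not reflect the paper's implementation.

```python
import random

class _Node:
    def __init__(self, key, level):
        self.key = key
        # forward[i] points to the next node at level i
        self.forward = [None] * (level + 1)

class SkipList:
    """Illustrative probabilistic skip list (not the paper's structure):
    expected O(log n) search, and each insert rewires only O(log n)
    links, bounding the maintenance work per membership change."""

    def __init__(self, max_level=8, p=0.5):
        self.max_level = max_level
        self.p = p           # probability of promoting a node one level
        self.level = 0       # highest level currently in use
        self.head = _Node(None, max_level)

    def _random_level(self):
        lvl = 0
        while random.random() < self.p and lvl < self.max_level:
            lvl += 1
        return lvl

    def insert(self, key):
        # Record, at each level, the last node before the insertion point.
        update = [None] * (self.max_level + 1)
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] is not None and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        if lvl > self.level:
            for i in range(self.level + 1, lvl + 1):
                update[i] = self.head
            self.level = lvl
        new = _Node(key, lvl)
        # Only these lvl+1 links change: the bounded-maintenance property.
        for i in range(lvl + 1):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def contains(self, key):
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] is not None and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key
```

In a distributed overlay, the analogous property is that a joining or leaving server needs to exchange messages with only a logarithmic number of neighbors rather than the whole network, which is the kind of message reduction the evaluation measures.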