Abstract
Deep neural networks and their training methods (deep learning) have attracted much attention in recent years, with many successful results. Although error backpropagation is commonly used to train multilayer neural networks, its performance depends heavily on the initial parameters of the network. In this research, we propose a method for improving Xavier Initialization for neural networks with ReLU (Rectified Linear Unit) activations, as well as a further modification of Xavier Initialization for neural networks that deal with high-dimensional numbers.
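As background for the baseline being modified (this is the standard formulation, not the method proposed in this paper), Xavier Initialization draws each weight of a layer with $n_{\mathrm{in}}$ inputs and $n_{\mathrm{out}}$ outputs so that its variance balances the signal across layers:

\[
W_{ij} \sim \mathcal{U}\!\left(-\sqrt{\frac{6}{n_{\mathrm{in}}+n_{\mathrm{out}}}},\ \sqrt{\frac{6}{n_{\mathrm{in}}+n_{\mathrm{out}}}}\right),
\qquad
\operatorname{Var}(W_{ij}) = \frac{2}{n_{\mathrm{in}}+n_{\mathrm{out}}}.
\]

This scaling assumes roughly linear (or symmetric, e.g. tanh) activations; because ReLU zeroes out about half of its inputs, a larger weight variance is generally needed to preserve signal variance, which is the issue the proposed modifications address.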