Convolutional neural networks have high expressive power, but this capacity makes them prone to overfitting. The dropout technique was proposed to mitigate overfitting, but it is sometimes insufficient because convolutional neural networks contain a very large number of weights and biases. In this paper, we propose combining dropout with parallel convolution. In our method, the network has two parallel convolution layers, and the average of their outputs is used as the input to the pooling layer. The proposed network improved generalization accuracy by about 4% on unknown test patterns.
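The core architectural idea above (two parallel convolution layers whose outputs are averaged before pooling) can be sketched as follows. This is a minimal single-channel NumPy illustration, not the authors' implementation; the function names, filter sizes, and the 2x2 max-pooling choice are assumptions for the example.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'valid' 2-D convolution (cross-correlation) of one channel."""
    kh, kw = w.shape
    h, wd = x.shape
    out = np.empty((h - kh + 1, wd - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def parallel_conv(x, w1, w2):
    """Average the outputs of two parallel convolution layers."""
    return 0.5 * (conv2d(x, w1) + conv2d(x, w2))

def max_pool(x, size=2):
    """Max pooling with stride = size; dims assumed divisible by size."""
    h, w = x.shape
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.standard_normal((6, 6))
w1 = rng.standard_normal((3, 3))   # first parallel filter
w2 = rng.standard_normal((3, 3))   # second parallel filter
feat = parallel_conv(img, w1, w2)  # averaged feature map, shape (4, 4)
pooled = max_pool(feat)            # input to the next layer, shape (2, 2)
```

Dropout would then be applied as usual (e.g., zeroing units at training time), independently of the parallel-averaging step shown here.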