This paper is a couple of years old but still relevant: the dangers of over-fitting a neural network are substantial.

… we train several standard architectures on a copy of the data where the true labels were replaced by random labels. Our central finding can be summarized as:

Deep neural networks easily fit random labels… The effective capacity of neural networks is sufficient for memorizing the entire data set… Extending on this first set of experiments, we also replace the true images by completely random pixels and observe that convolutional neural networks continue to fit the data with zero training error. This shows that despite their structure, convolutional neural nets can fit random noise… Explicit regularization may improve generalization performance, but is neither necessary nor by itself sufficient for controlling generalization error.
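The memorization effect is easy to reproduce in miniature. The sketch below is not the paper's experimental setup (they used standard convolutional architectures on CIFAR-10 and ImageNet); it is a hypothetical toy version: a small two-layer MLP trained on random Gaussian "images" with random binary labels. With far more parameters than samples and no regularization, plain gradient descent drives training error toward zero even though the labels carry no signal.

```python
import numpy as np

# Toy illustration of the memorization finding (not the paper's setup):
# random inputs, random labels, overparameterized two-layer MLP.
rng = np.random.default_rng(0)

n, d, h = 32, 64, 256                    # samples, input dim, hidden units
X = rng.standard_normal((n, d))          # "completely random pixels"
y = rng.integers(0, 2, n).astype(float)  # random binary labels

W1 = rng.standard_normal((d, h)) * 0.1
b1 = np.zeros(h)
W2 = rng.standard_normal(h) * 0.1
b2 = 0.0

lr = 0.5
for _ in range(5000):
    z = X @ W1 + b1
    a = np.maximum(z, 0)                              # ReLU
    logits = a @ W2 + b2
    p = 1.0 / (1.0 + np.exp(-np.clip(logits, -30, 30)))  # sigmoid
    # Gradient of mean binary cross-entropy w.r.t. logits:
    g = (p - y) / n
    gW2 = a.T @ g
    gb2 = g.sum()
    gz = np.outer(g, W2) * (z > 0)                    # backprop through ReLU
    gW1 = X.T @ gz
    gb1 = gz.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Training accuracy on the memorized noise
a = np.maximum(X @ W1 + b1, 0)
p = 1.0 / (1.0 + np.exp(-np.clip(a @ W2 + b2, -30, 30)))
train_acc = float(((p > 0.5) == (y == 1)).mean())
print(train_acc)  # should approach 1.0: the net fits pure noise
```

The point of the toy, as in the paper, is that nothing about the labels is learnable; the network's capacity alone accounts for the fit, so zero training error here says nothing about generalization.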