Intro
In this post we train a standard convolutional architecture on MNIST and then feed random noise inputs to the trained neural net. The standard architecture outputs a softmax over the 10 class labels, interpreted as the probability of each class given the input. On a noise input I suppose we would want all 10 probabilities to be small; specifically, close to 1/10, since they must sum to 1.
Results
The following histogram was produced by feeding 500 random noise inputs into an MNIST-trained network and recording the maximum class probability for each image. We can see, for instance, that ~20 of the noise images activated a class probability of over 90%.
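A minimal sketch of this measurement, using NumPy. The stand-in "model" here is just a random linear layer followed by a softmax; it is hypothetical and not the trained convnet from the experiment, but the noise generation, max-probability extraction, and histogram binning are the same:

```python
import numpy as np

def softmax(logits):
    # Subtract the row-wise max for numerical stability.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained MNIST classifier:
# a single random linear layer mapping 784 pixels to 10 logits.
W = rng.normal(size=(28 * 28, 10))
b = np.zeros(10)

# 500 uniform-noise "images", flattened to 784-dim vectors.
noise = rng.uniform(0.0, 1.0, size=(500, 28 * 28))

probs = softmax(noise @ W + b)   # shape (500, 10), rows sum to 1
max_probs = probs.max(axis=1)    # maximally activated class probability per image

# Bin the maxima into a 10-bucket histogram, as in the figure.
counts, edges = np.histogram(max_probs, bins=10, range=(0.0, 1.0))
```

With a real trained model you would replace the `noise @ W + b` line with a forward pass through the network; the rest of the bookkeeping is unchanged. Note that the maximum of 10 probabilities summing to 1 is always at least 1/10, so the histogram's support starts at 0.1.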
Rather than spreading probability close to uniformly across the 10 classes, the trained net is often highly confident about a single class. It does not appear that neural nets handle noise inputs well.