All the other demos are examples of Supervised Learning, so in this demo I wanted to show an example of Unsupervised Learning. We are going to train an autoencoder on MNIST digits.
An autoencoder is a regression task where the network is asked to predict its input (in other words, to model the identity function). Sounds simple enough, except the network has a tight bottleneck of a few neurons in the middle (in the default example only two!), forcing it to learn an effective low-dimensional code from which the decoder can reproduce the original input.
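To make the idea concrete, here is a toy sketch in plain NumPy (not the demo's actual ConvNetJS code): a linear autoencoder with a 2-neuron bottleneck, trained by gradient descent to predict its own input. The sizes are shrunk (4 inputs instead of 784) and the data is constructed to lie in a 2-D subspace, so a 2-number code can in principle reconstruct it.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_code, n_samples = 4, 2, 256

# Data that lies in a 2-D subspace, so a 2-number code suffices.
B = rng.normal(size=(n_code, n_in))
X = rng.normal(size=(n_samples, n_code)) @ B

W_enc = rng.normal(0, 0.5, (n_in, n_code))   # encoder: input -> code
W_dec = rng.normal(0, 0.5, (n_code, n_in))   # decoder: code -> input

lr = 0.02
loss0 = np.mean((X @ W_enc @ W_dec - X) ** 2)
for _ in range(1000):
    code = X @ W_enc              # the bottleneck: 2 numbers per sample
    err = code @ W_dec - X        # regression error against the input itself
    # gradient steps (constant factors folded into the learning rate)
    W_dec -= lr * code.T @ err / n_samples
    W_enc -= lr * X.T @ (err @ W_dec.T) / n_samples
loss1 = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(loss0, loss1)               # reconstruction error shrinks with training
```

The real demo uses a deeper nonlinear network and 784-pixel MNIST inputs, but the training signal is the same: the target of the regression is the input itself.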
In the visualization below the x and y axes are the activations of two neurons at some layer, plotted for every digit. Of special interest is the bottleneck layer, because we think of it as the compressed code of every digit. For example, when the network sees the digit 8 (which is 784 numbers giving the pixel values), it compresses that down to 2 numbers: the activation of neuron 1 and the activation of neuron 2. These two values are enough for the decoder network that follows to reproduce all 784 original numbers. As an example, suppose an 8 activates neuron 1 to 0.5 and neuron 2 to 0.9; we would then plot that digit 8 at position (0.5, 0.9) in the visualization.
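The digit-8 example above can be traced through a hypothetical encoder/decoder pair. The weights below are random and untrained, purely to show the shapes involved: 784 pixels go in, a 2-number code comes out of the bottleneck, and the decoder expands that code back to 784 numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical untrained weights, just to trace the shapes in the text.
W_enc = rng.normal(0, 0.05, (784, 2))
W_dec = rng.normal(0, 0.05, (2, 784))

pixels = rng.random(784)                # a digit as 784 pixel values
code = sigmoid(pixels @ W_enc)          # 2 numbers: the bottleneck code
reconstruction = sigmoid(code @ W_dec)  # the decoder's 784 numbers

print(code.shape, reconstruction.shape)
```

The two entries of `code` are exactly the (x, y) coordinates at which that digit would be drawn in the visualization.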