GINN: Geometric Illustration of Neural Networks

Different seeds were used during training to create 4 different visualisations, each with its own dynamics:

Clicking the buttons above takes you to interactive illustrations of the non-linear boundaries of neurons in a neural network. For each neuron in a network with 3 hidden layers of 16 neurons each, the point at which the neuron switches from inactive to active can be visualised as a line (for ReLUs, as used here). The image on the left can show either the target data or the network's predictions, and the loss curve at the top accompanies the visualisation to relate loss and performance.
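To make the boundary idea concrete, here is a minimal sketch (with made-up weights; the actual trained parameters are not given in this page) of why a first-layer ReLU neuron's activation boundary over the (x, y) input plane is a straight line:

```python
import numpy as np

# Hypothetical first-layer neuron: weights (w1, w2) and bias b.
# A ReLU neuron is inactive where w1*x + w2*y + b <= 0 and active where > 0,
# so its activation boundary is the line w1*x + w2*y + b = 0.
w1, w2, b = 0.8, -0.5, 0.1

def boundary_y(x):
    """y-coordinate of the boundary line at a given x (assumes w2 != 0)."""
    return -(w1 * x + b) / w2

xs = np.linspace(0.0, 1.0, 5)
for x, y in zip(xs, boundary_y(xs)):
    # Points on this line have exactly zero pre-activation.
    assert abs(w1 * x + w2 * y + b) < 1e-9
```

For neurons in deeper layers the boundary is piecewise linear rather than a single line, which is what gives the visualisations their folded appearance.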

An individual data item is an (x, y) pixel location and the target is a single binary value representing either a black or white intensity.

Therefore, this visualisation shows the ENTIRE DATA DOMAIN at all times - the image on the left is NOT the input to the network. Instead, each individual pixel location is fed into the network to produce a prediction, and the full image is assembled from those per-pixel predictions.
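A minimal sketch of how such an image can be rendered, using randomly initialised (not trained) parameters for the 2 → 16 → 16 → 16 → 1 architecture assumed from the description:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for a 2 -> 16 -> 16 -> 16 -> 1 ReLU network.
sizes = [2, 16, 16, 16, 1]
Ws = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
bs = [rng.normal(size=n) for n in sizes[1:]]

def predict(points):
    """Forward pass: (N, 2) pixel coordinates -> (N,) white-pixel probabilities."""
    h = points
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.maximum(h @ W + b, 0.0)          # ReLU hidden layers
    logits = h @ Ws[-1] + bs[-1]
    return 1.0 / (1.0 + np.exp(-logits[:, 0]))  # sigmoid output

# Render the ENTIRE data domain: one prediction per pixel of a 64x64 image.
side = 64
xs, ys = np.meshgrid(np.linspace(0, 1, side), np.linspace(0, 1, side))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)   # (side*side, 2)
image = predict(grid).reshape(side, side)           # predicted intensities
```

The image on the left is exactly this kind of grid of predictions (or, in the other mode, the training targets at the same grid of locations).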

Help and Features

Roughly from the top down, these are the features of this demonstration.

Loss Curve, top The loss as training progressed for this run. Note that the horizontal axis is neither linear nor logarithmic: a varying set of training iterations was selected for visualisation, with more points corresponding to early training.
Slider This slider is the mechanism for exploring how things change during training. Select some neurons (bottom right) and scroll through the training iterations.
Image, bottom left This image can either show the network output (predictions) at all possible input locations, thus visualising the function the network has learned, or it can show the original training data. The buttons to the right switch between these states.
Buttons, bottom right These buttons allow you to select and deselect neurons for visualisation, to explore how their non-linear boundaries change as learning progresses.
Spacebar, +, - Spacebar sets the slider in automatic motion, + increases its speed, and - decreases it.
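The non-uniform selection of training iterations mentioned for the loss curve could, for instance, be produced with log-spaced indices (a sketch; the actual selection scheme used by the demo is not specified here):

```python
import numpy as np

# Hypothetical: pick up to 50 checkpoints from 10,000 iterations, denser early on.
total_iters = 10_000
checkpoints = np.unique(np.logspace(0, np.log10(total_iters), 50).astype(int))
# Early iterations are sampled far more densely than late ones,
# so consecutive gaps grow as training progresses.
```

Duplicate early indices collapse under `np.unique`, which is why fewer than 50 distinct checkpoints survive.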

Network diagram: colours match the illustration


The inputs are (x, y) pixel locations and the output is the predicted probability that the given (x, y) location is a white pixel. The network is trained as a binary classifier, with black and white as the two classes.
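For a two-class setup like this, the training objective would typically be binary cross-entropy on each pixel's predicted white-probability (a sketch of the standard loss; the page does not state which loss was actually used):

```python
import numpy as np

def bce_loss(p, target):
    """Binary cross-entropy for one pixel: target is 1 (white) or 0 (black),
    p is the network's predicted probability of white."""
    eps = 1e-12  # guard against log(0)
    p = np.clip(p, eps, 1 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))

# A confident correct prediction gives low loss; a confident wrong one, high loss.
print(bce_loss(0.9, 1))  # ~0.105
print(bce_loss(0.1, 1))  # ~2.303
```

Averaging this loss over all pixel locations gives the scalar tracked by the loss curve at the top of the visualisation.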

Each arrow in the diagram represents a weight in the network; each neuron additionally has an associated bias.
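As a worked count, assuming the 2 → 16 → 16 → 16 → 1 architecture described above:

```python
# Parameter count for the assumed 2 -> 16 -> 16 -> 16 -> 1 architecture:
# each layer contributes (fan_in * fan_out) weights plus fan_out biases.
sizes = [2, 16, 16, 16, 1]
weights = sum(m * n for m, n in zip(sizes[:-1], sizes[1:]))  # 32+256+256+16
biases = sum(sizes[1:])                                      # 16+16+16+1
print(weights, biases, weights + biases)  # 560 49 609
```

So the diagram's arrows correspond to 560 weights, and the neurons carry a further 49 biases.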