affNIST

Download: here

The affNIST dataset for machine learning is based on the well-known MNIST dataset. MNIST, however, has become quite a small set, given the power of today's computers, with their multiple CPUs and sometimes GPUs. affNIST is made by taking images from MNIST and applying various reasonable affine transformations to them. In the process, the images become 40x40 pixels, with significant translations involved, so much of the challenge for the models is to learn that a digit means the same thing in the upper right corner as it does in the lower left corner.

Research into "capsules" has suggested that it is beneficial to directly model the position (or more general "pose") in which an object is found. affNIST aims to facilitate that by providing the exact transformation that has been applied to make each data case, as well as the original 28x28 image. This allows one to train a model to normalize the input, or to at least recognize in which ways it has been deformed from a more normal image.

Another effect of the transformations is that there is simply much more data: every original MNIST image has been transformed in many different ways. In theory it's an infinite dataset; in practice it's based on 70,000 originals and I've made 32 randomly chosen transformed versions of each original (a different 32 for each original), leading to a total of about two million training + validation cases.

Here are some examples. The left column shows the original MNIST digit (centered in a 40x40 image), and the other 16 columns show transformed versions.

Data representation

The dataset is split into training, validation, and test data. The test data was created by transforming the 10,000 test cases from the original MNIST dataset, the training data came from 50,000 MNIST training cases, and the validation data came from the remaining 10,000 MNIST training cases.

The data is provided in the widely used Matlab format, which is also perfectly legible to Python programs through the scipy.io.matlab.loadmat function.
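Since each file stores its data column-per-case, a short round-trip sketch shows the loading pattern. The variable names ("image", "label_int") and the in-memory file used here are illustrative assumptions, not the dataset's documented component names; with the real download you would pass the file path to loadmat instead:

```python
import io

import numpy as np
from scipy.io import loadmat, savemat

# Build a tiny stand-in .mat file in memory: a column-per-case matrix,
# with each 40x40 image flattened into a 1600-element column.
images_src = (np.arange(1600 * 5) % 256).astype(np.uint8).reshape(1600, 5)
buf = io.BytesIO()
savemat(buf, {"image": images_src,
              "label_int": np.arange(5).reshape(1, 5)})
buf.seek(0)

data = loadmat(buf)                        # for the real data: loadmat("path/to/file.mat")
images = data["image"]                     # shape (1600, 5): one column per case
first_case = images[:, 0].reshape(40, 40)  # back to a 40x40 pixel image
print(images.shape, first_case.shape)
```

The same indexing pattern (take a column, reshape to 40x40) applies to each of the matrix-valued components.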

For completeness, three versions of the dataset are provided. The data contains eight components, all of them stored in a matrix where each column describes one training case.

The representation of affine transformations

If one applies rotation, shearing, scaling, and translation, then the order of those operations matters. My experiments (unpublished) with capsules suggest that the easiest order for a neural network (and perhaps for a human, too) to understand is that first rotation is applied, then shearing, then scaling, and finally translation. For example, applying translation first and rotation second would have the undesirable effect that a translation to the right might instead end up moving the image downward, if the rotation is 90 degrees.

Another way to make the numbers in the "nice" representation easier for both humans and artificial neural networks to understand is to make the origin not be in the upper left corner, but rather in the center of the image, i.e. right between the 4 most central pixels.

On the other hand, the matrices that describe the transformation simply tell you how to linearly go from homogeneous coordinates in one space to homogeneous coordinates in another. In both of those spaces, the origin is at the upper left pixel.

The matrices have the advantage that they describe a linear transformation of coordinates. The "nice" representation has the advantage that it describes the transformation in a more intuitive way.
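As a concrete sketch of how the two representations relate, the composition described above (rotation first, then shearing, then scaling, then translation, with the "nice" origin at the image center) can be assembled into a single homogeneous-coordinate matrix. The parameter names and the exact shear and scale conventions below are my own illustrative assumptions, not the dataset's field definitions:

```python
import numpy as np

def nice_to_matrix(rot_deg, shear, scale_x, scale_y, tx, ty, size=40):
    """Compose a 3x3 homogeneous affine matrix from 'nice'-style parameters.

    Operations are applied in the order the text describes: rotation,
    then shearing, then scaling, then translation.  The parameters use
    an origin at the image center (right between the 4 most central
    pixels); the returned matrix acts on coordinates whose origin is
    the upper left pixel, as the dataset's matrices do.
    """
    c = (size - 1) / 2.0  # 19.5 for a 40x40 image: the exact center
    a = np.deg2rad(rot_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0, 0.0, 1.0]])
    Sh = np.array([[1.0, shear, 0.0],   # horizontal shear (one convention of several)
                   [0.0, 1.0,   0.0],
                   [0.0, 0.0,   1.0]])
    S = np.diag([scale_x, scale_y, 1.0])
    T = np.array([[1.0, 0.0, tx],
                  [0.0, 1.0, ty],
                  [0.0, 0.0, 1.0]])
    # Shift the origin to the center, apply the four operations, shift back.
    C = np.array([[1.0, 0.0, -c], [0.0, 1.0, -c], [0.0, 0.0, 1.0]])
    Cinv = np.array([[1.0, 0.0, c], [0.0, 1.0, c], [0.0, 0.0, 1.0]])
    return Cinv @ T @ S @ Sh @ R @ C

# The image center is a fixed point of a pure rotation:
M = nice_to_matrix(90, 0.0, 1.0, 1.0, 0.0, 0.0)
print(np.round(M @ np.array([19.5, 19.5, 1.0]), 6))
```

Composing the same four matrices in a different order (e.g. rotation after translation) generally yields a different overall matrix, which is exactly why fixing the order matters for interpreting the "nice" numbers.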

Miscellaneous

If you have any published work that uses affNIST, please let me know and I'll place a link to it here.

Neural networks enjoy having much training data, but computers can sometimes find it a bit hefty. To make the download easier, I've provided zipped versions of the files. However, after you unzip them, the data is still big. In case your computer finds it easier to load just a little bit of training data at a time (my computer certainly does), I've also made the data available split up in batches. Each batch contains one transformation of every MNIST original.

I made 32 different transformations of each MNIST training case, meaning that there are about two million training / validation data cases. If you'd like to use more, e.g. 64 different transformations, please let me know.

The affNIST dataset is made freely available, without restrictions, to whoever wishes to use it, in the hope that it may help advance machine learning research, but without any warranty.



Most recent edit: August 5th, 2013.

Home page