TensorFlow Keras Basic Image Classification Tutorial

In this tutorial, we show you how to use the PEDL API to train an image classifier on the Fashion MNIST dataset. This tutorial is inspired by the official TensorFlow Basic Image Classification Tutorial.

This tutorial requires the PEDL CLI. For installation instructions, see Install PEDL CLI.

Overview

In this tutorial, we will walk you through a PEDL implementation of a Fashion MNIST image classifier. The full code is available for download at MNIST TF KERAS or in the Code Sample section.

PEDL requires two functions to build a TensorFlow Keras model:

  1. make_data_loaders

  2. build_model

In our code example, the file main.py contains both of these functions.

Additionally, the PEDL framework expects two files to be provided:

  1. an entry point (__init__.py)

  2. an experiment configuration file (const.yaml)

__init__.py is our entry point. It imports make_data_loaders and the model class, which contains build_model. The experiment configuration file (*.yaml) contains the hyperparameters as well as details about the searcher, such as the metric used for optimization.
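For reference, a minimal entry point might look like the sketch below. It assumes the model code lives in main.py in the same directory (as in the Code Sample section); the exact import style may differ depending on how PEDL loads your model definition directory.

# __init__.py -- a minimal sketch of the entry point, assuming the model
# code lives in main.py in the same directory.
from .main import MNISTTrial, make_data_loaders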

To build and train our model, we use the tensorflow and tf.keras libraries.

import tensorflow as tf
from tensorflow import keras

Downloading and Preparing the Fashion MNIST Dataset

We create a data loader based on the Fashion MNIST dataset. We first download the data through the keras.datasets API and, after rescaling the pixel values from [0, 255] to [0, 1], turn it into an InMemorySequence.

def make_data_loaders(experiment_config, hparams):
    # Download the Fashion MNIST dataset via the Keras datasets API.
    fashion_mnist = keras.datasets.fashion_mnist
    (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

    # Rescale pixel values from [0, 255] to [0, 1].
    train_images, test_images = train_images / 255.0, test_images / 255.0

    # Read the batch size from the hyperparameters section of the experiment configuration.
    batch_size = pedl.get_hyperparameter("batch_size")
    train = data.InMemorySequence(data=train_images, labels=train_labels, batch_size=batch_size)
    test = data.InMemorySequence(data=test_images, labels=test_labels, batch_size=batch_size)

    return train, test
[Figure: sample images from the Fashion MNIST dataset]

Building the Model

Each PEDL experiment requires the implementation of a trial class. This class must inherit from the base class that corresponds to the framework we would like to use.

In our case, we are building a tf.keras model, so our trial class subclasses PEDL's TFKerasTrial class. In the trial class, we define our build_model function.

We use the standard tf.keras API to build a keras.Sequential model:

class MNISTTrial(TFKerasTrial):
    def build_model(self, hparams) -> keras.Sequential:
        # A simple feed-forward network: flatten each 28x28 image, apply one
        # hidden layer, and output one logit per clothing class.
        model = keras.Sequential(
            [
                keras.layers.Flatten(input_shape=(28, 28)),
                keras.layers.Dense(128, activation="relu"),
                keras.layers.Dense(10),
            ]
        )

        # The final layer returns raw logits, so the loss is configured with
        # from_logits=True.
        model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
            metrics=["accuracy"],
        )
        return model

To view all of the model code in one place, either download it from MNIST TF KERAS or go to the Code Sample section.

Training the Model

Lastly, we create an experiment configuration file (*.yaml), which contains settings such as the searcher metric and the batch size hyperparameter.
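As a rough illustration, a constant-hyperparameter configuration could look something like the sketch below. The field names and values shown here (the description, the single searcher, the val_accuracy metric, and max_steps) are assumptions for illustration only; consult the experiment configuration documentation for the exact schema your PEDL version supports.

description: mnist_tf_keras_const  # assumed experiment description
hyperparameters:
  batch_size: 32                   # read via pedl.get_hyperparameter("batch_size")
searcher:
  name: single                     # assumed: one trial with fixed hyperparameters
  metric: val_accuracy             # assumed name of the validation metric
  max_steps: 50                    # assumed length of training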

To start the experiment, we run:

pedl experiment create const.yaml .

Here, the first argument (const.yaml) specifies the experiment configuration file and the second argument (.) specifies the directory that contains our model definition files.

Once the experiment is started, you will see output like this:

Preparing files (../mnist_tf_keras) to send to master... 2.5KB and 4 files
Created experiment xxx
Activated experiment xxx

Evaluating the Model

Thanks to PEDL, model evaluation is done automatically for you. To access information on both training and validation performance, simply open the WebUI by entering the address of your PEDL master in your web browser.

Once you are on the PEDL landing page, you can find your experiment either via the experiment ID (xxx) or via its description.
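If you prefer the command line, the PEDL CLI can also be used to check on experiments. The exact subcommands may vary by version; as an assumption for illustration, listing your experiments would look something like:

pedl experiment list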

Code Sample

Putting it all together, our main.py file looks like this:

import tensorflow as tf
from tensorflow import keras

import pedl
from pedl.frameworks.keras import data
from pedl.frameworks.keras import TFKerasTrial


def make_data_loaders(experiment_config, hparams):

    fashion_mnist = keras.datasets.fashion_mnist
    (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
    train_images, test_images = train_images / 255.0, test_images / 255.0

    batch_size = pedl.get_hyperparameter("batch_size")
    train = data.InMemorySequence(data=train_images, labels=train_labels, batch_size=batch_size)
    test = data.InMemorySequence(data=test_images, labels=test_labels, batch_size=batch_size)

    return train, test


class MNISTTrial(TFKerasTrial):
    def build_model(self, hparams):
        model = keras.Sequential(
            [
                keras.layers.Flatten(input_shape=(28, 28)),
                keras.layers.Dense(128, activation="relu"),
                keras.layers.Dense(10),
            ]
        )

        model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
            metrics=["accuracy"],
        )
        return model

This code is also available at MNIST TF KERAS.

Next Steps