PyTorch MNIST in PEDL

This tutorial shows you how to use PEDL’s API to train MNIST using the PyTorch library. The code is based on the official PyTorch MNIST example.

This tutorial requires the PEDL CLI. For installation instructions, see Install PEDL CLI.


PEDL’s PyTorch framework requires five main functions:

  1. make_data_loaders

  2. build_model

  3. optimizer

  4. train_batch

  5. evaluate_batch

and one class, PyTorchTrial. The PyTorchTrial class contains all of the required functions except make_data_loaders.
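Put together, the pieces PEDL expects look roughly like the skeleton below. This is a sketch only: the PyTorchTrial shown here is a stand-in for PEDL’s real base class (whose import path depends on your PEDL version), and all method bodies are elided.

```python
class PyTorchTrial:
    # Stand-in for PEDL's real PyTorchTrial base class; the actual
    # import path depends on your PEDL version.
    pass


class MNistTrial(PyTorchTrial):
    def build_model(self):
        ...  # return an nn.Module

    def optimizer(self, model):
        ...  # return a torch.optim.Optimizer over model.parameters()

    def train_batch(self, batch, model, epoch_idx, batch_idx):
        ...  # return a dict of training metrics, including the loss

    def evaluate_batch(self, batch, model):
        ...  # return a dict of validation metrics


# make_data_loaders lives outside the trial class.
def make_data_loaders(experiment_config, hparams):
    ...  # return (train_loader, validation_loader)
```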

We will put these functions into two files, which the PEDL framework expects to be provided:

  1. an entry point, which imports make_data_loaders and the PyTorchTrial class

  2. an experiment configuration file (const.yaml)

The experiment configuration file (*.yaml) contains information on hyperparameters as well as details on the searcher, such as the metric used for optimization.

In this tutorial, we will use the torch, torchvision, and numpy libraries to build and train our model.
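For reference, the snippets below rely on imports along these lines. The standard-library imports are shown runnable; the third-party and PEDL imports are listed in comments because their exact module paths (especially PEDL’s) are assumptions that may differ in your PEDL version.

```python
# Standard-library and typing imports used by the snippets below.
import logging
import os
import shutil
import urllib.parse
from typing import Any, Dict, Tuple, cast

# Third-party and PEDL imports the tutorial also needs (not executed here):
#   import requests                               # to download the dataset archive
#   import torch
#   import torch.nn as nn
#   from torchvision import datasets, transforms
#   import pedl                                   # plus PyTorchTrial and DataLoader
#                                                 # from PEDL's pytorch bindings
```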

Download and Prepare MNIST Dataset

First, we use PEDL’s download_data function to download the dataset; PEDL calls download_data at the beginning of an experiment. Once the data is downloaded, we provide the directory to torchvision’s MNIST dataset function to generate the train and validation datasets. We pass these datasets into PEDL’s DataLoader and return the resulting objects.

def download_data(experiment_config: Dict[str, Any], hparams: Dict[str, Any]) -> str:
    download_directory = "/tmp/work_dir/MNIST"
    url = experiment_config["data"]["url"]
    url_path = urllib.parse.urlparse(url).path
    basename = url_path.rsplit("/", 1)[1]

    if not os.path.exists(download_directory):
        os.makedirs(download_directory)

    filepath = os.path.join(download_directory, basename)
    if not os.path.exists(filepath):
        logging.info("Downloading {}".format(url))

        r = requests.get(url, stream=True)
        with open(filepath, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                if chunk:
                    f.write(chunk)

    shutil.unpack_archive(filepath, download_directory)

    return os.path.dirname(download_directory)
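As a quick illustration of the basename logic above, which extracts the archive filename from the URL path (the URL here is hypothetical, for illustration only):

```python
import urllib.parse

# Hypothetical dataset URL.
url = "https://example.com/datasets/mnist.tar.gz"

# Same extraction as in download_data: take the path component of the
# URL and split off everything after the last "/".
url_path = urllib.parse.urlparse(url).path
basename = url_path.rsplit("/", 1)[1]
print(basename)  # -> mnist.tar.gz
```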

def get_dataset(data_dir: str, train: bool) -> Any:
    return datasets.MNIST(
        data_dir,
        train=train,
        transform=transforms.Compose([
            transforms.ToTensor(),
            # These are the precomputed mean and standard deviation of the
            # MNIST data; this normalizes the data to have zero mean and unit
            # standard deviation.
            transforms.Normalize((0.1307,), (0.3081,)),
        ]),
    )
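Concretely, Normalize maps each pixel value x to (x - mean) / std. A quick check of that arithmetic with the MNIST constants, in pure Python:

```python
mean, std = 0.1307, 0.3081

def normalize(x: float) -> float:
    # The same per-channel arithmetic transforms.Normalize applies.
    return (x - mean) / std

# A pixel at the dataset mean maps to 0; a pixel one standard
# deviation above the mean maps to (approximately) 1.
print(normalize(mean))
print(normalize(mean + std))
```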

def make_data_loaders(
    experiment_config: Dict[str, Any], hparams: Dict[str, Any]
) -> Tuple[DataLoader, DataLoader]:
    download_data_dir = get_download_data_dir()
    train_data = get_dataset(download_data_dir, True)
    validation_data = get_dataset(download_data_dir, False)
    batch_size = hparams["batch_size"]
    return (
        DataLoader(train_data, batch_size=batch_size),
        DataLoader(validation_data, batch_size=batch_size),
    )

Building the Model

Each PEDL experiment expects the implementation of a trial class. This class must inherit from an appropriate base class that corresponds to the framework we would like to use.

In our case, we are using a PyTorch model, so our trial class subclasses PEDL’s PyTorchTrial class. In the trial class, we define our build_model and optimizer functions.

class MNistTrial(PyTorchTrial):
    def build_model(self) -> nn.Module:
        model = nn.Sequential(
            nn.Conv2d(1, pedl.get_hyperparameter("n_filters1"), kernel_size=5),
            nn.MaxPool2d(2),
            nn.ReLU(),
            nn.Conv2d(pedl.get_hyperparameter("n_filters1"), pedl.get_hyperparameter("n_filters2"), kernel_size=5),
            nn.MaxPool2d(2),
            nn.ReLU(),
            # Two rounds of conv + pool leave 4x4 feature maps, hence the
            # 16 * n_filters2 input features below.
            nn.Flatten(),
            nn.Linear(16 * pedl.get_hyperparameter("n_filters2"), 50),
            nn.ReLU(),
            nn.Linear(50, 10),
            nn.LogSoftmax(dim=1),
        )

        # If loading backbone weights, do not call reset_parameters(), or
        # call it before loading the backbone weights.
        return model

    def optimizer(self, model: nn.Module) -> torch.optim.Optimizer:  # type: ignore
        return torch.optim.SGD(
            model.parameters(), lr=pedl.get_hyperparameter("learning_rate"), momentum=0.9
        )

For this class, we also have to define the iteration logic for training and evaluation. Instead of having you enumerate the DataLoader yourself, PEDL manages it for you, providing one batch at a time to train_batch and evaluate_batch.

def train_batch(
    self, batch: TorchData, model: nn.Module, epoch_idx: int, batch_idx: int
) -> Dict[str, torch.Tensor]:
    batch = cast(Tuple[torch.Tensor, torch.Tensor], batch)
    data, labels = batch

    output = model(data)
    loss = torch.nn.functional.nll_loss(output, labels)
    error = error_rate(output, labels)

    return {"loss": loss, "train_error": error}

def evaluate_batch(self, batch: TorchData, model: nn.Module) -> Dict[str, Any]:
    batch = cast(Tuple[torch.Tensor, torch.Tensor], batch)
    data, labels = batch

    output = model(data)
    error = error_rate(output, labels)

    return {"validation_error": error}
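The error_rate helper used in train_batch and evaluate_batch is not shown in this tutorial. As a minimal sketch of what it computes, here is a numpy version (the real helper operates on torch tensors, but the arithmetic is the same: the fraction of examples whose argmax prediction disagrees with the label):

```python
import numpy as np

def error_rate(output: np.ndarray, labels: np.ndarray) -> float:
    # output: (batch, n_classes) scores or log-probabilities.
    # labels: (batch,) integer class labels.
    predictions = output.argmax(axis=1)
    return float((predictions != labels).mean())

# Two examples: the first is classified correctly, the second is not.
output = np.array([[0.1, 0.9], [0.7, 0.3]])
labels = np.array([1, 1])
print(error_rate(output, labels))  # -> 0.5
```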

Training the Model

Lastly, we create a configuration file (*.yaml), which contains information such as the searcher metric and the batch size.
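A sketch of what const.yaml might contain, tying together the pieces above (the data URL consumed by download_data, the hyperparameters read via pedl.get_hyperparameter and hparams, and the validation_error metric returned by evaluate_batch). Treat the exact field names and values as assumptions; they may differ in your PEDL version.

```yaml
description: mnist_pytorch_const
data:
  url: <dataset archive URL>   # read by download_data via experiment_config["data"]["url"]
hyperparameters:
  learning_rate: 0.01
  batch_size: 64
  n_filters1: 32
  n_filters2: 64
searcher:
  name: single
  metric: validation_error     # the key returned by evaluate_batch
  smaller_is_better: true
```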

To start the experiment, we run:

pedl experiment create const.yaml .

Here, the first argument (const.yaml) specifies the experiment configuration file and the second argument (.) the location of the directory that contains our model definition files.

Once the experiment is started, you will see a notification like this:

Preparing files (../mnist_pytorch) to send to master... 2.5KB and 4 files
Created experiment xxx
Activated experiment xxx

Evaluating the Model

Thanks to PEDL, model evaluation is done automatically for you. To access information on both training and validation performance, simply go to the WebUI by entering the address of your PEDL_MASTER in your web browser.

Once you are on the PEDL landing page, you can find your experiment either via the experiment ID (xxx) or via its description.


Next Steps