The model definition is the interface between PEDL and the user's application framework (e.g., Keras, TensorFlow): it loads training data, describes the model architecture, and specifies the underlying iterative optimization algorithm. See the Defining Models chapter in the quick start guide for a brief introduction.
There are two kinds of model definitions:
- Standard Model Definition: Implement PEDL's provided Trial interface for your desired task. This option provides finer-grained control over PEDL model construction and computation.
- Simple Model Definition: Specify a directory of model code together with an entrypoint script that executes a training and validation procedure. This option requires very few code changes to set up and may be simplest if you're new to PEDL.
A model definition consists of a directory of files that comprise a Python package; that is, the directory should contain an __init__.py file at the top level. When using the standard model definition (see below), the __init__.py file must expose the Trial implementation.
examples/imdb_keras is an example of a directory that contains a model definition.
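As an illustration, a minimal model definition directory might be laid out as follows (the file names besides __init__.py are purely illustrative):

```
my_model_def/
├── __init__.py   # exposes the Trial implementation
├── model.py      # model architecture
└── data.py       # data loading helpers
```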
Since project directories might include large artifacts that should not be packaged as part of the model definition (e.g., data sets or compiled binaries), users can optionally include a .pedlignore file at the top level that specifies file paths to be omitted from the model definition. The .pedlignore file uses the same syntax as .gitignore. Note that byte-compiled Python files (e.g., .pyc files or __pycache__ directories) are always ignored.
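For example, a .pedlignore that excludes a local data set directory, compiled binaries, and scratch notebooks might look like this (the paths are illustrative):

```
data/
*.bin
notebooks/
```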
For backward compatibility, PEDL also supports model definitions that consist of a single Python file, in which case creating an
__init__.py file is not necessary.
Standard Model Definition¶
To create a model using the "standard" model definition approach, users implement the Trial interface provided by PEDL. This interface describes the machine learning task the user wants to perform, such as the model architecture to use and the validation metrics that should be computed.
PEDL provides versions of the Trial interface for each of the application frameworks it supports. Specifically, PEDL currently supports five types of Trial interfaces encompassing three application frameworks.
Trial offers an optional interface for executing arbitrary Python functions before or after each training or validation step. This is useful for integrating with external systems, such as TensorBoard (see example below). To use callbacks in your experiment, implement the following optional interface in your Trial subclass:
- callbacks(self, hparams): Returns a list of pedl.callback.Callback instances that will be used to run arbitrary Python functions during the lifetime of a PEDL trial. Callbacks are invoked in the order specified by this list.
The following predefined callbacks are provided by PEDL:
log_directory specifies the container path where TensorBoard event logs will be written from the trial runner containers. The event logs for each trial are saved under sub-directories of log_directory labelled with the trial ID: <trial_id>/training and <trial_id>/validation for training and validation metrics, respectively. For a complete example, see TensorBoard Integration.
To define custom callbacks, users may subclass pedl.callback.Callback and implement one or more of its optional interface functions:
- on_trial_begin(): Executed before the start of the first training step of a trial.
- on_train_step_begin(step_id): Executed at the beginning of a training step.
- on_train_step_end(step_id, metrics): Executed at the end of a training step. metrics is a list of Python dictionaries for this training step, where each dictionary contains the metrics of a single training batch.
- on_validation_step_begin(step_id): Executed at the beginning of a validation step.
- on_validation_step_end(step_id, metrics): Executed at the end of a validation step. metrics is a Python dictionary that contains the metrics for this validation step.
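As a sketch of the subclassing pattern, the following callback averages the per-batch loss at the end of each training step. To keep the snippet self-contained, pedl.callback.Callback is replaced by a minimal stand-in base class with the interface functions listed above; in a real model definition you would subclass pedl.callback.Callback directly, and the metric key "loss" is an assumption about your model's reported metrics:

```python
# Stand-in for pedl.callback.Callback so this sketch runs on its own.
# In real model code, subclass pedl.callback.Callback instead.
class Callback:
    def on_trial_begin(self): pass
    def on_train_step_begin(self, step_id): pass
    def on_train_step_end(self, step_id, metrics): pass
    def on_validation_step_begin(self, step_id): pass
    def on_validation_step_end(self, step_id, metrics): pass


class LossLogger(Callback):
    """Records the average per-batch loss for each training step."""

    def __init__(self):
        self.step_losses = {}

    def on_train_step_end(self, step_id, metrics):
        # `metrics` is a list of dicts, one per training batch.
        losses = [m["loss"] for m in metrics]
        self.step_losses[step_id] = sum(losses) / len(losses)


logger = LossLogger()
# Simulate PEDL invoking the hook with two batches' metrics.
logger.on_train_step_end(1, [{"loss": 0.5}, {"loss": 0.25}])
print(logger.step_losses[1])  # prints 0.375
```

In an actual Trial, an instance of such a class would be returned from callbacks(self, hparams), and PEDL would invoke the hooks at the step boundaries described above.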
Simple Model Definition¶
Simple model definitions provide a mechanism for running models in PEDL without implementing the Trial interface. Instead, features like automatic checkpointing and task migration are implemented by intercepting method calls from the model code into the deep learning framework (e.g., Keras).
To create an experiment using a simple model definition, the experiment configuration file should specify an entrypoint section. The entrypoint script is the Python script that creates and loads the training data, describes a model architecture, and runs the training and validation procedure using framework APIs (e.g., Keras's fit_generator()). PEDL runs the entrypoint script in a containerized trial runner environment and intercepts framework calls to control the execution of model training and validation. To access hyperparameters in model code, use the pedl.get_hyperparameter(name) function, where name is the string name of a hyperparameter as specified in the experiment configuration.
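As an illustrative sketch of this pattern, the snippet below shows an entrypoint script pulling its hyperparameters by name before building the model. To keep it self-contained, pedl.get_hyperparameter is stood in for by a local function over a hard-coded dictionary; in a real entrypoint you would call pedl.get_hyperparameter directly, the hyperparameter names and values here are invented, and the values would come from the experiment configuration:

```python
# Stand-in for pedl.get_hyperparameter(name). In a real entrypoint script,
# PEDL supplies these values from the experiment configuration.
_HPARAMS = {"learning_rate": 0.001, "batch_size": 32}  # illustrative values

def get_hyperparameter(name):
    """Look up a hyperparameter by its configured name."""
    return _HPARAMS[name]

# Typical usage near the top of the entrypoint script: read hyperparameters
# before constructing the model and calling e.g. Keras's fit_generator().
learning_rate = get_hyperparameter("learning_rate")
batch_size = get_hyperparameter("batch_size")
print(learning_rate, batch_size)
```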
Currently, simple model definitions are only supported for Keras models.