PEDL aims to support reproducible machine learning experiments: that is, the result of running a PEDL experiment should be deterministic, so that re-running a previous experiment should produce an identical model. For example, this ensures that if the model produced from an experiment is ever lost, it can be recovered by re-running the experiment that produced it.
The current version of PEDL provides limited support for reproducibility; unfortunately, the current state of the hardware and software stack typically used for deep learning makes perfect reproducibility very challenging.
PEDL can control and reproduce the following sources of randomness:
- Hyperparameter sampling decisions.
- The initial weights for a given hyperparameter configuration.
- Shuffling of training data in a trial.
- Dropout or other random layers.
PEDL currently does not offer support for:
- Controlling non-determinism in floating-point operations. Modern deep learning frameworks typically implement training using floating-point operations that produce non-deterministic results, particularly on GPUs. If only CPUs are used for training, reproducible results can be achieved; see below.
- Reproducibility when using the
All PEDL experiments are associated with an experiment seed, an integer ranging from 0 to 2³¹ − 1. The experiment seed can be set in the experiment configuration (via the `reproducibility.experiment_seed` field). If an experiment seed is not explicitly specified, the master will assign one automatically.
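For example, a configuration fragment like the following pins the experiment seed (the seed value here is arbitrary):

```yaml
reproducibility:
  experiment_seed: 42
```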
The experiment seed is used as a source of randomness for any hyperparameter sampling procedures. The experiment seed is also used to generate a trial seed for every trial associated with the experiment. The trial seed is accessible from model code via
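To illustrate the idea of deriving per-trial seeds deterministically from the experiment seed, here is a minimal sketch. This is purely illustrative: PEDL's actual derivation scheme is not documented here, and the function name is hypothetical.

```python
import random


def derive_trial_seeds(experiment_seed: int, num_trials: int) -> list:
    # Illustrative only: use the experiment seed to deterministically
    # generate one seed per trial, each in the range [0, 2**31).
    rng = random.Random(experiment_seed)
    return [rng.randrange(2**31) for _ in range(num_trials)]


# The same experiment seed always yields the same trial seeds.
assert derive_trial_seeds(7, 3) == derive_trial_seeds(7, 3)
```

The key property this sketch demonstrates is that all per-trial randomness flows from a single recorded value, so re-running an experiment regenerates identical trial seeds.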
To achieve reproducible initial conditions in an experiment, please follow these guidelines:
- Use the `np.random` or `random` APIs for random procedures, such as shuffling of data. Both PRNGs will be initialized with the trial seed by PEDL automatically.
- Use the trial seed to seed any randomized operations (e.g., initializers, dropout) in your framework of choice. For example, Keras initializers accept an optional seed parameter. Again, it is not necessary to set any graph-level PRNGs (e.g., TensorFlow's `tf.set_random_seed`), as PEDL manages this for you.
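The guidelines above can be sketched as follows, using the standard-library and NumPy PRNGs. The `trial_seed` value here is a placeholder; in PEDL it is assigned per trial.

```python
import random

import numpy as np

trial_seed = 13  # placeholder; PEDL assigns a real trial seed per trial


def shuffled_indices(seed: int, n: int = 10) -> list:
    # Seed both the standard-library and NumPy PRNGs, as PEDL does
    # automatically at the start of each trial.
    random.seed(seed)
    np.random.seed(seed)
    idx = np.arange(n)
    np.random.shuffle(idx)  # in-place shuffle driven by the seeded PRNG
    return idx.tolist()


# With the same seed, shuffling the training data is fully reproducible.
assert shuffled_indices(trial_seed) == shuffled_indices(trial_seed)
```

Because every source of randomness is seeded from the trial seed, repeating the trial replays the same data order and the same random decisions.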
Deterministic Floating Point on CPUs
When doing CPU-only training with TensorFlow, it is possible to achieve floating-point reproducibility throughout optimization. If using the `KerasFunctionalTrial` interface, implement the `session_config()` method to override the default session configuration:
```python
def session_config(self) -> tf.ConfigProto:
    return tf.ConfigProto(
        intra_op_parallelism_threads=1,
        inter_op_parallelism_threads=1,
    )
```
Disabling thread parallelism may negatively affect performance. Only enable this feature if you understand and accept this trade-off.