Tutorial 10: How to use FastEstimator Command Line Interface (CLI)

Overview

FastEstimator comes with a set of CLI commands that help users train and test their models quickly. In this tutorial, we will go through the CLI usage and the arguments these commands take. This tutorial is divided into the following sections:

How Does the CLI Work

  • fastestimator train: the command will look for a get_estimator function, invoke it, and then call the fit() method on the returned estimator instance to start the training.
  • fastestimator test: the command will look for a get_estimator function, invoke it, and then call the test() method on the returned estimator instance to run testing.
  • fastestimator run: the command will look for a fastestimator_run function and invoke it. If fastestimator_run is not available, it will instead look for get_estimator, invoke it, and then call fit() and/or test() depending on what data is available within the estimator's Pipeline.
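The dispatch described above can be sketched roughly as follows. This is a minimal illustration with hypothetical helper names (load_script, dispatch), not the actual FastEstimator implementation:

```python
import importlib.util


def load_script(script_path):
    # Load the user's training script as a Python module (sketch)
    spec = importlib.util.spec_from_file_location("user_script", script_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


def dispatch(command, script_path, **kwargs):
    # Mimic how the CLI routes train/test/run to the user's functions
    module = load_script(script_path)
    if command == "run" and hasattr(module, "fastestimator_run"):
        # 'run' prefers fastestimator_run when it exists
        return module.fastestimator_run(**kwargs)
    estimator = module.get_estimator(**kwargs)
    if command in ("train", "run"):
        estimator.fit()
    if command in ("test", "run"):
        # the real CLI only runs testing when test data exists in the Pipeline
        estimator.test()
    return estimator
```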

CLI Usage

In this section, we will show the actual commands used to train and test our models, using mnist_tf.py for illustration.

To call estimator.fit() and start training from the terminal:

$ fastestimator train mnist_tf.py

To call estimator.test() and run testing from the terminal:

$ fastestimator test mnist_tf.py

To first call estimator.fit() then estimator.test(), you can use:

$ fastestimator run mnist_tf.py

Sending Input Args to get_estimator or fastestimator_run

We can also pass arguments to the get_estimator or fastestimator_run functions from the CLI. The following code snippet shows the get_estimator function signature for our MNIST example:

def get_estimator(epochs=2, batch_size=32, ...):
    ...

Next, we change these arguments in two ways:

Using --arg

To pass the arguments directly from the CLI we can use the --arg format. The following shows an example of how we can set the number of epochs to 3 and batch_size to 64:

$ fastestimator train mnist_tf.py --epochs 3 --batch_size 64

Using a JSON file

The other way to send arguments is by using the --hyperparameters argument and passing it a JSON file containing all of the training hyperparameters, such as epochs, batch_size, optimizer, etc. This option is especially useful when you want to repeat a training job more than once and/or the list of hyperparameters is getting long. The following shows an example JSON file (saved here as hp.json) and how it could be used for our MNIST example:

{
    "epochs": 1,
    "batch_size": 64
}
$ fastestimator train mnist_tf.py --hyperparameters hp.json
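Conceptually, the --hyperparameters option just parses the JSON file and forwards its keys as keyword arguments to get_estimator. A minimal sketch of that mapping, using a stand-in get_estimator (the real one builds a full Estimator):

```python
import json


def get_estimator(epochs=2, batch_size=32):
    # Stand-in for the real get_estimator: just echo what it received
    return {"epochs": epochs, "batch_size": batch_size}


def run_with_hyperparameters(json_path):
    # Parse the JSON file and forward its keys as keyword arguments,
    # which is effectively what --hyperparameters does
    with open(json_path) as f:
        hyperparameters = json.load(f)
    return get_estimator(**hyperparameters)
```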

System Arguments

There are some default system arguments in the fastestimator train, fastestimator test, and fastestimator run commands. Here is a list of them:

  • warmup: Available in train and run, it controls whether to perform a warmup check before the actual training starts. The default is True; users can disable warmup with --warmup False. Disabling it can reduce the initialization time needed to start training.
  • eager: Available in train, test, and run. This argument is only relevant when using the TensorFlow backend. Enabling eager execution allows users to access the values of tf tensors at run time. The default is False; users can enable it with --eager True. Eager mode is useful in TensorFlow debugging workflows, but it comes with downsides such as slower execution and higher memory usage.
  • summary: Available in train, test, and run. This is the same summary argument used in estimator.fit(summary=...) or estimator.test(summary=...). It allows users to specify an experiment name when generating reports. For example, users can set the experiment name with --summary exp_name.
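Note that values such as --warmup False and --eager True arrive at the CLI as strings, so at some point they must be converted to booleans. A hedged sketch of such a conversion (the actual FastEstimator parsing may differ):

```python
def parse_bool_flag(value):
    # Map common string spellings of a boolean CLI flag to True/False
    # (a sketch -- not the real FastEstimator parsing logic)
    if isinstance(value, bool):
        return value
    lowered = str(value).strip().lower()
    if lowered in ("true", "1", "yes"):
        return True
    if lowered in ("false", "0", "no"):
        return False
    raise ValueError(f"Cannot interpret {value!r} as a boolean flag")
```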