
tensorop

LambdaOp

Bases: TensorOp

An Operator that performs any specified function as its forward function.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| fn | Callable | The function to be executed. | required |
| inputs | Union[None, str, Iterable[str]] | Key(s) from which to retrieve data from the data dictionary. | None |
| outputs | Union[None, str, Iterable[str]] | Key(s) under which to write the outputs of this Op back to the data dictionary. | None |
| mode | Union[None, str, Iterable[str]] | What mode(s) to execute this Op in. For example, "train", "eval", "test", or "infer". To execute regardless of mode, pass None. To execute in all modes except for a particular one, you can pass an argument like "!infer" or "!train". | None |
| ds_id | Union[None, str, Iterable[str]] | What dataset id(s) to execute this Op in. To execute regardless of ds_id, pass None. To execute in all ds_ids except for a particular one, you can pass an argument like "!ds1". | None |
Source code in fastestimator/fastestimator/op/tensorop/tensorop.py
@traceable()
class LambdaOp(TensorOp):
    """An Operator that performs any specified function as forward function.

    Args:
        fn: The function to be executed.
        inputs: Key(s) from which to retrieve data from the data dictionary.
        outputs: Key(s) under which to write the outputs of this Op back to the data dictionary.
        mode: What mode(s) to execute this Op in. For example, "train", "eval", "test", or "infer". To execute
            regardless of mode, pass None. To execute in all modes except for a particular one, you can pass an argument
            like "!infer" or "!train".
        ds_id: What dataset id(s) to execute this Op in. To execute regardless of ds_id, pass None. To execute in all
            ds_ids except for a particular one, you can pass an argument like "!ds1".
    """
    def __init__(self,
                 fn: Callable,
                 inputs: Union[None, str, Iterable[str]] = None,
                 outputs: Union[None, str, Iterable[str]] = None,
                 mode: Union[None, str, Iterable[str]] = None,
                 ds_id: Union[None, str, Iterable[str]] = None):
        super().__init__(inputs=inputs, outputs=outputs, mode=mode, ds_id=ds_id)
        self.fn = fn
        self.in_list = True

    def forward(self, data: List[Tensor], state: Dict[str, Any]) -> Union[Tensor, List[Tensor]]:
        return self.fn(*data)
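
Example usage, a minimal sketch not taken from the library docs (assuming the TensorFlow backend is installed; the keys "x", "y", "z" and the lambda are purely illustrative):

import tensorflow as tf

from fastestimator.op.tensorop import LambdaOp

# Wrap an arbitrary function as a TensorOp. Because LambdaOp sets in_list=True,
# fn receives its inputs unpacked, i.e. fn(x, y).
add_op = LambdaOp(fn=lambda x, y: x + y, inputs=["x", "y"], outputs="z")
result = add_op.forward(data=[tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0])],
                        state={"mode": "train"})
print(result)  # tf.Tensor([4. 6.], shape=(2,), dtype=float32)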

TensorOp

Bases: Op

An Operator class which takes and returns tensor data.

These Operators are used in fe.Network to perform graph-based operations like neural network training.

Source code in fastestimator/fastestimator/op/tensorop/tensorop.py
@traceable()
class TensorOp(Op):
    """An Operator class which takes and returns tensor data.

    These Operators are used in fe.Network to perform graph-based operations like neural network training.
    """
    def forward(self, data: Union[Tensor, List[Tensor]], state: Dict[str, Any]) -> Union[Tensor, List[Tensor]]:
        """A method which will be invoked in order to transform data.

        This method will be invoked on batches of data.

        Args:
            data: The batch from the data dictionary corresponding to whatever keys this Op declares as its `inputs`.
            state: Information about the current execution context, for example {"mode": "train"}.

        Returns:
            The `data` after applying whatever transform this Op is responsible for. It will be written into the data
            dictionary based on whatever keys this Op declares as its `outputs`.
        """
        return data

    def build(self, framework: str, device: Optional[torch.device] = None) -> None:
        """A method which will be invoked during Network instantiation.

        This method can be used to augment the natural __init__ method of the TensorOp once the desired backend
        framework is known.

        Args:
            framework: Which framework this Op will be executing in. One of 'tf' or 'torch'.
            device: Which device this Op will execute on. Usually 'cuda:0' or 'cpu'. Only populated when the `framework`
                is 'torch'.
        """
        pass

    # ###########################################################################
    # The methods below this point can be ignored by most non-FE developers
    # ###########################################################################

    # noinspection PyMethodMayBeStatic
    def get_fe_models(self) -> Set[Model]:
        """A method to get any models held by this Op.

        All users and most developers can safely ignore this method. This method may be invoked to gather and manipulate
        models, for example by the Network during load_epoch().

        Returns:
            Any models held by this Op.
        """
        return set()

    # noinspection PyMethodMayBeStatic
    def get_fe_loss_keys(self) -> Set[str]:
        """A method to get any loss keys held by this Op.

        All users and most developers can safely ignore this method. This method may be invoked to gather information
        about losses, for example by the Network in get_loss_keys().

        Returns:
            Any loss keys held by this Op.
        """
        return set()

    # noinspection PyMethodMayBeStatic
    def fe_retain_graph(self, retain: Optional[bool] = None) -> Optional[bool]:
        """A method to get / set whether this Op should retain network gradients after computing them.

        All users and most developers can safely ignore this method. Ops which do not compute gradients should leave
        this method alone. If this method is invoked with `retain` as True or False, then the gradient computations
        performed by this Op should retain or discard the graph respectively afterwards.

        Args:
            retain: If None, then return the current retain_graph status of the Op. If True or False, then set the
                retain_graph status of the op to the new status and return the new status.

        Returns:
            Whether this Op will retain the backward gradient graph after it's forward pass, or None if this Op does not
            compute backward gradients.
        """
        return None
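
For context, a minimal sketch of how TensorOps are typically chained inside fe.Network (the keys "x" and "y" are assumed to be produced by an upstream Pipeline; the model definition is illustrative and assumes the TensorFlow backend):

import tensorflow as tf

import fastestimator as fe
from fastestimator.op.tensorop.loss import CrossEntropy
from fastestimator.op.tensorop.model import ModelOp, UpdateOp

# Build a model, then chain TensorOps: forward pass -> loss -> weight update.
model = fe.build(
    model_fn=lambda: tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,))
    ]),
    optimizer_fn="adam")

network = fe.Network(ops=[
    ModelOp(model=model, inputs="x", outputs="y_pred"),
    CrossEntropy(inputs=("y_pred", "y"), outputs="ce"),
    UpdateOp(model=model, loss_name="ce"),
])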

build

A method which will be invoked during Network instantiation.

This method can be used to augment the natural __init__ method of the TensorOp once the desired backend framework is known.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| framework | str | Which framework this Op will be executing in. One of 'tf' or 'torch'. | required |
| device | Optional[torch.device] | Which device this Op will execute on. Usually 'cuda:0' or 'cpu'. Only populated when the framework is 'torch'. | None |

Source code in fastestimator/fastestimator/op/tensorop/tensorop.py
def build(self, framework: str, device: Optional[torch.device] = None) -> None:
    """A method which will be invoked during Network instantiation.

    This method can be used to augment the natural __init__ method of the TensorOp once the desired backend
    framework is known.

    Args:
        framework: Which framework this Op will be executing in. One of 'tf' or 'torch'.
        device: Which device this Op will execute on. Usually 'cuda:0' or 'cpu'. Only populated when the `framework`
            is 'torch'.
    """
    pass
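
A minimal sketch of overriding build() in a hypothetical Op (not part of FastEstimator) which finishes its initialization once the framework and device are known:

import torch

from fastestimator.op.tensorop import TensorOp

class AddConstant(TensorOp):
    """A hypothetical Op which adds a constant to its input."""
    def __init__(self, constant, inputs=None, outputs=None, mode=None):
        super().__init__(inputs=inputs, outputs=outputs, mode=mode)
        self.constant = constant

    def build(self, framework: str, device=None) -> None:
        # When running under PyTorch, place the constant on the same device as the graph.
        if framework == "torch":
            self.constant = torch.tensor(self.constant, device=device)

    def forward(self, data, state):
        return data + self.constant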

fe_retain_graph

A method to get / set whether this Op should retain network gradients after computing them.

All users and most developers can safely ignore this method. Ops which do not compute gradients should leave this method alone. If this method is invoked with retain as True or False, then the gradient computations performed by this Op should retain or discard the graph respectively afterwards.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| retain | Optional[bool] | If None, then return the current retain_graph status of the Op. If True or False, then set the retain_graph status of the Op to the new status and return the new status. | None |

Returns:

| Type | Description |
|------|-------------|
| Optional[bool] | Whether this Op will retain the backward gradient graph after its forward pass, or None if this Op does not compute backward gradients. |

Source code in fastestimator/fastestimator/op/tensorop/tensorop.py
def fe_retain_graph(self, retain: Optional[bool] = None) -> Optional[bool]:
    """A method to get / set whether this Op should retain network gradients after computing them.

    All users and most developers can safely ignore this method. Ops which do not compute gradients should leave
    this method alone. If this method is invoked with `retain` as True or False, then the gradient computations
    performed by this Op should retain or discard the graph respectively afterwards.

    Args:
        retain: If None, then return the current retain_graph status of the Op. If True or False, then set the
            retain_graph status of the op to the new status and return the new status.

    Returns:
        Whether this Op will retain the backward gradient graph after it's forward pass, or None if this Op does not
        compute backward gradients.
    """
    return None
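
A minimal sketch of the usual getter/setter pattern, shown for a hypothetical gradient-computing Op (illustrative only; FastEstimator's own gradient Ops may differ):

from typing import Optional

from fastestimator.op.tensorop import TensorOp

class MyGradientOp(TensorOp):
    """A hypothetical Op which computes gradients and so tracks retain_graph."""
    def __init__(self, inputs=None, outputs=None, mode=None):
        super().__init__(inputs=inputs, outputs=outputs, mode=mode)
        self.retain_graph = False

    def fe_retain_graph(self, retain: Optional[bool] = None) -> Optional[bool]:
        # With retain=None this acts as a getter; otherwise it updates the status.
        if retain is not None:
            self.retain_graph = retain
        return self.retain_graph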

forward

A method which will be invoked in order to transform data.

This method will be invoked on batches of data.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| data | Union[Tensor, List[Tensor]] | The batch from the data dictionary corresponding to whatever keys this Op declares as its inputs. | required |
| state | Dict[str, Any] | Information about the current execution context, for example {"mode": "train"}. | required |

Returns:

| Type | Description |
|------|-------------|
| Union[Tensor, List[Tensor]] | The data after applying whatever transform this Op is responsible for. It will be written into the data dictionary based on whatever keys this Op declares as its outputs. |

Source code in fastestimator/fastestimator/op/tensorop/tensorop.py
def forward(self, data: Union[Tensor, List[Tensor]], state: Dict[str, Any]) -> Union[Tensor, List[Tensor]]:
    """A method which will be invoked in order to transform data.

    This method will be invoked on batches of data.

    Args:
        data: The batch from the data dictionary corresponding to whatever keys this Op declares as its `inputs`.
        state: Information about the current execution context, for example {"mode": "train"}.

    Returns:
        The `data` after applying whatever transform this Op is responsible for. It will be written into the data
        dictionary based on whatever keys this Op declares as its `outputs`.
    """
    return data
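
A minimal sketch of overriding forward() in a hypothetical Op (with a single input key, data typically arrives as the tensor itself rather than a list):

from typing import Any, Dict

from fastestimator.op.tensorop import TensorOp

class Rescale(TensorOp):
    """A hypothetical Op which rescales its input tensor."""
    def __init__(self, inputs=None, outputs=None, mode=None, mean=0.0, std=1.0):
        super().__init__(inputs=inputs, outputs=outputs, mode=mode)
        self.mean = mean
        self.std = std

    def forward(self, data, state: Dict[str, Any]):
        # Plain arithmetic works for both tf.Tensor and torch.Tensor inputs.
        return (data - self.mean) / self.std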

get_fe_loss_keys

A method to get any loss keys held by this Op.

All users and most developers can safely ignore this method. This method may be invoked to gather information about losses, for example by the Network in get_loss_keys().

Returns:

| Type | Description |
|------|-------------|
| Set[str] | Any loss keys held by this Op. |

Source code in fastestimator/fastestimator/op/tensorop/tensorop.py
def get_fe_loss_keys(self) -> Set[str]:
    """A method to get any loss keys held by this Op.

    All users and most developers can safely ignore this method. This method may be invoked to gather information
    about losses, for example by the Network in get_loss_keys().

    Returns:
        Any loss keys held by this Op.
    """
    return set()

get_fe_models

A method to get any models held by this Op.

All users and most developers can safely ignore this method. This method may be invoked to gather and manipulate models, for example by the Network during load_epoch().

Returns:

| Type | Description |
|------|-------------|
| Set[Model] | Any models held by this Op. |

Source code in fastestimator/fastestimator/op/tensorop/tensorop.py
def get_fe_models(self) -> Set[Model]:
    """A method to get any models held by this Op.

    All users and most developers can safely ignore this method. This method may be invoked to gather and manipulate
    models, for example by the Network during load_epoch().

    Returns:
        Any models held by this Op.
    """
    return set()
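
A minimal sketch of how a developer-facing Op that holds a model and produces a loss might expose them through get_fe_models() and get_fe_loss_keys() (hypothetical Op; the model passed in is assumed to come from fe.build):

from typing import Set

from fastestimator.op.tensorop import TensorOp

class MyModelLossOp(TensorOp):
    """A hypothetical Op which holds a model and writes a loss value."""
    def __init__(self, model, loss_key, inputs=None, outputs=None, mode=None):
        super().__init__(inputs=inputs, outputs=outputs, mode=mode)
        self.model = model
        self.loss_key = loss_key

    def get_fe_models(self):
        return {self.model}

    def get_fe_loss_keys(self) -> Set[str]:
        return {self.loss_key}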