l2_regularization

L2Regularizaton

Bases: TensorOp

Calculate L2 Regularization Loss.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| inputs | str | String key representing input loss. | required |
| outputs | str | String key under which to store the computed loss value. | required |
| mode | Union[None, str, Iterable[str]] | What mode(s) to execute this Op in. For example, "train", "eval", "test", or "infer". To execute regardless of mode, pass None. To execute in all modes except for a particular one, you can pass an argument like "!infer" or "!train". | None |
| model | Union[Model, Module] | A TensorFlow or PyTorch model. | required |
| beta | float | The multiplicative factor weighting the L2 regularization loss relative to the input loss. | 0.01 |
Source code in fastestimator/fastestimator/op/tensorop/loss/l2_regularization.py
```python
@traceable()
class L2Regularizaton(TensorOp):
    """Calculate L2 Regularization Loss.

    Args:
        inputs: String key representing input loss.
        outputs: String key under which to store the computed loss value.
        mode: What mode(s) to execute this Op in. For example, "train", "eval", "test", or "infer". To execute
            regardless of mode, pass None. To execute in all modes except for a particular one, you can pass an argument
            like "!infer" or "!train".
        model: A TensorFlow or PyTorch model.
        beta: The multiplicative factor weighting the L2 regularization loss relative to the input loss.
    """
    def __init__(self,
                 inputs: str,
                 outputs: str,
                 model: Union[tf.keras.Model, torch.nn.Module],
                 mode: Union[None, str, Iterable[str]] = None,
                 beta: float = 0.01):
        super().__init__(inputs=inputs, outputs=outputs, mode=mode)
        self.model = model
        self.beta = beta

    def forward(self, data: Union[Tensor, List[Tensor]], state: Dict[str, Any]) -> Tensor:
        # Add the weighted L2 penalty over the model's weights to the incoming loss.
        loss = data
        total_loss = l2_regularization(self.model, self.beta) + loss
        return total_loss
```
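The arithmetic the op performs can be sketched without any deep-learning framework: the returned value is the input loss plus `beta` times the sum of squared model weights. The `l2_penalty` helper and the toy `weights` below are hypothetical stand-ins for FastEstimator's backend `l2_regularization` function and a model's trainable parameters; the exact scaling in the real backend (e.g., whether a factor of 1/2 is applied) may differ.

```python
# Framework-free sketch of the computation done by L2Regularizaton.forward.
# `l2_penalty` is a hypothetical stand-in for the l2_regularization backend.

def l2_penalty(weights, beta=0.01):
    """Return beta * (sum of squared weight values) over all layers."""
    return beta * sum(w * w for layer in weights for w in layer)

weights = [[0.5, -0.5], [1.0]]  # two toy "layers" of weights (assumed data)
input_loss = 2.0                # the value stored under the `inputs` key
total_loss = input_loss + l2_penalty(weights, beta=0.01)
print(total_loss)  # 2.0 + 0.01 * (0.25 + 0.25 + 1.0) = 2.015
```

The op then writes `total_loss` back under the `outputs` key, so downstream ops (e.g., the model update) see the regularized loss instead of the raw one.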