Norm-Based Activation
- class e3nn.nn.NormActivation(irreps_in, scalar_nonlinearity: Callable, normalize: bool = True, epsilon: Optional[float] = None, bias: bool = False)
Bases: torch.nn.modules.module.Module
Norm-based activation function. Applies a scalar nonlinearity to the norm of each irrep and outputs a (normalized) version of that irrep multiplied by the scalar output of the scalar nonlinearity.

Parameters:
- irreps_in (e3nn.o3.Irreps): representation of the input
- scalar_nonlinearity (callable): scalar nonlinearity such as torch.sigmoid
- normalize (bool): whether to normalize the input features before multiplying them by the scalars from the nonlinearity
- epsilon (float, optional): when normalize is True, norms smaller than epsilon are clamped up to epsilon to avoid division by zero and NaN gradients. Not allowed when normalize is False.
- bias (bool): whether to apply a learnable additive bias to the inputs of the scalar_nonlinearity

Examples:
>>> n = NormActivation("2x1e", torch.sigmoid)
>>> feats = torch.ones(1, 2*3)
>>> print(feats.reshape(1, 2, 3).norm(dim=-1))
tensor([[1.7321, 1.7321]])
>>> print(torch.sigmoid(feats.reshape(1, 2, 3).norm(dim=-1)))
tensor([[0.8497, 0.8497]])
>>> print(n(feats).reshape(1, 2, 3).norm(dim=-1))
tensor([[0.8497, 0.8497]])
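The same numbers can be reproduced by hand, which makes the computation concrete: with normalize=True, each irrep is divided by its norm and rescaled by the nonlinearity applied to that norm. A minimal sketch using plain torch ops (the reshape to (1, 2, 3) splits the two l=1 irreps, as in the example above):

>>> feats = torch.ones(1, 2, 3)  # two l=1 irreps of dimension 3 each
>>> norms = feats.norm(dim=-1, keepdim=True)  # per-irrep norms, here sqrt(3) = 1.7321
>>> out = feats / norms * torch.sigmoid(norms)  # normalize, then rescale by sigmoid(norm)
>>> print(out.norm(dim=-1))
tensor([[0.8497, 0.8497]])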
Methods:
- forward(features): evaluate
- forward(features)
evaluate

Parameters:
- features (torch.Tensor): tensor of shape (..., irreps_in.dim)

- Returns: tensor of shape (..., irreps_in.dim)
- Return type: torch.Tensor
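As a usage sketch (the irreps string, batch size, and the choice of torch.tanh are illustrative assumptions; the constructor arguments come from the signature above):

>>> import torch
>>> from e3nn import o3
>>> from e3nn.nn import NormActivation
>>> irreps = o3.Irreps("3x0e + 2x1o")
>>> act = NormActivation(irreps, torch.tanh, epsilon=1e-8, bias=True)
>>> x = torch.randn(10, irreps.dim)  # batch of 10 feature vectors, dim = 3*1 + 2*3 = 9
>>> act(x).shape
torch.Size([10, 9])
>>> z = torch.zeros(1, irreps.dim)  # zero norms are clamped up to epsilon, so no NaNs
>>> torch.isnan(act(z)).any()
tensor(False)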