# Euclidean neural networks

## What is e3nn?

e3nn is a Python library built on PyTorch for creating equivariant neural networks for the group $$O(3)$$.

## Demonstration

All the functions to manipulate rotations (rotation matrices, Euler angles, quaternions, conversions, …) can be found in Parametrization of Rotations. The irreducible representations of $$O(3)$$ (more info at Irreps) are represented by the class Irrep. The direct sum of multiple irreps is described by an object of the class Irreps.

If two tensors $$x$$ and $$y$$ transform as $$D_x = 2 \times 1_o$$ (two vectors) and $$D_y = 0_e + 1_e$$ (a scalar and a pseudovector) respectively, where the indices $$e$$ and $$o$$ stand for even and odd – the representation of parity,

import torch
from e3nn import o3

irreps_x = o3.Irreps('2x1o')
irreps_y = o3.Irreps('0e + 1e')

x = irreps_x.randn(-1)
y = irreps_y.randn(-1)

irreps_x.dim, irreps_y.dim

(6, 4)


their outer product is a $$6 \times 4$$ matrix with two indices, $$A_{ij} = x_i y_j$$.

A = torch.einsum('i,j', x, y)
A

tensor([[-0.7918,  0.4918,  0.3108, -0.3438],
[ 0.5060, -0.3143, -0.1986,  0.2197],
[ 3.7270, -2.3149, -1.4628,  1.6185],
[-0.6989,  0.4341,  0.2743, -0.3035],
[-1.9300,  1.1988,  0.7575, -0.8381],
[-1.2771,  0.7933,  0.5013, -0.5546]])


If a rotation is applied to the system, this matrix will transform with the representation $$D_x \otimes D_y$$ (the tensor product representation).

$$A = x y^t \longrightarrow A' = D_x A D_y^t$$

This transformation can be visualized with

import matplotlib.pyplot as plt

R = o3.rand_matrix()
D_x = irreps_x.D_from_matrix(R)
D_y = irreps_y.D_from_matrix(R)

plt.imshow(torch.kron(D_x, D_y), cmap='bwr', vmin=-1, vmax=1);


This representation is reducible: it can be decomposed into irreps by a change of basis. The outer product followed by this change of basis is implemented by the class FullTensorProduct.

tp = o3.FullTensorProduct(irreps_x, irreps_y)
print(tp)

tp(x, y)

FullTensorProduct(2x1o x 1x0e+1x1e -> 2x0o+4x1o+2x2o | 8 paths | 0 weights)

tensor([ 1.1037e+00,  3.6775e-01, -7.9180e-01,  5.0602e-01,  3.7270e+00,
-6.9887e-01, -1.9300e+00, -1.2771e+00,  1.1897e+00, -1.3938e+00,
4.4199e-01, -9.4706e-01,  7.7552e-01, -6.5370e-01, -1.8800e+00,
-2.4988e-03, -1.0237e+00, -8.7896e-01,  7.9665e-01,  3.4633e-01,
1.0416e+00,  6.6768e-01, -2.3818e-01, -6.9911e-01])


As a sanity check, we can verify that the representation of the tensor product is block diagonal and has the same dimension.

D = tp.irreps_out.D_from_matrix(R)
plt.imshow(D, cmap='bwr', vmin=-1, vmax=1);


FullTensorProduct is a special case of TensorProduct; other subclasses, such as FullyConnectedTensorProduct, can contain learnable weights, which makes them useful building blocks for neural networks.