Selected article for: "large number and real world"

Authors: Fuchs, Fabian B.; Worrall, Daniel E.; Fischer, Volker; Welling, Max
Title: SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks
  • Cord-id: ky0d4du1
  • Document date: 2020-06-18
    Document: We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point clouds, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the input data. A positive corollary of equivariance is increased weight-tying within the model, leading to fewer trainable parameters and thus decreased sample complexity (i.e. we need less training data). The SE(3)-Transformer leverages the benefits of self-attention to operate on large point clouds with varying numbers of points, while guaranteeing SE(3)-equivariance for robustness. We evaluate our model on a toy $N$-body particle simulation dataset, showcasing the robustness of the predictions under rotations of the input. We further achieve competitive performance on two real-world datasets, ScanObjectNN and QM9. In all cases, our model outperforms a strong, non-equivariant attention baseline and an equivariant model without attention.
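    To make the equivariance property from the abstract concrete: a map f is SE(3)-equivariant if transforming the input point cloud by a rotation R and translation t, then applying f, gives the same result as applying f first and transforming its output, i.e. f(x R^T + t) = f(x) R^T + t. The sketch below checks this numerically for a toy centroid-based layer; it is an illustrative stand-in under stated assumptions, not the paper's attention mechanism, and the names toy_equivariant_layer and alpha are hypothetical.

        import numpy as np

        def toy_equivariant_layer(x, alpha=0.3):
            # Shift each point toward the cloud's centroid. This simple map is
            # SE(3)-equivariant; it is NOT the paper's attention layer.
            centroid = x.mean(axis=0, keepdims=True)
            return x + alpha * (centroid - x)

        rng = np.random.default_rng(0)
        x = rng.normal(size=(16, 3))        # toy point cloud: 16 points in R^3

        # Sample a random rotation R in SO(3) (via QR) and a translation t.
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        R = q * np.sign(np.linalg.det(q))   # force det(R) = +1
        t = rng.normal(size=(1, 3))

        # Equivariance check: transforming the input then applying the layer
        # must equal applying the layer then transforming the output.
        lhs = toy_equivariant_layer(x @ R.T + t)
        rhs = toy_equivariant_layer(x) @ R.T + t
        print(np.allclose(lhs, rhs))        # True

    The same commuting-diagram test, applied to a whole network, is what guarantees the "stable and predictable performance under nuisance transformations" claimed in the abstract.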

    Search related documents:
    Co-phrase search for related documents
    • abstract group and machine learning: 1
    • action call and machine learning: 1, 2
    • adam optimizer and local neural network: 1
    • adam optimizer and machine learning: 1, 2
    • local neural network and machine learning: 1, 2, 3
    • low dimensional embedding and machine learning: 1
    • low number and machine learning: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13