Vector Neurons: A General Framework for SO(3)-Equivariant Networks

Congyue Deng1 Or Litany2 Yueqi Duan1 Adrien Poulenard1 Andrea Tagliasacchi3,4 Leonidas Guibas1

1Stanford University 2NVIDIA 3Google Research 4University of Toronto

[Paper] [Code] [Video]

For the neural implicit reconstruction code, please also see here.

By lifting latent representations from vectors of scalar entries to vectors of 3D points (i.e., matrices), we facilitate the creation of a simple rotation-equivariant toolbox allowing the implementation of fully equivariant pointcloud networks.

Linear layer - Typical neural networks today are built with "scalar" neurons, where the output of the non-linearities in a given layer is an ordered list of scalars. We extend deep networks to allow for "vector" neurons, where the output of the non-linearity is an ordered list of 3D vectors.
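The channel-mixing idea behind the vector-neuron linear layer can be sketched in a few lines of numpy (a stand-in for the actual PyTorch implementation; names here are illustrative). Because the weights act only on the channel axis while rotations act on the 3D axis, the two commute, which is exactly SO(3)-equivariance:

```python
import numpy as np

def vn_linear(V, W):
    """Vector-neuron linear layer: mix channels only.

    V: (C, 3) feature, one 3D vector per neuron.
    W: (C_out, C) learnable weights.
    Since f(V) = W @ V acts on channels and a rotation R acts on the
    3D axis, f(V @ R) = W @ V @ R = f(V) @ R.
    """
    return W @ V

def random_rotation(rng):
    """Sample a random 3x3 rotation via QR decomposition."""
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
    Q = Q * np.sign(np.diag(R))  # fix column signs
    if np.linalg.det(Q) < 0:     # ensure a proper rotation (det = +1)
        Q[:, 0] *= -1
    return Q

rng = np.random.default_rng(0)
V = rng.standard_normal((8, 3))   # 8 vector neurons
W = rng.standard_normal((16, 8))  # channel-mixing weights
R = random_rotation(rng)

# Equivariance check: rotating the input rotates the output identically.
assert np.allclose(vn_linear(V @ R, W), vn_linear(V, W) @ R)
```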


Invariance and equivariance to the rotation group have been widely discussed in 3D deep learning for pointclouds. We introduce a general framework, built on top of what we call Vector Neuron representations, for creating SO(3)-equivariant neural networks for pointcloud processing.
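Equivariant non-linearities are the less obvious part of the toolbox. The paper's VN-ReLU predicts, per output neuron, a feature q and a learned half-space direction k, and truncates q against that half-space. A minimal numpy sketch (function and variable names are ours, not the released code's):

```python
import numpy as np

def vn_relu(V, W, U, eps=1e-8):
    """Sketch of the vector-neuron ReLU.

    For each output neuron, predict a feature q = W @ V and a direction
    k = U @ V. Keep q unchanged if <q, k> >= 0; otherwise remove its
    component along k. Both q and k rotate with the input and the inner
    product is rotation-invariant, so the rule is SO(3)-equivariant.
    """
    Q = W @ V                                            # (C_out, 3) features
    K = U @ V                                            # (C_out, 3) directions
    K_hat = K / (np.linalg.norm(K, axis=1, keepdims=True) + eps)
    dot = np.sum(Q * K_hat, axis=1, keepdims=True)       # <q, k_hat> per neuron
    return np.where(dot >= 0, Q, Q - dot * K_hat)

rng = np.random.default_rng(1)
V = rng.standard_normal((8, 3))
W = rng.standard_normal((16, 8))
U = rng.standard_normal((16, 8))

# A rotation about the z-axis for the equivariance check.
theta = 0.7
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

assert np.allclose(vn_relu(V @ R, W, U), vn_relu(V, W, U) @ R)
```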

  • We propose a new versatile framework for constructing SO(3)-equivariant pointcloud networks.

  • Our building blocks are lightweight in terms of the number of learnable parameters and can be easily incorporated into existing network architectures.

  • We support a variety of learning tasks; in particular, we are the first to demonstrate a 3D equivariant network for 3D reconstruction.

  • When evaluated on classification and segmentation, our VN versions of popular non-equivariant architectures achieve state-of-the-art performance.


  • Three core tasks in pointcloud processing: classification, segmentation (invariance), and reconstruction (equivariance).

  • Two backbones: PointNet (no convolutions), DGCNN (convolutions on arbitrary graphs).

  • Different train/test settings: I (no rotation), z (z-axis rotations), SO(3) (full SO(3) rotations).
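The three rotation settings above can be made concrete by how a test-time rotation is sampled before it is applied to a point cloud (a hypothetical helper, not taken from the released code):

```python
import numpy as np

def sample_rotation(setting, rng):
    """Sample a rotation for the 'I' / 'z' / 'SO(3)' evaluation settings."""
    if setting == "I":            # no rotation
        return np.eye(3)
    if setting == "z":            # random rotation about the z-axis only
        t = rng.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    if setting == "SO(3)":        # arbitrary rotation via QR decomposition
        Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
        Q = Q * np.sign(np.diag(R))
        if np.linalg.det(Q) < 0:  # force a proper rotation (det = +1)
            Q[:, 0] *= -1
        return Q
    raise ValueError(f"unknown setting: {setting}")

# Applying a sampled rotation to an (N, 3) point cloud: points @ R.T
rng = np.random.default_rng(0)
for setting in ("I", "z", "SO(3)"):
    R = sample_rotation(setting, rng)
    assert np.allclose(R @ R.T, np.eye(3))          # orthogonal
    assert np.isclose(np.linalg.det(R), 1.0)        # proper rotation
```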

Classification (%)

Test classification accuracy on the ModelNet40 dataset. Compared to their standard neural network counterparts, the VN networks show excellent stability over different rotations, even compared with augmentation-induced equivariance (the SO(3)/SO(3) case).
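For invariant tasks like classification, equivariant features must be turned into rotation-invariant ones at readout. The principle can be illustrated with a Gram matrix (the paper learns an invariant layer; this fixed variant just shows why inner products of vector features are invariant):

```python
import numpy as np

def invariant_features(V):
    """Rotation-invariant readout from vector-neuron features V of shape (C, 3).

    The Gram matrix is invariant to any rotation R because R is orthogonal:
    (V R)(V R)^T = V R R^T V^T = V V^T.
    """
    return V @ V.T

# Check with a z-axis rotation applied to some features.
theta = 0.7
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
V = np.arange(12.0).reshape(4, 3)

assert np.allclose(invariant_features(V @ R), invariant_features(V))
```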

Part Segmentation (mIoU)

ShapeNet part segmentation. The results are reported in overall average category mean IoU over 16 categories.

Neural Implicit Reconstruction

Reconstruction results on ShapeNet with OccNet (light pink) and VN-OccNet (yellow). Even in the SO(3)/SO(3) case when data augmentation is adopted at train time, the standard OccNet still shows its limitation by generating blurry shapes (top left), averaged shapes (top right, the box-like output consists of sofa features averaged from different poses), or shapes with incorrect priors (bottom right, a shape in the car class is falsely identified as a chair).



@article{deng2021vector,
  title={Vector Neurons: A General Framework for SO(3)-Equivariant Networks},
  author={Deng, Congyue and Litany, Or and Duan, Yueqi and Poulenard, Adrien and Tagliasacchi, Andrea and Guibas, Leonidas},
  journal={arXiv preprint arXiv:2104.12229},
  year={2021}
}




We gratefully acknowledge the support of a Vannevar Bush Faculty Fellowship, ARL grant W911NF-21-2-0104, as well as gifts from the Adobe, Amazon AWS, and Autodesk corporations.