LieRE: Lie Rotational Positional Encodings

Abstract

Transformer architectures depend on explicit position encodings to capture token positional information. Rotary Position Encoding (RoPE) has emerged as a popular choice in language models because it efficiently encodes relative position information through key-query rotations. However, RoPE's reliance on 2D rotations limits its applicability beyond language modeling to domains where higher-dimensional positional relationships are crucial. This paper introduces LieRE (Lie Rotational Positional Encodings), which generalizes RoPE to arbitrary dimensions using Lie group theory. LieRE enables efficient position encoding in high-dimensional spaces by parameterizing rotations via Lie algebra generators. We demonstrate LieRE's effectiveness across multiple domains, achieving a 1.5% improvement on 2D computer vision tasks and a 1% improvement on 3D molecular property prediction compared to standard positional encodings. Our approach maintains computational efficiency while providing a principled framework for position encoding in diverse geometric spaces.
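The central construction named in the abstract, mapping an n-dimensional position to a rotation by exponentiating a weighted sum of skew-symmetric Lie algebra generators, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function names are hypothetical, the generators are random rather than learned, and the truncated-series matrix exponential stands in for a library routine such as `scipy.linalg.expm`. It is not the paper's implementation.

```python
import numpy as np

def expm(A, terms=30):
    # Truncated Taylor series for the matrix exponential; adequate for
    # small skew-symmetric matrices (scipy.linalg.expm would also work).
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

def liere_rotation(position, generators):
    # One skew-symmetric generator per coordinate of the position; their
    # weighted sum lies in the Lie algebra so(d), and its exponential is
    # a rotation matrix in SO(d).
    A = sum(p * G for p, G in zip(position, generators))
    return expm(A)

# Demo: random skew-symmetric generators for a 2-D (e.g. image) position.
rng = np.random.default_rng(0)
d = 4  # head dimension in an attention layer, for illustration
generators = []
for _ in range(2):
    M = rng.normal(size=(d, d))
    generators.append(M - M.T)  # skew-symmetric: A^T = -A

R = liere_rotation((0.3, -0.5), generators)
print(np.allclose(R @ R.T, np.eye(d), atol=1e-6))  # orthogonality check
```

Because the exponential of a skew-symmetric matrix is orthogonal with determinant one, the resulting `R` is a valid rotation for any position, which is what lets the key-query inner product absorb relative positional information in the same spirit as RoPE's 2D rotations.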