Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch
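A minimal sketch of the rotary trick from the RoFormer paper, using the interleaved channel-pair convention; this is an illustrative standalone snippet, not this library's actual API:

```python
import torch

def rope_frequencies(dim, max_len, base=10000.0):
    # One angle per channel pair, with the RoFormer frequency schedule.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return torch.outer(torch.arange(max_len).float(), inv_freq)  # (max_len, dim/2)

def apply_rope(x, freqs):
    # x: (..., seq_len, dim). Rotate each (even, odd) channel pair by the
    # angle for its position; applied to both queries and keys, the dot
    # product then depends only on relative offsets.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = freqs.cos(), freqs.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

q = torch.randn(2, 8, 16, 64)                  # (batch, heads, seq, head_dim)
q_rot = apply_rope(q, rope_frequencies(64, 16))
```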
PET-NeuS: Positional Encoding Tri-Planes for Neural Surfaces (CVPR 2023)
Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
[CVPR 2021] Adversarial Generation of Continuous Images
[CVPR 2023] Official PyTorch implementation of "Dynamic Focus-aware Positional Queries for Semantic Segmentation".
Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding
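The recipe from that paper is short enough to sketch; the names and dimensions below are illustrative assumptions, not the repo's API. Coordinates are projected by a learnable linear map, passed through sin/cos, and refined by a small MLP:

```python
import torch
import torch.nn as nn

class LearnableFourierPE(nn.Module):
    def __init__(self, pos_dim=2, f_dim=64, h_dim=32, out_dim=128):
        super().__init__()
        self.Wr = nn.Linear(pos_dim, f_dim // 2, bias=False)  # learnable frequencies
        self.mlp = nn.Sequential(
            nn.Linear(f_dim, h_dim), nn.GELU(), nn.Linear(h_dim, out_dim)
        )
        self.scale = f_dim ** -0.5

    def forward(self, pos):                     # pos: (..., pos_dim)
        proj = self.Wr(pos)                     # (..., f_dim / 2)
        feats = torch.cat([proj.cos(), proj.sin()], dim=-1) * self.scale
        return self.mlp(feats)                  # (..., out_dim)

pe = LearnableFourierPE()
emb = pe(torch.rand(4, 100, 2))                 # e.g. normalized 2-D coordinates
```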
Continuous Augmented Positional Embeddings (CAPE) implementation for PyTorch
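As a rough sketch of CAPE's idea for the 1-D case (the augmentation magnitudes here are illustrative assumptions, not the repo's defaults): positions are jittered with a global shift, small per-position local shifts, and a global scale before being fed to the usual sinusoidal formula, so the model never sees positions as fixed integers:

```python
import math
import torch

def cape_positions(batch, seq_len, max_global_shift=5.0,
                   max_local_shift=0.5, max_global_scale=1.03):
    # Train-time augmentation of raw positions; plain positions are
    # used at inference.
    pos = torch.arange(seq_len).float().expand(batch, -1)
    pos = pos + (2 * torch.rand(batch, 1) - 1) * max_global_shift
    pos = pos + (2 * torch.rand(batch, seq_len) - 1) * max_local_shift
    log_scale = (2 * torch.rand(batch, 1) - 1) * math.log(max_global_scale)
    return pos * torch.exp(log_scale)           # continuous, non-integer positions

pos = cape_positions(batch=4, seq_len=100)      # (4, 100)
```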
"Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding" Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang.
Multiresolution Graph Transformers and Wavelet Positional Encoding for Learning Long-Range and Hierarchical Structures
Context-aware Biases for Length Extrapolation
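For context on this family of methods: the fixed baseline they generalize is ALiBi, which adds a per-head linear distance penalty to the attention logits. A minimal ALiBi sketch (explicitly not this repo's context-aware variant):

```python
import torch

def alibi_bias(num_heads, seq_len):
    # Each head gets a geometric slope; the bias is slope * (j - i),
    # i.e. increasingly negative for more distant past tokens.
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads)
                           for h in range(num_heads)])
    pos = torch.arange(seq_len)
    dist = pos[None, :] - pos[:, None]          # j - i, <= 0 below the diagonal
    return slopes[:, None, None] * dist         # (heads, seq, seq)

bias = alibi_bias(num_heads=8, seq_len=128)
# logits = q @ k.transpose(-2, -1) / head_dim**0.5 + bias, then causal mask
```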
PyTorch implementation of "Attention Is All You Need" by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
🧮 Algebraic Positional Encodings.
A complete implementation of the original Transformer.
Unofficial PyTorch implementation of the paper "Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding", NeurIPS 2021.
Implementation of Rotary Embeddings, from the RoFormer paper, in TensorFlow
[ICML'25] "Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding" by Jiajun Zhu, Peihao Wang, Ruisi Cai, Jason D. Lee, Pan Li, Zhangyang Wang
A from-scratch implementation of the Transformer as presented in the paper "Attention Is All You Need".
Benchmarking Positional Encodings for GNNs and Graph Transformers
Official code for NeurIPS 2023 paper "Laplacian Canonization: A Minimalist Approach to Sign and Basis Invariant Spectral Embedding".
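Several of the graph entries above build on Laplacian eigenvector positional encodings, where each node is described by its entries in the low-frequency eigenvectors of the normalized graph Laplacian. A minimal dense sketch (the arbitrary sign noted in the comment is exactly the ambiguity that canonization and sign-invariant methods address):

```python
import torch

def laplacian_pe(adj, k):
    # k smallest nontrivial eigenvectors of L = I - D^(-1/2) A D^(-1/2);
    # each column's sign is arbitrary, so raw use needs sign handling.
    deg = adj.sum(dim=1).clamp(min=1)
    d_inv_sqrt = deg.pow(-0.5)
    lap = torch.eye(adj.size(0)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigval, eigvec = torch.linalg.eigh(lap)     # ascending eigenvalues
    return eigvec[:, 1:k + 1]                   # drop the trivial lowest eigenvector

adj = torch.tensor([[0., 1., 1., 0.],
                    [1., 0., 1., 0.],
                    [1., 1., 0., 1.],
                    [0., 0., 1., 0.]])
pe = laplacian_pe(adj, k=2)                     # (num_nodes, 2)
```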
Application for training an autoencoder whose encoder can then serve as a feature extractor for dimensionality and noise reduction, and whose decoder can be used for synthetic data generation. Supports dynamic plugin integration, allowing users to extend it with custom encoder and decoder models.