Welcome to torch-ttt!

About torch-ttt

Welcome to torch-ttt, a comprehensive and easy-to-use Python library for applying test-time training methods to your PyTorch models. Whether you're tackling small projects or managing large datasets, torch-ttt provides a diverse set of algorithms to fit different needs. Our intuitive API integrates seamlessly into existing training/inference pipelines, so you can adopt test-time training without disrupting your workflow.

torch-ttt includes implementations of various TTT methods, from the classical TTT [SWZ+20] to more recent methods such as DeYO [LJL+25] and ActMAD [MSL+23]. The full list of implemented methods is given in the table below.

Method       Engine Class      Year   Reference
TTT          TTTEngine         2020   [SWZ+20]
TTT++        TTTPPEngine       2021   [LKvD+21]
TENT         TentEngine        2021   [WSL+21]
EATA         EataEngine        2022   [NWZ+22]
MEMO         MemoEngine        2022   [ZLF22]
Masked TTT   MaskedTTTEngine   2022   [GSCE22]
ActMAD       ActMADEngine      2023   [MSL+23]
DeYO         DeYOEngine        2024   [LJL+25]
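
Each method in the table is exposed through its own Engine class that wraps a standard PyTorch module, so moving from one method to another largely means swapping the engine. The snippet below is only a sketch of that idea: the import path for TentEngine is an assumption based on the naming pattern of ttt_engine in the example further down, and constructor arguments differ between methods, so check the API reference for the engine you pick.

# Sketch: picking a method from the table means picking its Engine class.
from torch_ttt.engine.ttt_engine import TTTEngine
# from torch_ttt.engine.tent_engine import TentEngine   # assumed module path; verify in the API reference

model = Net()                               # your existing PyTorch model (placeholder, as in the example below)
engine = TTTEngine(model, "layer_name")     # TTT needs the name of a feature layer
# engine = TentEngine(model, ...)           # other engines take their own, method-specific arguments

Training and inference then proceed exactly as in the minimal example below.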

torch-ttt stands out for:

  • Seamless Integration: A user-friendly API designed to effortlessly work with existing PyTorch models and pipelines.

  • Versatile Algorithms: A comprehensive collection of test-time training methods tailored to a wide range of use cases.

  • Efficiency & Scalability: Optimized implementations for robust performance on diverse hardware setups and datasets.

  • Flexibility in Application: Support for various domains and use cases, adapting to different requirements with minimal effort.

Minimal Test-Time Training Example:

# Example: Using TTT
import torch.optim as optim

from torch_ttt.engine.ttt_engine import TTTEngine

network = Net()  # your existing PyTorch model
engine = TTTEngine(network, "layer_name")  # Wrap your model with the Engine
optimizer = optim.Adam(engine.parameters(), lr=learning_rate)

engine.train()
...  # Train your model as usual

engine.eval()
...  # Test your model as usual
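
For concreteness, here is one way the two placeholder phases above might look with data loaders. This is a sketch rather than the library's canonical example: it assumes the engine's forward pass returns both the task outputs and an auxiliary test-time training loss, and the names criterion, alpha, train_loader, and test_loader are illustrative; consult the torch-ttt API reference for the exact return signature of each engine.

import torch.nn as nn

criterion = nn.CrossEntropyLoss()   # illustrative task loss
alpha = 0.5                         # illustrative weight for the auxiliary TTT loss

engine.train()
for inputs, labels in train_loader:          # train_loader / test_loader: your usual DataLoaders
    optimizer.zero_grad()
    outputs, ttt_loss = engine(inputs)       # assumed: engine returns (task outputs, auxiliary TTT loss)
    loss = criterion(outputs, labels) + alpha * ttt_loss
    loss.backward()
    optimizer.step()

engine.eval()
for inputs, labels in test_loader:
    outputs, _ = engine(inputs)              # the engine performs its test-time adaptation internally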

References

[GSCE22]

Yossi Gandelsman, Yu Sun, Xinlei Chen, and Alexei Efros. Test-time training with masked autoencoders. In Advances in Neural Information Processing Systems. 2022.

[LJL+25]

Jonghyun Lee, Dahuin Jung, Saehyung Lee, Junsung Park, Juhyeon Shin, Uiwon Hwang, and Sungroh Yoon. Entropy is not enough for test-time adaptation: from the perspective of disentangled factors. In International Conference on Learning Representations. 2025.

[LKvD+21]

Yuejiang Liu, Parth Kothari, Bastien Germain van Delft, Baptiste Bellot-Gurlet, Taylor Mordan, and Alexandre Alahi. TTT++: when does self-supervised test-time training fail or thrive? In Advances in Neural Information Processing Systems. 2021.

[MSL+23]

M. Jehanzeb Mirza, Pol Jane Soneira, Wei Lin, Mateusz Kozinski, Horst Possegger, and Horst Bischof. ActMAD: activation matching to align distributions for test-time training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.

[NWZ+22]

Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, and Mingkui Tan. Efficient test-time model adaptation without forgetting. In International Conference on Machine Learning. 2022.

[SWZ+20]

Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Moritz Hardt, and Alexei A. Efros. Test-time training with self-supervision for generalization under distribution shifts. In International Conference on Machine Learning. 2020.

[WSL+21]

Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: fully test-time adaptation by entropy minimization. In International Conference on Learning Representations. 2021.

[ZLF22]

Marvin Zhang, Sergey Levine, and Chelsea Finn. MEMO: test time robustness via adaptation and augmentation. In Advances in Neural Information Processing Systems. 2022.