TentEngine#

class torch_ttt.engine.tent_engine.TentEngine(model: Module, optimization_parameters: Dict[str, Any] = {})[source]#

TENT: Fully test-time adaptation by entropy minimization.

TENT adapts models at inference by minimizing prediction entropy, encouraging confident outputs on unlabeled data. It updates only BatchNorm affine parameters and requires no labels or training supervision.
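The two ingredients described above — restricting adaptation to BatchNorm affine parameters and minimizing prediction entropy — can be sketched in a few lines. This is a minimal illustration, not the library's internal implementation; `collect_bn_params` and `softmax_entropy` are hypothetical helper names:

```python
import torch
import torch.nn as nn

def collect_bn_params(model: nn.Module):
    """Gather only the affine (weight/bias) parameters of BatchNorm layers."""
    params = []
    for module in model.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            if module.weight is not None:
                params.append(module.weight)
            if module.bias is not None:
                params.append(module.bias)
    return params

def softmax_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of the softmax predictions (the TENT objective)."""
    log_probs = logits.log_softmax(dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1).mean()
```

An optimizer built over `collect_bn_params(model)` touches only the BatchNorm scale and shift terms, so the rest of the network stays frozen during adaptation.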

Parameters:
  • model (torch.nn.Module) – Model to be adapted at test-time.

  • optimization_parameters (dict) – Optimizer configuration for adaptation (e.g. learning rate).

Example:

import torch

from torch_ttt.engine.tent_engine import TentEngine

model = MyModel()
engine = TentEngine(model, {"lr": 1e-3})
optimizer = torch.optim.Adam(engine.parameters(), lr=1e-3)

# Training
engine.train()
for inputs, labels in train_loader:
    optimizer.zero_grad()
    outputs, loss_ttt = engine(inputs)
    loss = criterion(outputs, labels) + alpha * loss_ttt
    loss.backward()
    optimizer.step()

# Inference (labels are not needed; only the entropy loss is used)
engine.eval()
for inputs, _ in test_loader:
    outputs, loss_ttt = engine(inputs)

Reference:

“Tent: Fully Test-Time Adaptation by Entropy Minimization”, Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, Trevor Darrell

Paper link: PDF

ttt_forward(inputs) → Tuple[Tensor, Tensor][source]#

Forward pass of the model that also computes the entropy of its predictions.

Parameters:

inputs (torch.Tensor) – Input tensor.

Returns:

The current model prediction and the entropy loss value.
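Because `ttt_forward` returns both the prediction and a differentiable entropy loss, a single test-time adaptation step reduces to a standard backward/step cycle on that loss. A hedged sketch, assuming only that `engine` is a callable with the `(prediction, entropy loss)` contract described above:

```python
import torch

def tta_step(engine, optimizer, inputs):
    """One test-time adaptation step: forward, entropy loss, parameter update.

    `engine` is any callable returning (outputs, loss_ttt); with TENT the
    optimizer would hold only BatchNorm affine parameters.
    """
    outputs, loss_ttt = engine(inputs)
    optimizer.zero_grad()
    loss_ttt.backward()
    optimizer.step()
    return outputs.detach()
```

Calling `tta_step` once per incoming test batch reproduces the fully unsupervised adaptation loop: no labels are consulted, and the model grows more confident batch by batch.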