ViT + MAE for UDA on Sentinel-1/2 (SAR/optical) land-cover classification with CORAL & DANN. PyTorch.
Updated Aug 28, 2025 · Python
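For context on the two domain-adaptation objectives named in the description: CORAL aligns the second-order statistics (feature covariances) of source and target batches, while DANN trains a domain discriminator through a gradient reversal layer. The sketch below is a generic PyTorch rendering of both building blocks, assuming flat feature vectors from a shared encoder; it is illustrative and not taken from this repository's code.

```python
import torch


def coral_loss(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """CORAL: squared Frobenius distance between source/target feature covariances.

    source_feats, target_feats: (batch, d) features from the shared encoder.
    """
    d = source_feats.size(1)

    def cov(x: torch.Tensor) -> torch.Tensor:
        x = x - x.mean(dim=0, keepdim=True)
        return (x.t() @ x) / (x.size(0) - 1)

    return ((cov(source_feats) - cov(target_feats)) ** 2).sum() / (4 * d * d)


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used by DANN: identity forward, negated gradient backward."""

    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x is reversed and scaled; lambd gets no gradient.
        return -ctx.lambd * grad_output, None


# Usage sketch (placeholder modules `encoder`, `classifier`, `domain_head`):
#   f_src, f_tgt = encoder(x_src), encoder(x_tgt)
#   loss = ce(classifier(f_src), y_src) \
#          + lam_coral * coral_loss(f_src, f_tgt) \
#          + ce(domain_head(GradReverse.apply(torch.cat([f_src, f_tgt]), 1.0)), domain_labels)
```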
Masked autoencoders — implementation and evaluation for vision tasks using Vision Transformers (ViTs)
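The core step in MAE pretraining is masking a large fraction (typically 75%) of the ViT patch tokens and encoding only the visible ones. Below is a minimal per-sample random-masking sketch in PyTorch, following the shuffle/gather approach of the reference MAE implementation; tensor names are illustrative and not tied to this repository.

```python
import torch


def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Per-sample random masking of ViT patch tokens, MAE-style.

    tokens: (batch, num_patches, dim) patch embeddings (CLS token excluded).
    Returns the visible tokens, a binary mask (1 = masked) in the original
    patch order, and the indices needed to restore that order for the decoder.
    """
    B, N, D = tokens.shape
    len_keep = int(N * (1 - mask_ratio))

    noise = torch.rand(B, N, device=tokens.device)   # uniform noise per patch
    ids_shuffle = torch.argsort(noise, dim=1)        # lowest noise -> kept
    ids_restore = torch.argsort(ids_shuffle, dim=1)  # inverse permutation

    ids_keep = ids_shuffle[:, :len_keep]
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).repeat(1, 1, D))

    mask = torch.ones(B, N, device=tokens.device)
    mask[:, :len_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)        # back to original patch order
    return visible, mask, ids_restore
```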
Uses PySlowFast, the official video understanding framework from Facebook AI Research (FAIR), to train, evaluate, and reproduce state-of-the-art video models on the UCF24 action detection dataset. It supports customizable training pipelines, model fine-tuning, and evaluation for video-based action recognition and spatio-temporal localization tasks.