LeJEPA: Provable and Scalable Self-Supervised Learning without Heuristics, by Randall Balestriero and 1 other authors
Abstract: Learning to manipulate representations of the world and its dynamics is central to AI. The Joint-Embedding Predictive Architecture (JEPA) offers a promising blueprint, but a lack of practical guidance and theory has led to ad-hoc R&D. We present a comprehensive theory of JEPA and instantiate it in LeJEPA, a lean, scalable, and theoretically grounded training objective. First, we identify the isotropic Gaussian as the optimal distribution that JEPA's embeddings should follow to minimize downstream prediction risk. Second, we introduce a new objective – {\bf Sketched Isotropic Gaussian Regularization} (SIGReg) – to constrain the embeddings to reach that ideal distribution. Combining the JEPA predictive loss with SIGReg yields LeJEPA, with several theoretical and practical benefits: (i) a single trade-off hyperparameter, (ii) linear time and memory complexity, (iii) consistency across hyperparameters, architectures (ResNets, ViTs, ConvNets), and domains, (iv) heuristics-free, e.g., no stop-gradient, no teacher-student, no hyperparameter schedulers, and (v) a distributed-training-friendly implementation requiring only about 50 lines of code. Our empirical validation covers 10+ datasets and 60+ architectures, spanning different scales and domains. For example, using ImageNet-1k for pretraining and linear evaluation with a frozen backbone, LeJEPA reaches 79\% with ViT-H/14. We hope that the simplicity and theory-friendly ecosystem offered by LeJEPA will re-establish self-supervised pre-training as a core pillar of AI research (\href{this https URL}{GitHub repo}).
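To make the structure of the objective concrete, here is a minimal sketch of how a SIGReg-style regularizer could be combined with a JEPA predictive loss under a single trade-off hyperparameter. This is not the paper's implementation: the function names (sigreg_loss, lejepa_style_loss), the number of projection directions, and the simple moment-matching surrogate for "isotropic Gaussian-ness" of the sketched 1D marginals are all illustrative assumptions; the paper's actual SIGReg statistic may differ.

```python
import torch
import torch.nn.functional as F

def sigreg_loss(z, num_directions=256):
    """Hypothetical sketch of a Sketched Isotropic Gaussian Regularizer:
    project embeddings onto random directions and push each 1D marginal
    toward N(0, 1). Cost is linear in batch size and embedding dimension
    for a fixed number of directions."""
    # z: (batch, dim) embeddings
    dim = z.shape[1]
    # random unit directions used to "sketch" the embedding distribution
    dirs = torch.randn(dim, num_directions, device=z.device, dtype=z.dtype)
    dirs = dirs / dirs.norm(dim=0, keepdim=True)
    proj = z @ dirs  # (batch, num_directions) sketched 1D marginals
    # surrogate criterion (assumption): match first two moments to N(0, 1)
    mean_term = proj.mean(dim=0).pow(2).mean()
    var_term = (proj.var(dim=0) - 1.0).pow(2).mean()
    return mean_term + var_term

def lejepa_style_loss(z_pred, z_target, lam=1.0):
    """Illustrative total objective: JEPA predictive loss plus SIGReg,
    balanced by a single trade-off hyperparameter `lam`."""
    pred_loss = F.mse_loss(z_pred, z_target)
    return pred_loss + lam * sigreg_loss(z_pred)
```

Under these assumptions, no stop-gradient, teacher-student pair, or hyperparameter schedule appears anywhere: the only knob is the scalar `lam` weighting the regularizer against the predictive term.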
Submission History
From: Randall Balestriero [view email]
[v1]
Tue, 11 Nov 2025 18:21:55 UTC (12,072 KB)
[v2]
Wed, 12 Nov 2025 14:26:39 UTC (12,072 KB)
[v3]
Fri, 14 Nov 2025 08:38:32 UTC (12,072 KB)
