Meta’s chief AI scientist Yann LeCun has spent years working on deep learning systems that can learn world models with little or no human help. Now, Meta has released the first version of I-JEPA, a machine learning (ML) model that learns abstract representations of the world through self-supervised learning on images.

Initial tests show that I-JEPA performs strongly on many computer vision tasks and is much more efficient than other state-of-the-art models, requiring only a tenth of the computing resources for training. Meta has open-sourced the training code and model and will present I-JEPA at the Conference on Computer Vision and Pattern Recognition (CVPR) next week.

Learning Like Humans

The idea of self-supervised learning is inspired by the way humans and animals learn. We obtain much of our knowledge simply by observing the world. Likewise, AI systems should be able to learn through raw observations without the need for humans to label their training data.
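
To make that concrete, self-supervision derives the training signal from the raw data itself rather than from human annotations. A common recipe, shown here as a toy PyTorch sketch (not Meta’s code), is to hide part of an input and train a model to fill it back in:

```python
import torch
import torch.nn as nn

# Toy self-supervised objective: the "label" is carved out of the raw data
# itself, so no human annotation is needed.
model = nn.Linear(16, 16)    # predicts the masked half from the visible half
data = torch.randn(64, 32)   # a batch of raw, unlabeled 32-dim samples

visible, hidden = data[:, :16], data[:, 16:]   # mask the second half
loss = nn.functional.mse_loss(model(visible), hidden)
loss.backward()              # training signal obtained without any labels
```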

Self-supervised learning has made great inroads in several fields of AI, including generative models and large language models (LLMs). In 2022, LeCun proposed the “joint embedding predictive architecture” (JEPA), a self-supervised model designed to learn world models and important background knowledge such as common sense. JEPA differs from other self-supervised models in important ways.
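
One key difference is that JEPA makes its predictions in an abstract representation space rather than in raw pixel space. A minimal, hypothetical PyTorch sketch of that idea follows; the encoders, dimensions, and loss here are illustrative assumptions, not Meta’s actual architecture:

```python
import torch
import torch.nn as nn

# Toy JEPA-style objective: predict the *embedding* of a hidden target region
# from the visible context, so the loss lives in representation space rather
# than pixel space. All components here are illustrative assumptions.
dim = 128
context_encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, dim))
target_encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, dim))
predictor = nn.Linear(dim, dim)

context_patch = torch.randn(8, 1, 32, 32)  # visible crop of each image
target_patch = torch.randn(8, 1, 32, 32)   # masked crop of the same image

z_context = context_encoder(context_patch)
with torch.no_grad():                       # target embeddings serve as fixed targets
    z_target = target_encoder(target_patch)

loss = nn.functional.mse_loss(predictor(z_context), z_target)
loss.backward()
```

Because the prediction target is an embedding rather than pixels, the model can discard unpredictable low-level detail and focus on higher-level structure in the scene.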