Video models are zero-shot learners and reasoners

Speaker

Thaddäus Wiedemer is a 4th-year PhD student in the International Max Planck Research School for Intelligent Systems in Germany, currently interning at Google DeepMind Toronto. Most of his PhD has focused on benchmarking the robustness and generalization capabilities of large vision-language, language, and video models. His current research explores multi-modal reasoning with video models and how post-training can enable exploration beyond the pretraining data.

Thaddäus’s homepage: https://thaddaeuswiedemer.github.io/

Abstract

The remarkable zero-shot capabilities of Large Language Models (LLMs) have propelled natural language processing from task-specific models to unified, generalist foundation models. This transformation emerged from simple primitives: large, generative models trained on web-scale data. Curiously, the same primitives apply to today’s generative video models. Could video models be on a trajectory towards general-purpose vision understanding, much like LLMs developed general-purpose language understanding?

In this talk, I will present our latest work, which demonstrates how Veo 3 can zero-shot solve a broad variety of tasks it wasn’t explicitly trained for: segmenting objects, detecting edges, editing images, understanding physical properties, recognizing object affordances, simulating tool use, and much more. These abilities to perceive, model, and manipulate the visual world enable early forms of visual reasoning, such as maze and symmetry solving. Veo 3’s emergent zero-shot capabilities indicate that video models are on a path to becoming unified, generalist vision foundation models.

Video