Prompting-based Continual Learning

Speaker

Bio: Zifeng Wang is a Ph.D. student at Northeastern University. He received his B.S. degree in Electronic Engineering from Tsinghua University. His research interests include continual (lifelong) learning, data-efficient and parameter-efficient learning, adversarial robustness, and real-world machine learning applications.

Homepage: https://kingspencer.github.io/

Abstract

The mainstream paradigm in continual learning has been to adapt the model parameters to non-stationary data distributions, where catastrophic forgetting is the central challenge. In this talk, we present a new continual learning paradigm – Prompting-based Continual Learning – which learns a tiny set of parameters, called prompts, to properly instruct a pre-trained model to learn tasks arriving sequentially. In our CVPR 2022 work, Learning to Prompt (L2P), we design a key-value paired pool of prompts from which instructions are dynamically selected to guide the backbone on incoming tasks. In our ECCV 2022 work, DualPrompt, we further improve the prompt design by decoupling prompts into complementary “General” and “Expert” prompts, which learn task-invariant and task-specific instructions, respectively. Both methods achieve new state-of-the-art performance on multiple challenging benchmarks, even without buffering past examples. We hope that Prompting-based Continual Learning can provide a different perspective for solving frontier challenges in continual learning.
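To make the key-value selection mechanism concrete, below is a minimal PyTorch sketch of an L2P-style prompt pool. It is an illustration under assumptions, not the papers' actual implementation: the class name PromptPool and the sizes (pool_size, prompt_len, top_n) are hypothetical, and the query is assumed to be a feature of the input from the frozen pre-trained backbone (e.g., its [CLS] embedding). Each prompt is paired with a learnable key; at inference, the query selects the best-matching prompts, which are prepended to the input tokens.

```python
import torch
import torch.nn.functional as F

class PromptPool(torch.nn.Module):
    """Minimal sketch of an L2P-style key-value prompt pool (illustrative only)."""

    def __init__(self, pool_size=10, prompt_len=5, embed_dim=768, top_n=5):
        super().__init__()
        # One learnable key per prompt, matched against the input query.
        self.keys = torch.nn.Parameter(torch.randn(pool_size, embed_dim))
        # Each prompt is a short sequence of learnable token embeddings.
        self.prompts = torch.nn.Parameter(torch.randn(pool_size, prompt_len, embed_dim))
        self.top_n = top_n

    def forward(self, query, token_embeds):
        # query: (B, D) feature of the input from the frozen backbone.
        # token_embeds: (B, L, D) input token/patch embeddings.
        sim = F.cosine_similarity(
            query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1
        )                                                   # (B, pool_size)
        top_idx = sim.topk(self.top_n, dim=-1).indices      # (B, top_n)
        selected = self.prompts[top_idx]                    # (B, top_n, prompt_len, D)
        selected = selected.flatten(1, 2)                   # (B, top_n * prompt_len, D)
        # Prepend the selected prompts to the input tokens.
        extended = torch.cat([selected, token_embeds], dim=1)
        # Surrogate matching loss pulls the selected keys toward the query.
        match_loss = (1.0 - sim.gather(1, top_idx)).mean()
        return extended, match_loss
```

During training, only the keys and prompts receive gradients while the pre-trained backbone stays frozen, which is what keeps the learned parameter set tiny and avoids overwriting the backbone as new tasks arrive.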

Video