Similarity-Based Adaptation for Task-Aware and Task-Free Continual Learning

Tameem Adel

Abstract

Continual learning (CL) is a paradigm that addresses learning from sequentially arriving tasks. The goal of this paper is to introduce a CL framework that can both learn from a global multi-task architecture and locally adapt this learning to the task at hand. Beyond this global knowledge, we conjecture that it is also beneficial to focus on the most relevant pieces of previously acquired knowledge. Using a prototypical network as a proxy, the proposed framework bases its adaptation on the similarity between the current data stream and the previously encountered data. We develop two algorithms, one for the standard task-aware CL setting and another for the more challenging task-free setting, where boundaries between tasks are unknown. We also derive a corresponding generalization upper bound on the error of an upcoming task. Experiments demonstrate that the introduced algorithms lead to improved performance on several CL benchmarks.
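To make the similarity-based adaptation concrete, here is a minimal sketch of the prototypical-network proxy the abstract alludes to: class prototypes are stored as mean embeddings per task, and an incoming data stream is scored against each prior task by its distance to the nearest stored prototype. The function names (`prototypes`, `task_similarity`) and the use of squared Euclidean distance are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def prototypes(embeddings, labels):
    # A prototype is the mean embedding of each class, as in
    # prototypical networks (an assumed, standard formulation).
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def task_similarity(batch_emb, task_protos):
    # Score each previously seen task by the (negated) squared Euclidean
    # distance from the current batch centroid to that task's nearest
    # prototype; a higher score means the stream is more similar to that task.
    centroid = batch_emb.mean(axis=0)
    return {
        task: -min(np.sum((centroid - p) ** 2) for p in protos.values())
        for task, protos in task_protos.items()
    }
```

Under this sketch, the task with the highest similarity score would be the one whose knowledge the framework emphasizes when adapting to the current stream; in the task-free setting, a drop in the best score could serve as a proxy for an unannounced task boundary.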
