ReFu: Recursive Fusion for Exemplar-Free 3D Class-Incremental Learning

1 The University of Edinburgh, 2 South China University of Technology
Overview of our ReFu framework. During incremental learning, data flows into the model phase by phase. Our proposed fusion backbone, pre-trained on the ShapeNet dataset and frozen during incremental learning, extracts and fuses features from point clouds and meshes. The fused features are then expanded via a random projection layer and fed into the Recursive Incremental Learning Mechanism (RILM). RILM recursively updates the regularized auto-correlation matrix and the classifier weights. Only the matrix and weights from the previous phase \( (n-1) \) are stored; no raw data is retained as exemplars.

Abstract

We introduce a novel Recursive Fusion model, dubbed ReFu, designed to integrate point clouds and meshes for exemplar-free 3D Class-Incremental Learning, where the model learns new 3D classes while retaining knowledge of previously learned ones. Unlike existing methods that either rely on storing historical data to mitigate forgetting or focus on a single data modality, ReFu eliminates the need for exemplar storage while leveraging the complementary strengths of both point clouds and meshes. To achieve this, we introduce a recursive method that continuously accumulates knowledge by updating a regularized auto-correlation matrix. Furthermore, we propose a fusion module featuring a Pointcloud-guided Mesh Attention Layer that learns correlations between the two modalities. This mechanism effectively integrates point cloud and mesh features, leading to more robust and stable continual learning. Experiments across various datasets demonstrate that our proposed framework outperforms existing methods in 3D class-incremental learning.
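The recursive accumulation described above can be illustrated with recursive regularized least squares: each new batch of (expanded) features updates the inverse of the regularized auto-correlation matrix via the Woodbury identity, and the classifier weights via a correction term, without revisiting old data. The sketch below is a minimal, hedged illustration of this general mechanism in numpy; the function name `rilm_update`, the dimensions, and the random data are our own assumptions, not the authors' implementation.

```python
import numpy as np

def rilm_update(R, W, X, Y):
    """One recursive update in the spirit of RILM (an illustrative sketch,
    not the authors' code): fold a batch of expanded features X (k x d)
    and label targets Y (k x c) into the inverse regularized
    auto-correlation matrix R (d x d) and classifier weights W (d x c)."""
    k = X.shape[0]
    # Woodbury identity: update R = (sum_i Xi^T Xi + lambda*I)^(-1)
    # by inverting only a small k x k matrix, never the full accumulation.
    G = np.linalg.inv(np.eye(k) + X @ R @ X.T)
    R_new = R - R @ X.T @ G @ X @ R
    # Recursive least-squares correction of the classifier weights.
    W_new = W + R_new @ X.T @ (Y - X @ W)
    return R_new, W_new

rng = np.random.default_rng(0)
d, c, lam = 16, 4, 1.0                 # feature dim, classes, regularizer (assumed)
R = np.eye(d) / lam                    # R_0 = (lambda * I)^(-1)
W = np.zeros((d, c))                   # W_0 = 0
batches = [(rng.normal(size=(20, d)), rng.normal(size=(20, c))) for _ in range(3)]
for X, Y in batches:                   # phases arrive one at a time
    R, W = rilm_update(R, W, X, Y)

# The recursive result coincides with the batch ridge-regression solution
# computed on all data at once, which is why no exemplars are needed.
Xa = np.vstack([X for X, _ in batches])
Ya = np.vstack([Y for _, Y in batches])
W_batch = np.linalg.solve(Xa.T @ Xa + lam * np.eye(d), Xa.T @ Ya)
print(np.allclose(W, W_batch))
```

Because each phase only reads and overwrites `R` and `W`, memory stays constant in the number of phases, which matches the exemplar-free property claimed above.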

Keywords: Class-Incremental Learning, 3D Computer Vision, Multi-modal Learning.

BibTeX


  @article{yang2024refu,
    title={ReFu: Recursive Fusion for Exemplar-Free 3D Class-Incremental Learning},
    author={Yang, Yi and Zhong, Lei and Zhuang, Huiping},
    journal={arXiv preprint arXiv:2409.12326},
    year={2024}
  }