Most existing few-shot classification methods only consider generalization within a single dataset (i.e., the single-domain setting) and fail to transfer across various seen and unseen domains. In this talk, I will introduce the more realistic multi-domain few-shot classification problem to investigate cross-domain generalization. Specifically, I will elaborate on our ICCV 2021 work, which designs a parameter-efficient multi-mode modulator (tri-M) to address this problem. First, the modulator maintains multiple modulation parameters (one set per domain) in a single network, thus achieving single-network multi-domain representation. Given a particular domain, domain-aware features can be efficiently generated with the well-devised separative selection and cooperative query modules. Second, we further divide the modulation parameters into a domain-specific set and a domain-cooperative set to explore intra-domain information and inter-domain correlations, respectively. We demonstrate that the proposed multi-mode modulator achieves state-of-the-art results on the challenging META-DATASET benchmark, especially on unseen test domains.
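To make the idea of keeping per-domain modulation parameters in a single network concrete, below is a minimal PyTorch-style sketch of a FiLM-like modulation layer with one domain-specific (scale, shift) pair per seen domain plus a shared domain-cooperative pair. The class and parameter names are illustrative assumptions, not the authors' actual tri-M implementation.

```python
# Illustrative sketch only -- NOT the authors' tri-M code.
# Assumes FiLM-style feature modulation; all names are hypothetical.
import torch
import torch.nn as nn


class MultiModeModulator(nn.Module):
    """Keeps one (scale, shift) pair per seen domain (domain-specific set)
    plus one shared pair (domain-cooperative set) in a single network."""

    def __init__(self, num_domains: int, num_channels: int):
        super().__init__()
        # Domain-specific modulation parameters: one mode per domain.
        self.spec_scale = nn.Parameter(torch.ones(num_domains, num_channels))
        self.spec_shift = nn.Parameter(torch.zeros(num_domains, num_channels))
        # Domain-cooperative parameters shared across all domains.
        self.coop_scale = nn.Parameter(torch.ones(num_channels))
        self.coop_shift = nn.Parameter(torch.zeros(num_channels))

    def forward(self, feats: torch.Tensor, domain_id: int) -> torch.Tensor:
        # feats: (batch, channels, H, W) backbone features.
        scale = (self.spec_scale[domain_id] + self.coop_scale).view(1, -1, 1, 1)
        shift = (self.spec_shift[domain_id] + self.coop_shift).view(1, -1, 1, 1)
        return feats * scale + shift  # domain-aware features


# Usage: modulate backbone features for domain index 2 out of 8 seen domains.
mod = MultiModeModulator(num_domains=8, num_channels=64)
x = torch.randn(4, 64, 21, 21)
y = mod(x, domain_id=2)
print(y.shape)  # torch.Size([4, 64, 21, 21])
```

The point of the sketch is parameter efficiency: only the small modulation vectors are duplicated per domain, while the backbone itself is shared across all domains.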
Short Bio
Dr. Yanbin Liu is currently a Research Fellow in the School of Computing, Australian National University. His research interests include few-shot learning, deep declarative networks, and spatio-temporal modeling. He has 900+ Google Scholar citations; among his works, the ICLR 2019 paper established a new transductive few-shot benchmark and has attracted 600+ follow-up works. He is a reviewer for major computer vision and machine learning conferences and journals, and received an outstanding reviewer award at CVPR 202
More Details
- When: Wed 15 March 2023, at 1:00 pm (GMT+10)
- Speaker: Dr Yanbin Liu (Australian National University)
- Host: Dr Xin Yu
- Zoom: https://uqz.zoom.us/j/82896549343