
Embracing Events and Frames With Hierarchical Feature Refinement Network for Object Detection


Abstract

In frame-based vision, object detection suffers substantial performance degradation under challenging conditions due to the limited sensing capability of conventional cameras. Event cameras output sparse and asynchronous events, providing a potential solution to these problems. However, effectively fusing the two heterogeneous modalities remains an open issue. In this work, we propose a novel hierarchical feature refinement network for event-frame fusion. The core concept is the design of a coarse-to-fine fusion module, denoted as the cross-modality adaptive feature refinement (CAFR) module. In the initial phase, the bidirectional cross-modality interaction (BCI) part bridges information between the two distinct sources. Subsequently, the features are further refined by aligning the channel-level mean and variance in the two-fold adaptive feature refinement (TAFR) part. We conducted extensive experiments on two benchmarks: the low-resolution PKU-DDD17-Car dataset and the high-resolution DSEC dataset. Experimental results show that our method surpasses the state-of-the-art by an impressive margin of 8% on the DSEC dataset. Moreover, our method exhibits significantly better robustness (69.5% versus 38.7%) when 15 different corruption types are introduced to the frame images.
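The channel-level mean and variance alignment described for the TAFR part can be illustrated with a minimal PyTorch-style sketch. The function names, tensor shapes, and the AdaIN-style statistic transfer below are illustrative assumptions, not the authors' released implementation; refer to the linked GitHub repository for the actual code.

```python
# Minimal sketch of channel-level mean/variance alignment between event and
# frame feature maps (illustrative assumption, not the paper's implementation).
import torch


def channel_stats(x: torch.Tensor, eps: float = 1e-5):
    # x: (B, C, H, W) -> per-channel mean and std over the spatial dimensions
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = x.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mean, std


def align_channel_stats(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Re-normalize `source` so its channel-wise mean and variance match those
    # of `target` (an AdaIN-style transfer used here to mimic the refinement idea).
    s_mean, s_std = channel_stats(source)
    t_mean, t_std = channel_stats(target)
    return (source - s_mean) / s_std * t_std + t_mean


if __name__ == "__main__":
    frame_feat = torch.randn(2, 64, 32, 32)  # hypothetical frame-branch features
    event_feat = torch.randn(2, 64, 32, 32)  # hypothetical event-branch features
    refined = align_channel_stats(event_feat, frame_feat)
    print(refined.shape)  # torch.Size([2, 64, 32, 32])
```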

inproceedings


ECCV 2024

18th European Conference on Computer Vision. Milano, Italy, Sep 29-Oct 04, 2024.
A* Conference

Authors

H. Cao • Z. Zhang • Y. Xia • X. Li • J. Xia • G. Chen • A. Knoll

Links

DOI GitHub

Research Area

B1 | Computer Vision

BibTeX Key: CZX+24
