Event-Based Optical Flow via Transforming Into Motion-Dependent View

Event cameras respond to temporal dynamics, helping to resolve ambiguities in spatio-temporal changes for optical flow estimation. However, the unique spatio-temporal event distribution challenges the feature extraction, and the direct construction of motion representation through the orthogonal view is less than ideal due to the entanglement of appearance and motion. This paper proposes to transform the orthogonal view into a motion-dependent one for enhancing event-based motion representation and presents a Motion View-based Network (MV-Net) for practical optical flow estimation. Specifically, this motion-dependent view transformation is achieved through the Event View Transformation Module, which captures the relationship between the steepest temporal changes and motion direction, incorporating these temporal cues into the view transformation process for feature gathering. This module includes two phases: extracting the temporal evolution clues by central difference operation in the extraction phase and capturing the motion pattern by evolution-guided deformable convolution in the perception phase. Besides, the MV-Net constructs an eccentric downsampling process to avoid response weakening from the sparsity of events in the downsampling stage. The whole network is trained end-to-end in a self-supervised manner, and the evaluations conducted on four challenging datasets reveal the superior performance of the proposed model compared to state-of-the-art (SOTA) methods.
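The extraction phase described in the abstract derives temporal evolution clues via a central difference operation. The paper's exact formulation is not given in this record, but the generic operation can be sketched as a central difference along the temporal axis of an event voxel grid; the function name and the (T, H, W) voxel layout below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def temporal_central_difference(voxel: np.ndarray) -> np.ndarray:
    """Approximate temporal evolution clues of an event voxel grid.

    voxel: array of shape (T, H, W), event counts binned into T temporal slices.
    Returns an array of the same shape holding central differences along the
    time axis, with one-sided differences at the two temporal boundaries.
    (Hypothetical sketch; MV-Net's actual extraction phase may differ.)
    """
    grad = np.empty_like(voxel, dtype=np.float64)
    grad[1:-1] = (voxel[2:] - voxel[:-2]) / 2.0  # central difference in time
    grad[0] = voxel[1] - voxel[0]                # forward difference at t = 0
    grad[-1] = voxel[-1] - voxel[-2]             # backward difference at t = T-1
    return grad
```

For an event count that ramps linearly in time, every entry of the result is the constant slope, which is the kind of steepest-temporal-change cue the module feeds into the view transformation.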

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 33(2024), issue of the 27th, pages 5327-5339
First Author: Wan, Zengyu (Author)
Other Authors: Tan, Ganchao, Wang, Yang, Zhai, Wei, Cao, Yang, Zha, Zheng-Jun
Format: Online Article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM375461116
003 DE-627
005 20240928232245.0
007 cr uuu---uuuuu
008 240727s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2024.3426469  |2 doi 
028 5 2 |a pubmed24n1551.xml 
035 |a (DE-627)NLM375461116 
035 |a (NLM)39058603 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Wan, Zengyu  |e verfasserin  |4 aut 
245 1 0 |a Event-Based Optical Flow via Transforming Into Motion-Dependent View 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 27.09.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Event cameras respond to temporal dynamics, helping to resolve ambiguities in spatio-temporal changes for optical flow estimation. However, the unique spatio-temporal event distribution challenges the feature extraction, and the direct construction of motion representation through the orthogonal view is less than ideal due to the entanglement of appearance and motion. This paper proposes to transform the orthogonal view into a motion-dependent one for enhancing event-based motion representation and presents a Motion View-based Network (MV-Net) for practical optical flow estimation. Specifically, this motion-dependent view transformation is achieved through the Event View Transformation Module, which captures the relationship between the steepest temporal changes and motion direction, incorporating these temporal cues into the view transformation process for feature gathering. This module includes two phases: extracting the temporal evolution clues by central difference operation in the extraction phase and capturing the motion pattern by evolution-guided deformable convolution in the perception phase. Besides, the MV-Net constructs an eccentric downsampling process to avoid response weakening from the sparsity of events in the downsampling stage. The whole network is trained end-to-end in a self-supervised manner, and the evaluations conducted on four challenging datasets reveal the superior performance of the proposed model compared to state-of-the-art (SOTA) methods.
650 4 |a Journal Article 
700 1 |a Tan, Ganchao  |e verfasserin  |4 aut 
700 1 |a Wang, Yang  |e verfasserin  |4 aut 
700 1 |a Zhai, Wei  |e verfasserin  |4 aut 
700 1 |a Cao, Yang  |e verfasserin  |4 aut 
700 1 |a Zha, Zheng-Jun  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 33(2024) vom: 27., Seite 5327-5339  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:33  |g year:2024  |g day:27  |g pages:5327-5339 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2024.3426469  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 33  |j 2024  |b 27  |h 5327-5339