S2DNet: Learning Image Features for Accurate Sparse-to-Dense Matching
ECCV 2020 (Spotlight)
Hugo Germain1, Guillaume Bourmaud2, Vincent Lepetit1
1 LIGM - Ecole des Ponts, Univ Gustave Eiffel, CNRS, ESIEE Paris, France
Abstract
Establishing robust and accurate correspondences is a fundamental building block of many computer vision algorithms. While recent learning-based feature matching methods have shown promising results in providing robust correspondences under challenging conditions, they are often limited in precision. In this paper, we introduce S2DNet, a novel feature matching pipeline designed and trained to efficiently establish both robust and accurate correspondences. By leveraging a sparse-to-dense matching paradigm, we cast correspondence learning as a supervised classification task, training the network to output highly peaked correspondence maps. We show that S2DNet achieves state-of-the-art results on the HPatches benchmark, as well as on several long-term visual localization datasets.
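The core sparse-to-dense step can be illustrated with a minimal NumPy sketch (hypothetical function and variable names; random features stand in for the learned CNN descriptors): a descriptor detected at a sparse keypoint in one image is correlated against a dense feature map of the other image, and a softmax turns the scores into a correspondence map whose peak gives the match.

```python
import numpy as np

def correspondence_map(sparse_desc, dense_feats):
    """Correlate one keypoint descriptor (shape (C,)) against a dense
    feature map (shape (C, H, W)); return a softmax-normalized
    correspondence map of shape (H, W)."""
    scores = np.tensordot(sparse_desc, dense_feats, axes=([0], [0]))  # (H, W)
    scores = scores - scores.max()   # subtract max for numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

# Toy demo: the "matching" descriptor is planted at location (3, 5).
rng = np.random.default_rng(0)
C, H, W = 16, 8, 10
feats = rng.normal(size=(C, H, W)).astype(np.float32)
query = feats[:, 3, 5].copy() * 10.0  # scaling sharpens the softmax peak
cmap = correspondence_map(query, feats)
peak = np.unravel_index(cmap.argmax(), cmap.shape)
```

In the paper's setting the peaked map is supervised with a classification loss over pixel locations; here the sharp peak at the planted location simply shows how correlation plus softmax yields such a map.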
To cite our paper:
@inproceedings{Germain2020S2DNet,
  title     = {S2DNet: Learning Image Features for Accurate Sparse-to-Dense Matching},
  author    = {Hugo Germain and Guillaume Bourmaud and Vincent Lepetit},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2020}
}