HAL publications of Guillaume Delorme (lab/EPI, UGA)

2021

Conference papers

Title
ODANet: Online Deep Appearance Network for Identity-Consistent Multi-Person Tracking
Authors
Guillaume Delorme, Yutong Ban, Guillaume Sarrazin, Xavier Alameda-Pineda
Reference
ICPR 2021 - 25th International Conference on Pattern Recognition / Workshops, Jan 2021, Milano / Virtual, Italy. pp.803-818, ⟨10.1007/978-3-030-68780-9_60⟩
Abstract
The analysis of affective states through time in multi-person scenarios is very challenging, because it requires consistently tracking all persons over time. This calls for a robust visual appearance model capable of re-identifying people already tracked in the past, as well as spotting newcomers. In real-world applications, the appearance of the persons to be tracked is unknown in advance, and therefore one must devise methods that are both discriminative and flexible. Previous work in the literature proposed different tracking methods with fixed appearance models, which allowed, up to a certain extent, discriminating between appearance samples of two different people. We propose the online deep appearance network (ODANet), a method able to simultaneously track people and update the appearance model with the newly gathered annotation-less images. Since this task is especially relevant for autonomous systems, we also describe a platform-independent robotic implementation of ODANet. Our experiments show the superiority of the proposed method with respect to the state of the art, and demonstrate the ability of ODANet to adapt to sudden changes in appearance, to integrate new appearances into the tracking system, and to provide more identity-consistent tracks. (See the sketch after this entry.)
Full text and BibTeX: https://hal.inria.fr/hal-03188744/file/main.pdf
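To make the online-update idea concrete, here is a minimal, hypothetical PyTorch sketch: tracked boxes provide free pseudo-labels (their track IDs), which are used to fine-tune the appearance embedding on the fly. The names (`AppearanceNet`, `update_step`) and the tiny backbone are illustrative assumptions, not ODANet's actual architecture or API.

```python
# Hypothetical sketch of online appearance updating with tracker-provided
# pseudo-labels; not ODANet's real implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AppearanceNet(nn.Module):
    """Embeds person crops into an ID-discriminative feature space."""
    def __init__(self, dim=128, num_ids=100):  # num_ids assumed fixed for simplicity
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for a real CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.classifier = nn.Linear(dim, num_ids)  # one logit per track identity

    def forward(self, crops):
        feats = F.normalize(self.backbone(crops), dim=1)
        return feats, self.classifier(feats)

def update_step(net, optimizer, crops, track_ids):
    """One online fine-tuning step on crops pseudo-labeled by the tracker."""
    _, logits = net(crops)
    loss = F.cross_entropy(logits, track_ids)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key point from the abstract is that the labels come for free from the tracker's own output, which is what allows annotation-less adaptation at test time.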
Title
CANU-ReID: A Conditional Adversarial Network for Unsupervised person Re-IDentification
Authors
Guillaume Delorme, Yihong Xu, Stéphane Lathuilière, Radu Horaud, Xavier Alameda-Pineda
Reference
ICPR 2020 - 25th International Conference on Pattern Recognition, Jan 2021, Milano, Italy. pp.4428-4435, ⟨10.1109/ICPR48806.2021.9412431⟩
Abstract
Unsupervised person re-ID is the task of identifying people in a target dataset for which ID labels are unavailable during training. In this paper, we propose to unify two trends in unsupervised person re-ID: clustering & fine-tuning, and adversarial learning. On one side, clustering groups training images into pseudo-ID labels and uses them to fine-tune the feature extractor. On the other side, adversarial learning, inspired by domain adaptation, is used to match distributions from different domains. Since target data are distributed across different camera viewpoints, we propose to model each camera as an independent domain and aim to learn domain-independent features. Because straightforward adversarial learning yields negative transfer, we introduce a conditioning vector to mitigate this undesirable effect. In our framework, the centroid of the cluster to which the visual sample belongs is used as the conditioning vector of our conditional adversarial network; the vector is permutation invariant (cluster ordering does not matter) and its size is independent of the number of clusters. To our knowledge, we are the first to propose the use of conditional adversarial networks for unsupervised person re-ID. We evaluate the proposed architecture on top of two state-of-the-art clustering-based unsupervised person re-ID methods, on four different experimental settings with three different datasets, and set the new state-of-the-art performance on all four of them. Our code and model will be made publicly available at https://team.inria.fr/perception/canu-reid/. (See the sketch after this entry.)
Full text and BibTeX: https://hal.inria.fr/hal-02882285/file/delorme_icpr2020.pdf
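A minimal sketch of the conditioning idea, assuming a PyTorch setting: a camera discriminator receives the person feature concatenated with the centroid of its pseudo-ID cluster, and a gradient-reversal layer pushes the feature extractor toward camera-invariant features. All names and layer sizes are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch of a conditional camera discriminator with
# gradient reversal; not the CANU-ReID implementation.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()
    @staticmethod
    def backward(ctx, grad):
        return -grad  # reverse gradients flowing back into the feature extractor

class ConditionalCameraDiscriminator(nn.Module):
    def __init__(self, feat_dim=128, num_cameras=6):
        super().__init__()
        # Input: person feature concatenated with its cluster centroid.
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_cameras))

    def forward(self, feat, centroid):
        x = GradReverse.apply(torch.cat([feat, centroid], dim=1))
        return self.net(x)  # camera logits, trained with cross-entropy
```

Conditioning on the centroid rather than on, say, a one-hot cluster index keeps the discriminator's input size fixed and invariant to cluster relabeling, which is exactly the property the abstract highlights.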

Theses

Title
Unsupervised domain adaptive multiple person tracking and visual identification for human-robot interaction
Author
Guillaume Delorme
Reference
Artificial Intelligence [cs.AI]. Université Grenoble Alpes, 2021. English. ⟨NNT : 2021GRALM035⟩
Abstract
Human-robot interaction requires the robot to have an accurate knowledge of its environment, especially who is present, and where, to enable an interactive conversation. In this context, this thesis proposes to exploit image information recorded by the embedded camera to perform Multiple Object Tracking (MOT), leveraging localization and identification by exploiting temporal and spatial proximity to produce ID-exploitable trajectories. State-of-the-art methods rely on deep learning approaches, which are known to heavily depend on the training data and to suffer from poor generalization ability. More specifically, most MOT implementations embed a person re-identification model to use as an appearance cue, while such models are widely known to be sensitive to background changes and illumination conditions. Consequently, this work focuses on investigating adaptation strategies to new domains for MOT and re-ID models. A probabilistic generative model is first proposed to derive a MOT implementation which, combined with a deep appearance model updated with past track annotations, is able to adapt to the target domain on the fly, and is suitable for robotic applications. It is quantitatively evaluated on a standard MOT dataset, while a robotic implementation provides qualitative results. Then, inspired by the domain adaptation literature, a camera-wise adversarial strategy is proposed to address unsupervised person re-ID, and demonstrates competitive performance compared to state-of-the-art re-ID models. It is then further investigated in the novel framework of clustering and fine-tuning. A conditional adversarial approach is proposed to address the negative transfer problem caused by the non-uniform distribution of IDs across cameras. This strategy is implemented on two state-of-the-art unsupervised re-ID models, and shown to outperform them, thus yielding state-of-the-art performance. Finally, the adversarial domain adaptation framework is further investigated in the context of MOT. The interest of unsupervised domain adaptation for MOT is demonstrated and, combined with a tracking-and-finetuning strategy, an adversarial training scheme is derived and shown to outperform simpler adaptation strategies. (See the sketch after this entry.)
Full text and BibTeX: https://tel.archives-ouvertes.fr/tel-03564335/file/DELORME_2021_archivage.pdf
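As a rough illustration of how adversarial adaptation combines with tracking-and-finetuning, here is a hypothetical PyTorch training step: a task loss uses source labels plus tracker pseudo-labels on the target, while a domain classifier, assumed to apply gradient reversal internally (as in the sketch after the CANU-ReID entry), aligns the two feature distributions. The names (`feat_net`, `task_head`, `domain_head`) and the unweighted loss sum are assumptions, not the thesis implementation.

```python
# Hypothetical adversarial domain-adaptation step; illustrative only.
import torch
import torch.nn.functional as F

def adaptation_step(feat_net, task_head, domain_head, optimizer, src, tgt):
    src_imgs, src_labels = src   # annotated source-domain batch
    tgt_imgs, tgt_pseudo = tgt   # target batch pseudo-labeled by the tracker

    f_src, f_tgt = feat_net(src_imgs), feat_net(tgt_imgs)

    # Supervised task loss: source labels + target pseudo-labels.
    task_loss = (F.cross_entropy(task_head(f_src), src_labels)
                 + F.cross_entropy(task_head(f_tgt), tgt_pseudo))

    # Domain-confusion loss: domain_head reverses gradients internally,
    # so minimizing it makes feat_net's features domain-invariant.
    d_logits = domain_head(torch.cat([f_src, f_tgt]))
    d_labels = torch.cat([torch.zeros(len(f_src)),
                          torch.ones(len(f_tgt))]).long()
    domain_loss = F.cross_entropy(d_logits, d_labels)

    loss = task_loss + domain_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```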

Preprints, Working Papers, ...

Title
TransCenter: Transformers with Dense Queries for Multiple-Object Tracking
Authors
Yihong Xu, Yutong Ban, Guillaume Delorme, Chuang Gan, Daniela Rus, Xavier Alameda-Pineda
Reference
2021
Abstract
Transformer networks have proven extremely powerful for a wide variety of tasks since they were introduced. Computer vision is no exception: the use of transformers has become very popular in the vision community in recent years. Despite this wave, multiple-object tracking (MOT) has so far exhibited some incompatibility with transformers. We argue that the standard representation -- bounding boxes -- is not adapted to learning transformers for MOT. Inspired by recent research, we propose TransCenter, the first transformer-based architecture for tracking the centers of multiple targets. Methodologically, we propose the use of dense queries in a double-decoder network, to be able to robustly infer the heatmap of the targets' centers and associate them through time. TransCenter outperforms the current state of the art in multiple-object tracking, on both MOT17 and MOT20. Our ablation study demonstrates the advantage of the proposed architecture compared to more naive alternatives. The code will be made publicly available. (See the sketch after this entry.)
BibTeX: https://arxiv.org/pdf/2103.15145
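A minimal sketch of the dense-query, double-decoder idea, assuming PyTorch: one decoder infers a center heatmap for detection, the other produces features used to associate centers with the previous frame. Shapes, names, and head designs are illustrative assumptions, not the TransCenter implementation.

```python
# Hypothetical double-decoder with dense queries; illustrative only.
import torch
import torch.nn as nn

class DoubleDecoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.detect_decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.track_decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.center_head = nn.Linear(dim, 1)   # per-location center score
        self.offset_head = nn.Linear(dim, 2)   # per-location (dx, dy) to frame t-1

    def forward(self, dense_queries, memory_t, memory_tm1):
        # Dense queries: one query per feature-map location (H*W of them,
        # not a small fixed set), so outputs can be reshaped into heatmaps.
        det = self.detect_decoder(dense_queries, memory_t)    # attend to frame t
        trk = self.track_decoder(dense_queries, memory_tm1)   # attend to frame t-1
        return torch.sigmoid(self.center_head(det)), self.offset_head(trk)
```

In use, `dense_queries` of shape (batch, H*W, dim) would be derived from the image features themselves, and the center output reshaped to a (batch, H, W) heatmap whose peaks give the tracked centers.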

2019

Conference papers

Title
Audio-Visual Variational Fusion for Multi-Person Tracking with Robots
Authors
Xavier Alameda-Pineda, Soraya Arias, Yutong Ban, Guillaume Delorme, Laurent Girin, Radu Horaud, Xiaofei Li, Bastien Mourgue, Guillaume Sarrazin
Reference
ACMMM 2019 - 27th ACM International Conference on Multimedia, Oct 2019, Nice, France. pp.1059-1061, ⟨10.1145/3343031.3350590⟩
Full text and BibTeX: https://hal.inria.fr/hal-02354514/file/avtracking_demo.pdf