Abstract
Person re-identification (re-ID) is one of the essential tasks for modern visual intelligent systems: identifying a person from images or videos captured at different times, viewpoints, and spatial positions. In practice, re-ID predictions are easily erroneous in the presence of illumination changes, low resolution, and pose variations. Machine learning techniques are now widely used to provide robust and accurate predictions. However, learning-based approaches often struggle with data imbalance and with distinguishing a person from others who share a strong appearance similarity. To improve overall re-ID performance, false positives and false negatives should be treated as integral factors in the design of the loss function. In this work, we refine the well-known AGW baseline by incorporating a focal Tversky loss to address the data imbalance issue and help the model learn effectively from hard examples. Experimental results show that the proposed re-ID method reaches rank-1 accuracy of 96.2% (mAP: 94.5) on Market1501 and rank-1 accuracy of 93% (mAP: 91.4) on DukeMTMC, outperforming state-of-the-art approaches.
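For context, the focal Tversky loss mentioned above is commonly defined through the Tversky index TI = TP / (TP + α·FN + β·FP), with the loss (1 − TI)^γ focusing training on hard examples. Below is a minimal PyTorch-style sketch of that standard formulation applied to ID-classification logits; the function name, hyperparameter values, and the way the loss would be combined with the AGW baseline are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def focal_tversky_loss(logits, targets, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Sketch of the standard focal Tversky loss for multi-class ID prediction.

    logits  : raw class scores, shape (N, C)
    targets : integer identity labels, shape (N,)
    alpha / beta weight false negatives / false positives; gamma emphasizes
    hard examples. All hyperparameter values here are illustrative.
    """
    probs = F.softmax(logits, dim=1)                     # (N, C) class probabilities
    one_hot = F.one_hot(targets, probs.size(1)).float()  # (N, C) ground-truth mask

    tp = (probs * one_hot).sum(dim=0)           # per-class true positives
    fn = ((1.0 - probs) * one_hot).sum(dim=0)   # per-class false negatives
    fp = (probs * (1.0 - one_hot)).sum(dim=0)   # per-class false positives

    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return torch.pow(1.0 - tversky, gamma).mean()
```

In such a setup the term would typically be added to the baseline's existing identity and metric-learning losses rather than replacing them, though the exact weighting used in the paper is not specified in the abstract.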
| Original language | English |
|---|---|
| Article number | 9852 |
| Journal | Sensors |
| Volume | 22 |
| Issue number | 24 |
| DOIs | |
| Publication status | Published - 2022 Dec |
Keywords
- AGW baseline
- focal Tversky loss
- multiple object detection
- person re-identification
- person recognition
ASJC Scopus subject areas
- Analytical Chemistry
- Information Systems
- Biochemistry
- Atomic and Molecular Physics, and Optics
- Instrumentation
- Electrical and Electronic Engineering