DukeMTMC aims to accelerate advances in multi-target, multi-camera tracking. It provides a tracking system that works within and across cameras; a new large-scale HD video data set recorded by 8 synchronized cameras, with more than 7,000 single-camera trajectories and over 2,000 unique identities; and a new performance evaluation method that measures how often a system is correct about who is where.
DukeMTMC is a new, manually annotated, calibrated, multi-camera data set recorded outdoors on the Duke University campus with 8 synchronized cameras. It consists of:
Below is a list of dataset extensions provided by the community:
If you use or extend DukeMTMC, please refer to the license terms.
DukeMTMCT is a tracking benchmark hosted on motchallenge.net. Click here for the up-to-date rankings. You will also find the official motchallenge-devkit used by MOTChallenge for evaluation. For detailed instructions on how to submit to MOTChallenge, refer to this link.
Trackers are ranked using our identity-based measures, which compute how often the system is correct about who is where, regardless of how often a target is lost and reacquired. These measures are useful in applications such as security, surveillance, or sports. This short post describes the measures with illustrations; for details, refer to the original paper.
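As a rough illustration of the idea (not the official motchallenge-devkit implementation), the identity measures can be sketched as follows. The function name and input format are assumptions made for this toy example; the real evaluation matches ground-truth and system identities with a proper assignment solver, whereas this sketch brute-forces the matching and so only works at toy sizes:

```python
from itertools import permutations

def id_measures(overlaps, n_gt, n_hyp):
    """Toy sketch of the identity measures (IDP, IDR, IDF1).

    overlaps: dict {(gt_id, hyp_id): #detections where gt_id is covered by hyp_id}
    n_gt:     dict {gt_id: total ground-truth detections of that identity}
    n_hyp:    dict {hyp_id: total system detections of that identity}
    """
    gts = sorted(n_gt)
    hyps = sorted(n_hyp)
    # Pad the shorter side with dummy identities so real identities may
    # remain unmatched (dummy pairs contribute zero true positives).
    k = max(len(gts), len(hyps))
    gts += [None] * (k - len(gts))
    hyps += [None] * (k - len(hyps))
    # Brute-force the one-to-one identity matching that maximizes the number
    # of correctly identified detections (IDTP). Toy sizes only: k! matchings.
    best_idtp = max(
        sum(overlaps.get((g, h), 0) for g, h in zip(gts, perm))
        for perm in permutations(hyps)
    )
    total_gt = sum(n_gt.values())
    total_hyp = sum(n_hyp.values())
    idp = best_idtp / total_hyp          # identity precision
    idr = best_idtp / total_gt           # identity recall
    idf1 = 2 * best_idtp / (total_gt + total_hyp)
    return idp, idr, idf1
```

For example, a ground-truth trajectory of 10 detections that the tracker splits into two fragments of 6 and 4 detections scores IDF1 = 0.6: the best one-to-one matching can credit the system with being right about "who" for at most 6 of the 10 detections, no matter that only a single identity switch occurred.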
The code for DeepCC will be released after CVPR in early July.
We provide code for the following tracking systems which are all based on Correlation Clustering optimization:
Below is a list of extensions to our code provided by the community:
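The tracking systems above all share a correlation clustering core: given pairwise correlations between detections (positive when two detections likely belong to the same identity, negative otherwise), the tracker partitions the detections so that the total within-cluster correlation is maximized. As a hedged toy illustration of that objective (a brute-force enumeration, not the released solvers, and with hypothetical function names), it can be sketched as:

```python
from itertools import combinations

def partitions(items):
    """Yield all partitions of a list (Bell-number growth; toy sizes only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # Put `first` into each existing cluster in turn ...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ... or into a new cluster of its own.
        yield [[first]] + part

def correlation_cluster(items, w):
    """Return the partition maximizing the sum of within-cluster correlations.

    w(i, j) is the (symmetric) correlation between detections i and j.
    """
    def score(part):
        return sum(w(i, j) for cluster in part
                   for i, j in combinations(cluster, 2))
    return max(partitions(list(items)), key=score)
```

For instance, with three detections where w(0, 1) = +2 and w(0, 2) = w(1, 2) = -1, the best partition groups 0 and 1 together and leaves 2 on its own, for a total score of 2. Note that the number of clusters (identities) is not fixed in advance; it falls out of the optimization, which is what makes this formulation attractive for tracking.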
Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking. E. Ristani, F. Solera, R. S. Zou, R. Cucchiara and C. Tomasi. ECCV 2016 Workshop on Benchmarking Multi-Target Tracking. [pdf] [blog post]
Tracking Social Groups Within and Across Cameras. F. Solera, S. Calderara, E. Ristani, C. Tomasi and R. Cucchiara. IEEE Transactions on Circuits and Systems for Video Technology, 2016. [pdf]
Tracking Multiple People Online and in Real Time. E. Ristani and C. Tomasi. ACCV 2014. [pdf]
If you use our work, please cite the papers above accordingly.
If you use or extend our data, please see the license terms.