MOTP measures the positioning error over all matched person-hypothesis pairs in all frames.
MOTP = \frac{\sum_{i,t} d_{i,t}}{\sum_t c_t}   (22)
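As a concrete illustration, the following Python sketch evaluates Eq. (22) from per-frame lists of matched ground-truth/hypothesis positions. The function name and the data layout are assumptions made for illustration only; the matching step itself (e.g. an assignment against a distance threshold) is taken as already done.

# Minimal sketch of Eq. (22): distance-based MOTP.
# Assumes per-frame lists of matched (ground truth, hypothesis) positions;
# names and layout are illustrative, not taken from the paper.
import math

def motp(matches_per_frame):
    """matches_per_frame: list of frames; each frame is a list of
    ((gx, gy), (hx, hy)) tuples for matched object-hypothesis pairs."""
    total_dist = 0.0   # sum_{i,t} d_{i,t}
    total_matches = 0  # sum_t c_t
    for frame_matches in matches_per_frame:
        for (gx, gy), (hx, hy) in frame_matches:
            total_dist += math.hypot(gx - hx, gy - hy)
        total_matches += len(frame_matches)
    return total_dist / total_matches if total_matches else float("nan")

# Example: two frames, each with two matched pairs.
frames = [
    [((0.0, 0.0), (0.1, 0.0)), ((1.0, 1.0), (1.0, 1.2))],
    [((0.0, 0.0), (0.0, 0.3)), ((2.0, 2.0), (2.4, 2.0))],
]
print(motp(frames))  # average position error over all four matches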
The MOTP of the vision-based localization system is 24.3 cm and 21.3 cm for the Hallway and Showroom scenarios, respectively.
The MOTP is the total error in estimated position for matched object-hypothesis pairs over all frames, averaged over the total number of matches made.
Under the MOTP metric, the GM-PHD filter performs best except in Test 4.
Table 4: Evaluating performance (MOTP in pixels).

Approach          Test 1            Test 2            Test 3            Test 4
                  MOTP    MOTA      MOTP    MOTA      MOTP    MOTA      MOTP    MOTA
D-GMPHD filter    9.15    72.34%    11.98   90.10%    4.59    88.57%    2.22    94.97%
GM-PHD filter     8.18    27.66%    11.84   34.65%    4.43    54.29%    2.65    35.22%
BPF               17.21   -114.89%  20.68   40.59%    24.17   -2.86%    7.77    98%
MOTP = \frac{\sum_{i,t} C_{i,t}}{\sum_t Nm_t},   (21)
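A possible reading of Eq. (21), sketched below in Python, treats C_{i,t} as the bounding-box overlap (IoU) of matched pair i in frame t and Nm_t as the number of matches in frame t; this interpretation and all identifiers are assumptions not stated in the excerpt above.

# Hedged sketch of Eq. (21) under an overlap-based reading of MOTP.
def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2). Returns intersection-over-union in [0, 1]."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def motp_overlap(matched_boxes_per_frame):
    """matched_boxes_per_frame: list of frames; each frame is a list of
    (gt_box, hyp_box) pairs for matched object-hypothesis pairs."""
    total_overlap = 0.0  # sum_{i,t} C_{i,t}
    total_matches = 0    # sum_t Nm_t
    for frame in matched_boxes_per_frame:
        total_overlap += sum(iou(g, h) for g, h in frame)
        total_matches += len(frame)
    return total_overlap / total_matches if total_matches else float("nan")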
Figure 5 shows the histograms of MOTA and MOTP in the experiment using the SPFA algorithm.
From Table 1, we can observe that our tracking approach achieves competitive results, especially in terms of MOTP. For [17, 18], MOTP and MOTA do not reach good values simultaneously.
The method of [19] obtains a good balance between MOTA and MOTP, but its metric values are slightly lower than those of our method.
Algorithm          MOTP     MOTA
Andriyenko [17]    69.0%    63.7%
Berclaz [18]       63.0%    77.0%
Milan [19]         67.2%    67.0%
Jin [20]           72.4%    72.1%
Our method         69.9%    71.4%

Table 2: The statistical results of pedestrian flow counting.
They defined two very intuitive metrics: the multiple object tracking precision (MOTP) and the multiple object tracking accuracy (MOTA).
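For reference, the MOTA companion metric in the CLEAR MOT framework is commonly written as follows; this formula is not reproduced in the excerpt above and is quoted from the standard definition:

MOTA = 1 - \frac{\sum_t \left( m_t + fp_t + mme_t \right)}{\sum_t g_t}

where m_t is the number of misses, fp_t the number of false positives, mme_t the number of mismatches (identity switches), and g_t the number of ground-truth objects in frame t.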