Rank | Team | Organization | BLEU@4 | METEOR | CIDEr-D | ROUGE-L |
---|---|---|---|---|---|---|
1 | v2t_navigator | RUC & CMU | 0.408 | 0.282 | 0.448 | 0.609 |
2 | Aalto | Aalto University | 0.398 | 0.269 | 0.457 | 0.598 |
3 | VideoLAB | UML & Berkeley & UT-Austin | 0.391 | 0.277 | 0.441 | 0.606 |
4 | ruc-uva | RUC & UVA & Zhejiang University | 0.387 | 0.269 | 0.459 | 0.587 |
5 | Fudan-ILC | Fudan & ILC | 0.387 | 0.268 | 0.419 | 0.595 |
6 | NUS-TJU | NUS & TJU | 0.371 | 0.267 | 0.410 | 0.590 |
7 | Umich-COG | University of Michigan | 0.371 | 0.266 | 0.411 | 0.583 |
8 | MCG-ICT-CAS | ICT-CAS | 0.367 | 0.264 | 0.404 | 0.590 |
9 | DeepBrain | NLPR_CASIA & IQIYI | 0.382 | 0.259 | 0.401 | 0.582 |
10 | NTU MiRA | NTU | 0.355 | 0.261 | 0.383 | 0.579 |
11 | NLPRMMC | CASIA & Anhui University | 0.348 | 0.260 | 0.375 | 0.575 |
12 | NTHU_VSLab | NTHU | 0.344 | 0.260 | 0.367 | 0.584 |
13 | NII-AIST | NII-AIST & Tokyo & Tohoku | 0.364 | 0.257 | 0.370 | 0.577 |
14 | MIC_TJU | Tongji University | 0.345 | 0.258 | 0.350 | 0.575 |
15 | scorpio | University of Montreal | 0.348 | 0.251 | 0.367 | 0.571 |
16 | KR | University of Rochester | 0.328 | 0.253 | 0.364 | 0.564 |
17 | Shen&Xu | Hefei University of Technology & USTC | 0.314 | 0.247 | 0.338 | 0.555 |
18 | VRPGASU | Arizona State University | 0.280 | 0.254 | 0.260 | 0.526 |
19 | AFRL | Air Force Research Lab | 0.289 | 0.227 | 0.338 | 0.504 |
20 | Daedalus | Aristotle University of Thessaloniki | 0.269 | 0.196 | 0.127 | 0.505 |
21 | Oceans | DCD Lab, Zhejiang University | 0.157 | 0.196 | 0.166 | 0.457 |
Rank | Team | Organization | C1 (Coherence) | C2 (Relevance) | C3 (Helpful for Blind) |
---|---|---|---|---|---|
1 | Aalto | Aalto University | 3.263 | 3.104 | 3.244 |
2 | v2t_navigator | RUC & CMU | 3.261 | 3.091 | 3.154 |
3 | VideoLAB | UML & Berkeley & UT-Austin | 3.237 | 3.109 | 3.143 |
4 | Fudan-ILC | Fudan & ILC | 3.185 | 2.999 | 2.979 |
5 | ruc-uva | RUC & UVA & Zhejiang University | 3.225 | 2.997 | 2.933 |
6 | Umich-COG | University of Michigan | 3.247 | 2.865 | 2.929 |
7 | NUS-TJU | NUS & TJU | 3.308 | 2.833 | 2.893 |
8 | DeepBrain | NLPR_CASIA & IQIYI | 3.259 | 2.878 | 2.892 |
9 | NLPRMMC | CASIA & Anhui University | 3.266 | 2.868 | 2.893 |
10 | MCG-ICT-CAS | ICT | 3.339 | 2.800 | 2.867 |
11 | KR | University of Rochester | 3.292 | 2.854 | 2.860 |
12 | NII-AIST | NII-AIST & Tokyo & Tohoku | 3.207 | 2.896 | 2.865 |
13 | scorpio | University of Montreal | 3.218 | 2.848 | 2.880 |
14 | NTU MiRA | NTU | 3.257 | 2.784 | 2.864 |
15 | AFRL | Air Force Research Lab | 3.150 | 2.849 | 2.852 |
16 | Shen&Xu | Hefei University of Technology & USTC | 3.209 | 2.743 | 2.802 |
17 | NTHU_VSLab | NTHU | 3.192 | 2.748 | 2.811 |
18 | VRPGASU | Arizona State University | 3.358 | 2.584 | 2.742 |
19 | MIC_TJU | Tongji University | 3.189 | 2.650 | 2.743 |
20 | Daedalus | Aristotle University of Thessaloniki | 3.074 | 2.473 | 2.629 |
21 | Oceans | DCD Lab, Zhejiang University | 3.091 | 2.397 | 2.556 |
We computed several common metrics: BLEU@4, METEOR, ROUGE-L, and CIDEr-D. The performance of the primary run from each team is reported for comparison across teams. The results of all runs can be downloaded here.
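These metrics are typically computed with the open-source coco-caption toolkit (the pycocoevalcap package). The snippet below is a minimal sketch under that assumption; the video ID and captions are made-up examples rather than challenge data, and which CIDEr variant the toolkit's Cider scorer reports depends on the toolkit version.

```python
# Minimal sketch: scoring a candidate caption against references with the
# coco-caption toolkit (pycocoevalcap). Captions are assumed to be already
# tokenized/lowercased (the toolkit normally applies its PTBTokenizer first).
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider

# Made-up example: one video, two reference captions, one generated caption.
gts = {"video0": ["a man is singing on a stage", "a person performs a song"]}
res = {"video0": ["a man is singing a song"]}

scorers = [
    (Bleu(4), "BLEU@4"),   # returns scores for BLEU@1..4; we keep BLEU@4
    (Meteor(), "METEOR"),
    (Rouge(), "ROUGE-L"),
    (Cider(), "CIDEr-D"),
]
for scorer, name in scorers:
    score, _ = scorer.compute_score(gts, res)
    value = score[-1] if isinstance(score, list) else score
    print(name, round(value, 3))
```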
In addition, we carried out a human evaluation of the submitted systems on a subset of the test set. Human judges were asked to rate the generated sentence of the primary run from each team, along with a reference sentence, on a scale of 1 to 5 (higher is better) with respect to the following criteria.
· Coherence: how logical and readable the sentence is.
· Relevance: whether the sentence contains the most relevant and important objects/actions/events in the video clip.
· Helpful for blind (additional criterion): how helpful the sentence would be for a blind person to understand what is happening in this video clip.
Metric | Description |
---|---|
M1 | BLEU@4, METEOR, ROUGE-L, and CIDEr-D |
M2 | Human evaluation of the captions in terms of Coherence, Relevance, and Helpful for Blind on a scale of 1 to 5 (higher is better) |
The competition is ranked separately on the results of M1 and M2. For M1, a ranked list of teams is produced by sorting their scores on each evaluation metric. The final rank of a team combines its positions in the four ranked lists and is defined as:
R(team) = R(team)@BLEU@4 + R(team)@METEOR + R(team)@ROUGE-L + R(team)@CIDEr-D,
where R(team)@metric is the team's rank position on that metric; e.g., if a team achieves the best performance in terms of BLEU@4, then R(team)@BLEU@4 is 1. The smaller the final R(team), the better the performance.
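For concreteness, the sketch below computes this rank fusion for the top three primary runs from the M1 table above (ties are not handled); it illustrates the formula and is not the official scoring script.

```python
# Minimal sketch of the M1 rank fusion: each team is ranked per metric
# (1 = best), and R(team) is the sum of its four rank positions.
# Example inputs: the top-three primary runs from the M1 table above.
metrics = {
    "BLEU@4":  {"v2t_navigator": 0.408, "Aalto": 0.398, "VideoLAB": 0.391},
    "METEOR":  {"v2t_navigator": 0.282, "Aalto": 0.269, "VideoLAB": 0.277},
    "CIDEr-D": {"v2t_navigator": 0.448, "Aalto": 0.457, "VideoLAB": 0.441},
    "ROUGE-L": {"v2t_navigator": 0.609, "Aalto": 0.598, "VideoLAB": 0.606},
}

R = {team: 0 for team in metrics["BLEU@4"]}
for scores in metrics.values():
    # Higher metric score -> better (smaller) rank position.
    for position, team in enumerate(sorted(scores, key=scores.get, reverse=True), 1):
        R[team] += position

# Final M1 ordering: ascending R(team) (smaller is better).
print(sorted(R.items(), key=lambda kv: kv[1]))
# [('v2t_navigator', 5), ('Aalto', 9), ('VideoLAB', 10)]
```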
Similarly, we linearly fuse the human evaluation scores on Coherence, Relevance, and Helpful for Blind (on a scale of 1 to 5) for each team. The final score of each team is given by:
S(team) = S(team)@Coherence + S(team)@Relevance + S(team)@Helpful for Blind.
The larger the score, the better the performance.
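The sketch below illustrates this fusion with the two highest-scoring teams from the human evaluation table above; again, it is only an illustration of the formula.

```python
# Minimal sketch of the M2 score fusion: S(team) is the sum of the three
# averaged human ratings, and teams are sorted by descending S(team).
# Example inputs: the top two rows of the human evaluation table above.
human = {
    "Aalto":         {"Coherence": 3.263, "Relevance": 3.104, "Helpful for Blind": 3.244},
    "v2t_navigator": {"Coherence": 3.261, "Relevance": 3.091, "Helpful for Blind": 3.154},
}

S = {team: sum(criteria.values()) for team, criteria in human.items()}
for team, score in sorted(S.items(), key=lambda kv: kv[1], reverse=True):
    print(team, round(score, 3))
# Aalto 9.611
# v2t_navigator 9.506
```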
We finally rank all the participants in two separate lists, one in terms of R(team) and the other in terms of S(team).
Metric | Final ranking score |
---|---|
M1 | R(team) = R(team)@BLEU@4 + R(team)@METEOR + R(team)@ROUGE-L + R(team)@CIDEr-D |
M2 | S(team) = S(team)@Coherence + S(team)@Relevance + S(team)@Helpful for Blind |