TY - JOUR
T1 - Human consistency evaluation of static video summaries
AU - Kannappan, Sivapriyaa
AU - Liu, Yonghuai
AU - Tiddeman, Bernard
N1 - Funding Information:
Acknowledgments The first author would like to thank Aberystwyth University for the award given under the Departmental Overseas Scholarship (DOS), and Object Matrix Ltd for partly funding the project. The authors would like to express their gratitude to the associate editor and anonymous reviewers for their constructive comments, which have improved the readability and quality of this paper.
Publisher Copyright:
© 2018, Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2019/5/1
Y1 - 2019/5/1
N2 - Automatic video summarization aims to provide a brief representation of videos. Its evaluation is quite challenging, usually relying on comparison with user summaries. This study views it from a different perspective, verifying the consistency of the user summaries themselves, as the outcome of video summarization is usually judged based on them. We focus on human consistency evaluation of static video summaries, in which the user summaries are evaluated among themselves using the consistency modelling method we proposed recently. The purpose of such consistency evaluation is to check whether the users agree among themselves. The evaluation is performed on different publicly available datasets. Another contribution lies in the creation of static video summaries from the available video skims of the SumMe dataset. The results show that the level of agreement varies significantly between users in the selection of key frames, which reveals a hidden challenge in automatic video summary evaluation. Moreover, the maximum agreement level of the users for a certain dataset may indicate the best performance that automatic video summarization techniques can achieve on that dataset.
AB - Automatic video summarization aims to provide a brief representation of videos. Its evaluation is quite challenging, usually relying on comparison with user summaries. This study views it from a different perspective, verifying the consistency of the user summaries themselves, as the outcome of video summarization is usually judged based on them. We focus on human consistency evaluation of static video summaries, in which the user summaries are evaluated among themselves using the consistency modelling method we proposed recently. The purpose of such consistency evaluation is to check whether the users agree among themselves. The evaluation is performed on different publicly available datasets. Another contribution lies in the creation of static video summaries from the available video skims of the SumMe dataset. The results show that the level of agreement varies significantly between users in the selection of key frames, which reveals a hidden challenge in automatic video summary evaluation. Moreover, the maximum agreement level of the users for a certain dataset may indicate the best performance that automatic video summarization techniques can achieve on that dataset.
KW - Consistency modelling
KW - Keyframe extraction
KW - Performance evaluation
KW - User consistency
KW - Video summarization
UR - http://www.scopus.com/inward/record.url?scp=85055689081&partnerID=8YFLogxK
U2 - 10.1007/s11042-018-6772-0
DO - 10.1007/s11042-018-6772-0
M3 - Article
SN - 1380-7501
VL - 78
SP - 12281
EP - 12306
JO - Multimedia Tools and Applications
JF - Multimedia Tools and Applications
IS - 9
ER -