An easy-to-use evaluation framework for benchmarking entity recognition and disambiguation systems

Hui Chen, Bao-gang Wei, Yi-ming Li, Yonghuai Liu, Wen-hao Zhu

Research output: Contribution to journal › Article › peer-review


Abstract

Entity recognition and disambiguation (ERD) is a crucial technique for knowledge base population and information extraction. In recent years, numerous papers have been published on this subject and various ERD systems have been developed, yet there is still confusion in the ERD field over how to compare these systems fairly and completely. A unified evaluation framework is therefore of growing interest. In this paper, we present an easy-to-use evaluation framework (EUEF), which aims at facilitating the evaluation process and providing a fair comparison of ERD systems. EUEF is released to the public as open source, and thus can easily be extended with novel ERD systems, datasets, and evaluation metrics. Based on EUEF, it is easy to discover the advantages and disadvantages of a specific ERD system and its components. We compare several popular and publicly available ERD systems using EUEF and draw some interesting conclusions after a detailed analysis.
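The abstract describes EUEF as a plug-in style framework that can be extended with new ERD systems, datasets, and evaluation metrics. This page does not show EUEF's actual interfaces, so the following is only a minimal Python sketch of what such an extensible design might look like; every name in it is a hypothetical illustration, not EUEF's real API.

    # Hypothetical sketch of a pluggable ERD evaluation framework.
    # All names are illustrative assumptions; EUEF's real API is not
    # documented on this page.
    from abc import ABC, abstractmethod
    from typing import Callable, Dict, List, Set, Tuple

    # An annotation: (mention start offset, mention end offset, entity ID).
    Annotation = Tuple[int, int, str]

    class ERDSystem(ABC):
        """Interface a new ERD system would implement to be benchmarked."""
        @abstractmethod
        def annotate(self, text: str) -> Set[Annotation]: ...

    def micro_f1(gold: List[Set[Annotation]],
                 pred: List[Set[Annotation]]) -> Dict[str, float]:
        """Micro-averaged precision/recall/F1 over exact annotation matches."""
        tp = sum(len(g & p) for g, p in zip(gold, pred))
        n_pred = sum(len(p) for p in pred)
        n_gold = sum(len(g) for g in gold)
        precision = tp / n_pred if n_pred else 0.0
        recall = tp / n_gold if n_gold else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return {"precision": precision, "recall": recall, "f1": f1}

    def evaluate(system: ERDSystem,
                 dataset: List[Tuple[str, Set[Annotation]]],
                 metric: Callable = micro_f1) -> Dict[str, float]:
        """Run a system over a dataset with a pluggable metric."""
        texts, gold = zip(*dataset)
        pred = [system.annotate(t) for t in texts]
        return metric(list(gold), list(pred))

Under this sketch, adding a new system means implementing ERDSystem.annotate, a new dataset is just a list of (text, gold annotations) pairs, and a new metric is any callable with the same signature as micro_f1, which is the kind of extensibility the abstract claims for EUEF.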
Original language: English
Pages (from-to): 195-205
Journal: Frontiers of Information Technology & Electronic Engineering
Volume: 18
Issue number: 2
DOIs
Publication status: Published - 18 Feb 2017

Keywords

  • entity recognition and disambiguation (ERD)
  • evaluation framework
  • information extraction
