Highly interpretable hierarchical deep rule-based classifier

Xiaowei Gu, Plamen Parvanov Angelov

Research output: Contribution to journal › Article › peer-review



Extending traditional fuzzy rule-based (FRB) systems, deep rule-based (DRB) classifiers offer both human-level performance and a transparent system structure on image classification problems by integrating a zero-order fuzzy rule base with the multi-layer image-processing architecture typical of deep learning. Nonetheless, the inner structure of a DRB classifier frequently becomes overly sophisticated and no longer human-interpretable when applied to large-scale, complex problems. One feasible solution is to construct a tree-structured classification model by aggregating the possibly huge number of prototypes identified from data into a much smaller number of more descriptive, highly abstract ones. In this paper, we therefore present a novel hierarchical deep rule-based (H-DRB) approach that summarizes the less descriptive raw prototypes into highly generalized ones and self-arranges them into a hierarchical prototype-based structure according to their descriptive abilities. By doing so, H-DRB offers high-level performance and, most importantly, full transparency and human-interpretability on a variety of problems, including large-scale ones. The proposed concept and general principles are verified through numerical experiments on a wide variety of popular benchmark image sets. Numerical results demonstrate the promise of H-DRB.
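The hierarchy-building idea described above — collapsing many raw prototypes into fewer, more abstract ones — can be illustrated with a minimal sketch. This is a hypothetical, greatly simplified stand-in for the paper's self-organizing procedure (the function name `aggregate`, the fixed merging radius, and the mean-based summarization are all assumptions, not the authors' method):

```python
# Hypothetical sketch (not the authors' algorithm): build one level of a
# prototype hierarchy by greedily merging raw prototypes that lie within
# a radius of an existing parent, summarizing each group by its mean.
from math import dist

def aggregate(prototypes, radius):
    """Merge raw prototypes into fewer, more abstract parent prototypes."""
    parents = []  # higher-level prototypes (running group means)
    groups = []   # raw prototypes summarized by each parent
    for p in prototypes:
        for i, parent in enumerate(parents):
            if dist(p, parent) <= radius:
                groups[i].append(p)
                # re-centre the parent on the mean of its group
                n = len(groups[i])
                parents[i] = tuple(sum(c) / n for c in zip(*groups[i]))
                break
        else:
            # no nearby parent: this prototype starts a new group
            parents.append(p)
            groups.append([p])
    return parents, groups

raw = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9)]
top, members = aggregate(raw, radius=1.0)
print(len(top))  # two abstract prototypes now summarize four raw ones
```

Repeating the same step on the resulting parents would yield successively more abstract layers, which is the intuition behind a tree-structured, prototype-based classifier.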
Original language: English
Article number: 106310
Journal: Applied Soft Computing
Early online date: 20 Apr 2020
Publication status: Published - 01 Jul 2020
Externally published: Yes


Keywords:
  • Deep rule-based
  • Hierarchical
  • Prototype-based
  • Self-organizing

