Crisp decision trees are one of the most popular classification algorithms in current use within data mining and machine learning. However, although they possess many desirable features, they lack the ability to model vagueness. As a result of this, the induction of fuzzy decision trees (FDTs) has become an area of much interest. One important aspect of tree induction is the choice of feature at each stage of construction. If weak features are selected, the resulting decision tree will be meaningless and will exhibit poor performance. This paper introduces a new measure of feature significance based on fuzzy-rough sets for use within fuzzy ID3. The measure is experimentally compared with leading feature rankers, and is also compared with traditional fuzzy entropy for fuzzy tree induction.
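The paper's specific significance measure is not detailed in the abstract. As a rough illustration of the general idea, the sketch below computes the standard fuzzy-rough dependency degree of a decision attribute on a single feature (the similarity relation `1 - |a - b|` and the min/max connectives are common textbook choices here, not necessarily those of the paper); features can then be ranked by this degree.

```python
import numpy as np

def fuzzy_rough_dependency(X, y, feature_idx):
    """Fuzzy-rough dependency degree of decision y on one feature.

    X: (n_samples, n_features) array with features scaled to [0, 1].
    y: crisp class labels.
    Uses the similarity relation R(a, b) = 1 - |a - b| (an illustrative
    choice) and crisp decision classes.
    """
    col = X[:, feature_idx]
    n = len(col)
    # Fuzzy similarity relation between every pair of objects.
    R = 1.0 - np.abs(col[:, None] - col[None, :])
    pos = np.zeros(n)
    for cls in np.unique(y):
        member = (y == cls).astype(float)  # crisp class membership
        # Fuzzy lower approximation: inf_z max(1 - R(x, z), member(z)).
        lower = np.min(np.maximum(1.0 - R, member[None, :]), axis=1)
        # Fuzzy positive region: sup over classes of lower approximations.
        pos = np.maximum(pos, lower)
    # Dependency degree: size of positive region over the universe.
    return pos.sum() / n
```

A feature that separates the classes well yields a dependency near 1, while an uninformative feature yields a value near 0, which is what makes the degree usable as a feature ranker during tree construction.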
Published: 2005