A Scalable and Effective Rough Set Theory based Approach for Big Data Pre-processing

Zaineb Chelly Dagdia, Christine Zarges, Gael Beck, Mustapha Lebbah

Research output: Contribution to journal › Article › peer-review

16 Citations (SciVal)
116 Downloads (Pure)


A big challenge in the knowledge discovery process is to perform data pre-processing, specifically feature selection, on large amounts of data with high-dimensional attribute sets. A variety of techniques have been proposed in the literature to deal with this challenge, with varying degrees of success, as most of these techniques require further information about the given input data for thresholding, require noise levels to be specified, or rely on feature ranking procedures. To overcome these limitations, Rough Set Theory (RST) can be used to discover the dependencies within the data and reduce the number of attributes enclosed in an input data set, using the data alone and requiring no supplementary information. However, when it comes to massive data sets, RST reaches its limits as it is highly computationally expensive. In this paper, we propose a scalable and effective rough set theory based approach for large-scale data pre-processing, specifically for feature selection, under the Spark framework. In our detailed experiments, data sets with up to 10 000 attributes have been considered, revealing that our proposed solution achieves a good speedup and performs its feature selection task well without sacrificing performance, making it relevant to big data.
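The abstract does not spell out the algorithm, but the core RST idea it describes — measuring the dependency of the decision attribute on a feature subset and greedily reducing the attribute set using the data alone — is classically realized by a QuickReduct-style procedure. The sketch below is a minimal, sequential illustration of that generic idea (function names and the greedy strategy are illustrative assumptions, not the authors' distributed Spark implementation):

```python
from collections import defaultdict

def dependency_degree(data, labels, features):
    """Rough-set dependency degree: the fraction of objects in the
    positive region, i.e. objects whose equivalence class (identical
    values on `features`) is consistent with respect to the labels."""
    classes = defaultdict(list)
    for row, label in zip(data, labels):
        key = tuple(row[f] for f in features)
        classes[key].append(label)
    # An equivalence class contributes to the positive region only if
    # all of its objects share the same decision label.
    pos = sum(len(ls) for ls in classes.values() if len(set(ls)) == 1)
    return pos / len(data)

def quickreduct(data, labels):
    """Greedy attribute reduction: repeatedly add the feature that most
    increases the dependency degree, until the reduct reaches the
    dependency degree of the full attribute set."""
    all_feats = list(range(len(data[0])))
    target = dependency_degree(data, labels, all_feats)
    reduct = []
    while dependency_degree(data, labels, reduct) < target:
        best = max((f for f in all_feats if f not in reduct),
                   key=lambda f: dependency_degree(data, labels, reduct + [f]))
        reduct.append(best)
    return reduct
```

This sequential form is quadratic in the number of attributes per pass, which is exactly why RST becomes prohibitive on high-dimensional data and motivates distributing the computation, as the paper proposes.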
Original language: English
Pages (from-to): 3321-3386
Number of pages: 66
Journal: Knowledge and Information Systems
Issue number: 8
Early online date: 02 May 2020
Publication status: Published - 01 Aug 2020


Keywords:

  • Big data
  • Data pre-processing
  • Distributed processing
  • High-performance computing
  • Rough set theory
  • Scalability


