TY - JOUR
T1 - Fake News Detection with Deep Learning
T2 - Insights from Multi-dimensional Model Analysis
AU - Li, QiuPing
AU - Fu, Fen
AU - Li, Yinjuan
AU - Wisassinthu, Bhunnisa
AU - Chansanam, Wirapong
AU - Boongoen, Tossapon
N1 - © The Author(s) 2025.
PY - 2025/8/28
Y1 - 2025/8/28
AB - This study aims to systematically evaluate and compare various deep learning models in terms of accuracy, efficiency, and interpretability for fake news detection. Leveraging recent advancements in pretrained models (e.g., BERT, RoBERTa) and lightweight frameworks (e.g., TextCNN), we implemented and optimized multiple detection models. Comparative analysis was conducted on a dataset containing approximately 40,000 news texts. Results revealed that BERT Large significantly outperformed other models, achieving an accuracy of 99.33%, attributed to its extensive semantic understanding capabilities. Conversely, TextCNN, despite its simpler architecture, achieved competitive accuracy (98.77%), demonstrating substantial practical value for resource-limited environments. Interpretability analysis via attention visualization highlighted distinct cognitive strategies of pretrained models when classifying real versus fake news. While the study addresses critical technical challenges in fake news detection, limitations related to potential dataset biases and domain specificity were acknowledged, suggesting opportunities for future research on multimodal and cross-domain adaptations. This research contributes substantially by providing practical benchmarks and interpretability insights, significantly enhancing real-world fake news detection systems, thus aiding platforms in combating misinformation effectively.
KW - fake news detection
KW - deep learning
KW - BERT
KW - TextCNN
KW - model interpretability
UR - https://ojs.bonviewpress.com/index.php/JCCE/article/view/6051/1555
DO - 10.47852/bonviewJCCE52026051
M3 - Article
SN - 2810-9570
JO - Journal of Computational and Cognitive Engineering
JF - Journal of Computational and Cognitive Engineering
ER -