Comparative Study of DistilBERT and ELECTRA-Small Models in Spam Email Classification

Ferdy Agusman

Abstract


Spam email detection remains a challenging task in cybersecurity because spam content is highly variable, which makes it harder to identify and has prompted researchers to develop a range of detection methods. Among these, Natural Language Processing (NLP) and machine learning techniques have shown outstanding results in classifying emails as spam or non-spam. Transformer-based models such as BERT have demonstrated high accuracy in text classification tasks; however, their computational requirements make them impractical in resource-limited environments. To mitigate this, smaller and more lightweight models such as DistilBERT and ELECTRA-Small have been developed, both noted for their efficiency and accuracy. This study compares these two models in terms of accuracy, precision, recall, and F1 score. Experimental results reveal that while both models excel at binary classification, notable differences emerge: ELECTRA-Small achieves higher accuracy and precision with faster processing times, while DistilBERT demonstrates superior recall, highlighting its effectiveness in minimizing false negatives.
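As context for the metrics the study reports, the following is a minimal sketch of how accuracy, precision, recall, and F1 score are derived from binary spam/ham predictions. The labels here are illustrative toy values, not the paper's experimental data:

```python
# Illustrative only: toy labels, not the study's data. 1 = spam, 0 = non-spam (ham).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # spam correctly flagged
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # ham wrongly flagged as spam
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # spam missed (false negatives)
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # ham correctly passed through

accuracy  = (tp + tn) / len(y_true)
precision = tp / (tp + fp)                        # how many flagged emails were truly spam
recall    = tp / (tp + fn)                        # high recall = few missed spam emails
f1        = 2 * precision * recall / (precision + recall)

print(accuracy, precision, recall, f1)
```

Recall is the metric on which DistilBERT is reported to lead: each false negative (`fn`) is a spam email that reaches the inbox, so minimizing false negatives directly raises recall.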


Keywords


Spam email; Machine learning; Transformer


References


AbdulNabi, I., & Yaseen, Q. (2021). Spam Email Detection Using Deep Learning Techniques. Procedia Computer Science, 184, 853–858. https://doi.org/10.1016/j.procs.2021.03.107

Agbesi, V. K., Chen, W., Yussif, S. B., Hossin, M. A., Ukwuoma, C. C., Kuadey, N. A., Agbesi, C. C., Abdel Samee, N., Jamjoom, M. M., & Al-antari, M. A. (2023). Pre-Trained Transformer-Based Models for Text Classification Using Low-Resourced Ewe Language. Systems, 12(1), 1. https://doi.org/10.3390/systems12010001

Ahmed, N., Amin, R., Aldabbas, H., Koundal, D., Alouffi, B., & Shah, T. (2022). Machine Learning Techniques for Spam Detection in Email and IoT Platforms: Analysis and Research Challenges. Security and Communication Networks, 2022, 1–19. https://doi.org/10.1155/2022/1862888

Akinyelu, A. A. (2021). Advances in spam detection for email spam, web spam, social network spam, and review spam: ML-based and nature-inspired-based techniques. Journal of Computer Security, 29(5), 473–529. https://doi.org/10.3233/JCS-210022

Chowdhury, G. G. (2020). Natural Language Processing.

Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators (No. arXiv:2003.10555). arXiv. https://doi.org/10.48550/arXiv.2003.10555

Fahmy Amin, M. (2022). Confusion Matrix in Binary Classification Problems: A Step-by-Step Tutorial. Journal of Engineering Research, 6(5). https://doi.org/10.21608/erjeng.2022.274526

Guo, Y., Mustafaoglu, Z., & Koundal, D. (2022). Spam Detection Using Bidirectional Transformers and Machine Learning Classifier Algorithms. Journal of Computational and Cognitive Engineering, 2(1), 59. https://doi.org/10.47852/bonviewJCCE2202192

Jazzar, M., F. Yousef, R., & Eleyan, D. (2021). Evaluation of Machine Learning Techniques for Email Spam Classification. International Journal of Education and Management Engineering, 11(4), 35–42. https://doi.org/10.5815/ijeme.2021.04.04

Jones, I. (n.d.). Assessing the Efficacy of the ELECTRA Pre-Trained Language Model for Multi-Class Sarcasm Subcategory Classification. Department of Computer Science, University of Bath.

Khan, M., & Ghafoor, L. (n.d.). Adversarial Machine Learning in the Context of Network Security Challenges and Solutions.

Khan, S., Naseer, M., Hayat, M., Zamir, S. W., Khan, F. S., & Shah, M. (2022). Transformers in Vision: A Survey. ACM Computing Surveys, 54(10s), 1–41. https://doi.org/10.1145/3505244

Kofi Akpatsa, S., Lei, H., Li, X., Kofi Setornyo Obeng, V.-H., Mensah Martey, E., Clement Addo, P., & Dodzi Fiawoo, D. (2022). Online News Sentiment Classification Using DistilBERT. Journal of Quantum Computing, 4(1), 1–11. https://doi.org/10.32604/jqc.2022.026658

Li, P., Zhong, P., Mao, K., Wang, D., Yang, X., Liu, Y., Yin, J., & See, S. (2021). ACT: An Attentive Convolutional Transformer for Efficient Text Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13261–13269. https://doi.org/10.1609/aaai.v35i15.17566

Lu, H., Ehwerhemuepha, L., & Rakovski, C. (2022). A comparative study on deep learning models for text classification of unstructured medical notes with various levels of class imbalance. BMC Medical Research Methodology, 22(1), 181. https://doi.org/10.1186/s12874-022-01665-y

Nair, A. R., Singh, R. P., Gupta, D., & Kumar, P. (2024). Evaluating the Impact of Text Data Augmentation on Text Classification Tasks using DistilBERT. Procedia Computer Science, 235, 102–111. https://doi.org/10.1016/j.procs.2024.04.013

Nallamothu, P. T., & Khan, M. S. (2023). Machine Learning for SPAM Detection. 6(1).

Ranasinghe, T., Gupte, S., Zampieri, M., & Nwogu, I. (2020). WLV-RIT at HASOC-Dravidian-CodeMix-FIRE2020: Offensive Language Identification in Code-switched YouTube Comments.

Sahmoud, T., & Mikki, D. M. (2022). Spam Detection Using BERT.

Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2020). DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter (No. arXiv:1910.01108). arXiv. https://doi.org/10.48550/arXiv.1910.01108

Silva Barbon, R., & Akabane, A. T. (2022). Towards Transfer Learning Techniques—BERT, DistilBERT, BERTimbau, and DistilBERTimbau for Automatic Text Classification from Different Languages: A Case Study. Sensors, 22(21), 8184. https://doi.org/10.3390/s22218184

Tepecik, A., & Demir, E. (2024). Emotion Detection with Pre-Trained Language Models BERT and ELECTRA Analysis of Turkish Data. https://doi.org/10.58190/imiens.2024.82

Tezgider, M., Yildiz, B., & Aydin, G. (2022). Text classification using improved bidirectional transformer. Concurrency and Computation: Practice and Experience, 34(9), e6486. https://doi.org/10.1002/cpe.6486

Wood, T., Basto-Fernandes, V., Boiten, E., & Yevseyeva, I. (n.d.). Systematic Literature Review: Anti-Phishing Defences and Their Application to Before-the-click Phishing Email Detection.

Yi, X., & Xiao, Y. (2024). Optimizing Transformer Models for Resource-Constrained Environments: A Study on Compression Techniques for Edge Computing.

Zhang, S., Yu, H., & Zhu, G. (2022). An emotional classification method of Chinese short comment text based on ELECTRA. Connection Science, 34(1), 254–273. https://doi.org/10.1080/09540091.2021.1985968




DOI: https://doi.org/10.31294/inf.v12i2.25528

Copyright (c) 2025 Ferdy Agusman

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Published by LPPM Universitas Bina Sarana Informatika, supported by Relawan Jurnal Indonesia.

Jl. Kramat Raya No.98, Kwitang, Kec. Senen, Jakarta Pusat, DKI Jakarta 10450, Indonesia