Enhance Social Network Bullying Detection Using Multi-Teacher Knowledge Distillation With XGBoost Classifier

Cyberbullying remains a pressing issue in Thai social media, especially among teenagers. While many studies have explored deep learning approaches for sentiment analysis or toxicity detection, the detection of cyberbullying—especially in the Thai language—remains underexplored. This study introduces a novel framework that enhances cyberbullying detection by integrating Multi-Teacher Knowledge Distillation (MTKD) with an XGBoost classifier, specifically adapted for Thai-language social media posts. Unlike prior work that relies solely on neural models, this research demonstrates how distilled soft labels from diverse teacher models can be effectively transferred to a lightweight and interpretable XGBoost student model. A key contribution of this study is the successful adaptation of XGBoost, traditionally used for structured/tabular data, for a natural language classification task by using rich semantic features extracted via pre-trained NLP models. Additionally, although the selected datasets (Wisesight, Thai Toxic Tweet, and 40 Thai Children Stories) are often used for sentiment analysis, we reframe and preprocess them for the purpose of cyberbullying classification by focusing on toxic, harmful, or aggressive linguistic patterns. Our framework achieved strong classification performance—92.5%, 90.5%, and 91.0% accuracy across the three datasets—demonstrating its robustness and practical application in Thai-language cyberbullying detection.
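The core idea described above — aggregating softened predictions from several teacher models into distilled labels for a lightweight student — can be illustrated with a minimal numpy sketch. This is not the paper's implementation; the teacher logits, temperature value, and two-class setup (non-bullying vs. bullying) are illustrative assumptions, and the final hard labels and confidence weights stand in for the inputs one would pass to an XGBoost student trained on pre-extracted semantic features.

```python
import numpy as np

def soften(logits, T=2.0):
    # Temperature-scaled softmax: a higher T flattens each teacher's
    # distribution, exposing "dark knowledge" about near-miss classes.
    z = logits / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_soft_labels(teacher_logits_list, T=2.0):
    # Multi-teacher distillation (simple variant): average the softened
    # distributions of all teachers into one soft label per example.
    probs = [soften(logits, T) for logits in teacher_logits_list]
    return np.mean(probs, axis=0)

# Two hypothetical teachers scoring 3 posts for {non-bullying, bullying}.
t1 = np.array([[2.0, -1.0], [0.5, 0.4], [-2.0, 3.0]])
t2 = np.array([[1.5, -0.5], [-0.2, 0.8], [-1.0, 2.5]])

soft = distill_soft_labels([t1, t2], T=2.0)   # shape (3, 2), rows sum to 1
hard = soft.argmax(axis=1)                    # hard labels for the student
weight = soft.max(axis=1)                     # teacher confidence per example
```

Because gradient-boosted tree classifiers such as XGBoost expect hard targets, one common way to transfer the distilled signal is exactly this: train the student on `hard` labels while passing the teachers' agreement (`weight`) as per-sample weights, so that examples the teacher ensemble is unsure about influence the trees less.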

Index Terms: Cyberbullying, Feature extraction, Convolutional neural networks, Classification algorithms, Blogs, Accuracy, Machine learning, Neural networks, Deep learning, Transformers

Sathit Prasomphan

Department of Computer and Information Science, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok, Bangkok, Thailand

2025

IEEE Access

ISSN:
Year, Volume (Issue): 2025, 13(1)