Online social platforms have become omnipresent. While these environments are beneficial for sharing messages, ideas, or information of any kind, they also expose users to cyber-bullying, verbal harassment, or humiliation. Regrettably, the latter actions are rampant, urging further research to restrain malicious activities. Even though this topic has been explored in several languages, there is no prior work on Georgian toxic comment analysis and detection.
In this work, we extracted data from the Tbilisi forum, an online platform for public discussions. The resulting dataset of 10,000 comments was labeled as toxic or non-toxic. After data preprocessing, we passed the generated vectors to our models. We developed multiple deep learning architectures: NCP, biRNN, CNN, biGRU-CNN, biLSTM, biGRU, and transformer, along with an NB-SVM baseline.
We took a novel approach to toxic comment classification by employing a brain-inspired NCP model. Each model, including NCP, showed satisfactory results. Our best-performing model was the CNN, with 0.888 accuracy and 0.942 AUC.