The effect of gender bias on hate speech detection
Date
2023-06-01
Author
Sahinuc, Furkan
Yilmaz, Eyup Halit
Toraman, Çağrı
Koc, Aykut
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Hate speech against individuals or communities with different backgrounds is a major problem in online social networks. The domain of hate speech has spread to various topics, including race, religion, and gender. Although there are many efforts toward hate speech detection across domains and languages, the effect of gender identity has rarely been examined in isolation. Moreover, hate speech detection is mostly studied for particular languages, specifically English, rather than low-resource languages such as Turkish. We examine gender identity-based hate speech detection for both English and Turkish tweets. We compare the performance of state-of-the-art models using 20k tweets per language. We observe that transformer-based language models outperform bag-of-words and deep learning models, while the conventional bag-of-words model performs surprisingly well, possibly due to offensive or hate-related keywords. Furthermore, we analyze the effect of debiased embeddings for hate speech detection. We find that performance can be improved by removing gender-related bias from neural embeddings, since gender-biased words can have offensive or hateful implications.
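The abstract mentions improving detection by removing gender-related bias from neural embeddings. A common way to do this is to estimate a "gender direction" from gendered word pairs and project it out of each embedding (in the spirit of hard debiasing). The sketch below illustrates the idea with toy random vectors; the function names and the 8-dimensional embeddings are assumptions for illustration, not the paper's actual models or data.

```python
import numpy as np

def gender_direction(pairs, emb):
    """Estimate a gender direction as the normalized mean difference
    of gendered word pairs, e.g. (he, she), (man, woman)."""
    diffs = [emb[a] - emb[b] for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def neutralize(vec, direction):
    """Remove the component of `vec` along the (unit) gender direction."""
    return vec - np.dot(vec, direction) * direction

# Toy embeddings (hypothetical, for illustration only)
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in ["he", "she", "man", "woman", "doctor"]}

d = gender_direction([("he", "she"), ("man", "woman")], emb)
debiased = neutralize(emb["doctor"], d)

# After neutralizing, the projection onto the gender direction is ~0.
assert abs(np.dot(debiased, d)) < 1e-9
```

In a detection pipeline, the debiased vectors would simply replace the original embeddings before being fed to the classifier; the paper's finding is that this can help because gender-biased words carry offensive or hateful connotations.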
URI
https://hdl.handle.net/11511/109715
Journal
SIGNAL IMAGE AND VIDEO PROCESSING
DOI
https://doi.org/10.1007/s11760-022-02368-z
Collections
Department of Computer Engineering, Article
Citation Formats
IEEE
F. Sahinuc, E. H. Yilmaz, Ç. Toraman, and A. Koc, “The effect of gender bias on hate speech detection,” SIGNAL IMAGE AND VIDEO PROCESSING, vol. 17, no. 4, pp. 1591–1597, 2023, Accessed: 00, 2024. [Online]. Available: https://hdl.handle.net/11511/109715.