A Deep Multi-modal Neural Network for the Identification of Hate Speech from Social Media
Abstract
Hate speech can be defined as an intentional and repeated act meant to harm an individual or a group of individuals. Such acts can be carried out via social networking websites such as Twitter, YouTube, and Facebook. Most existing approaches to detecting hate speech concentrate on either the textual or the visual information of posted social media content. In this work, a multi-modal system is proposed that uses both the textual and the visual content of a social media post to classify it into Racist, Sexist, Homophobic, Religion-based hate, Other hate, and No hate classes. The proposed multi-modal system uses a convolutional neural network-based model to process text and a pre-trained VGG-16 network to process image content. The performance of the proposed model is evaluated on a benchmark dataset, where it achieves strong performance in classifying social media posts into the six hate classes.
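To make the described architecture concrete, the following is a minimal sketch, not the authors' implementation, of a multi-modal classifier with a text CNN branch, a pre-trained VGG-16 image branch, and a fused six-way classifier. The vocabulary size, embedding dimension, kernel sizes, fusion layer width, and the choice to freeze VGG-16 are all illustrative assumptions.

```python
# Hypothetical sketch of the described multi-modal architecture: a small
# text CNN branch, a pre-trained VGG-16 image branch, and a fused classifier
# over six hate classes. Hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision.models import vgg16

NUM_CLASSES = 6          # Racist, Sexist, Homophobic, Religion-based, Other, No hate
VOCAB_SIZE = 20_000      # assumed vocabulary size
EMBED_DIM = 300          # assumed word-embedding dimension
MAX_LEN = 50             # assumed maximum tokens per post


class TextCNN(nn.Module):
    """1-D convolutional branch over word embeddings of the post text."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.convs = nn.ModuleList(
            [nn.Conv1d(EMBED_DIM, 128, kernel_size=k) for k in (3, 4, 5)]
        )

    def forward(self, token_ids):                   # (batch, MAX_LEN)
        x = self.embed(token_ids).transpose(1, 2)   # (batch, EMBED_DIM, MAX_LEN)
        # Max-pool each convolution's feature map over time, then concatenate.
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return torch.cat(feats, dim=1)              # (batch, 3 * 128)


class MultiModalHateClassifier(nn.Module):
    """Concatenates text-CNN and VGG-16 features and classifies into six classes."""
    def __init__(self):
        super().__init__()
        self.text_branch = TextCNN()
        vgg = vgg16(weights="IMAGENET1K_V1")
        for p in vgg.parameters():                  # freeze VGG-16 (assumption)
            p.requires_grad = False
        # Reuse VGG-16 up to its penultimate fully connected layer (4096-d features).
        self.image_branch = nn.Sequential(vgg.features, vgg.avgpool,
                                          nn.Flatten(), *list(vgg.classifier[:-1]))
        self.classifier = nn.Sequential(
            nn.Linear(3 * 128 + 4096, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, NUM_CLASSES),
        )

    def forward(self, token_ids, images):           # images: (batch, 3, 224, 224)
        fused = torch.cat([self.text_branch(token_ids),
                           self.image_branch(images)], dim=1)
        return self.classifier(fused)               # logits over the six classes


if __name__ == "__main__":
    model = MultiModalHateClassifier()
    tokens = torch.randint(0, VOCAB_SIZE, (2, MAX_LEN))
    imgs = torch.rand(2, 3, 224, 224)
    print(model(tokens, imgs).shape)                # torch.Size([2, 6])
```

The key design point the abstract implies is late fusion: each modality is encoded separately and the feature vectors are concatenated before a shared classification head, so a post with only text or only an image can still be handled by zeroing or dropping the missing branch.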