Can We Automate the Analysis of Online Child Sexual Exploitation Discourse?

dc.contributor.author: Cook, D., Zilka, M., DeSandre, H., Giles, S., Weller, A., & Maskell, S.
dc.date.accessioned: 2022-11-04T18:08:30Z
dc.date.available: 2022-11-04T18:08:30Z
dc.date.issued: 2022
dc.description.abstract: Social media’s growing popularity raises concerns about children’s online safety. Interactions between minors and adults with predatory intentions are a particularly grave concern. Research into online sexual grooming has often relied on domain experts to manually annotate conversations, limiting both the scale and scope of such studies. In this work, we test how well automated methods can detect conversational behaviors and replace an expert human annotator. Informed by psychological theories of online grooming, we label 6772 chat messages sent by child-sex offenders with one of eleven predatory behaviors. We train bag-of-words and natural language inference models to classify each behavior, and show that the best-performing models classify behaviors in a manner that is consistent, but not on par, with human annotation.
dc.identifier.citation: Cook, D., Zilka, M., DeSandre, H., Giles, S., Weller, A., & Maskell, S. (2022). Can We Automate the Analysis of Online Child Sexual Exploitation Discourse? arXiv preprint arXiv:2209.12320.
dc.identifier.uri: https://arxiv.org/pdf/2209.12320.pdf
dc.identifier.uri: http://hdl.handle.net/11212/5609
dc.language.iso: en
dc.publisher: arXiv
dc.subject: online safety
dc.subject: social media
dc.subject: child sexual exploitation
dc.subject: online grooming
dc.subject: manipulation
dc.subject: detection
dc.title: Can We Automate the Analysis of Online Child Sexual Exploitation Discourse?
dc.type: Article
