
Aaron Bunch
Algorithm detects tweets abusing women

A Queensland university has developed an algorithm to detect misogynistic social media posts, including threats of harm or sexual violence.

August 28, 2020

A Queensland university has developed an algorithm to detect misogynistic Twitter posts containing words abusive to women, such as whore, slut and rape.

Queensland University of Technology’s sophisticated formula can analyse millions of tweets and identify content targeting women, including threats of harm or sexual violence.

“Our machine-learning solution cuts through the raucous rabble on the Twittersphere and identifies hateful content,” computer scientist Richi Nayak told AAP.

“We hope the model can be adopted by social media platforms to automatically identify and report it to protect women.”

QUT researchers developed a text-mining system in which the algorithm learns tweet-specific and abusive language as it goes.
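The article does not publish QUT's system, but a minimal sketch of the kind of text-mining baseline it describes might look like the following, assuming scikit-learn and a tiny invented training set used purely for illustration:

```python
# Hypothetical text-mining baseline, NOT QUT's actual model.
# The toy labelled tweets below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled tweets: 1 = misogynistic, 0 = benign.
tweets = [
    "get back to the kitchen where you belong",  # abusive in context
    "women like you should not be online",       # abusive
    "we cooked dinner together in the kitchen",  # benign
    "great turnout of women at the conference",  # benign
]
labels = [1, 1, 0, 0]

# Word and word-pair features plus a linear classifier: a common first baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["get back to the kitchen"]))  # expected: [1]
```

A baseline like this learns which terms correlate with abuse but, as the researchers note below, it cannot capture context on its own.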

“The key challenge in misogynistic tweet detection is understanding the context of a tweet,” Professor Nayak said.

“The complex and noisy nature of tweets makes it difficult.

“But, sadly, there’s no shortage of misogynistic data out there to work with.”

Prof Nayak said teaching a machine to differentiate context, without the help of tone, was key to this project’s success.

The QUT team built a deep learning algorithm, meaning the machine can revisit its previous understanding of terminology and adjust its contextual and semantic understanding as it goes.
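The article does not name the architecture, but one hedged illustration of this idea is fine-tuning a pretrained contextual language model; the choice of BERT and the Hugging Face transformers API below are assumptions, not QUT's published method:

```python
# Sketch of scoring a tweet with a pretrained contextual model (an assumption;
# the article does not specify QUT's architecture). The classification head is
# freshly initialised, so its scores are meaningless until the model is
# fine-tuned on labelled tweets.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = benign, 1 = misogynistic
)

inputs = tokenizer("get back to the kitchen", return_tensors="pt",
                   padding=True, truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))  # class probabilities, once fine-tuned
```

Because such a model represents each word in light of the words around it, further training on labelled tweets shifts its contextual and semantic representations, which is the behaviour the paragraph above describes.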

Researchers also ensured the algorithm could differentiate between abuse, sarcasm and friendly use of aggressive terminology as it built its vocabulary.

That’s tough to do because natural language is constantly evolving, Prof Nayak said.

“Take the phrase ‘get back to the kitchen’ as an example – devoid of context of structural inequality, a machine’s literal interpretation could miss the misogynistic meaning,” she said.

“But seen with the understanding of what constitutes abusive or misogynistic language, it can be identified as a misogynistic tweet.”
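To see why context matters more than keywords, consider a naive keyword filter; the word list below simply echoes the terms quoted earlier in the article, and the comparison is a hypothetical illustration rather than QUT's method:

```python
# Hypothetical contrast: a naive keyword filter vs. a contextual model.
ABUSIVE_KEYWORDS = {"whore", "slut", "rape"}  # terms quoted in the article

def keyword_flag(tweet: str) -> bool:
    """Flag a tweet only if it contains an explicitly abusive word."""
    return bool(set(tweet.lower().split()) & ABUSIVE_KEYWORDS)

print(keyword_flag("get back to the kitchen"))  # False: no listed word appears
# A model trained on labelled examples, like the sketches above, can still
# flag the phrase because it has learned the surrounding context.
```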

Prof Nayak said the model could also be expanded to identify racism, homophobia, or abuse toward people with disabilities.

“If we can make identifying and removing this content easier, that can help create a safer online space for all users,” she said.

Researchers from QUT’s faculties of Science and Engineering, and Law, and its Digital Media Research Centre published the research with academic publisher Springer on Friday.

