To create the algorithm, the researchers mined one million tweets from Twitter and filtered them for misogynistic keywords. They then manually categorized the remaining tweets and fed the labelled samples into a “machine learning classifier, which used the samples to create its own classification model.” As the system continued to refine its vocabulary, the researchers monitored context and intent so that it could better distinguish content meant to be humorous or otherwise non-abusive from genuine abuse.
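The pipeline described above can be sketched in miniature: filter a corpus of tweets down to those containing flagged keywords, then train a simple bag-of-words classifier on hand-labelled samples. This is an illustrative sketch only; the keyword list, example tweets, labels, and the Naive Bayes model below are invented placeholders, not the researchers' actual data or method.

```python
# Hypothetical sketch of a keyword-filter + classifier pipeline.
# KEYWORDS stands in for the mined misogynistic-term list; all tweets
# and labels are placeholder examples.
from collections import Counter
import math

KEYWORDS = {"keyword1", "keyword2"}  # placeholder flagged terms


def keyword_filter(tweets):
    """Keep only tweets that contain at least one flagged keyword."""
    return [t for t in tweets if KEYWORDS & set(t.lower().split())]


class NaiveBayes:
    """Minimal multinomial Naive Bayes over whitespace tokens."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)
        self.word_counts = {c: Counter() for c in self.class_counts}
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        best, best_score = None, -math.inf
        total = sum(self.class_counts.values())
        for c, n in self.class_counts.items():
            score = math.log(n / total)  # class prior
            counts = self.word_counts[c]
            denom = sum(counts.values()) + len(self.vocab)
            for w in text.lower().split():
                # Laplace smoothing so unseen words don't zero out the score
                score += math.log((counts[w] + 1) / denom)
            if score > best_score:
                best, best_score = c, score
        return best


# Hand-labelled placeholder samples, mirroring the "abusive vs.
# humorous/non-abusive" distinction the researchers monitored.
model = NaiveBayes().fit(
    [
        "keyword1 you are worthless trash",
        "keyword2 haha just a joke lol",
        "keyword1 shut up and leave",
        "keyword2 love this funny meme",
    ],
    ["abusive", "not_abusive", "abusive", "not_abusive"],
)
```

In a real system the classifier would be retrained as its vocabulary grows, with human review of context and intent feeding back into the labels, as the article describes.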
Although the algorithm was designed specifically to filter out ill-intentioned misogynistic words and phrases, it could similarly be used to filter racist, homophobic, or ableist content. The algorithm has not yet been adopted by the platform; however, the researchers hope that Twitter and other sites will integrate it to help protect their users from online abuse.
Image Credit: Shutterstock
The post Misogyny-Detecting Algorithms : Misogyny-Detecting Algorithm appeared first on FrontLine Fever.
