
How You Can Report Hate Online

When a gunman killed 50 worshippers and injured about 50 others at mosques in Christchurch, New Zealand, it became painfully clear how far online hatred can escalate into real-world violence. Even before that tragedy, we already knew that social media platforms had to do more to curb hate.

Thankfully, our prayers are being answered: Facebook recently announced that it is ready to battle hate by banning white nationalism and white separatism content on both Facebook and Instagram. The ban took effect on March 31, so it is already in force.

What this means is that, from now on, as soon as human moderators or artificial intelligence detect these kinds of posts, they will be taken down. The two social networks will also point users who search for terms associated with white supremacy towards support resources.

If you are reading this at home and feel that concern over online hate is overblown, it might interest you to know that Facebook revealed that fewer than two hundred people watched the Christchurch gunman's livestream while it was happening, yet the first report only came twelve minutes after the broadcast ended, when it was already too late. If that bothers you, here are the steps to take whenever you come across hate online.

How Do You Report Hateful Content, And Do Reports Actually Help?

Social media platforms have precise rules about what they will and will not delete, and since they want more users, more content and more hours spent on their sites, they are reluctant to do anything that could be seen as tampering with free speech. Still, they have to comply with hate speech laws in every country where they operate, and each platform has its own policies and guidelines that users must obey. If a post or account is found to violate those terms, it will be deleted. Even so, we cannot expect artificial intelligence and human moderators to catch everything.

Most of the time, social networks will only delete content that directly incites hate, which is why spreading false information is still so easy.

Your reports are therefore vital, but you can make them more effective by being very specific. Here is how to do that on the major social networking sites.

Facebook

As the best-known social media platform, Facebook has taken the most heat over its content moderation, so it has built a straightforward, uncomplicated reporting system based on its community standards. When you come across a hateful post, page, group or Facebook account, look for the "give feedback on" or "report" option. "Giving feedback" may sound odd, but it helps Facebook's AI systems and moderators understand what kind of content to look at and lets them spot patterns for future reference. You will be given plenty of options for flagging violent and hateful content, including hate speech, violence, terrorism and even fake news. After you select a reason for flagging the content, Facebook will also suggest what to do next, such as blocking an account, hiding posts from a page, or messaging the page to try to resolve the issue. If you come across a deadly livestream like the New Zealand attack, use the "report" function immediately. And if you know the exact location where an attack is about to take place, call the police right away.

Instagram

Even though Facebook owns Instagram, the reporting system on the latter is not as good as the former, although we expect that to change soon. To report anything, tap the three dots at the top right corner of a post and then tap "report" at the bottom. Select "report for spam" if the account appears fake, or simply flag the post as "inappropriate." Instagram will notify you if action was taken. However, unless racist language or symbols are clearly displayed, or there is a clear call to violence, Instagram tends to leave posts visible. There is also still no way to report fake news or misinformation on the app. To make matters worse, on Instagram's help pages about reporting, all you will find under "Hate Accounts" is "harassment or bullying." That is not good enough.

Twitter

Tap the down-facing arrow at the top right of a tweet and you will see the "report" option. Twitter will then show you a list of reasons: the tweet does not interest you, it looks suspicious or like spam, it displays sensitive images, or it is abusive or harmful.

If you select "abusive or harmful," you can specify whether the tweet is "disrespectful or offensive." If it is, Twitter will send you an apology and suggest that you mute or block the offending account.

After flagging a tweet as offensive, it is wise to go a step further and explain the situation in detail.

YouTube

Tap the three grey dots at the bottom corner of a clip, which stand for "more," then tap "report." YouTube will then let you report the clip for violating the site guidelines that prohibit sexual content, violent or repulsive content, hateful or abusive content, harmful or dangerous acts, child abuse, promotion of terrorism, spam or misleading content, infringement of your rights (for legal claims such as copyright), or simply a captioning issue. The more precise your report, the better. YouTube defines hateful material as content that promotes violence against or hatred of individuals based on race, ethnic origin, religion, disability, gender, age, nationality, veteran status or sexual orientation/gender identity.
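For developers or researchers who need to file reports outside the app, YouTube also exposes reporting through its Data API. The snippet below is a minimal, illustrative sketch only: it assumes the YouTube Data API v3 videos.reportAbuse endpoint, the google-api-python-client library, and an OAuth credential (creds) already authorised for the youtube.force-ssl scope; the video ID, reason selection and comment text are placeholders, not verified values.

# Minimal sketch: reporting a video via the YouTube Data API v3.
# Assumes google-api-python-client is installed and `creds` is an OAuth
# credential already authorised for the youtube.force-ssl scope.
from googleapiclient.discovery import build

def report_video(creds, video_id: str, comment: str) -> None:
    youtube = build("youtube", "v3", credentials=creds)

    # List the abuse-report reasons YouTube currently accepts and pick
    # one whose label mentions hate (label wording is an assumption here).
    reasons = youtube.videoAbuseReportReasons().list(part="snippet").execute()
    reason_id = next(
        item["id"]
        for item in reasons.get("items", [])
        if "hate" in item["snippet"]["label"].lower()
    )

    # File the report; the comments field plays the same role as the
    # free-text box in the app, so be as specific as possible.
    youtube.videos().reportAbuse(
        body={"videoId": video_id, "reasonId": reason_id, "comments": comment}
    ).execute()

# Example with placeholder values:
# report_video(creds, "VIDEO_ID_HERE", "Clip calls for violence against a minority at 01:23.")

Just as with the in-app flow, the more precise the comment attached to the report, the easier it is for moderators to act on it.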


About Author

Samuel Afolabi is a lazy tech enthusiast who loves writing about almost everything tech-related. He is the Editor-in-Chief of TechVaz. You can connect with him socially :)
