
Why Moderators Can’t Protect Online Communities on Their Own


The data on online abuse is sobering: Nearly one in three teens have been cyberbullied, one in five women have experienced misogynistic abuse online, and some 40% of all internet users have faced some form of online harassment. Why have online communities failed so dramatically to protect their users? An analysis of 18 years of data on user behavior and its moderation reveals that the failure stems from five misconceptions about toxicity held by the people responsible for moderating online behavior: that people experiencing abuse will leave, that incidents of abuse are isolated and independent, that abuse is not an inherent part of community culture, that rivalries within communities are beneficial, and that self-moderation can and does prevent abuse. These misconceptions drive current moderation practices. In each case, the authors present findings that both debunk the myths and point to more effective ways of managing toxic online behavior.

Online communities often advertise themselves as bringing people together, but many are characterized by toxic behavior, from overt harassment, hate speech, trolling, and doxing to more casual (yet often still quite harmful) comments that belittle and exclude. These communities include well-known platforms such as Reddit, Discord, and 4chan, along with brand communities like Call of Duty. The data is sobering: According to one recent report, nearly one in three teens have been cyberbullied, and about 15% have perpetrated cyberbullying against others. Other reports have found that 40% of all internet users have faced some form of online harassment, with one in five women experiencing misogynistic abuse online.
