The Three R’s of Facebook Moderation

Creating a branded Facebook page can be daunting for any business; doing so without a robust moderation policy in place can be disastrous.

A branded Facebook page can be an open invitation for foul-mouthed detractors to fill the page with expletives and abuse.

Perhaps less obvious are the implications of a well-meaning but misguided employee jumping in and publicly defending the company against said detractors. Or worse, an employee who has recently been made redundant joining in and publicly attacking the company out of frustration.

Before creating a branded presence on Facebook, it is therefore important to consider the Three R’s – Resourcing, Redundancies and Restrictions.

Resourcing

Consider who will moderate the page and the hours during which the moderator(s) will be active. Weekday, working-hours moderation is commonplace, but this should be clearly stated on the page. For larger branded community pages, automated moderation packages such as Crisp Certified can be purchased to remove negative or abusive posts automatically, while software such as the Digital Recognition Moderation Engine (DRME) can be used to moderate user-generated images and video.
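Crisp Certified and DRME are commercial products whose internals aren't documented here, so the following is only a minimal sketch of the kind of keyword filtering such services automate; the Post structure and the blocklist terms are hypothetical placeholders.

```python
from dataclasses import dataclass

# Placeholder blocklist; a commercial service maintains a far larger,
# regularly updated set of terms and handles misspellings and variants.
BLOCKLIST = {"expletive1", "expletive2"}

@dataclass
class Post:
    author: str
    text: str

def should_hide(post: Post) -> bool:
    """Flag a post for removal if it contains any blocklisted term."""
    words = {w.strip(".,!?").lower() for w in post.text.split()}
    return not BLOCKLIST.isdisjoint(words)

# A moderation queue would hide flagged posts and leave the rest visible.
posts = [Post("fan", "Great service!"), Post("troll", "This is expletive1 rubbish")]
for post in posts:
    print(post.author, "->", "hide" if should_hide(post) else "keep")
```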

Do ensure all employees, from the boardroom to the shop floor, have been fully briefed on the page and its rules of engagement. A solid social media policy will give employees the confidence to know when and how to respond to customers, as well as outlining the need for employees to state their relationship with the brand transparently when engaging online.

Redundancies

During sensitive times, such as mass redundancies, a branded Facebook page – or indeed the social web in general – can present itself as the perfect place to vent feelings of injustice. Employ a monitoring system, such as Radian6 or Sysomos, to proactively search for any online conversations surrounding the redundancies. Ensure an escalation process is in place should the online conversation spread, and fully brief the page moderator to watch for potential negativity on the page. Page guidelines should state that profane or abusive posts can be removed and their authors blocked, enabling the moderator to deal quickly with abusive or aggressive Wall posts and comments.
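Radian6 and Sysomos are commercial platforms and their APIs aren't shown here; the sketch below only illustrates the underlying idea of matching incoming mentions against redundancy-related search terms and escalating once volume passes a threshold. The search terms, threshold and sample mentions are all hypothetical.

```python
import re

# Hypothetical search terms pairing the brand conversation with
# redundancy-related language; a real query would be brand-specific.
SEARCH_TERMS = re.compile(r"\b(redundanc\w*|layoffs?|job cuts)\b", re.IGNORECASE)
ESCALATION_THRESHOLD = 5  # mentions per monitoring window; tune per brand

def check_window(mentions: list[str]) -> None:
    """Count matching mentions in one window and escalate above threshold."""
    hits = sum(1 for m in mentions if SEARCH_TERMS.search(m))
    if hits >= ESCALATION_THRESHOLD:
        print(f"ESCALATE: {hits} redundancy-related mentions this window")
    else:
        print(f"OK: {hits} mention(s), continue monitoring")

check_window(["AcmeCo layoffs announced", "love AcmeCo!", "AcmeCo redundancies unfair"])
```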

Restrictions

Consider sensitivities surrounding data protection. While automated moderation services can be employed to block specific words and profanities, moderation becomes more difficult when it isn't a specific word that needs to be blocked but a reference to something or someone. Ensure the page guidelines clearly state what can and cannot be referenced on the page; this enables the moderator to remove posts, or block users, that ignore the guidelines. The General Medical Council is a good example of an organisation finding a balance between protecting data and driving discussion.
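To illustrate the distinction, the sketch below contrasts a simple word blocklist with guideline-driven patterns for references such as a named doctor or a case number. The patterns, terms and example posts are illustrative only, not drawn from any real page's guidelines.

```python
import re

BLOCKED_WORDS = {"damn"}  # simple word filter (placeholder term)

# Hypothetical guideline rules: references that must not appear on the page,
# expressed as patterns rather than individual words.
GUIDELINE_PATTERNS = [
    re.compile(r"\bdr\.?\s+\w+\b", re.IGNORECASE),   # reference to a named doctor
    re.compile(r"\bcase\s+#?\d+\b", re.IGNORECASE),  # reference to a case number
]

def violates_guidelines(text: str) -> bool:
    """True if a post trips the word filter or references something restricted."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & BLOCKED_WORDS:
        return True  # caught by the simple word filter
    return any(p.search(text) for p in GUIDELINE_PATTERNS)

# The word filter alone would pass this post; the reference pattern catches it.
print(violates_guidelines("Dr Smith treated my mother last week"))  # True
print(violates_guidelines("Thanks for the quick reply!"))           # False
```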
