Pre-Active Moderation
AI-based, real-time pre-active moderation checks each message for harmful content and asks the sender to change or remove that content before the message is delivered. This type of moderation prevents unsavory interactions, but it can disrupt the user experience if not implemented carefully.
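The pre-send check described above can be sketched as a small gate in the message pipeline. This is a minimal illustration, not a production design: the function and field names are hypothetical, and a simple keyword blocklist stands in for a real AI classifier.

```python
# Hypothetical pre-active moderation gate: the message is classified
# before delivery, and the sender is prompted to revise if it fails.

BLOCKLIST = {"slur1", "slur2", "threat1"}  # stand-in for an AI model


def contains_harmful_content(message: str) -> bool:
    """Placeholder classifier; a real system would call an ML model here."""
    words = message.lower().split()
    return any(word in BLOCKLIST for word in words)


def submit_message(message: str) -> dict:
    """Run the pre-send check and either deliver or ask for a revision."""
    if contains_harmful_content(message):
        # Block delivery and return a revision prompt to the sender.
        return {
            "delivered": False,
            "prompt": "Your message may contain harmful content. "
                      "Please edit it before sending.",
        }
    return {"delivered": True, "message": message}
```

Because the check runs synchronously before delivery, keeping its latency low is what determines whether the experience feels disruptive.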
Pre-active chat moderation is difficult to implement with manual workflows because chat happens in real time. This method of moderation is therefore typically automated and can follow one of two approaches.