Automated Enforcement

You can use Cove's Automated Enforcement product to automatically moderate content on your platform. A few core components work together to make this possible; each is summarized briefly below, and you can click into the linked sub-section for any component to see more detail.

The core components are:

  1. Rules: A Rule is a piece of logic that scans your content and decides whether it should receive an Action. Rules usually encode an "if-then" structure, such as: "If a user posts a comment with the word 'kill', then delete it." This, of course, is not a very good rule (see the first sketch after this list for why), but you can imagine similar logic that would be more useful. In Cove, you encode these "if-then" pieces of logic into Rules. Once you've created your moderation Rules, every Item you send to Cove through the Item API runs through all of them, and those Rules can trigger Actions.
  2. Signals: A Signal is a method of analyzing content to determine whether or not it violates your policies. Simple Signals include keyword and regular expression matching, while more complex Signals run content through AI models that produce a "harmfulness" score (both are sketched after this list).
  3. Matching Banks: A Matching Bank is a way to keep track of a large list of values that you want to check for matches. For example, you could create a Matching Bank that holds 10,000 banned keywords and check the text of every new post or comment against it (sketched after this list). Rules can reference your Matching Banks, so you don't have to list out all 10,000 banned keywords every time you create a Rule that checks for matches against them.
  4. User Strike System: Cove's User Strike System lets you define a special set of Rules. Rather than running on individual Items (like posts or comments), the Strike System is designed to help you handle users who repeatedly violate your policies. Every time a user violates your policies, you can apply "strikes" to their account, and if they accumulate enough strikes, you can automatically take more severe actions on their account (sketched after this list).
  5. Report Rules: A Report Rule is a special type of Rule. Rules typically run on Items when they are first created (or edited later); these Items are all sent to Cove through the Item API. But if you're using Cove's Moderator Console, you'll also be sending Reports to Cove through the Report API every time a user flags something potentially harmful on your platform. Report Rules run only when an Item is reported (i.e. submitted to Cove through the Report API), not when the Item is first submitted through the Item API. They're particularly useful if you want to automatically resolve some Reports instead of adding them to a Queue for moderator review (sketched after this list).
  6. Spam Rules (Coming Soon): A Spam Rule is another special type of Rule. Most Rules run on individual Items - they analyze whether a single Item violates your policies. Spam Rules, however, are meant for detecting spammy users. If a user sends the message "Hi, how are you?" to another user, that seems harmless, so it shouldn't be flagged by any Rules. But if they send that exact same message to 100 users within the span of 20 seconds, they're probably a bot or spammer! Detecting that requires a Rule that looks at the aggregate behavior of a user over a period of time, which is exactly what Spam Rules enable you to do (sketched after this list).
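
To make the "if-then" structure of a Rule concrete, here is a minimal sketch in Python of the example rule above. The Item fields and the "delete" Action label are illustrative assumptions, not Cove's actual schema; in practice you build Rules in Cove rather than writing this code yourself.

```python
# A minimal sketch of the "if-then" logic a Rule encodes. The Item fields
# and the "delete" Action label are illustrative assumptions, not Cove's
# actual schema.

def kill_keyword_rule(item: dict) -> str | None:
    """If a comment contains the word "kill", return a "delete" Action."""
    if item.get("type") == "comment" and "kill" in item.get("text", "").lower():
        return "delete"
    return None  # No match: this Rule takes no Action on the Item.

print(kill_keyword_rule({"type": "comment", "text": "I'll kill the lights."}))  # delete
print(kill_keyword_rule({"type": "comment", "text": "Nice photo!"}))            # None
```

The first comment is harmless, yet the rule deletes it - which is exactly why a bare keyword match like this is not a very good rule.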
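
Similarly, here is a sketch of the two kinds of Signals described above: a simple regular-expression matcher, and a stand-in for an AI model that returns a "harmfulness" score. The hard-coded score is purely illustrative; a real complex Signal would call a trained model.

```python
import re

# Two illustrative Signals: a simple regular-expression matcher and a
# stand-in for an AI model that returns a "harmfulness" score.

def regex_signal(text: str, pattern: str) -> bool:
    """Simple Signal: does the text match a regular expression?"""
    return re.search(pattern, text, re.IGNORECASE) is not None

def harmfulness_signal(text: str) -> float:
    """Complex Signal: score the content's harmfulness between 0 and 1."""
    return 0.97 if "kill" in text.lower() else 0.02  # stand-in for a model call

text = "I will kill you"
print(regex_signal(text, r"\bkill\b"))  # True
print(harmfulness_signal(text))         # 0.97
```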
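
A Matching Bank behaves conceptually like a large, reusable lookup set. The sketch below models the 10,000-keyword example with a three-entry Python set; the same check works unchanged at full scale, and in Cove the Bank is managed for you and referenced by name from your Rules.

```python
# A Matching Bank modeled as a reusable lookup set. Only three keywords are
# shown, but the same membership check works unchanged with 10,000.

BANNED_KEYWORDS: set[str] = {"badword1", "badword2", "badword3"}

def matches_bank(text: str, bank: set[str]) -> bool:
    """Return True if any word in the text appears in the bank."""
    return any(word in bank for word in text.lower().split())

print(matches_bank("this post contains badword2 somewhere", BANNED_KEYWORDS))  # True
print(matches_bank("a perfectly clean post", BANNED_KEYWORDS))                 # False
```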
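
The Strike System's escalation logic can be sketched as a running per-user counter with thresholds. The thresholds and action names below are assumptions for illustration; in Cove you configure your own.

```python
from collections import defaultdict

# Strike escalation as a running per-user counter. The thresholds and
# action names are illustrative assumptions, not Cove defaults.

STRIKE_THRESHOLDS = [(10, "ban"), (5, "suspend"), (1, "warn")]  # checked high to low
strike_counts: defaultdict[str, int] = defaultdict(int)

def apply_strikes(user_id: str, strikes: int) -> str:
    """Add strikes to the user's running total and return the action taken."""
    strike_counts[user_id] += strikes
    for threshold, action in STRIKE_THRESHOLDS:
        if strike_counts[user_id] >= threshold:
            return action
    return "none"

print(apply_strikes("user_123", 3))  # warn (total: 3)
print(apply_strikes("user_123", 4))  # suspend (total: 7)
```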
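
The auto-resolve-or-enqueue decision a Report Rule makes might look like the following sketch. The Report payload shape, the harmfulness score, and the outcome labels are all assumptions; the real decision is configured in Cove and runs on Reports submitted through the Report API.

```python
# A Report Rule's decision as a sketch: auto-resolve Reports about clearly
# benign Items, and queue everything else for moderator review.

def report_rule(report: dict) -> str:
    """Auto-resolve Reports about clearly benign Items; queue the rest."""
    score = report["reported_item"].get("harmfulness_score", 1.0)
    if score < 0.1:
        return "auto_resolve"  # close the Report without moderator review
    return "enqueue"           # send the Report to a Queue for a moderator

report = {"reported_item": {"text": "Nice photo!", "harmfulness_score": 0.02}}
print(report_rule(report))  # auto_resolve
```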
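
Finally, the aggregate check a Spam Rule performs can be sketched as a sliding time window per (user, message) pair. The 100-message / 20-second numbers come from the example above; the data structures are illustrative.

```python
import time
from collections import defaultdict, deque

# A sliding-window duplicate-message check, per (user, message) pair.
# Thresholds taken from the "100 users within 20 seconds" example above.

WINDOW_SECONDS = 20
MAX_DUPLICATES = 100
recent_sends: defaultdict[tuple[str, str], deque[float]] = defaultdict(deque)

def is_spamming(user_id: str, message: str, now: float) -> bool:
    """Record this send; return True once the user has sent the same
    message MAX_DUPLICATES times within WINDOW_SECONDS."""
    window = recent_sends[(user_id, message)]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop sends that fell out of the window
    return len(window) >= MAX_DUPLICATES

# 100 copies of the same message within one second trips the rule:
start = time.time()
sends = [is_spamming("bot_7", "Hi, how are you?", start + i / 100) for i in range(100)]
print(sends[-1])  # True
```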