Basic Concepts

These are the core building blocks of Cove. Understanding these will help you integrate quickly and get your Trust & Safety workflow up and running.

The concepts below are listed in the same order in which you should build your Cove integration. Some concepts build on previous ones, so we recommend that you read all of them in order.

Item

An Item is any entity that you have on your platform. This can include individual pieces of content (e.g. posts, comments, direct messages, product listings, product reviews, etc.), threads of content (e.g. comment threads, group chats, etc.), or users and their profiles. Any individual entity can be considered an Item, even if it contains other Items within it.

Item Type

Item Types represent the different types of Items on your platform. For example, if you've built a social network that allows users to create profiles, upload posts, and comment on other users' posts, then your Item Types might be Profile, Post, Comment, and Comment Thread. If you've built a marketplace or eCommerce platform, your Item Types might be Buyer, Seller, Product Listing, Product Review, Direct Message, Transaction, and more. Every Item you send us needs to be an instance of exactly one of these Item Types.

The first step in your integration process will be defining these Item Types in your Cove dashboard. That way, we know exactly what types of Items you'll be sending us.
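
To make this concrete: once you create your Item Types in the dashboard, a common pattern is to keep their Cove-generated IDs in one place in your codebase. Here's a minimal TypeScript sketch, where every ID is made up for illustration:

// The Cove-generated ID for each Item Type you created in the dashboard.
// These IDs are invented for illustration; use the real ones shown in
// your Cove dashboard.
const ITEM_TYPE_IDS = {
  profile: "itm_profile_abc123",
  post: "itm_post_def456",
  comment: "itm_comment_ghi789",
  commentThread: "itm_thread_jkl012",
} as const;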

Read more about how to create Item Types here.

Important note on how we uniquely identify an Item

In Cove, we uniquely identify a particular Item by its (Item ID, Item Type ID) pair. Some platforms can't guarantee that, say, a comment ID and a user ID will never clash. Other customers own and operate multiple platforms, with no guarantee that Item IDs won't clash across those platforms.

In those circumstances, the (Item ID, Item Type ID) pair is needed to uniquely identify the correct Item. That's why you'll often see throughout this documentation that we ask you to send us your Items in the following shape:

item: {
  id: string;     // your unique identifier for the Item
  typeId: string; // Cove's generated ID for the Item Type
}

The id field will be your unique identifier for the Item, and the typeId field will be Cove's ID for the corresponding Item Type. Once you create an Item Type in your Cove dashboard, you'll see its generated ID, which you can then use to populate the typeId field.
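
For example, here's a minimal sketch of sending a comment to Cove using this shape. The endpoint URL, auth header, and surrounding payload are assumptions for illustration; consult the Item API reference for the real request format.

// Hypothetical request: the URL, auth scheme, and payload wrapper are
// illustrative assumptions, not Cove's documented API.
await fetch("https://api.getcove.example/items", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer YOUR_API_KEY", // hypothetical auth scheme
  },
  body: JSON.stringify({
    item: {
      id: "comment_12345",          // your platform's unique ID for this comment
      typeId: "itm_comment_ghi789", // the Cove-generated Item Type ID
    },
  }),
});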

Action

Actions in Cove represent the operations you can perform on Items. Some common Trust & Safety-related examples include Delete, Ban, Mute, and Send to Moderator. If you want to add non-T&S-related Actions as well, such as Promote, Add to Trending, Mark as Trustworthy, or Approve Transaction, you absolutely can! You can add any automated action to Cove.

Each Action must map to an API endpoint that you expose to Cove. For example, if you create a Delete Action in Cove, you must provide an API endpoint (i.e. a URL and, ideally, an authentication scheme) to which Cove can send API requests. That way, you can build Rules that automatically trigger these Actions, your moderators can apply them through our Manual Review Tool, and the Actions will actually get executed on your servers.
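
For instance, here's a sketch of what a Delete endpoint might look like on your side, written with Express. The route path and request payload shape are assumptions for illustration; Cove's actual request format is covered in the Actions documentation.

import express from "express";

const app = express();
app.use(express.json());

// A hypothetical endpoint your servers expose so Cove can execute a
// Delete Action. The payload shape here is an assumption.
app.post("/cove/actions/delete", (req, res) => {
  const { item } = req.body; // e.g. { id: string, typeId: string }
  // Verify the request actually came from Cove before acting on it,
  // e.g. by checking a shared secret or request signature.
  // Then delete the entity identified by item.id on your platform.
  console.log(`Deleting item ${item.id} of type ${item.typeId}`);
  res.sendStatus(200);
});

app.listen(3000);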

The second step in your integration process will be defining these Actions in your Cove dashboard.

Read more about Actions here.

Rule

A Rule is an automated filter or check that scans your platform for harmful Items and automatically takes action on those Items. Each time you send us an Item, we run it through all of your Cove Rules, and those Rules can trigger Actions if they detect abuse.

Once you've defined your Item Types and Actions in the Cove dashboard, you can start creating Rules! These Rules will run on every Item you send to our Item API.
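
Conceptually, you can think of a Rule as a condition over an Item plus the Actions to trigger when the condition matches. The TypeScript sketch below only illustrates that mental model; real Rules are configured in the Cove dashboard and built from Signals, not written in code.

// Conceptual model only — not how Cove implements Rules internally.
type Item = { id: string; typeId: string; text?: string };

interface Rule {
  name: string;
  matches: (item: Item) => boolean; // in practice, built from Signals
  actions: string[];                // Actions to trigger on a match
}

const spamRule: Rule = {
  name: "Obvious crypto spam",
  matches: (item) => (item.text ?? "").toLowerCase().includes("free crypto"),
  actions: ["Delete"],
};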

Read more about Rules here.

Custom AI Model

In Cove, you can create Custom AI Models. A Custom AI Model is an AI model designed to detect a specific kind of harmful content (for example, Hate Speech, Harassment, or Violence). These AI models are "Custom" because:

  • They are trained on your platform's content, which means they understand how your users communicate and how harm looks on your platform specifically.
  • They are also trained on your platform's guidelines or standards, which means they will respect your definition of harm. For example, if you allow certain types of sexual content, but not others, you can teach the AI model to understand exactly what type of sexual content is unacceptable.
  • They are fine-tuned by you, which means you can give the model simple feedback about how to make moderation decisions, and it will learn from your feedback.

Learn more about Custom AI Models in Cove here.

Signal

Signals are what you use to detect abuse. When you build a Rule, you'll select which Signals to use to determine whether an Item is abusive. You can choose from our library of Signals when building your Rules.

Read more about Signals here.

Policy

Policies are categories of harm that are prohibited or monitored on your platform. Typical examples include Spam, Hate, Harassment, and Violence.

Policies can have sub-policies underneath them. For example, the Violence policy could have sub-policies such as Graphic Violence, Threats of Violence, and Encouragement and Celebration of Violence, all of which are specific types of Violence that could occur on your platform.

It is often useful (and in some cases, required by the EU's Digital Services Act) to tie every Action you take to one or more specific Policies. For example, you could Delete a comment under your Hate policy, or you could Delete it under your Spam policy. Cove allows you to track those differences and measure how many Actions you've taken for each Policy. That way, you can see how effectively you're enforcing each Policy over time, identify Policies for which your enforcement is poor or degrading, and report performance metrics to your leadership (or to regulators).

You can create and manage your Policies in the Policies Dashboard, and you can fetch them programmatically through our Policies API.
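
For example, here's a sketch of fetching your Policies programmatically. The endpoint URL and response shape are assumptions for illustration; see the Policies API reference for the real ones.

// Hypothetical request and response shape, for illustration only.
const res = await fetch("https://api.getcove.example/policies", {
  headers: { Authorization: "Bearer YOUR_API_KEY" }, // hypothetical auth
});
const policies = await res.json();
// e.g. [{ id: "pol_123", name: "Violence", subPolicies: [...] }, ...]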

Report

A Report is created when a user on your platform flags an Item as potentially harmful. When you send that Report to our Report API, we add it to a Review Queue so that your moderators can review the Item and decide what to do with it.
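
For example, here's a sketch of forwarding a user flag to the Report API. The endpoint URL and payload fields are assumptions for illustration; consult the Report API reference for the real request format.

// Hypothetical request: URL, auth, and fields are illustrative assumptions.
await fetch("https://api.getcove.example/reports", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer YOUR_API_KEY", // hypothetical auth scheme
  },
  body: JSON.stringify({
    item: { id: "post_987", typeId: "itm_post_def456" }, // the flagged Item
    reporterId: "user_42", // the user who flagged it (assumed field)
    reason: "harassment",  // optional context (assumed field)
  }),
});

Read more about Review Queues below.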

Review Queue

A Review Queue is a queue that holds Reports awaiting review. You can think of a Review Queue as an airport security line, and Reports as travelers getting screened and processed one at a time.

When you send a new Report to Cove through our Report API, we'll add it to the end of a Review Queue. Once it reaches the front of the Queue, a moderator will look at the Report and decide what to do with it.