A/B Tests

Test subject lines and content variants

Overview

A/B test subject lines and email content to optimize engagement. Create tests with multiple variants, define the metric to optimize for, and let the platform automatically select a winner after a configurable observation period.

List Tests

Retrieve all A/B tests along with their variants and performance statistics.

GET /api/ab-tests

Returns all A/B tests with their variants and current stats.

Response

```json
[
  {
    "id": 1,
    "name": "Welcome Email Subject Test",
    "metric": "open_rate",
    "status": "completed",
    "test_percentage": 20,
    "wait_hours": 24,
    "winner_variant_id": 2,
    "variants": [
      {
        "id": 1,
        "subject": "Welcome to the platform!",
        "sends": 500,
        "open_rate": 0.42
      },
      {
        "id": 2,
        "subject": "You're in! Here's what's next",
        "sends": 500,
        "open_rate": 0.58
      }
    ]
  }
]
```
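Once a test's status is `completed`, the winning variant can be looked up by matching `winner_variant_id` against the `variants` array. A minimal sketch; `winning_variant` is an illustrative helper, not part of the API:

```python
# Pick out the winning variant from a test object as returned by
# GET /api/ab-tests. Returns None while the test is still running
# (winner_variant_id not yet set).

def winning_variant(test):
    winner_id = test.get("winner_variant_id")
    if winner_id is None:
        return None
    return next((v for v in test["variants"] if v["id"] == winner_id), None)

test = {
    "id": 1,
    "winner_variant_id": 2,
    "variants": [
        {"id": 1, "subject": "Welcome to the platform!", "open_rate": 0.42},
        {"id": 2, "subject": "You're in! Here's what's next", "open_rate": 0.58},
    ],
}
print(winning_variant(test)["subject"])  # → You're in! Here's what's next
```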

Create Test

Create a new A/B test with two or more variants. Specify the metric to optimize, the percentage of the audience to use for testing, and how long to wait before selecting a winner.

POST /api/ab-tests

Create a new A/B test with multiple variants.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `name`* | string | Name for this test |
| `metric`* | string | Metric to optimize: `open_rate` or `click_rate` |
| `test_percentage`* | number | Percentage of audience to include in the test (1-50) |
| `wait_hours`* | number | Hours to wait before auto-selecting the winner |
| `variants`* | array | Array of variant objects with `subject` and `html_body` |
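The constraints above can be checked client-side before submitting. The sketch below mirrors the documented rules; `validate_test_payload` is an illustrative helper, not part of any official SDK, and the server may enforce additional rules:

```python
# Client-side sanity check mirroring the documented constraints for
# POST /api/ab-tests: required fields, a valid metric, test_percentage
# in 1-50, and at least two variants.

VALID_METRICS = {"open_rate", "click_rate"}
REQUIRED_FIELDS = ("name", "metric", "test_percentage", "wait_hours", "variants")

def validate_test_payload(payload):
    """Return a list of problems; an empty list means the payload looks OK."""
    problems = []
    for field in REQUIRED_FIELDS:
        if field not in payload:
            problems.append(f"missing required field: {field}")
    if payload.get("metric") not in VALID_METRICS:
        problems.append("metric must be open_rate or click_rate")
    pct = payload.get("test_percentage")
    if not isinstance(pct, (int, float)) or not 1 <= pct <= 50:
        problems.append("test_percentage must be between 1 and 50")
    if len(payload.get("variants", [])) < 2:
        problems.append("at least two variants are required")
    return problems

good = {
    "name": "Promo Email Subject Test",
    "metric": "open_rate",
    "test_percentage": 20,
    "wait_hours": 24,
    "variants": [
        {"subject": "A", "html_body": "<p>a</p>"},
        {"subject": "B", "html_body": "<p>b</p>"},
    ],
}
print(validate_test_payload(good))  # → []
```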

Request

```json
{
  "name": "Promo Email Subject Test",
  "metric": "open_rate",
  "test_percentage": 20,
  "wait_hours": 24,
  "variants": [
    {
      "subject": "Don't miss our spring sale",
      "html_body": "<h1>Spring Sale</h1><p>20% off everything.</p>"
    },
    {
      "subject": "Spring deals inside - 20% off",
      "html_body": "<h1>Spring Deals</h1><p>Save 20% on all items.</p>"
    }
  ]
}
```

Response

```json
{
  "id": 5,
  "name": "Promo Email Subject Test",
  "metric": "open_rate",
  "status": "running",
  "test_percentage": 20,
  "wait_hours": 24,
  "variants": [
    { "id": 9, "subject": "Don't miss our spring sale" },
    { "id": 10, "subject": "Spring deals inside - 20% off" }
  ]
}
```

How It Works

When a test is created, the platform splits the target audience based on the test_percentage value. The test segment is divided evenly among all variants, and each variant is sent to its portion of the audience.

During the observation period defined by wait_hours, engagement data is collected for each variant. After the wait period expires, the platform automatically selects the variant with the highest score on the chosen metric and sends it to the remaining audience.
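The selection rule is simply "highest score on the chosen metric wins". A minimal sketch of that rule; tie-breaking behavior is an assumption not specified by the API:

```python
# Winner selection as described above: the variant with the highest
# score on the test's metric wins. max() keeps the first of tied
# variants, which is an assumption about tie-breaking.

def pick_winner(variants, metric):
    return max(variants, key=lambda v: v.get(metric, 0.0))

variants = [
    {"id": 1, "open_rate": 0.42},
    {"id": 2, "open_rate": 0.58},
]
print(pick_winner(variants, "open_rate")["id"])  # → 2
```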

Test lifecycle

```json
{
  "phase_1": "Split test_percentage of audience across variants",
  "phase_2": "Send each variant to its audience slice",
  "phase_3": "Collect engagement data for wait_hours",
  "phase_4": "Select variant with best metric score as winner",
  "phase_5": "Send winner to remaining audience"
}
```
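The split arithmetic in phase 1 can be sketched as follows. The integer-division rounding is an assumption; the platform's exact rounding behavior is not documented here:

```python
# test_percentage of the total audience forms the test pool, divided
# evenly across variants; everyone else is held back for the winner.

def split_audience(total, test_percentage, n_variants):
    """Return (per_variant_sends, holdout_size) for the test phase."""
    test_pool = total * test_percentage // 100
    per_variant = test_pool // n_variants
    holdout = total - per_variant * n_variants
    return per_variant, holdout

# 5,000 recipients, test_percentage=20, two variants: 500 sends per
# variant (matching the sample stats above), 4,000 held for the winner.
print(split_audience(5000, 20, 2))  # → (500, 4000)
```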

Metrics

Two metrics are supported for A/B test optimization:

  • open_rate — The percentage of recipients who opened the email. Best for testing subject lines and sender names.
  • click_rate — The percentage of recipients who clicked at least one link. Best for testing email body content and call-to-action placement.
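Both metrics are simple ratios. The sketch below assumes the usual definitions (opens over delivered, unique clickers over delivered); the exact denominators the platform uses are an assumption:

```python
# open_rate and click_rate as plain ratios over delivered emails.
# Guard against a zero denominator for tests with no deliveries yet.

def open_rate(opens, delivered):
    return opens / delivered if delivered else 0.0

def click_rate(unique_clickers, delivered):
    return unique_clickers / delivered if delivered else 0.0

# Reproducing the variant stats from the List Tests example:
print(open_rate(210, 500))  # → 0.42
print(open_rate(290, 500))  # → 0.58
```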