# Integration Testing
edg is uniquely suited to work as a self-contained integration testing tool for databases. A single YAML config expresses a full test: schema creation, data population, query execution, assertions, and teardown. The `edg all` command runs this entire lifecycle and fails if any query errors or any expectation is not met, making it a drop-in CI gate.
## How it works
The `all` command runs five phases in order:

```text
up → seed → run → deseed → down
```

- `up` - creates tables and indexes
- `seed` - populates them with realistic data
- `run` - executes your query workload with concurrent workers, collecting latency and error metrics
- `deseed` - truncates tables
- `down` - drops tables
After `run`, any expectations are evaluated against the collected metrics. If an expectation fails, edg still runs teardown (`deseed` and `down`) before exiting with a failure code.
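Each phase maps to a top-level section of the config, and `expectations` are checked after `run`. As a skeleton (section contents elided; names are taken from the full example below):

```yaml
up:           # DDL: create tables and indexes
  # ...
seed:         # populate tables with data
  # ...
run:          # the query workload to measure
  # ...
expectations: # assertions on the collected metrics
  # ...
deseed:       # truncate tables
  # ...
down:         # drop tables
  # ...
```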
## Writing an integration test
A complete integration test in a single config:

```yaml
globals:
  customers: 10000
  initial_balance: 10000
  batch_size: 5000

up:
  - name: create_customer
    query: |-
      CREATE TABLE IF NOT EXISTS customer (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        email STRING NOT NULL
      )
  - name: create_account
    query: |-
      CREATE TABLE IF NOT EXISTS account (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        balance FLOAT NOT NULL,
        customer_id UUID NOT NULL REFERENCES customer(id)
      )

seed:
  - name: populate_customer
    args:
      - gen_batch(customers, batch_size, 'email')
    query: |-
      INSERT INTO customer (email)
      SELECT unnest(string_to_array('$1', sep))
  - name: populate_account
    args:
      - batch(customers / batch_size)
      - initial_balance
      - batch_size
    query: |-
      INSERT INTO account (balance, customer_id)
      SELECT $2::FLOAT, c.id
      FROM customer c
      ORDER BY c.id
      OFFSET $1::INT * $3::INT
      LIMIT $3::INT

init:
  - name: fetch_accounts
    type: query
    query: SELECT id FROM account ORDER BY random()

run_weights:
  check_balance: 50
  credit_account: 50

run:
  - name: check_balance
    type: query
    args:
      - ref_rand('fetch_accounts').id
    query: |-
      SELECT balance FROM account WHERE id = $1::UUID
  - name: credit_account
    type: exec
    args:
      - ref_rand('fetch_accounts').id
      - gen('number:1,1000')
    query: |-
      UPDATE account SET balance = balance + $2::FLOAT
      WHERE id = $1::UUID

expectations:
  - error_rate < 1
  - check_balance.p99 < 50
  - credit_account.p99 < 50
  - tpm > 1000

deseed:
  - name: truncate_account
    type: exec
    query: TRUNCATE TABLE account CASCADE
  - name: truncate_customer
    type: exec
    query: TRUNCATE TABLE customer CASCADE

down:
  - name: drop_account
    type: exec
    query: DROP TABLE IF EXISTS account
  - name: drop_customer
    type: exec
    query: DROP TABLE IF EXISTS customer
```

Run it:
```shell
edg all \
  --driver pgx \
  --config integration-test.yaml \
  --url ${DATABASE_URL} \
  -w 10 \
  -d 30s
```

edg creates the schema, seeds 10,000 customers with accounts, runs balance checks and credits with 10 workers for 30 seconds, then asserts that:
- The overall error rate stays below 1%
- Both queries stay under 50ms at p99
- Throughput exceeds 1,000 transactions per minute
If any assertion fails, the test fails. The database is cleaned up either way.
## What to test
The `expectations` section supports global and per-query metrics:
**Correctness** - queries should not error:

```yaml
expectations:
  - error_rate == 0
  - make_transfer.error_count == 0
```

**Latency** - queries should respond within bounds:

```yaml
expectations:
  - get_user.p99 < 25
  - checkout.p99 < 200
```

**Throughput** - the system should sustain a minimum transaction rate:

```yaml
expectations:
  - tpm > 5000
```

**Combined** - multiple conditions in a single expression:

```yaml
expectations:
  - error_rate < 0.5 && tpm > 10000
```

## Multiple test scenarios
Use separate config files for different test scenarios, or use `!include` directives to share schema definitions while varying seed data and workloads:
```text
test-fixtures/
  shared/
    schema.yaml      # up + down (shared across scenarios)
    teardown.yaml    # deseed (shared across scenarios)
  happy-path.yaml    # seed for standard flow
  edge-cases.yaml    # seed for boundary conditions
  empty-state.yaml   # no seed, just schema
```

```yaml
# happy-path.yaml
globals:
  users: 500
  batch_size: 100

up: !include shared/schema.yaml

seed:
  - name: populate_users
    args:
      - gen_batch(users, batch_size, 'email')
    query: |-
      INSERT INTO users (email)
      SELECT unnest(string_to_array('$1', sep))

deseed: !include shared/teardown.yaml
down: !include shared/schema.yaml
```

Run the scenario you need:
```shell
edg all \
  --driver pgx \
  --config test-fixtures/happy-path.yaml \
  --url ${DATABASE_URL} \
  -w 5 \
  -d 30s
```

## Deterministic seeding
The `--rng-seed` flag makes expression output deterministic. Two runs with the same seed produce identical generated values:
```shell
edg all \
  --driver pgx \
  --config integration-test.yaml \
  --url ${DATABASE_URL} \
  --rng-seed 42 \
  -w 10 \
  -d 30s
```

Functions like `gen()`, `uniform()`, `set_rand()`, and other random expressions return the same sequence each time, making test runs reproducible and failures easier to debug because the data is predictable across runs.
## CI pipeline example
### GitHub Actions
```yaml
name: Integration Tests
on: [push, pull_request]

jobs:
  integration:
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: testdb
        ports:
          - 5432:5432
        options: >-
          --health-cmd "pg_isready -U test"
          --health-interval 5s
          --health-timeout 5s
          --health-retries 5

    env:
      DATABASE_URL: "postgres://test:test@localhost:5432/testdb?sslmode=disable"

    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-go@v5
        with:
          go-version: stable

      - name: Install edg
        run: go install github.com/codingconcepts/edg@latest

      - name: Run integration tests
        run: |
          edg all \
            --driver pgx \
            --config integration-test.yaml \
            --url ${DATABASE_URL} \
            --rng-seed 42 \
            -w 10 \
            -d 30s
```

Because `edg all` handles setup, execution, assertions, and teardown in a single command, the pipeline step is just one invocation. A non-zero exit fails the build.
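Per the Tips section, the config can also be validated before anything touches the database. A sketch of an extra step for the workflow above, placed ahead of the test run:

```yaml
      # Fails fast on YAML errors before the database is involved.
      - name: Validate config
        run: edg validate --config integration-test.yaml
```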
### Docker Compose
For local development, pair edg with a `docker-compose.yaml` that spins up the database:
```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: testdb
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "test"]
      interval: 2s
      timeout: 5s
      retries: 5
```

Then run the tests:
```shell
docker compose up -d --wait

edg all \
  --driver pgx \
  --config integration-test.yaml \
  --url ${DATABASE_URL} \
  --rng-seed 42 \
  -w 10 \
  -d 30s

docker compose down
```

## Running phases separately
For debugging or when you want to inspect the database between phases, run each step individually:
```shell
edg up --driver pgx --config integration-test.yaml --url ${DATABASE_URL}
edg seed --driver pgx --config integration-test.yaml --url ${DATABASE_URL}

# Inspect the database, run ad-hoc queries, etc.

edg run --driver pgx --config integration-test.yaml --url ${DATABASE_URL} -w 10 -d 30s
edg deseed --driver pgx --config integration-test.yaml --url ${DATABASE_URL}
edg down --driver pgx --config integration-test.yaml --url ${DATABASE_URL}
```

## Tips
- Use `--rng-seed` in CI. Deterministic data eliminates an entire class of flaky tests caused by random values.
- Keep seed data small. Integration tests should be fast; thousands of rows are usually enough, so save millions for load testing.
- Validate configs in CI. Add `edg validate --config integration-test.yaml` as an earlier pipeline step to catch YAML errors before they hit the database.
- Use `stages` for ramp-up tests. The `stages` section lets you vary worker counts and durations across phases, useful for testing how your database handles increasing load.
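The exact `stages` schema is not shown on this page; as a purely hypothetical sketch of a ramp-up (the `workers` and `duration` field names are assumptions, not documented syntax):

```yaml
# Hypothetical sketch - field names are assumptions, not the documented schema.
stages:
  - workers: 5     # warm up gently
    duration: 30s
  - workers: 10    # steady state
    duration: 60s
  - workers: 20    # peak load
    duration: 30s
```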