Integration
Run Playwright Tests on GitLab CI in Parallel
Keep your GitLab CI pipeline. Replace the test-execution job with a single TraceLoom call that fans your Playwright suite out across 50+ EC2 Spot workers in your own AWS account, with a full trace on every test.
Bottom line: TraceLoom is a single GitLab CI job. Your pipeline still builds, lints, and deploys on GitLab runners; only browser execution moves to the Spot workers, and full Playwright traces are stored in your S3 bucket.
Last updated: April 2026
Add to your GitLab CI pipeline
.gitlab-ci.yml
```yaml
playwright:
  image: node:20
  stage: test
  script:
    - npm ci
    - npx traceloom run
  variables:
    TRACELOOM_API_KEY: $TRACELOOM_API_KEY
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```
Define `TRACELOOM_API_KEY` under Settings → CI/CD → Variables (masked, protected). No changes to your Playwright test files are required — TraceLoom wraps the Playwright CLI.
How TraceLoom extends GitLab CI
GitLab CI is strong at pipeline orchestration — stages, job dependencies, artifacts, environments — but scaling Playwright across many GitLab runners means paying for more runner minutes and managing concurrency in YAML.
TraceLoom keeps your GitLab CI pipeline doing what it's good at. A single npx traceloom run step hands test execution off to TraceLoom, which spins up 50+ EC2 Spot workers in your own AWS account, shards tests based on historical run times, captures a full .trace.zip for every test, and returns a single pass/fail exit code to GitLab.
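The "shards tests based on historical run times" step can be pictured with a short sketch. This is not TraceLoom's actual algorithm, just one common approach (greedy longest-processing-time assignment), and the test names and timings below are invented for illustration:

```python
import heapq

def shard_by_duration(durations: dict[str, float], workers: int) -> list[list[str]]:
    """Greedy LPT sharding: assign the longest tests first,
    always to the currently least-loaded worker."""
    # Min-heap of (total_seconds, worker_index)
    heap = [(0.0, i) for i in range(workers)]
    heapq.heapify(heap)
    shards: list[list[str]] = [[] for _ in range(workers)]
    for test, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        load, idx = heapq.heappop(heap)
        shards[idx].append(test)
        heapq.heappush(heap, (load + secs, idx))
    return shards

# Hypothetical historical run times (seconds)
history = {"checkout.spec.ts": 90, "search.spec.ts": 60,
           "login.spec.ts": 30, "profile.spec.ts": 20}
print(shard_by_duration(history, 2))
# → [['checkout.spec.ts', 'profile.spec.ts'], ['search.spec.ts', 'login.spec.ts']]
```

Balancing shards by measured duration rather than test count is what keeps the slowest worker, and therefore the whole run, short.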
Workers run inside your VPC. Traces land in your S3 bucket. TraceLoom's control plane only sees run metadata — pass/fail counts, timing, test names. Your test data never leaves your AWS account.
What you get
- ✓ 50+ parallel EC2 Spot workers. Go far beyond what a handful of GitLab runners can do in parallel, without per-session pricing.
- ✓ Full Playwright trace on every test. DOM snapshots, network waterfalls, console logs — viewable in the standard Playwright Trace Viewer.
- ✓ BYOC data ownership. Workers run inside your VPC and traces land in your S3 bucket. TraceLoom's control plane only sees run metadata.
- ✓ One GitLab CI job. Drop-in `.gitlab-ci.yml` snippet with no changes to your Playwright test files or Playwright config.
Frequently Asked Questions
- How do I trigger TraceLoom from a GitLab CI pipeline?
- Add a job to your `.gitlab-ci.yml` that runs `npx traceloom run`, with `TRACELOOM_API_KEY` defined as a CI/CD variable in your GitLab project settings (Settings → CI/CD → Variables, masked and protected). The full YAML is in the snippet above. TraceLoom detects non-TTY environments automatically and switches to plain-text output so GitLab CI logs stay parseable.
- Does TraceLoom work with the GitLab CI `parallel:` keyword?
- It works alongside it, but you usually don't need it. GitLab's `parallel:` keyword spreads work across GitLab runners, which are metered. TraceLoom fans your Playwright suite out to 50+ EC2 Spot workers in your own AWS account from a single GitLab job, so you get more parallelism without provisioning more runners.
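For comparison, native sharding with GitLab's `parallel:` keyword looks roughly like this; each of four metered runner jobs executes one quarter of the suite via Playwright's `--shard` flag and GitLab's predefined `CI_NODE_INDEX`/`CI_NODE_TOTAL` variables:

```yaml
# Native GitLab sharding: four metered runner jobs, each running 1/4 of the suite
playwright-native:
  image: node:20
  stage: test
  parallel: 4
  script:
    - npm ci
    - npx playwright install --with-deps
    - npx playwright test --shard=$CI_NODE_INDEX/$CI_NODE_TOTAL
```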
- Can I use TraceLoom with self-hosted GitLab and self-hosted runners?
- Yes. TraceLoom only cares that the runner can reach the TraceLoom API over HTTPS and that `TRACELOOM_API_KEY` is available as an environment variable. Self-hosted GitLab, GitLab SaaS, and self-hosted runners all work the same way. The EC2 Spot workers run in your own AWS account regardless of where GitLab itself is hosted.
- Will my GitLab runner-minutes bill go down with TraceLoom?
- Typically yes for anything beyond a small suite. Browser execution moves off GitLab runners onto EC2 Spot instances in your AWS account. Your GitLab job still runs the trigger, any pre/post steps, and the TraceLoom CLI itself, but the heavy parallel Playwright work happens on Spot compute — usually 60–90% cheaper than the equivalent GitLab runner minutes.
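As a back-of-envelope check with illustrative prices only (real Spot prices vary by region and instance type, and GitLab compute-minute cost depends on your plan), the arithmetic looks like:

```python
# All rates below are illustrative assumptions, not quoted prices.
RUNNER_COST_PER_MIN = 0.008   # assumed cost of one GitLab runner minute (USD)
SPOT_COST_PER_MIN = 0.002     # assumed cost of one EC2 Spot vCPU-minute (USD)

suite_minutes = 400           # hypothetical total serial browser time per run

gitlab_cost = suite_minutes * RUNNER_COST_PER_MIN
spot_cost = suite_minutes * SPOT_COST_PER_MIN  # same compute-minutes, Spot rates
savings = 1 - spot_cost / gitlab_cost

print(f"GitLab runners: ${gitlab_cost:.2f}  Spot: ${spot_cost:.2f}  savings: {savings:.0%}")
```

With these assumed rates the savings come out at 75%, inside the 60–90% range quoted above; plug in your own suite size and regional Spot prices to get a real number.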
Also integrates with: GitHub Actions · Jenkins
Get started
Ship faster with tests you actually trust.
Deploy one CloudFormation stack, run your first suite in 15 minutes, and see every trace in your own S3 bucket.