The Best Way to Trigger On-Demand Test Runs from a Dashboard or API
Updated on April 30, 2026
Shipping fast is not the same as shipping blindly. High-performing teams build a release habit where anyone who needs confidence can request it instantly, without waiting for the next CI cycle or pulling a QA engineer off higher-leverage work.
On-demand test runs are the operational backbone of that habit. They let you answer questions like: Did the checkout still work after that small UI tweak? Is the hotfix safe to deploy? Did the staging environment regress? The best on-demand system supports two equally important entry points: a dashboard for the people who need answers, and an API for the systems that automate them.
Shiplight AI is built for this exact workflow: verifying UI changes in real browsers while keeping test maintenance near zero through intent-based execution, self-healing automation, and AI-powered assertions. Here is how to structure on-demand runs so they are reliable, fast, and actually used.
Most teams can already run tests on demand. The problem is that the process is often slow, overly manual, or brittle, so people stop trusting it.
A best-in-class on-demand trigger has four properties:
Shiplight’s test suite management, cloud runners, live dashboards, and AI test summarization are designed to make these properties the default behavior rather than extra process.
Dashboard triggers shine when the person requesting confidence is not the person wiring pipelines. Product managers, designers, support engineers, and on-call engineers need a "prove it now" workflow that is safe and repeatable.
A practical dashboard design usually includes:
In Shiplight, teams typically pair these dashboard triggers with intent-based steps and self-healing behavior so UI shifts do not turn every on-demand request into a debugging session. The point of self-serve is that it stays self-serve even as the UI evolves.
Dashboard runs are most valuable in moments where timing matters more than automation purity:
The key is to treat the dashboard as a product surface, not an admin panel. You are designing an experience for fast, correct decisions.
API triggers are how on-demand testing becomes a platform capability. They enable:
Even if you use Shiplight’s CI/CD integrations, an explicit start run API is still valuable for ad hoc verification, orchestration, and tooling that sits outside traditional pipelines.
Regardless of the specific API shape, the content of the request matters more than the endpoint. A strong on-demand run payload includes:
Below is an illustrative example of what a good request body looks like. Treat this as a pattern, not a Shiplight-specific API contract:
{
  "suite": "checkout-smoke",
  "environment": {
    "name": "preview",
    "base_url": "https://preview-123.example.com"
  },
  "build": {
    "commit_sha": "abc123",
    "pull_request": 418
  },
  "context": {
    "triggered_by": "release-bot",
    "reason": "pre-deploy verification",
    "ticket": "OPS-2314"
  },
  "execution": {
    "browsers": ["chromium", "webkit"],
    "parallelism": 8
  }
}
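To make the pattern concrete, here is a short Python sketch that assembles a request like the one above and prepares it for submission. The endpoint URL, bearer-token placeholder, and helper name are assumptions for illustration, not a Shiplight API contract:

```python
import json
import urllib.request

# Hypothetical endpoint; substitute your test platform's real run API.
API_URL = "https://api.example.com/v1/runs"

def build_run_request(suite, base_url, commit_sha, triggered_by, reason):
    """Assemble an on-demand run request following the payload pattern above."""
    payload = {
        "suite": suite,
        "environment": {"name": "preview", "base_url": base_url},
        "build": {"commit_sha": commit_sha},
        "context": {"triggered_by": triggered_by, "reason": reason},
        "execution": {"browsers": ["chromium"], "parallelism": 4},
    }
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <token>",  # placeholder, not a real token
        },
        method="POST",
    )

req = build_run_request(
    suite="checkout-smoke",
    base_url="https://preview-123.example.com",
    commit_sha="abc123",
    triggered_by="release-bot",
    reason="pre-deploy verification",
)
# urllib.request.urlopen(req) would actually start the run; omitted here.
```

Keeping payload assembly in one function means every caller, human-adjacent or automated, sends the same well-formed context.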
When teams say "API-triggered testing," what they usually want is not just a trigger. They want an artifact that flows back into their systems: run URL, status, summary, and a consistent way to fetch results. That is why Shiplight pairs cloud execution with live dashboards, reporting, and run summaries that are readable outside the QA function.
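The fetch-results half of that loop is usually a small poller. The status names and response shape below are assumptions for illustration; substitute whatever your results endpoint actually returns:

```python
import time

def wait_for_run(fetch_status, timeout_s=600, poll_s=5):
    """Poll a run until it reaches a terminal state.

    `fetch_status` is a caller-supplied function returning a dict such as
    {"status": "running"} — a stand-in for a real results endpoint.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        run = fetch_status()
        if run["status"] in ("passed", "failed", "error"):
            return run  # terminal: hand back run URL, summary, etc.
        time.sleep(poll_s)
    raise TimeoutError("run did not finish in time")

# Example with a stub that reaches a terminal state on the third poll:
states = iter([
    {"status": "queued"},
    {"status": "running"},
    {"status": "passed", "run_url": "https://app.example.com/runs/42"},
])
result = wait_for_run(lambda: next(states), poll_s=0)
print(result["status"])  # passed
```

Injecting `fetch_status` keeps the waiting logic testable and independent of any particular HTTP client or vendor response format.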
You do not want a dashboard way and an API way that diverge. The best approach is one execution model with two entry points.
Here is the practical split most teams land on: the dashboard serves human-initiated verification, the API serves bots, pipelines, and internal tooling, and both funnel into the same suites, environments, and reporting.
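The one-execution-model, two-entry-point idea can be sketched in a few lines. All names here are hypothetical; the point is that both paths delegate to a single launcher:

```python
# Sketch: one execution model behind two entry points.
# `start_run` is the single source of truth for launching a run (stubbed
# here); the dashboard handler and the API route both delegate to it, so
# runs are identical regardless of who triggered them.

def start_run(suite, environment, context):
    run_id = f"{suite}-{context['triggered_by']}"
    return {"run_id": run_id, "suite": suite, "environment": environment}

def handle_dashboard_click(user, suite, environment):
    # Human path: the UI fills in context automatically.
    return start_run(suite, environment,
                     {"triggered_by": user, "reason": "manual check"})

def handle_api_request(payload):
    # Programmatic path: callers supply the same fields explicitly.
    return start_run(payload["suite"], payload["environment"],
                     payload["context"])
```

Because both handlers converge on `start_run`, reporting, permissions, and guardrails only need to be implemented once.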
Shiplight supports both human and programmable paths while keeping tests maintainable through YAML-based definitions, intent-based execution, and self-healing behavior. That combination is what makes on-demand runs sustainable after the first month.
On-demand access is powerful. Without guardrails, it becomes expensive noise.
A few proven guardrails:
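One guardrail worth calling out is deduplicating identical in-flight requests, so ten people clicking "run checkout-smoke on commit abc123" produce one run, not ten. A minimal sketch, assuming a hypothetical `launch` callable that actually starts the run:

```python
# Guardrail sketch: collapse duplicate on-demand requests for the same
# suite and commit into a single shared run.

_in_flight = {}

def request_run(suite, commit_sha, launch):
    key = (suite, commit_sha)
    if key in _in_flight:
        return _in_flight[key]       # reuse the run already underway
    run = launch(suite, commit_sha)  # e.g. a call to the run API
    _in_flight[key] = run
    return run

def finish_run(suite, commit_sha):
    # Call when the run reaches a terminal state, so later requests
    # trigger a fresh run against newer code.
    _in_flight.pop((suite, commit_sha), None)
```

The same keying idea extends to other guardrails, such as per-suite concurrency caps or rate limits per requester.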
Shiplight’s workflow orchestration and reporting capabilities are especially useful here. They let you model real execution flows, run the right subset, and route results to the right people.
If you are designing your on-demand strategy now, aim for a small set of high-leverage defaults:
The outcome you want is consistent: when someone asks "Are we safe to ship?", the answer is a link to a recent, scoped run with a clear summary and real browser evidence, not a debate about whether the tests are up to date.
If you are building this system now, Shiplight AI is designed to make on-demand verification routine: minimal maintenance, real-browser execution, and results that are legible to the whole team.