How self-healing tests work with Shiplight AI Fixer for dynamic UIs
Updated on May 2, 2026
Modern UIs change for good reasons. Product teams iterate quickly, design systems evolve, A/B tests roll out, feature flags reshape layouts, and component libraries get refactored for performance or accessibility. The problem is not change itself. The problem is what change does to end-to-end tests that were built on brittle assumptions.
Traditional UI automation typically binds a test step to a specific locator strategy, often a CSS selector, XPath, or a tightly scoped DOM path. That approach works until a “small” UI change lands: a button label is tweaked, a wrapper div appears, a component is moved into a different container, or a form field is re-rendered with a different structure. The user journey is still valid, but the test fails anyway. Teams then spend hours doing mechanical maintenance that does not improve product quality.
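To make the failure mode concrete, here is a minimal, self-contained sketch (not Shiplight code) that models a DOM as nested dicts and resolves an exact node path, the way a rigid CSS/XPath locator does. One harmless wrapper `div` is enough to break the lookup even though the button still exists:

```python
# Sketch: why exact-locator tests break on harmless UI changes.
# The DOM is modeled as nested dicts; find_by_path walks an exact child path,
# mimicking a tightly scoped CSS selector or XPath.

def find_by_path(dom, path):
    """Follow an exact list of child tags, like a rigid DOM locator."""
    node = dom
    for tag in path:
        children = {c["tag"]: c for c in node.get("children", [])}
        if tag not in children:
            return None  # the locator breaks as soon as structure shifts
        node = children[tag]
    return node

# Original markup: <body><nav><button id="btn-login">
v1 = {"tag": "body", "children": [
    {"tag": "nav", "children": [{"tag": "button", "id": "btn-login"}]}]}

# After a refactor, a wrapper div appears: <body><nav><div><button>
v2 = {"tag": "body", "children": [
    {"tag": "nav", "children": [
        {"tag": "div", "children": [{"tag": "button", "id": "btn-login"}]}]}]}

print(find_by_path(v1, ["nav", "button"]) is not None)  # True: locator works
print(find_by_path(v2, ["nav", "button"]) is not None)  # False: same button, broken test
```

The user journey is identical in both versions; only the locator's assumption about structure changed.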
Shiplight AI is built to break that cycle. Its self-healing capability and AI Fixer are designed to keep tests aligned with user intent, even as the UI remains dynamic.
Dynamic UIs introduce instability in exactly the places selector-based tests rely on:

- Labels and visible text change as copy is refined or localized.
- Wrapper elements appear or disappear during refactors.
- Components move between containers as layouts are redesigned.
- IDs and class names shift when design systems or build tooling change.
- Rendering varies with client-side state, feature flags, and A/B variants.
In selector-first frameworks, these changes are indistinguishable from “the element is gone,” even when the intended user action is still available. That creates a steady stream of test failures that are not regressions.
Shiplight’s self-healing starts with how tests are expressed and executed. Instead of treating an end-to-end test as a script that must locate an exact DOM node, Shiplight is designed around intent-based test execution. In practice, that means steps are written in the language of user actions and outcomes, such as “click the login button” or “fill the email field,” rather than “click #btn-login.”
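A rough sketch of what an intent-based step could look like as data (the class and field names here are illustrative assumptions, not Shiplight's actual API): the step stores the user's goal in human language instead of a DOM locator.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch: an intent-based step stores the action and a
# human-readable target, not a selector. Names are assumptions, not
# Shiplight's real API.

@dataclass
class Step:
    action: str              # e.g. "click", "fill"
    target: str              # user language, e.g. "the login button"
    value: Optional[str] = None

def describe(step: Step) -> str:
    """Render the step the way a human would read it in a test report."""
    if step.value is not None:
        return f'{step.action} {step.target} with "{step.value}"'
    return f"{step.action} {step.target}"

login_flow = [
    Step("fill", "the email field", "dev@example.com"),
    Step("fill", "the password field", "hunter2"),
    Step("click", "the login button"),
]

print(describe(login_flow[2]))  # click the login button
```

Because the step never mentions `#btn-login`, a markup change does not invalidate it by definition; the system just has to find where that intent lives now.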
That shift matters because intent gives the system room to adapt when UI implementation details change. When a UI evolves, the correct question is rarely “Is this the same element?” It is “Can a user still perform the same action here, and does the product still behave correctly?”
Shiplight’s platform runs verification in real browsers during development workflows, which is critical for dynamic UIs. A selector that looks “right” in a static DOM snapshot can still fail due to timing, rendering, or client-side state. Real execution is where intent and reality meet.
Self-healing is not a single trick. It is a workflow that detects what changed, determines whether the user intent can still be satisfied, and adapts the test in a controlled way.
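The three-stage workflow above can be sketched as a dispatcher: try the normal lookup, attempt controlled healing, and escalate ambiguous cases rather than guessing. All function names here are illustrative stand-ins, not Shiplight internals:

```python
# Sketch of the detect / adapt / escalate workflow. find, heal_fn, and
# fixer_queue are illustrative stubs, not Shiplight's API.

def run_step(step, find, heal_fn, fixer_queue):
    el = find(step)
    if el is not None:
        return el                      # nothing changed: normal execution
    el = heal_fn(step)                 # UI shifted: try to satisfy the intent anyway
    if el is not None:
        return el                      # healed in a controlled way
    fixer_queue.append(step)           # legitimately ambiguous: fail loudly, escalate
    return None

# Stubs simulating a UI where the login button moved but is still findable:
find = lambda step: None               # the original lookup is stale for every step
heal_fn = lambda step: {"role": "button", "text": "Log in"} if "login" in step else None

queue = []
print(run_step("click the login button", find, heal_fn, queue))   # healed element
print(run_step("click the export button", find, heal_fn, queue))  # None: escalated
print(queue)  # ['click the export button']
```

The key property is the last branch: when healing cannot resolve the step, the failure is surfaced instead of papered over.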
In Shiplight, self-healing is designed to handle common UI shifts such as elements moving, renaming, or being restructured. When a step cannot find its original target, Shiplight can attempt to re-identify the correct target using broader context than a single brittle locator.
That context typically includes signals like:

- the element's visible text and accessible name
- its role (button, link, input) and interaction affordances
- nearby labels, headings, and surrounding form context
- where the step sits in the overall user flow
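One way to picture multi-signal re-identification is as candidate scoring. The sketch below is a toy version under stated assumptions: the signals and weights are illustrative, not Shiplight's actual model, and text similarity stands in for richer semantic matching.

```python
from difflib import SequenceMatcher

# Toy sketch: score visible elements against a step's intent using several
# weak signals instead of one brittle locator. Weights are illustrative.

def score(candidate, intent):
    s = 0.0
    # Signal 1: visible-text similarity ("Log in" vs "Login" scores high).
    s += SequenceMatcher(None, candidate["text"].lower(),
                         intent["text"].lower()).ratio()
    # Signal 2: matching accessible role (button, link, textbox, ...).
    s += 1.0 if candidate["role"] == intent["role"] else 0.0
    # Signal 3: nearby label / form context.
    s += 0.5 if intent["context"] in candidate.get("context", "") else 0.0
    return s

def heal(candidates, intent, threshold=1.5):
    """Pick the best candidate, or return None so the test fails loudly."""
    best = max(candidates, key=lambda c: score(c, intent))
    return best if score(best, intent) >= threshold else None

intent = {"text": "Login", "role": "button", "context": "auth form"}
candidates = [
    {"text": "Log in", "role": "button", "context": "auth form"},
    {"text": "Logo",   "role": "img",    "context": "header"},
]
print(heal(candidates, intent)["text"])  # Log in
```

The threshold encodes the last paragraph's point: a weak best match is treated as "no match," so the test fails rather than silently clicking the wrong thing.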
The goal is not to make tests pass at all costs. The goal is to keep tests stable when the product is stable, and to fail loudly when the user experience is actually broken.
Self-healing can resolve many changes automatically. But some failures are legitimately ambiguous. For example:

- two similar elements now plausibly match the original intent, and the right choice is unclear
- the step's target was removed because the flow itself changed
- the interaction now requires an extra step, such as opening a menu first
This is where Shiplight AI Fixer becomes the difference between “tests are flaky” and “tests stay current.”
AI Fixer is designed for the cases that are too complex or risky for silent healing. Instead of leaving a developer to spelunk through logs and DOM dumps, Fixer can guide the repair by proposing a concrete adjustment that brings the test back into alignment with the updated UI. Crucially, this keeps the feedback loop tight: the team sees what changed, why the test failed, and what the new durable intent should be.
In a healthy workflow, Fixer does not replace engineering judgment. It accelerates it.
A reasonable concern with any self-healing system is, “Will it mask real bugs?” That concern is valid, and Shiplight’s answer is to pair healing with verification that remains strict where it matters: outcomes.
Shiplight’s AI-powered assertions are built to validate behavior and UI correctness using the full testing context, rather than relying on a single fragile check. That matters for dynamic UIs because it keeps the test focused on user-visible truth:

- Did the expected outcome actually occur after the action?
- Is the right content visible to the user?
- Does the product behave correctly, regardless of how the markup is structured?
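As a rough illustration of outcome-focused checking (not Shiplight's assertion engine), the sketch below asserts on a snapshot of what the user can observe, so a redesign passes as long as the outcome holds and a broken login still fails:

```python
# Sketch: assert the user-visible outcome, not the locator. page_state is an
# illustrative snapshot of observable state; field names are assumptions.

def assert_logged_in(page_state):
    """Pass only if the outcome a user cares about is true, however rendered."""
    assert page_state["url"].startswith("https://app.example.com/dashboard"), "wrong page"
    assert "Welcome" in page_state["visible_text"], "greeting not shown"
    assert page_state["session"]["authenticated"], "session not established"

# A redesigned dashboard still passes, because the outcome is unchanged:
assert_logged_in({
    "url": "https://app.example.com/dashboard?ref=redesign",
    "visible_text": "Welcome back, Ada",
    "session": {"authenticated": True},
})
```

Note that none of the three checks mentions markup: a wrapper div, a renamed class, or a moved component cannot make this assertion fail, but a genuinely broken login always will.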
When healing is paired with strong assertions, teams get the best of both worlds: fewer meaningless failures, and higher confidence that failures represent real issues.
Consider a simple intent: “click the login button.”
In a selector-based world, the test might fail if the login button moves from a top navigation bar into a profile menu, or if the markup changes during a design refresh. The user intent is still valid, but the locator breaks.
With Shiplight’s intent-based approach, self-healing can often adapt to changes like:

- the button's label changing, say from "Login" to "Log in"
- a new wrapper or container element appearing around the button
- the button moving elsewhere on the page while serving the same purpose
If the UI change is more significant, AI Fixer can help update the step to reflect the new interaction that a real user now performs, such as “open the profile menu, then click Sign in,” while keeping the test readable and maintainable.
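A Fixer-style repair can be pictured as a proposal applied to the test's step list. This is a hypothetical sketch in plain data; the step strings and field names are illustrative, and the real Fixer surfaces its proposal for review rather than applying it blindly:

```python
# Sketch: a Fixer-style proposal as data, applied to a step list.
# Field names and steps are illustrative assumptions.

def apply_fix(steps, fix):
    """Replace one failed intent step with the updated user journey."""
    out = []
    for step in steps:
        if step == fix["replace"]:
            out.extend(fix["with"])    # the interaction a real user now performs
        else:
            out.append(step)
    return out

fix = {
    "replace": "click the login button",
    "with": ["open the profile menu", "click Sign in"],
}
original = ["go to the home page", "click the login button", "enter credentials"]
print(apply_fix(original, fix))
```

The repaired test stays readable because the replacement is expressed in the same user-intent language as every other step.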
Self-healing works best when teams treat tests as product documentation for critical flows, not as a pile of brittle scripts. A few practices help:

- Write steps in the language of user intent, not implementation details.
- Review healed and Fixer-proposed changes so the team stays aware of UI drift.
- Keep assertions focused on outcomes, so healing never masks a real regression.
Dynamic UIs are not going to slow down, and neither are release cycles. The teams that ship fastest are the teams that reduce invisible drag: false failures, repetitive test upkeep, and slow investigation loops.
Shiplight AI’s self-healing tests and AI Fixer are built for exactly that reality. By anchoring automation to intent, running verification in real browsers, and providing a practical path to resolve complex UI changes, Shiplight helps teams spend less time repairing tests and more time improving the product they ship.