# Agent-Native QA
Agent-native QA describes quality assurance tools designed so AI coding agents can invoke them directly — through agent-callable interfaces, typically the Model Context Protocol (MCP) — rather than through human dashboards. The AI agent is a first-class user of the tool, not just an internal feature.
## In one sentence
A QA tool is agent-native when AI coding agents can operate it as peers — invoking its capabilities, interpreting its output, and incorporating results into an ongoing task — rather than only humans operating it through a dashboard.
## Origin
The term marks a new architectural posture for tooling in the era of AI coding agents (Claude Code, Cursor, Codex, GitHub Copilot). Earlier "AI-powered" tools used AI internally to help humans; agent-native tools invert that relationship, exposing the tool's capabilities so an external AI agent can call them.
## Three architectural postures
| Posture | AI's role | Operator |
|---|---|---|
| Human-native | Absent or basic | Human via dashboard/UI |
| AI-augmented | Internal feature (smart locators, suggestions) | Human, with AI assist |
| Agent-native | First-class user via MCP/API | AI agent, with human oversight |
## Practical signal
A tool is agent-native if it exposes its core actions through an agent-callable protocol such as the Model Context Protocol (MCP). The Shiplight Plugin is an example: /verify, /create_e2e_tests, and /review are MCP tools that Claude Code, Cursor, Codex, and GitHub Copilot can invoke during development without a human context switch.
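The agent-callable pattern described above can be sketched in miniature: a server registers named tools with JSON-schema inputs, an agent first lists the tools to discover what it may invoke, then calls one by name and receives structured output it can fold into its task. This is an illustrative sketch of the pattern only — the `ToolServer` class and its methods are hypothetical stand-ins, not the real MCP SDK, and the `verify` stub is not Shiplight's implementation.

```python
# Minimal sketch of an agent-callable tool interface (illustrative;
# not the actual MCP SDK or Shiplight's API).
from typing import Any, Callable


class ToolServer:
    """Registers named tools with JSON-schema inputs so an external
    agent can discover and invoke them programmatically."""

    def __init__(self) -> None:
        self._tools: dict[str, tuple[dict, Callable[..., dict]]] = {}

    def tool(self, name: str, input_schema: dict):
        # Decorator that registers a function as an agent-callable tool.
        def register(fn: Callable[..., dict]):
            self._tools[name] = (input_schema, fn)
            return fn
        return register

    def list_tools(self) -> list[dict[str, Any]]:
        # Discovery step: an agent calls this to learn what it may invoke.
        return [{"name": n, "inputSchema": s}
                for n, (s, _) in self._tools.items()]

    def call_tool(self, name: str, arguments: dict) -> dict:
        # Invocation step: the agent calls a tool by name with JSON
        # arguments and gets structured JSON back.
        _, fn = self._tools[name]
        return fn(**arguments)


server = ToolServer()


@server.tool("verify", {"type": "object",
                        "properties": {"url": {"type": "string"}}})
def verify(url: str) -> dict:
    # Stub result: a real tool would run checks against the app.
    return {"url": url, "status": "passed", "checks": 12}


# Agent-side call sequence: discover, then invoke.
tool_names = [t["name"] for t in server.list_tools()]
result = server.call_tool("verify", {"url": "https://example.test"})
```

The key property is that both discovery and invocation are programmatic: no dashboard, no human context switch — which is what makes the tool callable by Claude Code, Cursor, Codex, or Copilot as a peer.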
## Common confusion
"Agent-native" is sometimes used interchangeably with "agentic", but they describe different aspects: agent-native is about who can call the tool; agentic is about who drives the workflow. A tool can be agent-native without being agentic (it exposes APIs but has no autonomous behavior of its own), and a system can be agentic without being agent-native (it runs agents internally but isn't callable by external coding agents).