Shift-left security: why runtime approval beats pre-flight checks
Policy-as-code tools like OPA, Kyverno, and Conftest are genuinely valuable. But they're designed to answer "is this configuration allowed?" — not "is this command safe to run right now, in this context, by this agent?" Those are different questions with different answers.
What shift-left gets right
The shift-left movement has a good core insight: find problems earlier in the development lifecycle, when they're cheaper to fix. Kubernetes admission controllers (OPA/Gatekeeper, Kyverno) evaluate policies at deploy time. Conftest validates infrastructure-as-code in CI. Both prevent misconfigured resources from making it to production.
For infrastructure configuration — container images, resource limits, network policies, RBAC bindings — shift-left is the right tool. The configuration is static. The risk is deterministic. A policy that says "no privileged containers" is correct at admission time and stays correct at runtime.
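That determinism is the key property: a static check is a pure function of the manifest, so its verdict at admission time remains valid at runtime. A minimal sketch in Python, purely illustrative and not tied to any real tool's API:

```python
# Hypothetical sketch of a static admission check. The decision depends
# only on the manifest, so it is as correct at runtime as it was at
# admission time. All names here are illustrative assumptions.

def violates_privileged_policy(pod_spec: dict) -> list[str]:
    """Return the names of containers that request privileged mode."""
    return [
        c.get("name", "<unnamed>")
        for c in pod_spec.get("containers", [])
        if c.get("securityContext", {}).get("privileged", False)
    ]

pod = {
    "containers": [
        {"name": "app", "securityContext": {"privileged": False}},
        {"name": "debug", "securityContext": {"privileged": True}},
    ]
}
print(violates_privileged_policy(pod))  # ['debug']
```

Run the same check a thousand times against the same manifest and you get the same answer; nothing about the live environment can change the verdict.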
What it can't do
The problem is that a growing class of risk is not static. When an AI agent connects to your production server, it doesn't arrive with a manifest. It arrives with intent — and it generates commands dynamically based on what it finds.
Consider a coding agent tasked with "fix the disk space issue on prod-server-1." A Kyverno policy can validate Kubernetes resources. It cannot see the agent running:
```shell
find /var/log -name "*.log" -mtime +7 -exec rm {} \;
# ... finds 200GB of old logs, clears them
du -sh /var/lib/docker
# ... notices Docker taking 120GB
docker system prune -af
# ... removes all stopped containers, unused images
# ... including the staging environment's images that were cached there
```
Each command is individually reasonable. Together they cause a production incident. No admission controller can see this sequence coming, because it's being generated in real time based on dynamic observation of the server state.
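A runtime gate sits at the only point where this sequence is visible: the moment each command is emitted. A deliberately naive sketch, where the classification patterns and decision labels are assumptions for illustration, not any product's real rules:

```python
# Minimal sketch of a runtime gate: each command an agent emits is
# classified the moment it is generated, and anything that looks
# destructive is held for human approval. Patterns are assumptions.
import re

DESTRUCTIVE = [
    r"\brm\b", r"\bprune\b", r"\bDROP\b", r"\bDELETE\b", r"--force",
]

def gate(command: str) -> str:
    if any(re.search(p, command) for p in DESTRUCTIVE):
        return "hold-for-approval"  # a human sees it before it runs
    return "allow"

session = [
    'find /var/log -name "*.log" -mtime +7 -exec rm {} \\;',
    "du -sh /var/lib/docker",
    "docker system prune -af",
]
for cmd in session:
    print(gate(cmd), cmd)
```

In this sketch the `du` passes straight through, while the `rm` and the `prune` both pause for a human, which is exactly the moment someone could have said "wait, staging caches its images there."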
The fundamental mismatch
Policy-as-code operates on declarative specifications. You write rules about what configurations are allowed. The system checks new configurations against those rules.
Agentic AI operates on imperative instructions. The agent observes state and emits actions. Those actions are not specified in advance — they're computed at runtime. By the time you'd need to evaluate a policy, the command has already been generated.
What policy-as-code catches:
- Misconfigured Kubernetes resources
- Non-compliant container images
- Forbidden resource types at deploy time
- Infrastructure drift from IaC baseline
- Hardcoded secrets in configs

What it can't catch:
- Unexpected commands from AI agents
- Commands with dangerous side effects at runtime
- Off-hours destructive operations
- Scope creep ("I'll just clean this up too")
- Commands that are right-in-theory but wrong-right-now
The "same command, different context" problem
The name of the movement gives it away: policy as code. But many security decisions aren't about code at all — they're about context.
```sql
DELETE FROM events WHERE created_at < NOW() - INTERVAL '90 days'
```
This command is:
- Safe — if it's a scheduled weekly cleanup, you're aware of it, and events older than 90 days genuinely don't matter
- Risky — if it's being run by an AI agent that decided the table was "too big" while doing something else
- Catastrophic — if it's 11pm before a compliance audit that needs the last 90 days of event data
A policy that says "DELETE is allowed on events" misses the point. The question isn't whether the command is syntactically allowed — it's whether the specific execution in the current context is something a human would sanction if they knew about it.
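The same command string can be fed through a context-aware verdict function to make this concrete. A sketch only: the context fields (`actor`, `scheduled`, `freeze_window`) are assumptions chosen for illustration, not any tool's real schema:

```python
# Sketch: the same command yields different verdicts under different
# contexts. Context fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Context:
    actor: str            # "cron", "human", "ai-agent", ...
    scheduled: bool       # was this execution planned in advance?
    freeze_window: bool   # e.g. pre-audit data retention freeze

def verdict(command: str, ctx: Context) -> str:
    if "DELETE" in command.upper():
        if ctx.freeze_window:
            return "deny"               # catastrophic in this window
        if ctx.actor == "ai-agent" and not ctx.scheduled:
            return "hold-for-approval"  # unplanned agent decision
        if ctx.scheduled:
            return "allow"              # the weekly cleanup you expect
    return "allow"

sql = "DELETE FROM events WHERE created_at < NOW() - INTERVAL '90 days'"
print(verdict(sql, Context("cron", scheduled=True, freeze_window=False)))
print(verdict(sql, Context("ai-agent", scheduled=False, freeze_window=False)))
print(verdict(sql, Context("ai-agent", scheduled=False, freeze_window=True)))
```

A syntax-level rule collapses all three cases into one answer; the context is what separates safe from risky from catastrophic.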
Policy-as-code evaluates commands against known rules. Runtime approval evaluates commands against human judgment. They're not substitutes — they're complementary layers.
Not either/or
This isn't an argument against OPA, Kyverno, or any shift-left tool. Use them. They catch a real class of problems efficiently and automatically.
The argument is that the AI agent era introduces a new class of risk that doesn't fit the shift-left model: dynamic, contextual, intent-driven actions that can only be evaluated at the moment they're about to happen.
A well-structured security architecture in 2026 probably looks like:
- Before deploy: Kyverno/OPA validates infrastructure configuration
- At access time: Teleport/Boundary controls who can connect to what
- During execution: expacti gates what agents actually do, in real time
- After: Immutable audit log + compliance reports
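The four layers compose cleanly because each answers a different question at a different time. An illustrative sketch of how they chain, with every function a naive stand-in (assumptions, not real tool APIs) for the corresponding layer:

```python
# Illustrative composition of the four layers. Each function is a naive
# placeholder for a real tool (admission controller, access proxy,
# runtime approval, audit log); all names are assumptions.
audit_log: list[tuple[str, str]] = []

def deploy_check(manifest: dict) -> bool:
    # before deploy: static configuration policy
    return not manifest.get("privileged", False)

def access_check(user: str, host: str) -> bool:
    # at access time: who can connect to what
    return (user, host) in {("alice", "prod-server-1")}

def runtime_gate(command: str) -> str:
    # during execution: gate each action; after: record it
    decision = "hold-for-approval" if "prune" in command else "allow"
    audit_log.append((command, decision))
    return decision

assert deploy_check({"privileged": False})
assert access_check("alice", "prod-server-1")
runtime_gate("docker system prune -af")
runtime_gate("df -h")
print(audit_log)
```

Note that the audit layer costs nothing extra here: every runtime decision is appended as a side effect of making it.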
The shift-left movement was right that catching a problem before it does damage beats cleaning up after. Runtime approval applies the same principle to dynamic actions: catch them at the last possible moment before execution, which is the first moment you have enough context to make a good decision.
What this means for DevSecOps teams
If your team is running AI agents against any infrastructure — development, staging, or production — you need both layers:
- Keep your Kubernetes admission controllers. They catch known-bad configurations efficiently.
- Add runtime approval for anything an AI agent touches. Static policies can't reason about what a language model will decide to do next.
The cost of runtime approval is low once the whitelist is built. The first week catches the interesting cases. After that, it's mostly automated — with a human in the loop specifically for the edge cases that matter.
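The amortization works because approval is sticky: once a human signs off on a command, identical future invocations pass automatically. A deliberately naive sketch (in-memory set, exact-string matching — both assumptions for illustration):

```python
# Sketch of how an approval allowlist amortizes cost: a command needs a
# human only the first time it appears. Storage and matching here are
# deliberately naive assumptions; real systems persist and pattern-match.
approved: set[str] = set()

def needs_human(command: str) -> bool:
    return command not in approved

def approve(command: str) -> None:
    approved.add(command)

cmd = "systemctl restart nginx"
assert needs_human(cmd)      # first occurrence: human in the loop
approve(cmd)
assert not needs_human(cmd)  # subsequent runs: automated
```

The interesting cases in week one are exactly the ones that seed the allowlist; after that, the human only sees genuine novelty.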
Add the runtime layer
expacti sits between your AI agents and your infrastructure. One integration, complete approval history.