This is the first time I've seen "steering rules" mentioned. I do something similar with Claude; curious what it looks like for them and how it integrates with Q/Kiro.
Those rules are often ignored by agents. Codex is known to adhere quite well, but it still falls back on its own ideas, which run counter to the rules I've given it. The longer a session goes on, the more it goes off the rails.
I'm aware of the issues around rules baked into a default prompt. I had hoped the author of the blog meant a different mechanism when they mentioned "steering rules". I do mean something different: the agent self-corrects when it is observed going against the rules in the initial prompt. I have a different setup myself for Claude Code, and would call parts of that "steering" — adjusting the trajectory of the agent as it goes.
With Claude Code, you can intercept its prompts if you start it in a wrapper and mock fetch (someone with the GitHub handle "badlogic" did this, but I can't find the repo now). For everything else (Codex, Cursor), you'd need to heavily proxy/isolate all of the tool's communication with the system.
Yes they do, most of the time. Then they don't. Yesterday, I told Codex that it must always run tests by invoking a make target. That target is even configurable with parameters, e.g. to filter by test name. But at some point in the session, Codex invariably started disregarding that rule and fell back to using the platform-native test tool directly. I used strong language to steer it back, but 20% or so of context later, it did it again.
"steering rules" is a core feature baked into Kiro. It's similar to the spec files use in most agentic workflows but you can use exclusion and inclusion rules to avoid wasting context.
There's currently no official workflow for managing these steering files across repos when you want organisation-wide standards, which is probably my main criticism.
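For anyone who hasn't seen them: if I remember the Kiro docs right, steering files are markdown under `.kiro/steering/`, with front matter controlling when they're loaded (roughly `always`, `fileMatch`, or `manual`) — that conditional loading is what avoids burning context on rules that don't apply. A sketch, with the pattern and rule text as placeholder assumptions:

```markdown
---
inclusion: fileMatch
fileMatchPattern: "src/**/*.ts"
---

Always run tests through `make test`; never invoke the test runner directly.
```

The lack of an official cross-repo distribution story means today you'd sync these files yourself, e.g. from a shared template repo.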