Designing SaaS for Agent Users: Identity, Permissions, and Product Boundaries
Why this matters
AI-assisted development is not the risk. Unbounded AI suggestions inside an ungoverned codebase are the risk.
This note captures a practical way to turn tools like Cursor into a controlled contributor: codify rules, boundaries, and review triggers so suggestions converge on your architecture rather than eroding it.
The failure mode this prevents
- Rules exist but are generic, so the assistant drifts into whatever pattern is easiest.
- Teams add more rules and get more noise, not more discipline.
- Security and privacy constraints are written down in docs but never enforced in day-to-day edits.
- New engineers learn by trial and error, while the assistant repeats the same mistakes at machine speed.
A workable rule system
Treat rules as layered guardrails, not a single monolithic policy file. Keep a small set of global constraints, then add narrow rules that only apply to the folders where they are relevant.
If you can describe the boundary in a code review comment, you can describe it as a rule. The difference is that a rule repeats perfectly.
- Global rules: security baseline, logging minimums, dependency policies, test expectations.
- Domain rules: API boundary conventions, schema ownership, error handling patterns.
- File path scope: apply rules to the folders where violations are expensive.
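As a sketch, a scoped domain rule in Cursor's project-rules format (an `.mdc` file with frontmatter) might look like the following. The paths and rule text are illustrative, not a prescription:

```markdown
---
description: API boundary conventions for the billing domain
globs: ["services/billing/**"]
alwaysApply: false
---

- Never query another service's database directly; go through its published client.
- All external calls must use the shared retry wrapper and emit a structured log line.
- New endpoints require a contract test alongside the handler.
```

Because the `globs` field scopes the rule to one folder tree, it only spends the assistant's attention where a violation would actually be expensive.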
Related practice on this site
If you are working on agents in production, you may also want to read Software agents in delivery pipelines and Cursor rule governance.
Signals your rules are working
- The assistant stops proposing cross-boundary calls without explicit interfaces.
- Generated code matches your logging and error patterns without you prompting for it.
- Refactors touch fewer files because boundaries are respected.
- Security-sensitive areas get consistent red flags and safer defaults.
Evidence and related writing
A narrative example of this topic is published as a Medium article.
The shift most products miss
SaaS products are still designed as if every action originates from a human. Menus, workflows, confirmations, and guardrails all assume a person is present. Agent-driven usage breaks this assumption quietly. APIs are exercised at scale, features are combined in unexpected ways, and feedback loops collapse.
The danger is not misuse. The danger is invisibility. When agents act on behalf of users, intent becomes indirect. Without deliberate observability, systems cannot distinguish between meaningful demand and automated noise.
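Making agent activity visible starts with classifying who is actually calling. A minimal sketch, assuming a self-declared `X-Actor-Type` header and a few User-Agent heuristics (both are illustrative conventions, not a standard):

```python
# Sketch: classify request actors so agent traffic shows up in metrics
# instead of blending into human demand. Header names are assumptions.

AGENT_UA_HINTS = ("bot", "agent", "langchain", "openai")

def classify_actor(headers: dict[str, str]) -> str:
    """Return 'agent', 'human', or 'unknown' for a request's headers."""
    declared = headers.get("X-Actor-Type", "").lower()
    if declared in ("agent", "human"):
        return declared  # an explicit declaration wins over heuristics
    ua = headers.get("User-Agent", "").lower()
    if any(hint in ua for hint in AGENT_UA_HINTS):
        return "agent"
    return "unknown"  # undeclared traffic is surfaced, not guessed away
```

The point of the `unknown` bucket is deliberate: when intent is indirect, the honest move is to measure the ambiguity rather than fold it into either category.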
Implications for product design
Designing for agent users requires thinking in terms of contracts, not screens. Rate limits, semantic validation, and outcome-based APIs become first-class product features. This is uncomfortable for teams used to optimizing flows rather than interfaces.
Products that adapt early gain leverage. Those that do not often respond by adding friction later, which tends to punish legitimate use more than automated behavior.