Parameterised Discretion
Human-Readable, Politically Produced AI Model Guidance
Tim Mayoh (Working Draft – May 2025)
Abstract
This paper introduces parameterised discretion, a model for encoding planning judgement into transparent, configurable, and politically governed logic structures. Instead of automating professional reasoning, the framework enables AI systems—specifically, large language models (LLMs)—to follow declared decision parameters authored by institutional actors. These human-readable configuration files encode thresholds, policy weightings, and relevance criteria, ensuring outputs remain consistent, auditable, and publicly contestable. Parameterised discretion thus reframes AI not as an autonomous agent, but as a procedural tool for simulating judgement under democratic oversight.
1. Introduction
Professional discretion is central to development management in UK planning. Officers interpret broad policy, weigh competing goods, and draft narrative reports that mediate between institutional expectations and local realities. Yet the reasoning behind these reports is seldom formalised. Much of it remains implicit, shaped by institutional norms, personal heuristics, or precedent. This opacity makes it difficult to audit, replicate, or systematically interrogate the basis for planning outcomes.
Recent advances in retrieval-augmented generation using LLMs make it possible to simulate report drafting as a taskflow. But this simulation requires something rarely made explicit: the structure of discretion itself. This paper proposes parameterised discretion as a mechanism for externalising judgement into a modifiable logic layer. Rather than encoding outcomes, it encodes the procedural conditions under which outcomes are reached—supporting reproducibility without denying the political and ethical dimensions of planning.
2. What Is Parameterised Discretion?
Parameterised discretion refers to the use of declared, human-readable configuration files to guide AI systems in simulating discretionary reasoning. These files encode institutional or political choices—thresholds, evidence weights, policy relevance rules—that would otherwise remain tacit. They are subject to version control, public scrutiny, and procedural challenge.
Unlike predictive models that learn decision patterns from data, parameterised discretion foregrounds authored logic: declared values, not inferred preferences. The AI system becomes a reasoning assistant, applying these values consistently while leaving room for oversight and revision.
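As a minimal sketch of such a configuration file (the field names and values below are hypothetical illustrations, not a proposed standard), a JSON parameter set might declare policy weights and thresholds that a drafting system reads rather than infers:

```python
import json

# Hypothetical parameter file: declared values, authored by an
# accountable actor and versioned alongside policy documents.
PARAMETER_FILE = """
{
  "version": "2025-05-01",
  "authored_by": "planning-committee",
  "policy_weights": {
    "housing_delivery": 0.4,
    "heritage_impact": 0.3,
    "amenity": 0.3
  },
  "thresholds": {
    "major_development_units": 10
  }
}
"""

params = json.loads(PARAMETER_FILE)

# A simple integrity check: declared weights should sum to one,
# so that any change to one weighting is visibly a trade-off.
assert abs(sum(params["policy_weights"].values()) - 1.0) < 1e-9
```

Because the file is plain text, it can sit under version control, and a change to any weighting appears as an ordinary, reviewable diff.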
3. Key Properties
| Property | Description |
|---|---|
| Human-readable | Formats like YAML or JSON, enabling direct review and editing by planners |
| Politically produced | Authored by accountable actors—officers, committees, or policymakers |
| Declarative | States what matters and how to reason with it |
| Version-controlled | Supports traceability and historical comparison |
| Governance-ready | Tied to policy instruments or procedural mandates |
| Configurable | Adaptable to changing political priorities or spatial contexts |
| Inspectable | Outputs include a parameter appendix and traceable reasoning structure |
These properties allow parameterised discretion to serve as an infrastructure of accountability. It does not automate the decision, but it does require that value-laden choices be made visible.
4. Why It Matters
4.1 Improving Consistency
Formalising judgement criteria reduces unwarranted variation across cases and authorities. Discretion is not eliminated but made legible and repeatable, improving procedural fairness.
4.2 Enhancing Traceability
AI-generated reports include trace logs and parameter summaries, allowing decision reviewers to see how outputs were shaped—and by whom.
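One sketch of what such a trace entry might contain (the structure is invented for illustration, not drawn from an existing system): each report section records the parameter that shaped it, its value, its version, and its author, and the collected entries form the parameter appendix.

```python
import json

# Hypothetical trace log: every report section carries a record of
# the declared parameter that shaped it, and by whom it was authored.
trace = [
    {
        "section": "Housing land supply",
        "parameter": "policy_weights.housing_delivery",
        "value": 0.4,
        "parameter_version": "2025-05-01",
        "authored_by": "planning-committee",
    }
]

# Serialised, the trace doubles as the report's parameter appendix.
appendix = json.dumps(trace, indent=2)
print(appendix)
```

A reviewer reading the appendix can then answer the two questions traceability demands: which declared value shaped this passage, and who declared it.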
4.3 Enabling Political Accountability
When value weightings are encoded explicitly, they can be debated and changed. This recasts policy interpretation as a governable act rather than a matter of professional mystique.
4.4 Scenario Testing
Because parameters are modular, institutions can simulate how different policy framings affect outcomes—testing, for example, a housing-led vs. heritage-led configuration over the same evidence base.
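The scenario test described above can be sketched in a few lines (the weightings and evidence scores are invented for illustration): the same evidence base is combined under two declared configurations, so any difference in outcome is attributable to the parameters alone.

```python
# Hypothetical evidence scores for one application, on a common 0-1 scale.
evidence = {"housing_delivery": 0.9, "heritage_impact": 0.2, "amenity": 0.6}

# Two declared configurations over the same policy considerations.
housing_led = {"housing_delivery": 0.6, "heritage_impact": 0.2, "amenity": 0.2}
heritage_led = {"housing_delivery": 0.2, "heritage_impact": 0.6, "amenity": 0.2}

def weighted_balance(scores, weights):
    """Combine evidence under a declared weighting. The parameters,
    not the code, carry the value judgement."""
    return sum(scores[k] * weights[k] for k in weights)

print(weighted_balance(evidence, housing_led))   # housing-led framing: 0.70
print(weighted_balance(evidence, heritage_led))  # heritage-led framing: 0.42
```

Because the evidence is held constant, the divergence between the two results is a direct, inspectable expression of the political framing, which is precisely what the scenario test is meant to surface.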
4.5 Supporting Equity
Discretion often reproduces power asymmetries through omission or habitual framing. Parameterisation surfaces these choices. It invites scrutiny of who benefits, who loses, and whose voices are prioritised. Equity-led parameters—e.g., weighting social rent more heavily or requiring cumulative impact tests in marginalised areas—can be proposed, tested, and debated.
5. Practical Applications
- Report drafting: Officers use LLMs guided by declared parameters to generate narrative reports.
- Policy sensitivity analysis: Authorities model how shifts in parameters affect outcomes across applications.
- Appeals and inspection: Inspectors or communities review the decision logic, not just its surface narrative.
- Inter-authority comparison: Divergence in reasoning frameworks can be traced and aligned where appropriate.
- Induction and training: Parameter sets act as transparent teaching materials, accelerating organisational learning.
6. Theoretical Foundations
Parameterised discretion draws on several key theoretical traditions that help clarify its rationale.
Street-level bureaucracy (Lipsky 1980) reveals how discretionary judgement by front-line professionals operationalises abstract policy. Parameterised discretion formalises this process without eliminating it—structuring the conditions under which judgement is applied while preserving professional involvement.
Discretion as institutional guidance (Booth 1996) sees flexibility as a deliberate design feature. Parameterised discretion retains this, while making its structure inspectable and transferable.
Collaborative and communicative planning (Healey 1997; Forester 1999) view planning as value negotiation. Parameterised discretion externalises these values for scrutiny and iterative refinement.
Reflective practice (Schön 1983) positions professional judgement as an evolving, situated skill. Our framework supports this through modifiable, versioned parameters.
Algorithmic accountability (Kroll et al. 2017) insists on legible inputs for AI systems. Parameterised discretion meets this standard by foregrounding structured value choices.
7. Risks and Design Challenges
- False objectivity: Parameters may obscure contestable values behind a veneer of procedural neutrality.
- Governance opacity: If parameter authorship is not transparent, discretion may be captured upstream.
- Over-standardisation: Excessive reliance on templates may erode professional reflexivity.
- Institutional inertia: Without formal mechanisms for revising parameter sets, drift or entrenchment may occur.
These risks are not arguments against the model, but warnings that infrastructural power must itself be governed.
8. Conclusion
Parameterised discretion offers a model for digital planning systems that neither mystifies judgement nor seeks to erase it. It treats AI not as a black-box decision-maker, but as a configurable reasoning assistant bound to declared public values. It brings planning discretion into the domain of infrastructure: legible, revisable, and politically accountable.
By doing so, it invites a new era of procedural imagination—where the politics of planning are not hidden in prose, but surfaced, shared, and subject to democratic control.
References (Indicative)
- Booth, P. (1996). Controlling Development: Certainty and Discretion. Routledge.
- Forester, J. (1999). The Deliberative Practitioner. MIT Press.
- Healey, P. (1997). Collaborative Planning. Macmillan.
- Kroll, J. A., et al. (2017). "Accountable Algorithms." University of Pennsylvania Law Review, 165, 633–705.
- Lipsky, M. (1980). Street-Level Bureaucracy. Russell Sage Foundation.
- Schön, D. A. (1983). The Reflective Practitioner. Basic Books.