Material Considerations

Parameterised Discretion

Human-Readable, Politically Produced AI Model Guidance

Tim Mayoh (Working Draft – May 2025)


Abstract

This paper introduces parameterised discretion, a model for encoding planning judgement into transparent, configurable, and politically governed logic structures. Instead of automating professional reasoning, the framework enables AI systems—specifically, large language models (LLMs)—to follow declared decision parameters authored by institutional actors. These human-readable configuration files encode thresholds, policy weightings, and relevance criteria, ensuring outputs remain consistent, auditable, and publicly contestable. Parameterised discretion thus reframes AI not as an autonomous agent, but as a procedural tool for simulating judgement under democratic oversight.


1. Introduction

Professional discretion is central to development management in UK planning. Officers interpret broad policy, weigh competing goods, and draft narrative reports that mediate between institutional expectations and local realities. Yet the reasoning behind these reports is seldom formalised. Much of it remains implicit, shaped by institutional norms, personal heuristics, or precedent. This opacity makes it difficult to audit, replicate, or systematically interrogate the basis for planning outcomes.

Recent advances in retrieval-augmented generation using LLMs make it possible to simulate report drafting as a taskflow. But this simulation requires something rarely made explicit: the structure of discretion itself. This paper proposes parameterised discretion as a mechanism for externalising judgement into a modifiable logic layer. Rather than encoding outcomes, it encodes the procedural conditions under which outcomes are reached—supporting reproducibility without denying the political and ethical dimensions of planning.


2. What Is Parameterised Discretion?

Parameterised discretion refers to the use of declared, human-readable configuration files to guide AI systems in simulating discretionary reasoning. These files encode institutional or political choices—thresholds, evidence weights, policy relevance rules—that would otherwise remain tacit. They are subject to version control, public scrutiny, and procedural challenge.

Unlike predictive models that learn decision patterns from data, parameterised discretion foregrounds authored logic: declared values, not inferred preferences. The AI system becomes a reasoning assistant, applying these values consistently while leaving room for oversight and revision.
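The distinction between authored and inferred logic can be made concrete with a minimal sketch of such a configuration file. Everything below is illustrative: the keys, thresholds, weights, and policy reference are hypothetical and not drawn from any real authority's scheme.

```yaml
# Hypothetical parameter file for a development management taskflow.
# All keys and values are illustrative, not drawn from any real authority.
schema_version: "1.2"
authored_by: "Planning Committee, 2025-05-01"   # accountable author (Section 3)
thresholds:
  height_storeys_trigger: 5        # above this, design review weighting applies
  affordable_housing_pct: 35       # declared policy minimum for on-site provision
weights:                           # declared value weightings, openly contestable
  housing_delivery: 0.4
  heritage_impact: 0.3
  amenity: 0.3
relevance_rules:
  - policy: "DM-H1"                # hypothetical local plan policy reference
    applies_when: "net_new_dwellings > 0"
```

Because the file is plain text, it can sit in version control, be diffed between committee cycles, and be attached to a report as the declared basis of its reasoning.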


3. Key Properties

Human-readable: Formats such as YAML or JSON, enabling direct review and editing by planners.
Politically produced: Authored by accountable actors (officers, committees, or policymakers).
Declarative: States what matters and how to reason with it.
Version-controlled: Supports traceability and historical comparison.
Governance-ready: Tied to policy instruments or procedural mandates.
Configurable: Adaptable to changing political priorities or spatial contexts.
Inspectable: Outputs include a parameter appendix and a traceable reasoning structure.

These properties allow parameterised discretion to serve as an infrastructure of accountability. It does not automate the decision, but it does require that value-laden choices be made visible.


4. Why It Matters

4.1 Improving Consistency

Formalising judgement criteria reduces unwarranted variation across cases and authorities. Discretion is not eliminated but made legible and repeatable, improving procedural fairness.

4.2 Enhancing Traceability

AI-generated reports include trace logs and parameter summaries, allowing decision reviewers to see how outputs were shaped—and by whom.
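One way such a trace log and parameter appendix could be produced is sketched below. This is a minimal illustration, not a specification: the class, field names, and the cited committee resolution are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TraceLog:
    """Records which declared parameters shaped an output, and their provenance."""
    entries: list = field(default_factory=list)

    def record(self, parameter: str, value, source: str) -> None:
        # Each entry ties a parameter and its value to the instrument
        # (resolution, policy, mandate) that authorised it.
        self.entries.append({"parameter": parameter, "value": value, "source": source})

    def appendix(self) -> str:
        """Render a human-readable parameter appendix for inclusion in a report."""
        lines = ["Parameter appendix:"]
        for e in self.entries:
            lines.append(f"  {e['parameter']} = {e['value']}  (source: {e['source']})")
        return "\n".join(lines)

# Usage: a hypothetical weighting applied during drafting is logged alongside
# the (invented) instrument that authorised it.
log = TraceLog()
log.record("weights.heritage_impact", 0.3, "Committee resolution 2025/14")
print(log.appendix())
```

The point of the design is that provenance travels with the value: a reviewer reading the appendix can see not just what weighting was applied, but who declared it and under what authority.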

4.3 Enabling Political Accountability

When value weightings are encoded explicitly, they can be debated and changed. This recasts policy interpretation as a governable act rather than a matter of professional mystique.

4.4 Scenario Testing

Because parameters are modular, institutions can simulate how different policy framings affect outcomes—testing, for example, a housing-led vs. heritage-led configuration over the same evidence base.
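The housing-led versus heritage-led comparison can be sketched as follows. The scoring function, weight sets, and evidence scores are all illustrative assumptions; a real system would derive evidence scores from the application material rather than hard-code them.

```python
# Hypothetical scenario test: apply two declared weight configurations to the
# same evidence base and compare the resulting planning balance.
# All names, weights, and scores are illustrative assumptions.

# Shared evidence base: signed scores per consideration (positive = supportive).
evidence = {"housing_delivery": 0.8, "heritage_impact": -0.6, "amenity": 0.1}

# Two alternative declared parameter sets over the same considerations.
housing_led = {"housing_delivery": 0.5, "heritage_impact": 0.2, "amenity": 0.3}
heritage_led = {"housing_delivery": 0.2, "heritage_impact": 0.5, "amenity": 0.3}

def planning_balance(weights: dict, evidence: dict) -> float:
    """Weighted sum of evidence scores under a declared parameter set."""
    return sum(weights[k] * evidence[k] for k in weights)

for name, weights in [("housing-led", housing_led), ("heritage-led", heritage_led)]:
    print(f"{name}: balance = {planning_balance(weights, evidence):+.2f}")
```

Holding the evidence fixed while swapping parameter sets isolates the effect of the political framing, which is precisely what makes the configuration, rather than the officer's prose, the object of debate.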

4.5 Supporting Equity

Discretion often reproduces power asymmetries through omission or habitual framing. Parameterisation surfaces these choices. It invites scrutiny of who benefits, who loses, and whose voices are prioritised. Equity-led parameters—e.g., weighting social rent more heavily or requiring cumulative impact tests in marginalised areas—can be proposed, tested, and debated.


5. Practical Applications


6. Theoretical Foundations

Parameterised discretion draws on several theoretical traditions that clarify its rationale.


7. Risks and Design Challenges

These risks are not arguments against the model, but warnings that infrastructural power must itself be governed.


8. Conclusion

Parameterised discretion offers a model for digital planning systems that neither mystifies judgement nor seeks to erase it. It treats AI not as a black-box decision-maker, but as a configurable reasoning assistant bound to declared public values. It brings planning discretion into the domain of infrastructure: legible, revisable, and politically accountable.

By doing so, it invites a new era of procedural imagination—where the politics of planning are not hidden in prose, but surfaced, shared, and subject to democratic control.


References (Indicative)