The Planner’s Assistant vs. Agentic Digital Twins — Shared Chassis, Different Mission
Thesis: Agentic digital twins aim to automate city operations; The Planner’s Assistant (TPA) extends the same architecture to planning, not to defend discretion for its own sake, but to scale human capacity through augmentation. It is less about preserving tradition and more about building a civic reasoning engine that can operate credibly in the complex, contested space of planning. In doing so, TPA situates itself at the intersection of computational innovation and governance reform, aligning technical advances with the institutional realities of planning systems.
The shared technical chassis
Recent literature on agentic digital twins proposes a three‑layered architecture (cognitive, data, and task execution), exposed through interfaces and coordinated by an orchestration protocol. TPA follows the same structural logic, though its institutional purpose diverges:
- Cognitive layer (LLM): goal parsing, knowledge‑augmented reasoning, workflow synthesis, reflection/memory. This layer provides conversational interfaces, decomposes objectives, and generates candidate workflows. It can be thought of as the “brain” of the system, tasked with interpreting intent and mapping it into computationally tractable processes.
- Data layer: integration of spatial and scientific sources; exchange with simulations and (sometimes) the physical world. The divergence is whether the emphasis is real‑time telemetry (operations) or visuospatial evidence (planning).
- Task execution layer (agents): orchestrates GIS, solvers, simulators, ML pipelines, and code generation. This may mean traffic optimisation in one case, or policy compliance reasoning in the other. These are modular and can be extended over time as new planning or operational tools emerge.
- Interfaces: natural‑language dialogue, dashboards, visualisations. Accessibility is essential for both expert and non‑expert stakeholders, reducing barriers to engagement and supporting wider legitimacy.
- Coordination protocol: a shared context protocol governing tool invocation, provenance, and secure execution. This ensures predictable integration across heterogeneous tools and services while maintaining traceability of workflows.
In both paradigms, the architecture is conceived as a “city AI operating system.” The crucial difference lies in whether the target is real‑time actuation or structured augmentation of governance, and whether legitimacy derives from efficiency or from deliberative capacity.
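To make the shared chassis tangible, a minimal sketch follows. It is illustrative only: the layer interfaces, the `ToolCall` record, and the `orchestrate` loop are hypothetical names rather than an existing framework, and the assumption that every tool returns a plain dictionary is a deliberate simplification.

```python
from dataclasses import dataclass, field
from typing import Callable, Protocol

# Illustrative sketch of the shared chassis: three layers plus a coordination
# record. All names here are hypothetical, not an existing API.

@dataclass
class ToolCall:
    tool: str                                        # registered tool in the task execution layer
    args: dict                                       # arguments resolved by the cognitive layer
    provenance: dict = field(default_factory=dict)   # inputs, versions, timestamps

class CognitiveLayer(Protocol):
    def plan_workflow(self, goal: str, context: dict) -> list[ToolCall]: ...
    def reflect(self, goal: str, results: list[dict]) -> str: ...

class DataLayer(Protocol):
    def fetch(self, query: str) -> dict: ...         # telemetry (operations) or evidence (planning)

@dataclass
class TaskExecutionLayer:
    tools: dict[str, Callable[..., dict]]            # GIS, solvers, simulators, parsers...

    def run(self, call: ToolCall) -> dict:
        result = self.tools[call.tool](**call.args)
        call.provenance["tool"] = call.tool          # keep the audit trail with the output
        return {"call": call, "result": result}

def orchestrate(goal: str, brain: CognitiveLayer, data: DataLayer,
                executor: TaskExecutionLayer) -> str:
    """One pass of the chassis: parse goal, gather context, execute, reflect."""
    context = data.fetch(goal)
    results = [executor.run(call) for call in brain.plan_workflow(goal, context)]
    return brain.reflect(goal, [r["result"] for r in results])
```

The same loop serves both paradigms; what changes is what `DataLayer.fetch` returns and which tools the execution layer registers.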
Operations vs. Planning — two modes on the same chassis
| Dimension | Agentic Digital Twin (Operations) | The Planner’s Assistant (Planning) |
|---|---|---|
| Goal | Cyber‑physical responsiveness; optimise flows and controls | Extend human capacity in planning; scale reasoning beyond current limits |
| Timescale | Seconds → hours | Weeks → years |
| Data layer | Situational awareness: IoT, telemetry, traffic/energy feeds, crowd reports | Visuospatial reasoning: site plans, policy maps, constraints, 3D massing, design codes, precedent |
| Tools | Real‑time optimisation, MPC, traffic/building simulators, edge control | GIS, policy parsers, VLMs for drawings/CGIs, evidence synthesis, planning balance models |
| Outputs | Actuation commands, schedules, routing plans | Draft reports, policy matrices, scenario explorations, decision support |
| Benchmark | Efficiency, latency, reliability | Scalability of reasoning, defensibility of outputs, institutional credibility |
| Governance | Human‑on‑the‑loop for safety | Human‑in‑the‑loop for direction, but with AI carrying the cognitive load |
| Risk posture | Safety/robustness under autonomy | Risk of epistemic drift if not carefully grounded; mitigated by transparent workflows |
The key divergence is ontological: operations twins act directly upon infrastructure, while TPA acts indirectly by producing structured, reasoned artefacts that humans can adopt, contest, or override. The result is that TPA is less about control and more about augmenting institutional cognition.
The TPA data layer is visuospatial
Where the operational model thrives on sensor fusion and control loops, TPA is oriented toward visuospatial reasoning:
- Plans & drawings (VLM): read elevations, sections, CGIs, landscape plans; detect heights, setbacks, frontage rhythm, materials. These tasks, though simple for a trained planner, often overwhelm digital systems unless multimodal reasoning is introduced.
- Policy maps & constraints (GIS): statutory layers, designations, heritage, transport, flood/BNG; spatial join + rule checks. Here, the emphasis is on formalising tacit practices of policy referencing and making them computationally tractable.
- 3D massing & codes: evaluate bulk, overshadowing, skyline impacts; apply design codes (e.g., frontage activation, step‑backs) in context. This reflects the inherently spatial character of planning disputes, where visual fit and urban character dominate.
- Precedent & policy synthesis: retrieve appeal decisions, cross‑referenced policies, and material considerations. This introduces an institutional memory into the system, bridging legal and technical domains.
- Structured reasoning outputs: reasoning chains surfaced in ways that can be inspected, not to fetishise provenance, but to enable scaling of cognitive work while maintaining institutional trust.
This is not a telemetry stack. It is an evidence‑interpretation stack, oriented toward planning tasks that are cognitively demanding, resource‑intensive, and currently bottlenecked by human capacity. The ambition is to relieve planners from repetitive synthesis, freeing their attention for negotiation, adaptation, and higher‑order judgement.
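As one concrete illustration of the "spatial join + rule checks" step, the sketch below uses geopandas to flag statutory constraints that intersect a site boundary. The file names, the `designation` attribute, and the two rules are placeholders, and the snippet assumes geopandas 0.10+ for the `predicate` keyword; it shows the pattern, not TPA's actual pipeline.

```python
import geopandas as gpd

# Hypothetical inputs: a proposed site boundary and statutory constraint layers.
# File names and attribute fields are placeholders for illustration.
site = gpd.read_file("site_boundary.geojson")
constraints = gpd.read_file("constraints.geojson")   # e.g. flood zones, conservation areas

# Spatial join: which constraint polygons intersect the site?
hits = gpd.sjoin(site, constraints, how="inner", predicate="intersects")

# Simple rule checks over the joined attributes, feeding a policy matrix.
flags = {
    "flood_zone_3": (hits["designation"] == "Flood Zone 3").any(),
    "conservation_area": (hits["designation"] == "Conservation Area").any(),
}

for rule, triggered in flags.items():
    print(f"{rule}: {'material consideration raised' if triggered else 'no constraint'}")
```

In practice each triggered flag would point back to the policy and the geometry that raised it, so the check remains inspectable rather than a black-box verdict.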
Beyond discretion: scaling capacity
TPA is not designed to freeze or glorify “human discretion.” It begins with the recognition that discretion is overloaded, inconsistently applied, and vulnerable to capture by narrow interests. This produces delivery bottlenecks, legitimacy crises, and a loss of public confidence in planning systems.
The aim is therefore to scale the capacity to reason systematically, by:
- Decomposing complex, contradictory policies into tractable material considerations.
- Offering draft reasoning chains that officers and committees can adopt, amend, or reject.
- Handling volume and complexity that would otherwise overwhelm individual planners, especially in high‑stakes development management.
- Enabling comparative analysis across sites and policies, creating consistency where currently there is fragmentation.
- Providing traceable reasoning as scaffolding for appeals, judicial reviews, and political deliberation.
Where the operations twin aims to remove humans from the loop for efficiency, TPA aims to keep humans in control while reducing their cognitive overload. It is less about “saving discretion” and more about building institutional stamina in the face of complexity.
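A hedged sketch of what a draft reasoning chain might look like as a data structure: each material consideration carries its policy references, evidence, suggested weight, and an explicit slot for the officer to adopt, amend, or reject it. The field names and the `MaterialConsideration`/`DraftReport` classes are illustrative assumptions, not TPA's schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class OfficerDecision(Enum):
    ADOPT = "adopt"
    AMEND = "amend"
    REJECT = "reject"

@dataclass
class MaterialConsideration:
    """One strand of a draft reasoning chain, traceable to its sources (illustrative fields)."""
    issue: str                       # e.g. "heritage impact on setting of listed building"
    policy_refs: list[str]           # cited plan policies, placeholders here
    evidence: list[str]              # documents, drawings, GIS checks relied on
    draft_reasoning: str             # machine-generated narrative offered for review
    suggested_weight: str            # e.g. "limited", "moderate", "substantial"
    officer_decision: Optional[OfficerDecision] = None
    officer_note: str = ""           # amendment text or reason for rejection

@dataclass
class DraftReport:
    application_ref: str
    considerations: list[MaterialConsideration] = field(default_factory=list)

    def planning_balance(self) -> str:
        """Summarise only the strands an officer has adopted or amended."""
        kept = [c for c in self.considerations
                if c.officer_decision in (OfficerDecision.ADOPT, OfficerDecision.AMEND)]
        return "\n".join(
            f"- {c.issue} ({c.suggested_weight}): {c.officer_note or c.draft_reasoning}"
            for c in kept
        )
```

The point of the structure is that nothing enters the planning balance without an explicit human decision recorded against it.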
Why not just call TPA a digital twin?
The twin metaphor implies a control relationship with a physical counterpart. Planning, however, is about governance under uncertainty and contestation. TPA is less a “twin” than a reasoning layer built on planning artefacts (policies, drawings, spatial data, precedents). It does not actuate; it augments.
Moreover, to call it a twin risks collapsing its purpose into the language of optimisation and technical efficiency. Planning is not reducible to those terms: it is a process of interpretation, negotiation, and institutional accountability. The legitimacy of TPA lies not in fidelity to a real‑world twin, but in its ability to generate reasoned outputs that can withstand audit, scrutiny, and political contestation.
How the two worlds meet
Despite their different emphases, these paradigms are complementary:
- From TPA to operations: policy synthesis and planning conditions can be translated into operational parameters. For instance, phasing requirements imposed at the planning stage can be operationalised through logistics twins.
- From operations to TPA: real‑time telemetry and performance data can be fed back to stress‑test plan scenarios and support reasoned departures. For example, congestion data from operations twins may inform decisions on whether transport mitigation measures are sufficient.
Together they form a civic intelligence stack: adaptive operations + augmented governance. This layered model suggests a future where planning and operations are not siloed, but continuously inform one another through AI‑mediated feedback.
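A small illustration of the two directions of exchange, under stated assumptions: the `PhasingCondition` fields, the HGV cap, and the 90% headroom threshold are hypothetical, chosen only to show how a planning condition might be expressed as operational parameters and how telemetry might feed back into a sufficiency check.

```python
from dataclasses import dataclass

@dataclass
class PhasingCondition:
    """Hypothetical condition from a decision notice (fields are illustrative)."""
    phase: str
    trigger: str            # e.g. "prior to occupation of 200th dwelling"
    max_daily_hgv: int      # construction traffic cap from the transport condition

def to_operational_params(cond: PhasingCondition) -> dict:
    """TPA -> operations: map a planning condition into constraints a logistics twin could consume."""
    return {"phase": cond.phase, "trigger": cond.trigger, "hgv_cap_per_day": cond.max_daily_hgv}

def mitigation_sufficient(observed_peak_flow: float, modelled_capacity: float,
                          headroom: float = 0.9) -> bool:
    """Operations -> TPA: does observed telemetry stay within the modelled mitigation assumption?"""
    return observed_peak_flow <= headroom * modelled_capacity
```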
Research positioning
The agentic twin agenda demonstrates how AI orchestration yields adaptive capacity in logistics, energy, and transport. TPA demonstrates how the same architecture can address bounded rationality in planning. Rather than defending “discretion” as a principle, it acknowledges that discretionary systems require scaffolding if they are to function under 21st‑century complexity.
The research contribution is therefore twofold:
- Technically: showing how LLMs, visuospatial reasoning, and agent orchestration can generate structured planning support. By combining multimodal interpretation with structured knowledge graphs and retrieval‑augmented reasoning, TPA builds workflows capable of supporting entire decision processes.
- Institutionally: showing how AI can scale cognitive capacity while leaving space for accountability, politics, and law. This involves not merely tool integration but embedding AI into the institutional logic of planning, where reasoning must be defensible, contestable, and transparent enough to survive appeals and judicial review.
This research sits within a wider agenda of digital discretion studies: analysing how AI systems can reshape the boundaries between automation, augmentation, and human authority in governance contexts. It argues that explainable augmentation is not a luxury but a precondition for legitimacy, especially in domains as contested as planning.
Position statement
I welcome the emergence of agentic‑twin frameworks for real‑time city operations. My contribution with The Planner’s Assistant is to build the planning analogue of that architecture: the same cognitive/data/execution chassis, but with a visuospatial evidence layer and a capacity‑scaling, human‑in‑the‑loop objective. One makes the city adaptive; the other makes planning governable at scale.
TL;DR
- Same chassis: cognition • data • agentic tool use • interfaces.
- Different missions: Automate operations vs augment planning capacity.
- TPA’s edge: visuospatial evidence reasoning + scalable human‑guided outputs.