📘 Devlog #1 — Orientation
This is a record of what I’ve built so far, where the architecture stands, and what comes next.
1. From v1 to v2: Technical Progress
I began properly building The Planner’s Assistant in early 2025, after an extended period of reading, scoping, and conceptual planning. The first implementation laid down the backend: FastAPI, PostGIS, basic constraint ingestion, initial policy parsing, and a working OpenAPI schema. But the React-based frontend wasn’t flexible enough — it couldn’t support the layered, multi-view logic I needed for constraints, reasoning, and planning scenarios.
Now, the frontend has been rebuilt in Svelte, and I’m in the process of refactoring the backend to fit the new architecture. The project remains aimed at supporting planners — not with a chatbot, but with structured, explainable reasoning layered over spatial data. This may be one of the first public-facing integrations of GIS and LLMs in the planning domain. The interface is card-based, task-driven, and designed to surface overlays, policy commentary, and scenarios in a form that stays legible under complexity.
✅ What’s Already Done
- Using OpenRouter to abstract LLM model selection while evaluating open vs. frontier trade-offs. Governance matters, but functionality is the current priority.
- Traceability (structured reasoning steps, audit trails) remains a long-term goal, but is currently deprioritised.
- Semantic search is ongoing, but policy chunking and ingestion are still being worked out.
- Currently relying on structured chunking and light filtering, with a hybrid retrieval architecture planned.
- OpenAPI schema complete, covering sites, policies, constraints, officer reports, scenarios, and AI outputs.
- Async AI infrastructure in place using Celery and Redis (rough sketch after this list).
- Frontend fully rebuilt in Svelte, replacing the original React prototype.
- AI prompt chains tested in v1, with a multi-pass enrichment structure — ready to be ported into v2.
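To make the async piece concrete, here is a minimal sketch of what a Celery task backed by Redis can look like for this kind of work. The task name, broker URLs, and the `generate_commentary` stub are placeholders for illustration, not the project's actual code.

```python
# tasks.py, a minimal sketch of async LLM generation via Celery + Redis.
# Broker/backend URLs, the task name, and generate_commentary() are
# illustrative assumptions, not the project's implementation.
from celery import Celery

app = Celery(
    "planners_assistant",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

def generate_commentary(prompt: str) -> str:
    # Placeholder for the real LLM call (e.g. routed through OpenRouter).
    return f"[draft commentary for: {prompt[:40]}...]"

@app.task(bind=True, max_retries=3)
def enrich_site(self, site_id: int, prompt: str) -> dict:
    """Run one LLM enrichment pass for a site without blocking the API."""
    try:
        commentary = generate_commentary(prompt)
        return {"site_id": site_id, "commentary": commentary}
    except Exception as exc:
        # Retry transient failures (rate limits, timeouts) after a short delay.
        raise self.retry(exc=exc, countdown=30)
```

A FastAPI endpoint can then call `enrich_site.delay(site_id, prompt)` and return a task id straight away, which is what keeps generation from blocking requests.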
⚠️ Where I’m Blocked
The main blocker is constraint ingestion. The original pipeline (based on `ogr2ogr`) loaded spatial datasets into PostGIS with flexibility, storing unmapped fields in `extra_properties::jsonb`. But the logic needs to be restructured to cleanly integrate overlays, frontend expectations, and schema assumptions.
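For context, the v1 flow was roughly the shape sketched below: shell out to `ogr2ogr` and let PostGIS hold the geometry. Connection details, paths, and table naming are illustrative assumptions, and the step that copies unmapped fields into `extra_properties` is omitted.

```python
# Rough sketch of the v1-style constraint load: push a dataset into PostGIS
# via ogr2ogr. Connection string, table name, and file path are assumptions.
import subprocess

PG_CONN = "PG:host=localhost dbname=planner user=planner"

def load_constraint_layer(path: str, table: str) -> None:
    """Load a spatial dataset into PostGIS using ogr2ogr."""
    subprocess.run(
        [
            "ogr2ogr",
            "-f", "PostgreSQL", PG_CONN,
            path,
            "-nln", table,                # target table name
            "-nlt", "PROMOTE_TO_MULTI",   # normalise mixed geometry types
            "-lco", "GEOMETRY_NAME=geom",
            "-t_srs", "EPSG:27700",       # British National Grid
        ],
        check=True,
    )

if __name__ == "__main__":
    load_constraint_layer("data/green_belt.gpkg", "constraint_green_belt")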
The bigger issue is strategic: what counts as a meaningful constraint? LPAs vary significantly in how they represent spatial policy, and the temptation to model everything can stall progress.
Include too little, and it’s not useful. Include too much, and it’s never done.
For now, I’m focusing on a core set based on the Digital Land Planning Data Specification: Green Belt, Flood Zones, SSSIs, Conservation Areas, Local Plan Allocations, Article 4 Directions, and AONBs. I’m logging edge cases for later — not solving them all now.
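One way to keep that core set explicit is a small registry keyed by dataset identifiers. The slugs below are my best guesses at the planning.data.gov.uk names and should be read as assumptions, not confirmed mappings.

```python
# Hedged sketch: the core constraint set, keyed by assumed Digital Land /
# planning.data.gov.uk dataset slugs. Slugs would need checking against the
# live specification before use.
CORE_CONSTRAINTS: dict[str, str] = {
    "green-belt": "Green Belt",
    "flood-risk-zone": "Flood Zone",
    "site-of-special-scientific-interest": "SSSI",
    "conservation-area": "Conservation Area",
    "local-plan-allocation": "Local Plan Allocation",  # slug is a guess
    "article-4-direction-area": "Article 4 Direction",
    "area-of-outstanding-natural-beauty": "AONB",
}
```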
🧭 Next Steps
- Rebuild the constraint ingestion logic for v2.
- Port over the AI reasoning chain and enrichments.
- Add task-driven reasoning views and planner-facing commentary panels.
- Maintain a modular architecture — avoid hardwiring premature assumptions.
2. Working Notes
This section tracks decisions, problems, and structural shifts in how the system is evolving. It isn’t comprehensive — just a way of staying close to the work.
The Planner’s Assistant is built around a core hypothesis: that complex planning judgement can be partially structured and surfaced, without flattening discretion or turning decisions into black boxes. The goal isn’t automation for its own sake — it’s transparency and explainability at a systems level.
There’s no fixed roadmap. Some days are spent untangling spatial logic, others reworking how prompts behave across models. The work moves between backend infrastructure, frontend composition, and the conceptual edges of what makes planning legible.
One constant: GitHub Copilot has been unexpectedly useful. It’s not flawless, but it mirrors intent, speeds up boilerplate, and keeps momentum going during ambiguous phases. Especially useful when the task is clear but repetitive, or the logic is rough but needs pushing through.
3. Timeline Snapshot (Approximate)
- Late 2024 — Study phase at the British Library. Built a self-directed curriculum based on UCL’s spatial planning modules — planning theory, urban systems, spatial analysis, and AI in governance. No code yet.
- Jan 2025 — Semantic search experiments begin. Prompt chains tested using OpenAI and Gemini. No tool finalised.
- Feb 2025 — Backend scaffolded with FastAPI and PostGIS. Initial schema and model structure.
- Mar 2025 — React UI built. Constraint loader wired with `ogr2ogr`. Began loading base layers. Early officer-style prompt tests.
- Apr 2025 — GDS planning hackathon. First public framing. Svelte frontend work begins. Spatial Ledger and Reasonable Authority scoped.
- May 2025 — Major refactor. React fully dropped. Constraint ingestion under review. Semantic architecture clarified. OpenAPI schema finalised. Devlog launched.
4. Initial Toolchain and Stack
A summary of the current toolchain, with brief notes on why each was chosen:
Backend
- Python + FastAPI — Chosen for its clarity, strong ecosystem, and fast development cycle. FastAPI provides async support and OpenAPI-native documentation out of the box.
- PostgreSQL + PostGIS — A spatial database that’s reliable, flexible, and essential for planning logic. Spatial joins, geometry buffers, and overlays are all first-class operations (see the query sketch after this list).
- Celery + Redis — Provides async task handling for LLM generation without blocking requests. Stable, well-supported, and production-ready.
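To show why PostGIS does so much of the heavy lifting, here is the kind of buffer-and-intersect query the backend leans on. Table names, columns, and connection details are assumptions for the example, not the actual schema.

```python
# Sketch of a typical spatial overlay: which constraints intersect a buffered
# site boundary? Table and column names plus the site id are illustrative.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://planner:planner@localhost/planner")

SQL = text("""
    SELECT c.name, c.constraint_type
    FROM constraints AS c
    JOIN sites AS s ON ST_Intersects(
        c.geom,
        ST_Buffer(s.geom, :buffer_m)  -- buffer in metres (EPSG:27700)
    )
    WHERE s.id = :site_id
""")

with engine.connect() as conn:
    rows = conn.execute(SQL, {"site_id": 42, "buffer_m": 250}).fetchall()
    for name, ctype in rows:
        print(f"{ctype}: {name}")
```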
Frontend
- Svelte + Vite — Replaced React for cleaner state management, lower overhead, and easier component composition. More intuitive for managing dynamic view logic.
- UI Design — Task-focused, card-based layout with progressive disclosure. Map overlays support but do not dominate reasoning.
AI Integration
- OpenRouter — Used to abstract model selection between open and frontier LLMs. Allows easy switching and avoids early lock-in (minimal call sketched after this list).
- Prompt Chains — Structured, multi-pass enrichment prompts designed to interpret policies, generate planning commentary, and surface reasoning steps.
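OpenRouter exposes an OpenAI-compatible endpoint, so a single pass of a prompt chain looks roughly like the sketch below. The model id, prompts, and environment variable name are placeholders rather than the project's actual configuration.

```python
# Minimal sketch of one prompt-chain pass via OpenRouter's OpenAI-compatible
# API. Model id, prompts, and env var name are illustrative assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

def first_pass_summary(policy_text: str) -> str:
    """First enrichment pass: summarise a policy before deeper reasoning."""
    response = client.chat.completions.create(
        model="anthropic/claude-3.5-sonnet",  # any OpenRouter model id works
        messages=[
            {"role": "system", "content": "You are assisting a UK planning officer."},
            {"role": "user", "content": f"Summarise the key tests in this policy:\n\n{policy_text}"},
        ],
    )
    return response.choices[0].message.content
```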
Data Handling
- `ogr2ogr` — Used to load shapefiles and spatial datasets into PostGIS. Flexible and widely supported across UK datasets.
- `extra_properties::jsonb` — Stores non-standard fields for lightly structured or unpredictable schema cases (query sketch after this list).
- Base Layers — Drawn from OpenStreetMap and Digital Land.
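Pulling an unmapped field back out of `extra_properties` is then a plain jsonb lookup. Table, column, and key names below are illustrative assumptions.

```python
# Sketch: reading an unmapped attribute from the extra_properties jsonb column.
# Table, column, and key names are assumptions for the example.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://planner:planner@localhost/planner")

SQL = text("""
    SELECT name, extra_properties ->> :key AS value
    FROM constraints
    WHERE extra_properties ->> :key IS NOT NULL
""")

with engine.connect() as conn:
    for name, value in conn.execute(SQL, {"key": "designation-date"}):
        print(name, value)
```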
Search / Retrieval
- Hybrid Architecture — Combining structured tag-based filtering with semantic embeddings to improve chunk retrieval (sketched below).
- Chunking Logic — Still under development; balancing granularity, interpretability, and performance.
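The hybrid idea, in sketch form: filter candidate chunks by structured tags first, then rank what survives by embedding similarity. The chunk shape, tags, and embedding source are all assumptions for illustration.

```python
# Hedged sketch of hybrid retrieval: structured tag filtering followed by
# semantic ranking. Chunk structure, tags, and embeddings are illustrative.
from dataclasses import dataclass

import numpy as np

@dataclass
class PolicyChunk:
    text: str
    tags: set[str]          # e.g. {"green-belt", "design"}
    embedding: np.ndarray   # precomputed at ingestion time

def hybrid_retrieve(
    chunks: list[PolicyChunk],
    query_embedding: np.ndarray,
    required_tags: set[str],
    top_k: int = 5,
) -> list[PolicyChunk]:
    """Tag filter first, then rank the remaining chunks by cosine similarity."""
    candidates = [c for c in chunks if required_tags & c.tags]

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    candidates.sort(key=lambda c: cosine(c.embedding, query_embedding), reverse=True)
    return candidates[:top_k]
```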
5. Closing Notes
This is probably incomplete. There’s a good chance I’ve missed important pieces of the story, technical notes, or half-built threads that matter later. I’ll likely update or revise this post as things settle.
Immediate priorities still sitting in the background:
- Fixing the main public site — theplannersassistant.uk
- Building some lighter interactive material around Reasonable Authority to explore planning judgement playfully
- UPDATE: Creating a static frontend demo with preloaded JSON data (no LLM backend), to demonstrate the full concept of the reasoning interface — including constraints, policy retrieval, and structured AI outputs — all without requiring deployment or live inference
More to come. Built without lanyards.