Open-Source AI in Planning: The Safer, Smarter Choice
As digital tools become more embedded in the planning process, we face a strategic decision. While proprietary software is sometimes assumed to be more reliable and ready to use, this perception deserves closer scrutiny, especially as the sector begins to adopt more advanced, AI-assisted systems.
This piece outlines the case for a different approach: that open, explainable AI tools are not only viable, but represent the most robust, transparent, and accountable foundation for planning technology.
Conversely, closed, proprietary systems pose growing risks: not just in terms of cost or flexibility, but in their long-term alignment with public sector objectives.
A quick primer: What do we mean by "proprietary" vs. "open-source"?
Proprietary software refers to tools that are owned and controlled by a private company. The source code is closed, meaning no one outside the company can see how it works, modify its behaviour, or use it without a licence.
Open-source software, by contrast, makes its code publicly available. Anyone can inspect how it works, adapt it to their needs, and (depending on the license) reuse or contribute to it.
This distinction matters, because when we talk about AI tools making or influencing planning decisions, we are really talking about who controls the logic, and whether that logic can be understood, improved, or contested.
The real risk isn't innovation; it's dependency
As planning systems grow more complex and data-driven, it's not enough to ask whether a tool works. We must ask: who controls the logic? Who can inspect it, adapt it, explain it, or walk away from it if necessary?
Many tools on the market, often developed with public grants, are built on closed platforms that:
- Do not disclose how planning logic is structured
- Offer limited capacity for local authorities to adapt outputs
- Operate independently of shared infrastructure efforts
- Create vendor dependencies that are difficult to exit
What appears initially convenient or efficient can, over time, create what amounts to digital lock-in: systems that are costly to maintain, difficult to improve, and opaque in their operation.
A comparative perspective
Closed SaaS: appears safe, but is structurally risky
| Risk Factor | Closed SaaS |
|---|---|
| Logic transparency | ❌ Opaque, hard-coded |
| Policy flexibility | ❌ Difficult to customise |
| Legal defensibility | ❌ Risky due to unexplainable decisions |
| Integration | ❌ Often limited, proprietary APIs |
| Long-term control | ❌ Vendor-dependent |
| Cost over time | ❌ High; locked-in licensing |
| Alignment with public goals | ❌ Commercial incentives dominate |
"It looks polished... until you realise you can't see inside."
Open Source: appears risky, but is structurally resilient
| Resilience Factor | Open Source |
|---|---|
| Logic transparency | ✅ Fully inspectable |
| Policy flexibility | ✅ Can be adapted to local plans |
| Legal defensibility | ✅ Reasoning is visible, traceable |
| Integration | ✅ Open standards, modular design |
| Long-term control | ✅ No lock-in; extendable by councils or civic tech |
| Cost over time | ✅ Lower total cost of ownership; no licence fees |
| Alignment with public goals | ✅ Designed for stewardship, not sales |
"It's not free because it's cheap. It's free because it's ours."
A path forward
There are clear foundations already being laid: open data standards, shared infrastructure projects like BOPS and PlanX, and early work on explainable, planner-facing AI tools.
But to scale these efforts and ensure they remain aligned with public values, we need to:
- Prioritise openness, so systems are inspectable, testable, and democratically accountable
- Support interoperability, to allow integration with other public tools and datasets
- Fund explainability, so decisions can be justified to both officers and communities
- Invest in shared stewardship, so infrastructure remains publicly governed
This is not an argument against commercial innovation. It is an argument for governing the foundations of planning tech as public infrastructure.
When public money funds planning tools, it should deliver public value, not private dependency.
We don't need more short-lived MVPs or opaque PDFs. We need shared systems that build institutional memory, support professional judgement, and reinforce public trust.
That is what open, explainable AI can offer β if we choose to build it that way.