Material Considerations

Open-Source AI in Planning: The Safer, Smarter Choice

As digital tools become more embedded in the planning process, we face a strategic decision. While proprietary software is sometimes assumed to be more reliable and ready-to-use, this perception deserves closer scrutiny, especially as the sector begins to adopt more advanced, AI-assisted systems.

This piece outlines the case for a different approach: that open, explainable AI tools are not only viable, but represent the most robust, transparent, and accountable foundation for planning technology.

Conversely, closed, proprietary systems pose growing risks: not just in cost or flexibility, but in their long-term alignment with public sector objectives.


A quick primer: What do we mean by "proprietary" vs. "open-source"?

In short: proprietary (closed) software is owned and licensed by a vendor, and its source code, together with the decision logic it encodes, cannot be freely inspected, modified, or shared by the people who rely on it. Open-source software is published under a licence that allows anyone to examine, adapt, and redistribute the code.

This distinction matters, because when we talk about AI tools making or influencing planning decisions, we’re really talking about who controls the logic, and whether that logic can be understood, improved, or contested.


The real risk isn’t innovation; it’s dependency

As planning systems grow more complex and data-driven, it’s not enough to ask whether a tool works. We must ask: who controls the logic? Who can inspect it, adapt it, explain it, or walk away from it if necessary?

Many tools on the market, often developed with public grants, are built on closed platforms that keep their decision logic hidden, resist customisation to local policy, integrate poorly with other systems, and leave councils dependent on a single vendor’s pricing and roadmap.

What appears initially convenient or efficient can, over time, create what amounts to digital lock-in: systems that are costly to maintain, difficult to improve, and opaque in their operation.


A comparative perspective

🔒 Closed SaaS: Appears safe, but is structurally risky

| Risk Factor | Closed SaaS |
| --- | --- |
| Logic transparency | ❌ Opaque, hard-coded |
| Policy flexibility | ❌ Difficult to customise |
| Legal defensibility | ❌ Risky due to unexplainable decisions |
| Integration | ❌ Often limited; proprietary APIs |
| Long-term control | ❌ Vendor-dependent |
| Cost over time | ❌ High; locked-in licensing |
| Alignment with public goals | ❌ Commercial incentives dominate |

“It looks polished – until you realise you can't see inside.”

🔓 Open Source: Appears risky, but is structurally resilient

| Resilience Factor | Open Source |
| --- | --- |
| Logic transparency | ✅ Fully inspectable |
| Policy flexibility | ✅ Can be adapted to local plans |
| Legal defensibility | ✅ Reasoning is visible, traceable |
| Integration | ✅ Open standards, modular design |
| Long-term control | ✅ No lock-in; extendable by councils or civic tech |
| Cost over time | ✅ Lower total cost of ownership; no licence fees |
| Alignment with public goals | ✅ Designed for stewardship, not sales |

“It’s not free because it’s cheap. It’s free because it’s ours.”
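
To make “reasoning is visible, traceable” concrete, here is a minimal sketch of what an open, explainable rule check could look like. Everything in it is hypothetical and invented for illustration: the `Finding` structure, the `check_height` and `check_conservation_area` rules, the policy references, the thresholds, and the input field names are not drawn from BOPS, PlanX, or any real local plan. The point is the shape of the output: every conclusion carries the rule it came from and the reasoning behind it, so a planner, applicant, or inspector can audit the logic rather than trust a black box.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    """One traceable conclusion: what was checked, what was decided, and why."""
    policy_ref: str   # hypothetical local plan reference
    passed: bool
    reasoning: str


def check_height(proposal: dict) -> Finding:
    # Hypothetical rule: maximum ridge height of 9 m in this illustrative zone.
    limit_m = 9.0
    height = proposal["ridge_height_m"]
    return Finding(
        policy_ref="Local Plan Policy D2 (illustrative)",
        passed=height <= limit_m,
        reasoning=f"Proposed ridge height {height} m vs. limit {limit_m} m.",
    )


def check_conservation_area(proposal: dict) -> Finding:
    # Hypothetical rule: anything touching a conservation area is referred for officer review.
    in_ca = proposal["in_conservation_area"]
    if in_ca:
        reasoning = "Site is within a conservation area; officer review required."
    else:
        reasoning = "Site is outside any conservation area."
    return Finding(
        policy_ref="Local Plan Policy HE1 (illustrative)",
        passed=not in_ca,
        reasoning=reasoning,
    )


# Councils or civic-tech contributors could register additional local rules here.
RULES = [check_height, check_conservation_area]


def assess(proposal: dict) -> list[Finding]:
    """Run every rule and return the full audit trail, not just a verdict."""
    return [rule(proposal) for rule in RULES]


if __name__ == "__main__":
    findings = assess({"ridge_height_m": 8.4, "in_conservation_area": True})
    for f in findings:
        status = "PASS" if f.passed else "REFER"
        print(f"[{status}] {f.policy_ref}: {f.reasoning}")
```

Because the rules are ordinary, openly licensed code, a council could adjust thresholds to match its own local plan, and the recorded reasoning gives planners, and if necessary an inspector at appeal, something concrete to scrutinise.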


A path forward

There are clear foundations already being laid: open data standards, shared infrastructure projects like BOPS and PlanX, and early work on explainable, planner-facing AI tools.
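
As a small illustration of why open data standards matter for integration, the sketch below reads a site boundary and a constraints layer from plain GeoJSON, an open, vendor-neutral format, and flags any overlaps. The file names, the contents of the layers, the `name` attribute, and the `overlapping_constraints` function are assumptions made up for this example; this is not the BOPS or PlanX data model. The geometry test uses the open-source shapely library.

```python
import json

from shapely.geometry import shape


def overlapping_constraints(site_path: str, constraints_path: str) -> list[str]:
    """Return the names of constraint features that intersect the site boundary.

    Both inputs are ordinary GeoJSON FeatureCollections, so any tool that
    speaks the open standard can produce or consume them.
    """
    with open(site_path) as f:
        site = shape(json.load(f)["features"][0]["geometry"])

    with open(constraints_path) as f:
        constraints = json.load(f)["features"]

    hits = []
    for feature in constraints:
        if site.intersects(shape(feature["geometry"])):
            # "name" is an assumed attribute; real datasets will differ.
            props = feature.get("properties") or {}
            hits.append(props.get("name", "unnamed constraint"))
    return hits


# Example with hypothetical files:
#   print(overlapping_constraints("site.geojson", "constraints.geojson"))
```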

But to scale these efforts and ensure they remain aligned with public values, we need to treat them as deliberate, long-term commitments rather than one-off projects.

This is not an argument against commercial innovation. It is an argument for governing the foundations of planning tech as public infrastructure.

When public money funds planning tools, it should deliver public value, not private dependency.


We don’t need more short-lived MVPs or opaque PDFs. We need shared systems that build institutional memory, support professional judgement, and reinforce public trust.

That is what open, explainable AI can offer, if we choose to build it that way.