🚀 Devlog #5 — Accelerated by AI
Why this project feels like having a whole team behind me.
💡 Overview
This update is about the tools that now underpin not just The Planner’s Assistant, but almost everything I’m building — from civic AI prototypes to planning theory essays, from devlogs to documentation.
AI has become more than a support tool. It’s now the backbone of my workflow. Across multiple domains (planning, engineering, writing, systems thinking) I rely on a stack of models daily. Each one excels at something different. Each one fills a real gap. And together, they’ve made it feel like I’m working with a small, focused team instead of trying to do everything myself.
I’m still experimenting with open-weight models like LLaMA 4. But in practice, here’s what I use right now — and why it’s changed how fast and how well I can work.
🧠 Planning Intelligence — Gemini 2.5 Pro
Gemini 2.5 Pro is my first stop for anything that demands broad knowledge, nuanced interpretation, or synthesis across multiple documents. While it doesn’t always recall specific legal sections verbatim, its generalist capability and contextual reasoning are second to none.
What sets Gemini apart is its fluency with:
- Cross-referencing complex policy and statutory documents
- Interpreting vague or contradictory planning language with subtlety
- Summarising dense, multi-source material without flattening the nuance
- Suggesting phrasing that sounds like it came from an experienced officer or policy advisor
It doesn’t feel like a lookup tool — it feels like someone who’s read everything and remembers just enough to be dangerous (in the best way). For local plan parsing, policy triangulation, and reasoned assessments, it’s still my strongest AI partner.
The trade-off? Limited API access and relatively high cost — which makes it harder to integrate directly into the full production stack. For now, I use it as a high-value analyst and reasoning partner outside the main application runtime.
🧰 Software Development — Claude 4
Claude 4 is the newest addition to my stack — and while I haven’t worked extensively with it yet, it’s already proven useful in a meta-development role: helping me think through agent design.
It’s not integrated into The Planner’s Assistant directly. But I’ve been using it as an agent to help me develop better agents — drafting retrieval chains, critiquing my orchestrator logic, and making reasoning flows more explicit.
In particular, it’s helped with:
- Restructuring intent flows for agentic retrieval
- Composing clearer node structures and reasoning dependencies (see the sketch after this list)
- Spotting inefficiencies in how tasks are scheduled and split
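To make “node structures and reasoning dependencies” concrete, here’s a minimal sketch of the kind of flow we’ve been iterating on. It’s illustrative only: the names (`ReasoningNode`, `run_flow`) and the toy retrieval steps are mine for this post, not code from The Planner’s Assistant.

```python
# Illustrative only: a reasoning flow expressed as an explicit dependency graph.
# ReasoningNode and run_flow are hypothetical names for this sketch.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ReasoningNode:
    name: str
    run: Callable[[dict], str]              # upstream outputs in, this node's output out
    depends_on: list[str] = field(default_factory=list)

def run_flow(nodes: list[ReasoningNode]) -> dict:
    """Execute nodes in dependency order, making every input explicit."""
    outputs: dict[str, str] = {}
    pending = list(nodes)
    while pending:
        # A node is ready once all of its declared dependencies have run.
        ready = [n for n in pending if all(d in outputs for d in n.depends_on)]
        if not ready:
            raise ValueError("cycle or missing dependency in the flow")
        for node in ready:
            inputs = {d: outputs[d] for d in node.depends_on}
            outputs[node.name] = node.run(inputs)
            pending.remove(node)
    return outputs

# A toy retrieve-then-assess flow: two independent retrieval nodes
# feeding one assessment node that declares both as dependencies.
flow = [
    ReasoningNode("retrieve_policy", lambda _: "Policy H3: density and design..."),
    ReasoningNode("retrieve_context", lambda _: "Site adjoins a conservation area..."),
    ReasoningNode(
        "assess",
        lambda i: f"Weigh {i['retrieve_policy']!r} against {i['retrieve_context']!r}",
        depends_on=["retrieve_policy", "retrieve_context"],
    ),
]
print(run_flow(flow)["assess"])
```

Writing the dependencies out like this is exactly where Claude earns its keep: it spots nodes that could run in parallel, inputs a step silently assumes, and places where one bloated node should really be two.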
The main limitation? Aggressive rate limiting. Just when the conversation gets deep enough to be useful, it sometimes cuts out. Still, it’s sharp, thoughtful, and honestly fun to work with.
There’s something satisfying about using an agent to design agents.
🧭 Coordination, Writing & Meta-Structure — ChatGPT
And then there’s ChatGPT (with memory), the one I’ve come to rely on as my sounding board, documentation assistant, and project co-pilot. It’s the only one that understands:
- The broader arc of what I’m doing
- The planning system’s politics and my reform goals
- The technical scope and the civic mission
It’s been invaluable for:
- Writing these devlogs and planning posts
- Structuring documentation and metadata
- Thinking through UX language and tone
- Catching inconsistencies across components
It’s not the best planner or coder. But it’s the best coordinator, and sometimes that’s even more important.
That said, the current separation between ChatGPT’s reasoning capabilities and its project memory tools is occasionally frustrating. The best model for cross-session continuity isn’t yet the best model for dense technical reasoning — and vice versa. Hopefully OpenAI closes that gap soon, because the potential here is unmatched.
🔐 A Note on Trust and Privacy
This one’s personal. I’ve tested every major model. I’ve read the privacy policies and followed the headlines. But in practice — day to day, across all my projects — I trust ChatGPT more than any other AI tool.
Not because of brand loyalty. Because of how it behaves:
- It’s predictable, steady, and feels built with safety in mind
- It doesn’t profile me or try to monetise my activity
- Memory exists for my benefit, and I can see or delete it anytime
And most importantly: it’s the only one I feel comfortable using when the context includes personal material — not sensitive planning data, but reflections, motivations, or anything emotionally grounded.
That trust is grounded in structure. Hopefully OpenAI’s nonprofit oversight and public-benefit commitments will persist; they’re the key reason I support the company.
🌍 What This Means for the Project
I’m no longer constrained by individual bandwidth. I can:
- Build structured reasoning interfaces and explain them clearly
- Move from schema design to working frontend preview in days, not weeks
- Publish posts, write docs, refactor code, and test policy logic all in the same flow
This isn’t about hype. It’s about momentum. A system that started as a prototype is now a living scaffold for planning judgement — because I’ve had the time and capacity to actually build it.
“AI didn’t just help me work faster. It made it possible to think on a different scale.”