Mauro Medda

Coding with AI - Part I

The Vibe Coding Trap

When you first start using coding agents, you experience an intoxicating rush of productivity. They’re powerful. They’re smart. You throw a problem at them—“fix me this issue”—and they actually solve it. You ship to production fast. The w00t moment is real.

But then something shifts. On more complex tasks, you find yourself playing prompt engineer, crafting increasingly sophisticated instructions, convinced you’re unlocking some latent capability in the model. The agent seems to understand. You feel like a master. And then—chaos.

The problem is genuinely complex. You’ve granted auto-accept permissions. The agent starts rewriting half your codebase at once. Files change in ways you didn’t anticipate. You lose control. Context windows fill up. Tokens burn. And suddenly you’re staring at a 3-hour debugging session to fix a 15-minute task.

From excitement to frustration. I know you’ve been here. Don’t lie.

Finding a Better Way

Over the past months, I’ve been experimenting with different approaches. I spend a lot of time with coding agents—probably more than I spend talking to real people (yes, including my wife). I named my go-to agent “Claudio” for my friends :). We’ve become quite close.

I started where many of us do: customizing CLAUDE.md files in every directory, trying the GitHub spec kit, combining approaches, adding subagents and commands. But none of it felt right. It was manual. It was fragmented. It didn’t scale the way I needed—both for myself and my growing team.

What I really wanted was something that would:

I kept digging. Reading blogs, watching talks, absorbing everything I could about AI-assisted workflows. And here’s what I learned: there’s no one-size-fits-all blueprint. But there is a way.

The Breakthrough: Compounding Engineering

Then I discovered Every and their work on AI-assisted development. More specifically, Kieran Klaassen’s writing on “Compounding Engineering” and the broader philosophy articulated in their articles on “Stop Coding and Start Planning” and “The Three Ways I Work with LLMs”.

These weren’t just nice ideas—they were the missing piece. They showed me how to teach AI systems to think like senior engineers instead of code monkeys.

The core insight: instead of asking your AI to fix problems, teach it how to think about problems.

Three Pillars of Our Approach

1. Compounding Engineering: Build Lasting Knowledge

Instead of treating each task as a one-off, you build a knowledge base that your AI learns from and improves with. Stop starting from scratch every time.

The old way: use Claudio to fix one issue, then start over next time, re-explaining coding rules, architecture decisions, and best practices. Exhausting.

The new way: invest time in creating:

The philosophy is simple but powerful:

Plan → Delegate → Assess → Codify

  1. Plan - Think through what you actually want in detail
  2. Delegate - Let Claude do the work based on your constitution
  3. Assess - Verify it actually works
  4. Codify - Turn the lessons into permanent knowledge (update your CLAUDE.md, create new commands, etc.)
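To make the Codify step concrete, here is a hypothetical fragment of a project-level CLAUDE.md. Every rule, path, and command name below is invented for illustration; the point is that each entry is a lesson extracted from a past task, written once so it never has to be re-explained:

```markdown
# CLAUDE.md — project conventions (hypothetical example)

## Architecture
- All database access goes through the repository layer in `app/repositories/`;
  never query the ORM directly from controllers.

## Lessons codified from past tasks
- Background jobs must be idempotent; a retry once caused duplicate emails.
  Guard side effects with a uniqueness key.
- Prefer feature flags over long-lived branches for risky changes.

## Workflow
- Run the full test suite before declaring a task done; never skip failures.
```

Each time the Assess step surfaces a mistake, a line like the ones above gets added, and the next task starts from a smarter baseline.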

This creates a compounding effect. Each task teaches your system something new. After a few weeks, your AI assistant knows your codebase’s conventions better than many engineers do.

2. Stop Coding, Start Planning

Here’s the uncomfortable truth: vibe coding works for small changes. But it fails at complexity.

When I didn’t plan and just asked Claude Code to “build this feature,” I’d end up with a solution that worked on the surface but missed architectural considerations. Then I’d spend 10–15 hours debugging and refactoring something that should have taken 2 hours.

The shift is cognitive, not technical. Instead of telling your AI how to code, teach it how to think about problems.

Ask yourself (and Claude too; sometimes it’s surprisingly creative and can surface edge cases you haven’t thought about):

When you spend 30 minutes planning with Claude Code before writing code, the code itself becomes almost trivial. And when issues arise, you’ve already thought them through.

This is what separates “ship it and pray” from “ship it and sleep well.”

3. Create Reusable Components: The Marketplace Effect

The final piece is operationalization. I’ve codified every workflow and practice I can into:

Why? Because onboarding someone (whether yourself on a new project or a new team member) should take minutes, not hours.

Instead of:

You get:

The HikmaAI Claude Marketplace approach (inspired by Every’s work) lets teams package their practices as shareable, installable plugins. Zero friction. Day zero productivity.
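As a sketch of what packaging practices as a plugin can look like, here is a hypothetical marketplace manifest, assuming the plugin-marketplace format Claude Code supports at the time of writing (all names, paths, and descriptions are invented for illustration):

```json
{
  "name": "acme-marketplace",
  "owner": { "name": "Acme Engineering" },
  "plugins": [
    {
      "name": "code-review",
      "source": "./plugins/code-review",
      "description": "Codified review checklist and team conventions, exposed as commands"
    }
  ]
}
```

Under that assumption, a new teammate would add the marketplace and install the plugin from inside Claude Code (something like `/plugin marketplace add acme/claude-plugins` followed by `/plugin install code-review@acme-marketplace`), picking up the team’s codified practices in one step instead of reading hours of documentation.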

What’s Next?

This is Part I of a deeper dive into how we’re building HikmaAI and scaling AI-assisted operations. In Part II+, we’ll get hands-on:

For now, the key takeaway: AI coding agents are powerful coworkers, but without structure they become expensive liabilities. The most productive teams aren’t the ones who code fastest—they’re the ones who plan best and build systems that compound over time.

If this resonates with you, stick around. Part II gets technical.


References:
