Vibe Coding (Part 1): How I Paired with Windsurf & ChatGPT to Build Smarter

At first, working with AI felt like jamming with a bandmate playing in a different key. I'd ask it to help me write a backend service, and it'd suggest something that didn't fit the codebase, or worse, didn't even compile. We were speaking, but not vibing.
That's when I realized something critical: coding with AI isn't about barking commands and hoping for the best. It's about building rhythm. Mutual understanding. A shared mental model.
The real unlock was learning how to vibe with the agent: teaching it, aligning with it, syncing like you would with a junior developer who learns fast if you guide right. This blog captures my journey from friction to flow, with Windsurf, ChatGPT, and a whole lot of debugging therapy.
"Vibe Toh Match Honi Chahiye" (The Vibe Has to Match)
🚩 When the Vibe Was Off
The Frustrating Reality of Misaligned Agent Assistance
You know the drill. You prompt the agent to "fix the login bug" and it renames half your services. Or it suggests logic that almost works, but breaks everything in production. It's like watching a confident intern destroy a sprint with conviction.
I tried better prompts. I tried copying more context. But no matter what I fed it, the results kept missing the mark. Every session felt like a reset. No memory. No history. Just repeating myself like a broken record.
That was the beginning of my search: not for a better prompt, but for a better process.
The VIBE Coding Revelation
Why Coding with AI Is More Like Pair Programming Than Command-Line Magic
The big unlock? Realizing that LLMs aren't magical autocomplete machines. They're collaborators, just like any other team member. And like any teammate, they thrive when you set them up for success.
"Think of AI as a different kind of programming language β and vibe coding as a new type of language-based development."
Once I stopped treating ChatGPT like a vending machine and started treating it like a smart but context-hungry dev, things changed. I gave it project overviews, style guides, even logs. It started understanding the why behind my prompts, not just the what.
That's when the vibe clicked. And once it did, we started shipping. Fast.
Tools That Match Your Flow (and Frequency)
Why I Picked Windsurf and ChatGPT to Match My Developer Energy
If the vibe is the music, the tools are your instruments, and for me, Windsurf and ChatGPT hit all the right notes.
I didn't arrive at this combo overnight. I tried everything from Replit to Lovable, hoping for fast feedback and visual results. Those tools worked great for UI experimentation, but as soon as I tried touching backend logic, boom. Everything fell apart. I'd change a button and some random backend route would rewrite itself.
That's when I knew I needed developer-first agents.
Why Windsurf?
Windsurf quickly became my main IDE agent. It's built for devs who want their AI to stick around, who care about persistent memory, custom context, and conversational programming. It lets me:
- Feed custom Markdown rules (like team style guides and architecture decisions; see the sketch after this list)
- Maintain local memory for each project
- Execute tasks in order, section-by-section, like pair programming with someone you trust

It vibes well with the structure I build, and that's critical.
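For a sense of what those rules look like, here's a minimal sketch of a style-guide rules file; the conventions named (PEP 8, ESLint) are illustrative assumptions, not requirements from this project:

```markdown
# rules.md: team style guide

## Python services
- Follow PEP 8; type-hint all public functions.

## TypeScript (module-playground)
- Use the shared ESLint config; avoid `any`.

## Architecture decisions
- One concern per module-* service; new features get a new service, not a bigger one.
```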
Why ChatGPT?
ChatGPT is my thought partner. It's where I go before code, before docs, before even deciding the stack. I treat it like a CTO and whiteboard at once.
- Architectural discussions? ✅
- Stack tradeoffs? ✅
- Debugging loops that Windsurf can't crack? ✅
I even use it to generate summaries of our discussions, to feed back into Windsurf.
Together, these tools aren't just functional; they match my mental state at different stages of development. Whether I'm thinking, planning, debugging, or coding, there's a tool that gets me. That's how I keep the flow alive and the vibe unbroken.
Plan First, Code Later: Onboard Your Agent Like a Dev
"You wouldn't start a road trip with someone who doesn't know the destination. So why start coding before your agent knows your system?"
Before writing a single line of code, I align my agent with the architecture, ensuring the LLM truly understands the terrain.
Here's my exact process inside Windsurf, tailored for real-world, multi-service codebases.
1. Explore the Codebase First
I begin every session by asking:
π§ "Can you explore the project structure and tell me what each service does?"
Windsurf then scans:
- docker-compose.yml
- Service folders (e.g., module-playground, module-mcp-search, … module-test)
- Entrypoints like server.py or main.ts
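In a repo like the one described here, that scan covers a layout roughly like the following (folder and file names are taken from this post; the exact tree is illustrative):

```text
.
├── docker-compose.yml
├── module-playground/      # React frontend
│   └── main.ts
├── module-mcp-search/
│   └── server.py
├── module-auth-mgmt/
│   └── server.py
└── module-test/
```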
The LLM might reply:
"Based on the structure, this looks like a multi-modal AI system with gRPC services handling multiple agent workflows. The module-auth-mgmt seems to manage authentication and user state."
This gives the agent raw awareness, before instructions even begin.
2. Ask What the Project Thinks It's Doing
I never assume the agent understands nuance. I ask:
"What does this project do?"
Then, critically, I complete or correct it:
"Actually, module-mcp-search isn't just search; it's also used by QA automation workflows for tool context lookups. Please update your memory."
Now the agent doesn't just see the code; it understands purpose.
3. Generate the High-Level Architecture
Next prompt:
"Create a high-level architecture diagram or markdown summary of how these services interact."
Windsurf replies:
```markdown
## Architecture Overview
- User → module-playground (React)
  - Talks to: module-intelligence-engine (LLM)
    - Calls:
      - module-agent (via gRPC)
      - module-analytics-mgmt (user tracking, rate limits)
      - module-mcp-search (tool results)
```
This becomes the grounding document for all future conversations.
4. Understand the Run & Test Lifecycle
Before coding begins:
"How do I start the system? Which services run in dev vs staging?" "What test frameworks are used? What command runs integration tests?"
The agent reviews Makefiles, Docker setup, test configs.
I might clarify:
"Use docker-compose -f docker-compose.dev.yml up module-playground
when testing locally. Tests are in /tests/integration/ and use pytest."
Now the agent can build, run, and verify autonomously.
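With that, the loop the agent runs to verify its own work looks roughly like this; a sketch using the commands from the clarification above (the detached flag and pytest options are my own habits, not project requirements):

```bash
# Bring up the service under test with the dev compose file (-d runs it in the background)
docker-compose -f docker-compose.dev.yml up -d module-playground

# Run the integration suite; -x stops at the first failure, -q keeps output quiet
pytest /tests/integration/ -x -q
```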
5. Co-Debug the Startup
When things break, we collaborate:
"The system isn't starting. Check TTS container logs and find the issue."
If logs are saved: "Read last 100 lines from /logs/agent_error.log. Tell me what failed and why."
The agent parses, summarizes, and proposes the fix.
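You can sanity-check the same logs by hand; a minimal sketch, assuming the log path above and a hypothetical container name for the TTS service:

```bash
# Read the last 100 lines of the saved error log
tail -n 100 /logs/agent_error.log

# Or pull recent output straight from the container (container name is hypothetical)
docker logs --tail 100 tts-service
```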
6. Refine the Setup Together
Once the baseline is set:
- Mark "won't do" services
- Add TODOs or improvements
- Track what's done, pending, or flaky
The /plan.md becomes shared memory. A new session? Just say: "Start by reading /plan.md and /windsurf.md."
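For reference, here's a stripped-down sketch of what such a /plan.md can look like; every entry below is illustrative:

```markdown
# plan.md: shared memory for this repo

## Won't do
- module-test: legacy, leave untouched

## TODO
- Add retries to module-mcp-search tool lookups

## Status
- module-playground: done
- module-agent: pending
- module-analytics-mgmt: flaky (integration tests intermittent)
```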
Why This Works
This isn't just about "prompting better." It's about shared situational awareness.
Once your LLM knows:
- What the system does
- Where things live
- How to run & test
- What your goals and constraints are…
…it stops acting like a code generator and starts showing up like a teammate.
💡 Pro Tip: Prevent Premature Coding
Sometimes the AI gets eager and starts coding immediately after reading the context.
Prevent this with a gatekeeping prompt: "Explain your understanding first. Get my approval before implementing anything."
This keeps the interaction focused, avoids wasted generations, and gives you full control of direction.
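If you'd rather bake the gate in than repeat it each session, the same idea works as a standing rule. A sketch in my own wording, not an official Windsurf rule format:

```markdown
## Rule: explain before implementing
- Restate your understanding of the task before writing or editing code.
- List the files you plan to touch and why.
- Wait for explicit approval before making any change.
```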
How to Keep the Vibe Matched
Just like pairing with a human teammate, syncing with your AI agent isn't a one-time setup; it's an ongoing relationship. The challenge isn't just getting the agent aligned once. It's keeping it aligned over hours, days, and even different sessions.
That's where persistent memory and context reinforcement come in.
Here's how I keep Windsurf and ChatGPT vibe-locked, even after rebooting the system or switching tasks.
1. Use Full Conversations to Extract Rules
I don't just hope the agent remembers. After aligning the AI through an exploratory session, I ask ChatGPT:
"Based on this session, generate project rules and memory in Markdown."
This distills the agent's understanding into structured, persistent documentation.
2. Generate Local Rules from Context
The rules aren't vague. They include:
- System architecture
- Naming conventions
- Style preferences
- How services interact
- How test and run workflows are triggered
- Do's and Don'ts per component
These become agent-friendly equivalents of internal team wikis.
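Condensed, the generated rules for a project like this one might read as follows; the specific entries are illustrative:

```markdown
# Generated project rules

## Architecture
- module-playground (React) is the user-facing entry point; it talks to module-intelligence-engine.
- Inter-service calls go over gRPC; never touch another service's database directly.

## Workflows
- Run locally: docker-compose -f docker-compose.dev.yml up module-playground
- Integration tests: pytest /tests/integration/

## Don'ts
- Don't rename services, routes, or files unless explicitly asked.
```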
3. Feed Them Into Windsurf's Local Memory
Windsurf supports "local rules" that act like sticky notes inside the agent's brain. Once the Markdown rules are ready, I feed them directly into Windsurf's memory panel.
Now, even if the session resets or the model changes, the LLM retains:
- Purpose of each microservice
- Where to look for errors
- How to run, test, and debug
- Specific project quirks
(Screenshot: Windsurf's local memory panel)
4. Update Continuously
Every major change (new feature, new test system, updated naming) triggers a rule refresh. I re-generate the Markdown via ChatGPT and update Windsurf's memory.
This keeps the LLM aware and adaptive. Just like onboarding a junior dev, you don't teach them once; you teach them as they grow.

Why This Matters
Most devs lose time re-explaining things to their AI agents.
By creating and updating local rules:
- You scale context across sessions
- You eliminate repetitive alignment
- You build a long-term working memory for your agent
And most importantly, you never have to say "that's not what I meant" again.
The vibe? Locked in.
💡 Pro Tip: Clarity first. Code second.
Use this Gist to improve Windsurf's behavior by adding it to its Global Rules. It primes your agent to seek clarity before coding, avoiding premature implementations in unknown environments.
Give it this behavior prompt and your AI will ask the right questions before touching your codebase.
Version Control = Vibe Reset
Sometimes it's not about fixing; it's about reverting. Because yes, the agent will mess up.
In vibe coding, things can go sideways fast. One moment the AI is writing clean abstractions, and the next, it's renamed half your files and left your app unbootable.
This is why Git isn't just version control. It's vibe insurance.
1. Revert Like a Pro
Don't fight broken logic or untangle hallucinated code. Just roll it back.

```bash
git checkout .
```

Or if things really exploded:

```bash
git reset --hard <last-good-commit>
```

Fast, clean, and your agent won't even know what went wrong.
2. Branch Out Before You Burn Out
Before asking the agent to implement a feature, try a new lib, or refactor something gnarly, create a branch.

```bash
git checkout -b spike-session-fix
```
This lets the agent explore freely. And when it inevitably wanders into the weeds, you just hop back to main.
3. Save What Works, Move On
Once you've solved something and it works, commit it immediately.

```bash
git commit -am "fix: retries in TTS now working"
```

Then, and only then, should you start the next change.
Because multiple prompts in a row will compound changes. And if your agent fumbles, you'll lose the last known good state.
Let commits pile up. Let branches be disposable. That's the cost of building with an evolving co-pilot.
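Put together, the rhythm looks something like this; branch and commit names are illustrative:

```bash
# 1. Branch before letting the agent loose
git checkout -b spike-session-fix

# 2. Agent implements, tests pass: lock in the last known good state
git commit -am "fix: retries in TTS now working"

# 3. The next prompt goes sideways: drop the uncommitted mess, keep the good commit
git reset --hard HEAD
```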
🚨 Watch for Rabbit Holes: Know When to Step Back and Course-Correct
Sometimes your AI agent is like an overconfident junior dev: it keeps trying, but it's clearly off track.
1. Spot the Spiral
- Keeps looping on the same logic?
- Repeating similar code that's not working?
- You're copy-pasting the same error messages again and again?
🚨 That's your sign: Stop. Don't prompt harder; zoom out.
2. Step Back and Reframe
If the path feels broken, don't just push through.
- Pause the agent.
- Re-express the goal: "What are we solving again?"
- Inject more context: project architecture, expected flow, or test cases.
- Then retry with a clean state.
AI needs structure to work well; when it spins, give it direction.
"Let's take a step back. What are we missing?"
"Do you understand the flow from start to finish?"
"What assumptions might be wrong here?"
3. ChatGPT for Rescue Missions
Sometimes, Cursor or Windsurf just won't crack it.
That's when ChatGPT becomes your backup brain:
- Paste the full error and surrounding code
- Ask for diagnosis and solution
- Once resolved, bring the idea back to Windsurf with clarity
Think of it as going to your staff engineer when your teammate gets stuck.
Why This Works
You're not just coding; you're mentoring an AI.
- Spot the confusion
- Guide with clarity
- Offload to ChatGPT when needed
- Bring the learning back and resume the flow
Final Thought: It's Not Just Code. It's a Relationship.
VIBE coding isn't about giving commands.
It's about building presence.
About listening. Clarifying. Resetting.
It's not just prompting; it's pairing.
Plans. Rules. Resets. Refactors. Experiments.
All of these rituals? They're not overhead. They're how you build mutual understanding with your AI.
Because once your agent:
- Understands your repo
- Follows your architecture
- Reads your logs and tests
- Learns from your corrections
- Matches your rhythm and flow
…it stops being a tool. It becomes a co-creator.
"Vibe toh match honi chahiye." And once it does, building isn't effort; it's flow.
TL;DR
Quick takeaways from the entire VIBE coding journey:
Sync Your Tools with Your Mindset
Don't force-fit traditional dev habits; align with your agent, and choose tools that vibe with your style.
Build Step-by-Step, Not All at Once
Divide work into clear phases. Review each step before moving on to keep the agent aligned and the code clean.
Reset Is Part of the Rhythm
It's okay if your AI fumbles; use Git to roll back, refine your prompts, and reboot with a clearer direction. Staying in flow means knowing when to reset.
It's not just about writing code. It's about building flow β together.
What's Next?
In Part 1, we focused on pairing with AI while coding: prompting better, iterating faster, and integrating with tools like Windsurf.
But what happens before the first line of code?
How do you think, test, and debug with your AI agent, like it's part of your team?
Vibe Coding (Part 2): Thinking, Testing, Debugging, and Co-Building with AI
It's where prompt-driven design, log-based debugging, and test-first implementation come together into a structured co-building flow.