Debugging with a new teammate

Debugging has always been part science, part art, and part pain. Normally, it's you, your logs, and your patience. But with AI, the process changed completely. Sometimes it felt like I had a superpowered debugger sitting next to me. Other times, it felt like I was babysitting an intern who insisted on fixing things by randomly unplugging cables.

Debugging with AI - the chaotic process of AI-assisted debugging

The copy-paste magic (and trap)

At the beginning, I loved debugging with AI. A feature would crash, I'd copy the error message, paste it into the prompt, and… boom, solution.

Example:
    Error: Cannot read properties of undefined (reading 'map')

AI would explain the root cause and propose a fix in seconds. No Stack Overflow digging, no hours staring at the code. That felt amazing.
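
For context, that error means the code is calling .map() on a value that is still undefined, usually because the data hasn't arrived yet or a field is missing from the response. The typical AI-suggested fix looked roughly like this (a minimal sketch; renderEvents and the event shape are illustrative, not my actual code):

    // events may still be undefined before a fetch finishes, so calling
    // events.map(...) throws "Cannot read properties of undefined".
    function renderEvents(events) {
      // Guard: fall back to an empty list instead of crashing.
      return (events ?? []).map(event => `<li>${event.title}</li>`).join('');
    }

    console.log(renderEvents(undefined));              // "" instead of a crash
    console.log(renderEvents([{ title: 'Standup' }])); // "<li>Standup</li>"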

But here's the trap: many of those "fixes" worked by coincidence, not because they were correct. I was so addicted to the speed that I didn't check the logic. More than once, I accepted a fix that solved the error but broke something else silently.

When AI overcomplicates everything

One thing I learned: AI loves complicated solutions.

  • I'd have a simple variable not updating. Instead of just resetting it properly, AI would propose a whole new data management layer.
  • I'd have a rendering glitch in plain HTML. Instead of tweaking the DOM updates, AI would try to rebuild the entire view logic from scratch.
  • I'd have a timing issue. Of course, the solution was always… setTimeout.

It reminded me of junior developers who try to impress you with big rewrites when all you asked for was a one-line fix.

A real example: the calendar bug

The calendar is one of the places where I spend most of my time. At one point, the events were duplicating every time I navigated between months. I asked AI to fix it.

  • First attempt: it rewrote the entire calendar module. Too risky.
  • Second attempt: it added conditions that broke event rendering completely.
  • Third attempt: it gave me a working fix… but then scrolling stopped working.

In the end, the real bug was a simple cleanup issue. I wasn't removing old event listeners before adding new ones, so they kept stacking up. The AI never suggested that. I figured it out by going back to basics: console.log and manual checks.
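
For the curious, the shape of the fix was roughly this (a simplified sketch with illustrative names, not the actual calendar code):

    // Bug: every month navigation re-attached the same click handler,
    // so handlers stacked up and each event rendered multiple times.
    function handleEventClick() {
      console.log('event clicked');
    }

    function showMonth(container) {
      // Fix: remove the previous listener before adding it again,
      // so navigating between months never accumulates handlers.
      container.removeEventListener('click', handleEventClick);
      container.addEventListener('click', handleEventClick);
      // ...render this month's events into the container...
    }

The detail that matters is that removeEventListener only works if you pass the same function reference you registered, which is why the handler lives in a named function instead of an inline arrow.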

Lesson: AI can guide, but you still need to think.

When AI actually saved the day

Not all stories were frustrating. Some were genuinely impressive.

  • It suggested edge case handling I would have skipped. For example, when fetching events from Google Calendar, it proposed adding checks for empty responses or malformed data before rendering them, which prevented random crashes (see the sketch after this list).
  • It explained async problems in plain language, showing me exactly why some functions were running before the data was ready.
  • It suggested small refactors that improved reliability, like centralizing error handling in services instead of repeating try/catch everywhere.
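
Put together, those suggestions looked roughly like this (an illustrative sketch; the function names and the items/summary/start fields stand in for the real Google Calendar response handling):

    // A defensive, service-layer loader along the lines AI proposed.
    async function loadCalendarEvents(fetchEvents) {
      try {
        const response = await fetchEvents();

        // Empty or malformed responses render an empty calendar
        // instead of crashing the renderer.
        if (!response || !Array.isArray(response.items)) {
          return [];
        }

        // Drop entries missing the fields the calendar actually renders.
        return response.items.filter(item => item && item.summary && item.start);
      } catch (err) {
        // Error handling lives in the service instead of
        // try/catch repeated in every caller.
        console.error('Failed to load calendar events:', err);
        return [];
      }
    }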

These moments reminded me why I kept using it. AI debugging wasn't magic, but it was a powerful accelerant.

My debugging workflow with AI

Over time, I developed a routine:

  1. Reproduce the bug manually. Make sure I understood the behavior myself.
  2. Simplify the problem. Extract the relevant code and state the context clearly.
  3. Paste the error, but also describe what I expected. Just pasting the stack trace was never enough.
  4. Challenge every fix. If it looked too big, I'd ask: "Is there a simpler way?"
  5. Test systematically. No shortcuts. Run through all scenarios before trusting the change.

This wasn't as fast as the first days of "copy-paste and trust", but it saved me from disasters.

The emotional rollercoaster

Debugging with AI was an emotional mix:

  • Relief when it solved something in seconds.
  • Frustration when it invented new problems.
  • Embarrassment when I realized the real bug was something obvious.
  • Satisfaction when I finally combined AI's hints with my own logic to nail the issue.

It wasn't smooth, but it was strangely fun. Debugging stopped feeling like solitary torture and became more like pair programming: messy, but collaborative.

Lessons from debugging with AI

  1. Don't trust blindly. AI fixes often work by accident.
  2. AI loves overengineering. Always look for the simpler solution.
  3. Context is everything. The clearer your prompt, the better the help.
  4. Stay systematic. Logs, test cases, and manual checks are still essential.
  5. Use AI for perspective, not as the final authority. It's there to suggest, not to decide.

Closing

Debugging with AI taught me a lot about patience, clarity, and discipline. It saved me time, but it also wasted time when I wasn't careful. The key was to treat AI not as an all-knowing debugger, but as a fast, sometimes clumsy, debugging partner.

Through all the debugging challenges, Sortit.Now gradually became more stable and reliable. Each bug we solved together made the product stronger, and each mistake taught us both something new about building software.

In Part 6, I'll step back and talk about the bigger picture: the human-AI collaboration, what worked well in our partnership, what didn't, and the strategies I developed to make the most of this strange new way of building software.

Previous: The UI/UX Journey
Next: The Human-AI Collaboration