AI Case Study: I coded a web crawler in 3 days instead of 10

Antoine Frankart · Product Management, Local SEO, and Esports Consultant

There is a lot of talk about AI and "vibe coding", but how does it actually work?

Here is the process I used to create a new feature on my SaaS Begonia.pro, entirely thanks to AI.

I just developed a crawler and website audit tool focused on local SEO.

  • With AI: It took me 3 days (conception, design, code, debug).
  • Without AI: It would have easily taken me 10 days, especially since I hadn't coded a crawler before.
  • Cost: ~$15 in AI credits within my IDE.

Here is the detail of my design steps:

Step 1: Brainstorming & Specs

  • AIs used: Gemini 2.5 Pro and GPT5
  • Time: 2h

The idea of the tool is simple: the user enters their website, my tool scans it for local SEO best practices and errors, then generates a report with recommendations.

I used AIs to brainstorm my ideas, write functional specs, choose technical libraries, and think about UX (for non-tech users).

This allowed me to identify 25 tests to perform for the local SEO audit of a website, separated into 4 categories:

  1. Visibility on Google
  2. Performance
  3. Content & Presentation
  4. Trust & Credibility
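
To give an idea of what one of these tests looks like, here is a minimal sketch of a "Content & Presentation" style check. This is a hypothetical illustration, not Begonia.pro's actual code: the check names and thresholds are my own assumptions.

```python
from html.parser import HTMLParser

class SeoCheckParser(HTMLParser):
    """Collects the few elements a basic on-page SEO check needs."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta_description = ""
        self.h1_count = 0
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "h1":
            self.h1_count += 1
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def run_content_checks(html: str) -> dict:
    """Run a few illustrative content checks; the thresholds are assumptions."""
    p = SeoCheckParser()
    p.feed(html)
    return {
        "title_present": bool(p.title.strip()),
        "title_length_ok": 10 <= len(p.title.strip()) <= 60,
        "meta_description_present": bool(p.meta_description),
        "single_h1": p.h1_count == 1,
    }
```

In practice each of the 25 tests would be a small function like this, grouped under its category in the final report.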

Step 2: Database

  • AI used: GPT5
  • Time: < 1h

I don't let AI code directly; I prefer to first validate a database schema for the feature. I showed it my specs and the current database. The AI proposed the new tables, and we validated them together.
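
Validating the schema first can be as simple as drafting the new tables and trying them against sample data. Here is a minimal sketch of what such tables might look like, using SQLite; the table and column names are illustrative assumptions, not the actual Begonia.pro schema.

```python
import sqlite3

# Hypothetical tables for the audit feature: one row per audit run,
# one row per individual check result. All names are illustrative.
SCHEMA = """
CREATE TABLE audits (
    id INTEGER PRIMARY KEY,
    site_url TEXT NOT NULL,
    started_at TEXT NOT NULL,
    score INTEGER
);
CREATE TABLE audit_results (
    id INTEGER PRIMARY KEY,
    audit_id INTEGER NOT NULL REFERENCES audits(id),
    category TEXT NOT NULL,      -- e.g. 'Visibility on Google'
    check_name TEXT NOT NULL,    -- one of the 25 tests
    passed INTEGER NOT NULL,     -- 0 or 1
    details TEXT
);
"""

def create_schema(conn: sqlite3.Connection) -> list[str]:
    """Create the tables and return their names, for a quick sanity check."""
    conn.executescript(SCHEMA)
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
    ).fetchall()
    return [r[0] for r in rows]
```

Agreeing on something at this level of detail before any feature code is written is what the validation step amounts to.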

Step 3: Design

  • AI used: Claude Sonnet 4.5
  • Time: < 10min

Very simple step: I already had an audit tool (for Google Business Profile) integrated into my project. I wanted the AI to replicate the existing design exactly. I briefed the AI directly in my IDE.

Stunning result: The AI copied the right components and reproduced the interface identically.

Step 4: Initial Development

  • AI used: Claude Sonnet 4.5
  • Time: < 20min

The AI generated the crawler and all the tests in one go... or so I thought. In reality, a good half of the tests were empty shells or overly simplistic. That's mostly my fault, because I hadn't gone into detail in the specs. In any case, it would have taken me hours to do the same thing manually!
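
The core of such a crawler, extracting same-site links from a fetched page so they can be queued, can be sketched with the standard library alone. This is an illustrative sketch, not the generated code; the function names and filtering rules are assumptions.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Gathers every href found in anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.hrefs.append(href)

def internal_links(page_url: str, html: str) -> set[str]:
    """Resolve relative links and keep only those on the same host,
    so the crawl stays inside the audited site."""
    host = urlparse(page_url).netloc
    parser = LinkCollector()
    parser.feed(html)
    links = set()
    for href in parser.hrefs:
        absolute = urljoin(page_url, href)
        parsed = urlparse(absolute)
        if parsed.scheme in ("http", "https") and parsed.netloc == host:
            # Drop fragments so /page and /page#top are not crawled twice.
            links.add(absolute.split("#")[0])
    return links
```

A real crawler would fetch each discovered URL, run the audit checks on it, and repeat up to some page limit; the empty-shell problem showed up in the check functions, not in this crawling loop.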

Step 5: Verification, debug, and improvements

  • AIs used: Claude Sonnet 4.5 and GPT5
  • Time: 1.5 days

Finally, the bulk of the work is here: verifying what the AI did, adding missing cases, and explaining to it how to implement the more complicated tests.

I used GPT5 as a code reviewer (it tends to over-complicate, so I then ask Claude Sonnet 4.5 to implement a middle ground). I also had to handle everything the AI left aside (translations, error handling, etc.).

But I hardly coded at all: I only wrote prompts telling the AI what to change and what to add.

Conclusion

Using multiple AIs according to their strengths is a practice I rely on more and more. Having one AI's code reviewed by another produces more robust code.

The time AI saves when building a new feature is undeniable. It almost feels like magic!

The main problem: AI's lack of memory. Claude Sonnet 4.5 quickly forgets what it coded before; you have to show it the relevant code again. Perhaps this can be improved by having it document its actions as it goes?

I am open to your feedback and ideas to improve my process!

You can test my website SEO audit tool for local entrepreneurs for free and without registration.