A secret of reliable LLM-based code generation as the codebase grows
I unlocked the secret of reliably generating code with Claude Sonnet when your codebase grows beyond 100k tokens.
Gemini 2.5 Pro has the advantage of a 1M-token context and is quite smart, but it is a poor fit for writing code in an existing codebase: it tends to rewrite absolutely everything and to inject changes the user never asked for.
Claude 3.7 Sonnet, on the other hand, writes high-quality code and makes surgical, minimal updates, but its effective context is modest (about 100k tokens), and it becomes noticeably less capable as the codebase size approaches that limit.
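Before choosing a strategy, it helps to know roughly how large your codebase is in tokens compared with that ~100k limit. A minimal sketch below estimates this with the common rough heuristic of about 4 characters per token; the function name, the file-extension filter, and the heuristic itself are my assumptions for illustration, not an exact tokenizer count.

```python
import os

CONTEXT_LIMIT = 100_000  # approximate working context discussed in the text
CHARS_PER_TOKEN = 4      # rough heuristic; real tokenizers vary by language and code style

def estimate_codebase_tokens(root: str, exts=(".py", ".js", ".ts", ".go")) -> int:
    """Roughly estimate how many tokens a codebase would occupy in a model's context."""
    total_chars = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    continue  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN

# Example: warn when the codebase no longer fits comfortably in context
# tokens = estimate_codebase_tokens("src")
# if tokens > CONTEXT_LIMIT:
#     print(f"~{tokens} tokens: full-codebase prompting will likely degrade")
```

This is only a back-of-the-envelope check; for precise counts you would use the model provider's own tokenizer or token-counting endpoint.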
Subscribe to True Positive Weekly to keep reading this post and get 7 days of free access to the full post archives.