Discussion about this post

Pawel Jozefiak:

The comparison between Gemini Code Assist and Cursor really resonates with my own journey through AI dev tools. What strikes me most is your point about the diff feature in Gemini - that side-by-side review capability is underrated. When you're working with AI-generated code at scale, the ability to actually see what's changing before accepting it becomes critical, not just convenient.

I've been building an AI agent called Wiz using Claude Code, and one pattern I've noticed is that the "best" tool often depends heavily on the type of task. For quick autocomplete and contextual suggestions during active coding, tools like Cursor shine. But for larger refactoring or when you need the AI to understand broader system architecture, the context window size you mentioned becomes the limiting factor. That 1 million token window in Gemini is genuinely useful when you're working across multiple files that need to stay consistent.

Your observation about both tools producing buggy code is spot-on and something I wish more people talked about honestly. I've found that the real skill isn't picking the "right" AI coding tool - it's developing workflows that assume AI suggestions need verification. The tools that make code review frictionless end up winning, regardless of which underlying model they use.

I wrote about my own experience testing various AI dev tools, including Cursor, in a similar comparison piece: https://thoughts.jock.pl/p/cursor-vs-google-ai-studio-antigravity-ide-comparison-2025. I'm curious whether you've tried any multi-model workflows where you use different tools for different phases of development.
