I picked up Claude today after being away and using only ChatGPT and Gemini for a while.
I was pretty impressed with how they’ve improved user experience. If I had to guess, I’d say Anthropic has better product people who put more attention to detail in these areas.
Many people buy two separate Claude Pro subscriptions, and that makes the limits a non-issue. It works surprisingly well if you tend to hit the 5-hour limit after a few hours and the weekly limit after 4-5 days. $40 vs $100 is significant for a lot of people.
I hit the Pro limit in about 30 minutes, 1 hour max. And that's with a single session, not even using it extensively: it waits for my responses while I read and really understand what it wants to do. That's still just 1-2 hours out of the 5-hour window.
You're probably running long sessions, i.e. repeated back-and-forth in one conversation. Also check whether you're polluting the context with unneeded info. That can be a problem with large and/or poorly structured codebases.
The last time I used Pro, it was a brand new Python REST service, about 2000 lines, all generated during that session. So how do I tell Claude to use less context when there was zero context at the beginning, just my prompt?
So you generated 2000 lines in 30 minutes and ran out of tokens? What was your prompt?
I'd use a fast model, like Gemini's fast tier, to create a minimal scaffold.
I'd create strict specs using a separate Codex or Claude subscription so I keep a generous remaining coding window, then start implementation plus some high-level tests, feature by feature. Running out in 60 minutes is harder if you actually validate the work; running out in two hours is also hard for me, since I take breaks. With two subs you should be fine for a solid workday of well-designed, reviewed code. If you use CodeRabbit or a separate review tool and feed the reviews back, that also doesn't burn tokens so fast, unless you run fully autonomous.
Thanks for the tip, didn’t think of using 2 subscriptions at the same company.
When I reach a limit, I switch to GLM 4.7 via the GLM Coding Lite subscription offered at the end of 2025 ($28/year). I also use it for compaction and the like to save tokens.
I'm using it via Copilot, and now considering also trying Open Code (with the Copilot license). I don't know if it's as good as Claude Code, but it's pretty good. The $20/month business plan includes 100 Sonnet requests or 33 Opus requests, plus some less powerful models with no limits (e.g. GPT-4.1). An extra Sonnet request is $0.04 and an extra Opus request is $0.12, so another $20 buys roughly 250 Sonnet requests plus 83 Opus requests. This works better for me since I don't code all day, every single day. Also, a request is a request: a plain edit task and an agent request cost the same.
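To make the overage math above concrete, here's a minimal sketch, assuming the per-request prices quoted in that comment ($0.04 Sonnet, $0.12 Opus) and an even split of the top-up budget between the two models; check current Copilot pricing before relying on these numbers.

```python
# Overage prices as quoted in the comment above (assumed, may be outdated).
SONNET_OVERAGE = 0.04  # $ per extra Sonnet request
OPUS_OVERAGE = 0.12    # $ per extra Opus request

def extra_requests(budget: float, sonnet_share: float = 0.5) -> tuple[int, int]:
    """Split an overage budget between Sonnet and Opus requests.

    sonnet_share is the fraction of the budget spent on Sonnet;
    the remainder goes to Opus. Returns whole requests only.
    """
    sonnet = int(budget * sonnet_share / SONNET_OVERAGE)
    opus = int(budget * (1 - sonnet_share) / OPUS_OVERAGE)
    return sonnet, opus

# A $20 top-up split evenly buys 250 Sonnet + 83 Opus requests,
# matching the figures in the comment.
print(extra_requests(20.0))  # (250, 83)
```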
Btw, with the Business license I trust Microsoft / GitHub not to train on my data more than I would trust Anthropic.
I agree! I recently migrated from ChatGPT to Claude and it is just superior in every way. It doesn't blather on and then at the end ask me for clarification. It's succinct and clarifies vital information before providing a solution.
Oh interesting. I've never used voice input on either, so I can't comment, but it's understandable that you can't switch if doing so would disrupt your workflow.
I held off migrating from ChatGPT to Claude Code, being a laggard who lived in the Eclipse world. I didn't believe what I was told, that I wouldn't be writing code any more. Pushed into action by recent PR gaslighting from OpenAI, I jumped to Claude Code, and they were right: I barely venture into the IDE now and certainly don't need an integration.
I agree, but in general those chat apps have relatively bad user experiences for a multibillion-dollar B2C company. I used to have a lot of surprises and frustrations while using Claude Code / Desktop, and I still encounter issues, but it's the best among the major LLM services.
It's funny cause, you know, fixing all those little nitty gritty things should be practically automatic with their own offerings... have your agent put in a lot of instrumentation... have it chase down bugs or dead-end user-journeys... have it go make the changes to fix it...
I've seen these tools work for this kinda stuff sometimes... you'd think nobody would be better at it than the creators of the tools.