Hacker News | piyh's comments

Already paying for Google Photos storage, AI Pro for an extra $7 is a steal with Antigravity.

Good luck staying within the limits, though: I have been burning through my baseline quota insanely fast, within a few prompts, a marked change from a few weeks ago.

There are a few complaints online about the same thing happening to multiple users.

Otherwise Antigravity has been great.


I use the free chat AIs all the time: Claude, ChatGPT, Gemini, Grok, Mistral.

In the last month they have all clamped down quite heavily. I used to be able to deep-dive into a subject, or fix a small Python project, multiple times per day on the free web UIs.

Claude, this morning, modified a small Python project for me and that single act exhausted all my free usage for the day. In the past I could do multiple projects per day without issue.

Same with ChatGPT. Gemini at least doesn't go full-on "You can use this again at 11:00 AM", but it does fall back to a model that works very poorly.

Grok and Mistral I don't really use that much, but Grok's coding isn't that bad. The problem is that it is not such a good application for deep-diving a topic, because it will perform a web search before answering anything, which makes it slow.

Mistral tends to run out of steam very quickly in a conversation. Never tried code on it though.


I use a quota monitor and grind out code on Gemini 3 Flash. I only go to Sonnet or Pro if there are issues Flash can't deal with, or if I have a critical architecture I need nailed on the first try.

I still review every line generated.

Gemini 3.1 Pro on the web interface still works if my problems are scoped to a single module or two and my better-model quotas are exhausted in the IDE.

For $7 over what I was already paying for storage, primarily using Flash is still a good development experience for me.
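The escalation policy described above can be sketched as a small routing function. This is a hypothetical illustration of the workflow, not any real API; the model names and the quota bookkeeping are assumptions:

```python
# Hypothetical sketch of the escalation policy above: stay on the cheap
# "flash" tier by default, escalate only on repeated failures or for
# critical work. Model names here are illustrative placeholders.
def pick_model(remaining: dict, flash_failures: int, critical: bool) -> str:
    """remaining maps a model name to queries left in the current quota window."""
    if critical and remaining.get("pro", 0) > 0:
        return "pro"        # architecture that must be nailed on the first try
    if flash_failures >= 2 and remaining.get("sonnet", 0) > 0:
        return "sonnet"     # flash couldn't deal with it
    if remaining.get("flash", 0) > 0:
        return "flash"      # default workhorse; still review every line
    return "pro-web"        # IDE quotas exhausted: fall back to the web UI
```

The point of the sketch is just that the cheap tier absorbs almost all traffic, so the scarce "pro" quota is spent only where it changes the outcome.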


That's only good for the web-based UI. If you want Gemini API access, which is what this article is about, then you must go the AI Studio route, and pricing is usage-based. It does have a free tier, and new signups can get $300 in free credits for the paid tier, so I think it's still a good deal, just not as good as using the subscriptions would be.

No? Isn't the article about Codex, which is roughly equivalent to Gemini CLI and Google's Antigravity? Google's subscriptions include quotas for both of those, though the $20 monthly "Pro" plan has had its "Pro" model quota slashed in the last few weeks. You still get a large number of Gemini 3 Flash queries, which has been good enough for the projects I've toyed with in Antigravity.

I guess that's true, but I find Google's models better than their public tooling. The Pro subscription includes "Gemini Code Assist and Gemini CLI", but the Gemini Code Assist plugin for IntelliJ, which is my daily driver, is broken most of the time, to the degree that it's completely unusable. Sometimes you can't even type in the input box.

The only way I can do serious development with Gemini models is with other tooling (Cline, etc.) that requires API-based access, which isn't available as part of the subscription.


I agree. Gemini models are held back by their segmentation of usage between multiple products, combined with their awful harnesses and tooling. Gemini CLI, Antigravity, Gemini Code Assist, Jules... the list goes on. Each of these products has only a small quota, and they all must share usage.

It gets worse than that, though. Most harnesses built to handle Codex and Claude cannot handle Gemini 3.1 correctly. Google has trained Gemini 3.1 to return different JSON keys than most harnesses expect, resulting in awful output and outright failures. (Based on my perusing multiple harness GitHub issues after Gemini 3.1 came out.)
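A mismatch like the one described above can be smoothed over with an alias table on the harness side. A minimal sketch; the key names here (`tool_name`, `args`, etc.) are hypothetical stand-ins, since the actual keys each model family emits vary:

```python
# Sketch: map whatever tool-call keys a model emitted onto the single
# schema the harness expects. Alias lists are illustrative assumptions.
import json

KEY_ALIASES = {
    "name": ["name", "tool_name", "function"],
    "arguments": ["arguments", "args", "parameters"],
}

def normalize_tool_call(raw: str) -> dict:
    """Return a tool call with canonical keys, or raise if a field is absent."""
    payload = json.loads(raw)
    normalized = {}
    for canonical, aliases in KEY_ALIASES.items():
        for alias in aliases:
            if alias in payload:
                normalized[canonical] = payload[alias]
                break
        else:
            raise ValueError(f"missing required field: {canonical}")
    return normalized
```

In practice this is what the GitHub issues tend to converge on: a per-model adapter layer, rather than assuming every model speaks the same tool-call dialect.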


Google is by far the best deal for AI; they give you so many "buckets" of usage across a variety of products, and they seem to keep adding them.

If you aggressively use all the buckets, Google is incredibly generous. In theory, one AI Pro subscription on a family plan can yield a ridiculous return on investment.

You could probably be costing Google literally thousands if all six members were spamming video generation, image generation, and Antigravity.


The family sharing is the real hack lol. I don't think any other provider does that.

I bought one of the Google AI packages that came with a pile of Drive storage and Gemini access.

Unfortunately Gemini as a coding agent is a steaming useless pile. They have no right selling it; cheap open-weight Chinese models are better at this point.

It's not stupid; it's just incompetent at tool use and makes bad mistakes. It constantly gets itself into weird dysfunctional loops when doing basic things like editing files.

I'm not sure what GOOG employees are using internally, but I hope they're not being saddled with Gemini 3.1. It's miles behind.


Are you using Gemini CLI or Antigravity? The former is not really comparable to the latter in terms of quality. I wouldn't say Antigravity is as good as the competition, but it's pretty close. "Miles behind" is overstating it.

Gemini CLI, but I've also used the Gemini models via opencode. They're terrible at CLI tool use. Like I said, even just editing text files they fall over rapidly, constantly making mistakes and then more mistakes fixing their mistakes.

Antigravity wants me to switch IDEs, and I'm not going to do that.


Gemini 3.1 is a good coding agent. We've been totally spoiled now. Also, if you use Antigravity you can burn through the Opus 4.6 credits on your Goog account first, before you have to switch to Gemini 3.1.

No idea, I googled "cache busting in vite" and it was by far the most comprehensive result.

I am not too surprised. I get good answers about Vite from Google's AI mode, though Microsoft's Copilot tends to do especially poorly on Vite: an answer that should be "use vite-ignore" becomes a 10-line Vite plugin inlined into vite.config.js that doesn't work.

1/3 the RAM & CPU consumed for 99% the performance


That post fails to mention that Capital One's move from IBM mainframes to AWS was one of the reasons they suffered one of the largest data breaches in history.

And what was the financial cost of this?

At least $270,000,000 in direct costs [0].

[0] https://www.security.org/identity-theft/breach/capital-one/


Crazy that a dude from Iowa and his ragtag group of rocket watchers does a better job with launch coverage than NASA. I can't believe they cut away during booster separation. Absolute shit show.

maybe they should turn back and do it again

This isn't the last run for this rocket, is it? We'll do it again.

And when we do it again, maybe we should pay the dude from Iowa (who has made a career out of things like streaming rocket launches on video) to provide his team's shots and editing for the official live feed when launch time comes up.


Remember to post the link in HN next launch:

something like: > It's better to watch the livestream from DudeFromIowa, which usually has better coverage than NASA's: http://www.youtube.com/whatever


We've already seen what happens when you allow social media types to infect the government.

Let's not foster any more of it.


I wouldn't mind if they were actually competent in what they do.

> Crazy that a dude from Iowa and his ragtag group of rocket watchers does a better job with launch coverage than NASA.

You may not have noticed, but NASA was also launching an actual rocket at the time. Conducting a livestream and conducting a livestream while launching a rocket to the other side of the moon are hardly equivalent.

> Absolute shit show.

You have a remarkably low threshold for "shit show."


So an organization as large as NASA can either walk, or chew gum -- but cannot do both at the same time?

Did they also shut down the bathrooms? You know, to focus the mind?

That is the worst possible take. The people launching the rocket and the people filming the launch are not actually the same people, nor do they draw on the same resources.

> You have a remarkably low threshold for "shit show."

I wish more people did. We certainly have an excess supply of shit shows these days.


> That is the worst possible take.

Really? You lack imagination.


Eh, separation of concerns. Given NASA's PR budget, it seems reasonable that they should be able to produce quality launch coverage.

The many people involved in safely launching a rocket are not responsible for providing launch coverage, and the people who provide launch coverage are not allowed to interfere with the many people involved in safely launching a rocket. If they're going to do a bad job at one of those jobs I'd much rather they do a bad job at providing launch coverage, but the two are not mutually exclusive.



Certificate readiness across the force has been dropping as procurement and testing costs have soared with inflation. It's now estimated that only 50% of .mil websites are ready for a conflict in the South China Sea.


I just migrated my personal website to NixOS and can second all of this. There's a learning curve, but the time to provision a new server once it's all working is hilariously short.



I use Debian + Ansible, and it requires discipline (you have to make sure you never do manual steps basically), but my entire Ansible playbook makes server creation a 3-minute process.

I'm sure Nix is better, I just haven't needed it yet.


> it requires discipline (you have to make sure you never do manual steps basically)

Since Nix requires a declarative configuration, you need less discipline but more up-front specification. For example, making truly idempotent Ansible scripts requires a lot of effort and some strong assumptions about your starting state, what processes piped changes into that state, and what your state changes really mean. Also, running your playbook against a newer version of the same software may lead to a different result. For example, when migrating from bullseye to bookworm with a cargo-deb package that declared dependencies, it turned out that there were implied dependencies taken for granted in bullseye that were removed in bookworm. With Nix this will lead to a build error rather than a deployment error or a runtime error (in most cases).

Nix requires fewer assumptions.
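The idempotency discipline mentioned above can be seen in miniature: an idempotent step converges to the same state no matter how many times it runs, which is the property well-written Ansible tasks aim for. A toy illustration in Python (file path and contents are made up):

```python
# Contrast between a naive mutation and an idempotent one. Re-running the
# naive version drifts the state; re-running the idempotent version converges.
from pathlib import Path

def append_line(path: Path, line: str) -> None:
    """Naive: running this twice duplicates the line."""
    with path.open("a") as f:
        f.write(line + "\n")

def ensure_line(path: Path, line: str) -> None:
    """Idempotent: only appends if the line is not already present."""
    existing = path.read_text().splitlines() if path.exists() else []
    if line not in existing:
        with path.open("a") as f:
            f.write(line + "\n")
```

This is essentially what Ansible's `lineinfile` module does: describe the desired end state ("this line exists") rather than the action ("append this line"), so replaying the playbook is safe.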

> my entire ansible playbook makes server creation a 3 min process

I'm a big fan of Ansible, and everything has its use.

I like to categorize deployment tools as either "bottom-up" or "top-down" depending on what assumptions you make about the world. Ansible fills the bottom-up slot, where you have no control over how the server got there, but you've got to make use of what you have rather than starting from scratch. Terraform is the canonical top-down tool: you assume you have perfect control of what gets provisioned, and that it won't go away or go out of sync without active maintenance.

In this top-down/bottom-up typology, Nix can fill the whole spectrum: most people assume Nix/NixOS is available to them, at which point their automation starts. Others deploy NixOS via various automated processes that can be integrated with both top-down and bottom-up solutions, e.g. distributing via network boot, a VM image repository, or a "hostile takeover" (deploying onto existing Linux machines via SSH, like Ansible, or even using Ansible).


I'm turning off my brain and using Neo4j


proof that Neo4j won the popularity contest!


Neo4j is pretty nice.


Automated theorem provers running on a $5k piece of hardware are a cool version of the future
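For a flavor of what such a box would spend its cycles checking, here are two tiny machine-verified statements in Lean 4. (This is an illustrative fragment only; an automated prover would search for such proofs itself rather than have them written out.)

```lean
-- Definitional equality: the kernel verifies this by computation.
example : 2 + 2 = 4 := rfl

-- Applying a library lemma: commutativity of addition on naturals.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

The appeal of running this locally is that proof checking is pure computation: more hardware means larger proof searches, with the kernel still guaranteeing every accepted step.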

