Hacker News | AmazingTurtle's comments

6 months ago I already posted about this

https://news.ycombinator.com/item?id=45349476


If you hadn’t written that post using AI, it might’ve received more attention. Also, (1) if you’d put LinkedIn in the title, rather than the very bottom of the post, and (2) if you’d provided any insight, rather than just speculation, as to what the data might be being used for.

I have written something about LinkedIn, though not about browser fingerprinting; it was more about an extremely bad experience with LinkedIn.

Not sure if this counts, but timing-wise my post was actually sandwiched between two large LinkedIn posts (the "2 tabs = 8 GB" one and now this) [0]

I always write things myself, even if they might take hours.

But I also believe my post overlapped with bigger AI news (OpenAI getting funded, Claude being leaked). I have seen some cool projects on Hacker News lately that aren't getting attention, because all of it gets redirected to AI-related news.

[0]: To be honest, I write things for myself first and foremost, and I just upload them here for discussion; I am perfectly fine with my posts not gaining traction :). In the LinkedIn incident's case, I just wrote things to get it off my chest, really.


Thank you for that post; it describes the invasion of privacy at a deeper level. I must have missed it, but Y Combinator is filled with people with a vested interest in keeping the clown show going.

Bet their internal "tips team" used an LLM to generate "useful tips" for their coding agent system ;)

Yup, broken windows all the way down, to put it kindly

> like using PHP

lmao, chuckled


I just tried that in Codex CLI. With /fast mode enabled. Observations:

1. Fast mode ain't that fast

2. Large context * Fast * Higher Model Base Price = 8x increase over gpt-5.3-codex

3. I burnt 33% of my 5h limit (ChatGPT Business Subscription) with a prompt that took 2 minutes to complete.


> 8x increase over gpt-5.3-codex

How do you arrive at that number? I find it hard to make sense of this ad hoc, given that the total token cost is not very interesting; it's token efficiency we care about.


> prompts with >272K input tokens are priced at 2x input and 1.5x output for the full session for standard, batch, and flex.

which is basically maxed out quickly. So that's 2x (the first lever).

Then there is the /fast mode, which they state costs 2x more (for 1.5x speedup)

And then there is the model base price ($2.50 vs $1.75), which is about a 43% increase. It is in fact a 5.7x total increase in token cost with fast mode and large context. (Sorry for the confusion; I thought it was 8x because I thought gpt-5.3-codex was $1.25.)
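The multiplier above is just the three levers multiplied together. A quick sketch, using only the prices quoted in this thread (not verified against current API pricing):

```python
# Cost multiplier from the three levers quoted in the thread.
# All dollar figures are as stated in the comments, not confirmed pricing.

long_context_factor = 2.0   # >272K input tokens: 2x input price
fast_mode_factor = 2.0      # /fast mode: stated as 2x cost for ~1.5x speed
base_price_fast = 2.50      # $/1M input tokens quoted for the fast model
base_price_codex = 1.75     # $/1M input tokens quoted for gpt-5.3-codex

multiplier = long_context_factor * fast_mode_factor * (base_price_fast / base_price_codex)
print(f"total multiplier: {multiplier:.1f}x")  # ~5.7x

# The earlier 8x figure came from mistakenly using $1.25 as the base price:
mistaken = long_context_factor * fast_mode_factor * (base_price_fast / 1.25)
print(f"with $1.25 base: {mistaken:.1f}x")  # 8.0x
```

So 2 × 2 × (2.50 / 1.75) ≈ 5.7, and the 8x figure only falls out if the base is assumed to be $1.25.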


(After a day of usage, I am relatively certain that in practice this does not end up being a 5.7x cost increase, or anything close to it. I am also still fairly unclear on what that computation is worth to begin with, given that I am entirely fine with the model using the least amount of tokens possible to get the job done.)


1. It's 1.5x, and it's quite fast for the level of thinking it has.

2. No, if you are on a subscription it's the same. At $20, Codex 5.4 xhigh provides way more than $20 Opus thinking (that one really can burn 33% with one request; try comparing them on the same tasks). Also, 8x? If you need 1M tokens for a special task, don't hit /fast, and vice versa. The higher price doesn't apply on subscription either.

3. False. I'm on Pro, so 10x the base, always on /fast (no 1M context), and often with 2 parallel instances working. I can hardly use 2% (= 20% of the 5h limit in 1h of work, at about 15–20 requests/hour). Claude is way worse at that, imo.


20 req/hour is 1 req every 3 min; you have to think a bit and then write the requests.


Doubling the speed can likely come from MoE optimizations such as reducing the number of active parameters.


Models can't improve themselves with their own (model) input, they need to be grounded in truth and reality.


But at some point the model is sufficiently capable to accomplish any task a human could specify. For software development, I think we're pretty much at that point with the latest Anthropic/Google/OpenAI models. We have no idea where token pricing is headed, but the consensus seems to be that it will only get more expensive. If Taalas can offer the same functionality we have with frontier models today at 1/10 of the cost and 10x the speed, then they're going to take over a large part of the market.


At this point, the pelican benchmark has become so widely used that there must be high-quality pelicans in the training data, I presume. What about generating an okapi on a bicycle instead?



Or, even more challenging, an okapi on a recumbent?!


"You'll own nothing. And you'll be happy"


I set up Windows 11 on a laptop for my dad so he could read emails and browse the web. I came back 3 months later when he told me he couldn't open PDF files anymore. It turned out he had installed THREE different PDF viewers he randomly found on Google; they installed tons of bloatware/spyware, replaced browser toolbars and search settings, etc., to the point where I decided to just restore from a recovery point. I told him (again) not to download weird stuff and to ask me when he needs help.

At that point I questioned myself: I really should have installed Linux for him.


> replaced browser toolbars

This is still a thing? Browsers still have toolbars???

My go-to for family is giving them no install rights and adding a remote desktop app so I can connect when they need something installed.

I don't get called very often anymore, and when I do, it's for their work computer or something, to which I say, talk to your IT department, I can't fix that.


ChromeOS is a really great option for "just want to read emails and browse the web".


Oh yeah, at least with ChromeOS, Chrome isn't installing itself like spyware alongside every other software installer.


Browsers today can view and do limited editing of PDFs, so there's no need for a dedicated reader. You do need a dedicated authoring tool to create PDFs from scratch. Most OSes also support print-to-PDF if you only need conversion.


Most of the time it's not about the money VCs put in but the credibility it brings. An idea looks a lot more mature when it's backed by a group of wealthy people.

