Hacker News | kristopolous's comments

I write far better than any LLM ... I've tried to get them to help me with writing, and they always fuck it up.

The biggest problem is they don't understand the time/effort tradeoff between understanding and language, so they don't know how to pack information density properly, or how to swim through choppy relationships with the world around them while still communicating effectively.

But who knows, maybe they're more effective and I'm just an idiot.


I took a related approach:

A toll-charging gateway for LLM scrapers: a modification to robots.txt that adds price sheets in comment fields, like a menu.

I built it for a hackathon by forking certbot. Cloudflare has an enterprise version of this, but this one would be self-hosted.

I think it has legs, but I need to get pushed and goaded, otherwise I tend to lose interest ...

The hackathon was run by the USDC company, btw, so that's why there's a crypto angle - this might be a valid use case!

I'm open to crypto not all being hustles and scams

Tell me what you think.

https://github.com/kristopolous/tollbot
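To make the "price sheet in the comment field" idea concrete, here's a minimal sketch of a scraper-side parser. The `# price:` comment format below is invented for illustration - it is not necessarily tollbot's actual format.

```python
# Hypothetical sketch: pull a "price sheet" out of robots.txt comments.
# The assumed comment grammar is "# price: <path> <amount> <currency>",
# which is illustrative only, not tollbot's real wire format.
def parse_price_sheet(robots_txt: str) -> dict:
    prices = {}
    for line in robots_txt.splitlines():
        line = line.strip()
        if line.startswith("# price:"):
            _, spec = line.split(":", 1)
            path, amount, currency = spec.split()
            prices[path] = (float(amount), currency)
    return prices

example = """\
User-agent: GPTBot
Disallow: /articles/
# price: /articles/ 0.25 USDC
# price: /archive/ 0.05 USDC
"""
print(parse_price_sheet(example))
```

The appeal of piggybacking on robots.txt is that scrapers already fetch it, and plain parsers that ignore comments keep working unchanged.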


This is literally what HTTP 402 is for -- there's a whole buncha work going on ... but please, please, please don't let Cloudflare become another bloody gatekeeper. Please.

Always thought people should be organizing cross industry unions and planning strikes on the platform.

Why not?


Most of the heavies in my industry don't even bother with LinkedIn. They get plenty of applications on their career pages already, I guess. Only really startups (which aren't really hiring at all) and the occasional blast from a middleweight company. There are more jobs for AI trainer than real jobs on LinkedIn right now.

I like how the author's "modern" machine to connect to it is still 20 years old.

With a concave trackpoint, respect.

BTW, I nag Framework at every conference I go to that people want this shell and keyboard. It's been years. I think it's time to go through the effort to figure out how to do the production run of the case myself. Framework actually wants people to do things like this but you know, manufacturing is hard. Anyone wanna help?


This is what needs to come back, with modern hardware and a modern interconnect:

https://en.wikipedia.org/wiki/Xserve


He was still accepting shareware payment for it on his website, which I think is amazing... https://xv.trilon.com/

I'm with you on all points except for it being bought.

Programming has long succumbed to influencer dynamics and is subject to the same critiques as any other kind of pop creation. Popular restaurants, fashion, movies - these aren't carefully crafted boundary pushing masterpieces.

Pop books are hastily written and usually derivative. Pop music is the same, as is pop art. Popular podcasts and YouTube channels are usually just people hopping unprepared on a hot mic and pushing record.

Nobody is reading a PhD thesis or a scholarly journal on the bus.

The markers for the popularity of pop works are fairly independent from the quality of their content. It's the same dynamics as the popular kid at school.

So pop programming follows this exact trend. I don't know why we expect humans to behave foundationally differently here.


> Nobody is reading a PhD thesis or a scholarly journal on the bus.

As someone who is involved in academia, I can attest that most of my colleagues (including myself) do in fact read quite a few papers on buses (and trams - can't forget those).


> I'm with you on all points except for it being bought.

Stars get bought all the time. I've been around the startup scene, and this is basically part of the playbook now for the open-core model. You throw your code up on GitHub, call it open source, then buy your stars early so it looks like people care. Then charge for hosted or premium features.

There's a whole market for it too. You can literally pay for stars, forks, even fake activity. Big star count makes a project look legit at a glance, especially to investors or people who don't dig too deep. It feeds itself. More people check it out, more people star it just because others already did.


Fully aware of the DGX Spark, I've actually been looking into AMD Ryzen AI Max+ 395/392 machines. There are some interesting things here, like https://www.bee-link.com/products/beelink-gtr9-pro-amd-ryzen... and https://www.amazon.com/GMKtec-5-1GHz-LPDDR5X-8000MHz-Display... ... haven't pulled the trigger yet, but apparently inferencing on these chips is not trash.

Machines with the 4xx chips are coming next month so maybe wait a week or two.

It's soldered LPDDR5X with AMD Strix Halo ... SGLang and llama.cpp can do that pretty well these days. And it's, you know, half the price, and you're not locked into the Nvidia ecosystem.


Unfortunately the bigger models are pretty slow in token speed. The memory is just not that fast.

You can check what each model does on AMD Strix halo here:

https://kyuz0.github.io/amd-strix-halo-toolboxes/
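The "memory is just not that fast" point can be put in numbers with the standard back-of-envelope: single-stream decode speed is bounded by memory bandwidth divided by bytes read per token. Both figures below are assumptions for illustration, not measurements of any specific machine:

```python
# Back-of-envelope: decode speed ceiling ≈ memory bandwidth / bytes read
# per token (≈ model size for a dense model). Both numbers are assumed
# round figures for illustration, not benchmarks.
bandwidth_gb_s = 256.0   # assumed: 256-bit LPDDR5X-8000-class bandwidth
model_size_gb = 40.0     # assumed: a ~70B dense model at ~4-bit quantization
tokens_per_sec = bandwidth_gb_s / model_size_gb
print(f"~{tokens_per_sec:.1f} tok/s theoretical ceiling")
```

Under those assumptions you get a single-digit tokens-per-second ceiling for the big dense models, which matches the "pretty slow" experience; smaller or MoE models read fewer bytes per token and fare much better.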


4xx chips are less capable than the 395

I've thought about making things that just send garbage to any data-collecting service.

You'd be surprised how useless datasets become with like 10% garbage data when you don't know which data is garbage.
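A toy illustration of that claim: mix 10% junk rows into a dataset and even a simple statistic like the mean gets wrecked, with no per-row marker of which entries did it. Entirely synthetic numbers:

```python
# Toy demo: 10% unlabeled garbage badly skews a simple statistic (the
# mean), and once shuffled in, the junk rows are indistinguishable.
# Synthetic data, purely illustrative.
import random

random.seed(0)
clean = [random.gauss(100.0, 5.0) for _ in range(900)]
garbage = [random.uniform(0.0, 10_000.0) for _ in range(100)]  # 10% junk
mixed = clean + garbage
random.shuffle(mixed)  # after this, garbage rows carry no marker

clean_mean = sum(clean) / len(clean)
mixed_mean = sum(mixed) / len(mixed)
print(round(clean_mean, 1), round(mixed_mean, 1))
```

Robust statistics (medians, trimmed means) resist this particular attack, but a model trained on the raw rows has no such defense unless it can identify the poison.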


Gemini CLI is clearly a fork of it, btw.

No, because Gemini CLI is slow and barely functioning.

It is clearly not. Why would you think so?

The UX feels extremely similar, down to the elicitation ... but I did some more research ... they were started independently in April 2025. Therefore, one being a fork of the other is almost impossible, and there is no evidence for it. Also, opencode is in Go and Gemini CLI is in TypeScript.

Sadly my above misinformation can no longer be edited.

