Hacker News | kuboble's comments

I don't know.

Culturally, from a young age, we're told not to trust our guts, and a lot of people shut them off.

"Don't judge a book by the cover", "you don't even know him". We're told to ignore our gut feeling especially if that feeling is consistent with negative stereotypes.


The middle path is best:

Your gut is not a liar. Give it real credence.

Your gut is not psychic. Don't rely on it solely. Skilled con artists know how to trigger "trust reactions".


One thing could be that there is an extra management cost for each person to manage.

It's much easier to manage 3 people with better tools than to manage 9 people, even if their output would be the same.


? The whole idea of a coding assistant is to send all your interactions with the program to the LLM.

To the provider you select in the UI, I agree. But OpenCode automatically sends prompts to their free "Zen" proxy, even without choosing it in the UI.

Imagine someone using it at work, where they are only allowed to use a GitHub Copilot Business subscription (which is supported in OpenCode). Now they have sent proprietary code to a third party, and don't even know they're doing it.


This is exactly me right now, wondering what I might have leaked to god knows who via Grok. I was hyped about OpenCode, but now I'm thinking of alternatives. A huge red flag… at best irresponsible?

Stack Overflow (and the internet in general) changed programming as we (at least some of us) knew it.

When I was learning programming I had no internet, no books outside the library, and nobody to ask for days.

I remember vividly having spent days trying to figure out how to use the stdlib qsort, and not being able to.


Hmm - I'm not sure I'd say that 'changed programming' - but the internet in general changed 'learning to program'. I can remember when I first discovered gopher and found I could read tons of recent material for free, or finding stonybrook on the web - that was like a gold mine of algorithms! :-D


I'm not from that generation so that's a bit hard for me to understand. Even if you used a closed-source C compiler, wouldn't you still have been able to look at the header file, which would have been pretty self-explanatory?

E.g.

    void qsort(void *base, size_t nmemb, size_t size, int (*compar)(const void *, const void *));

And surely if you bought a C compiler, you would have gotten a manual or two with it? Documentation from the pre-Internet age tended to be much better than it is today.


Yeah - but you have to be a good enough programmer to really understand the headers.. the 'bootstrapping' problem was real :-) Especially if you didn't live in a metropolitan/college area. My local library was really short on programming books - especially anything 'in depth'. Also, 'C' was considered a "professional's language" back then - so bookstores/libraries were more likely to have books on BASIC than 'C'.


I don't remember where we got the compilers from but we surely didn't buy them.

Also, I don't know if it came with a manual, but my English wasn't good enough to read it anyway.


What kind of modern C wizardry is that? qsort was:

    qsort(base, nel, width, compar)
    char *base;
    int (*compar)();


Yes, exactly.

I strongly prefer directness in technical communication at work.

But the way the article author phrases his preferences as absolute truth rubs me the wrong way.

Also, if I worked with that person, then after reading the article I would perhaps have the opposite reaction to the one the author intended.

You still have to walk on eggshells to avoid offending him by including any bit of information he might consider not relevant enough.


I love this point as much as I hate it in practice. We all have different preferences, and it is more helpful to be clear about ours rather than declare them "correct". The way we expect these differences to be navigated can become oppressive.


Yes, you can get a project with Claude into a state of unrecoverable garbage. But with a little experience you learn what it's good at, and this happens less and less.


I wonder what the business model is.

It seems like a tool that solves a problem that won't last longer than a couple of months, and one that e.g. Claude Code can, and probably will, tackle itself soon.


Why would the problem ever go away? Compression technologies have existed virtually since the beginning of computing, and one could argue human brains do their own version of compression during sleep.


Your comment reminded me of this old simulacra paper (https://arxiv.org/pdf/2304.03442) :) iirc, they compressed the "memory roll" of the agents every once in a while


Claude Code's /compact still takes ages - and that is a relatively easy fix. Doing proactive compression the right way is much tougher. For now, they seem to be betting on subagents solving that, which is essentially summarization with Haiku. We don't think that's the way to go, because summarization is lossy + the additional generation steps add latency.


They are another AI avalanche skier (or tidal wave surfer). Potentially a $1bn company. Most likely they'll need to pivot after next week's Claude update.

The good thing is they can take what they learned into the pivot.

So many AI startups I see end up at "why do I need that anymore...".


Don't tools like Claude Code sometimes do something like this already? I've seen it start sub-agents for reading files that just return a summarized answer to a question the main agent asked.


There is a nice JetBrains paper showing that summarization "works" only about as well as observation masking: https://arxiv.org/pdf/2508.21433. In other words, summarization doesn't work well. On top of that, they summarize with the cheapest model (Haiku). Compression differs from summarization in that it doesn't alter the preserved pieces of context + it is conditioned on the tool-call intent.


The business model is: get acquired.


The "infinite context soon" concern comes up a lot — but even at 1M+ tokens, agents still hit limits on long enough tasks, and cost scales linearly with context size.

The compression models are the product, not the proxy. The gateway is open-source because it's the distribution layer. Anthropic, Codex, and others are iterating on this too — but each only for their own agent. We're fully agent-agnostic and solely focused on compression quality, which is itself a hard problem that needs dedicated iteration.

Try it out and let us know how to make it better!


Could also be selling data to model distillers.


We don't sell data to model distillers.


Anecdotally,

In public transport I see almost as many people playing games on their phones as those watching videos.


Not necessarily. In go you often count the score and come to the conclusion that by playing proper moves you will lose by a small margin.

So instead you launch a desperate maneuver in the hope of either turning the game around or losing by 30 points.


I see what you're saying; this is true for any game scored win/loss. Even in gridiron football, if you're down by 4 points with time almost out, you won't kick a field goal (worth 3 points).


There are also objective measures for finer-grained position evaluation.

For winning/drawn positions: "What is the smallest program that can guarantee your side a win/draw?", probably with some time constraint added.


I think program size is probably not a good measure, since any heuristic you could put in could be discovered at runtime by a metaheuristic that searches for good heuristics. Time and memory make more sense.


Measuring the size of a model that produces a win?

Theoretically valid, but that's not going to be very useful/viable.


No, but in practice the centipawns reported by an imperfect engine are good.

But I want to point out that in theory there is also something more than pure win/lose/draw with perfect play.


That is a neat variation.

