
>And you know what, Qwen3's entire forward pass is just 50 lines of very simple code (mostly vector-matrix multiplications).

The code being simple doesn't mean much when all the complexity is encoded in billions of learned weights. The forward pass is just the execution mechanism. Conflating its brevity with simplicity of the underlying computation is a basic misunderstanding of what a forward pass actually is. What you've just said is the equivalent of saying blackbox.py is simple because 'python blackbox.py' only took 1 line. It's just silly reasoning.

>After the pass, you need to choose the sampling strategy: how to choose the next token from the list. And this is where you can easily make the whole model much dumber, more creative, more robotic, make it collapse entirely by just choosing different decoding strategies. So a large part of a model's perceived performance/feel is not even in the neurons, but in some hardcoded manually-written function.

So? I can pick the least likely token every time. The result would be garbage, but that doesn't say anything about the model. The popular strategy is to randomly pick from the top n choices. What do you think is keeping thousands of tokens coherent and on point even with this strategy? Why don't you try sampling without a large language model to back it and see how well that goes for you?

>Then I also performed "surgery" on this model by removing/corrupting layers and seeing what happens. If you do this exercise, you can see that it's not intelligence. It's just a text transformation algorithm. Something like "semantic template matcher". It generates output by finding, matching and combining several prelearned semantic templates. A slight perturbation in one neuron can break the "finding part" and it collapses entirely: it can't find the correct template to match and the whole illusion of intelligence breaks. Its corrupted output is what you expect from corrupting a pure text manipulation algorithm, not a truly intelligent system.

What do you think happens when you remove or corrupt arbitrary regions of the human brain? People can lose language, vision, memory, or reasoning, sometimes catastrophically.


>The code being simple doesn't mean much when all the complexity is encoded in billions of learned weights. The forward pass is just the execution mechanism. Conflating its brevity with simplicity of the underlying computation is a basic misunderstanding of what a forward pass actually is. What you've just said is the equivalent of saying blackbox.py is simple because 'python blackbox.py' only took 1 line. It's just silly reasoning.

Look at what a transformer actually does. Attention is a straightforward dictionary look up in like 3 matmuls. A FFN is a simple space transform rule with a non-linear cutoff to adjust the signal (i.e. a few more matmuls and an activation function) before doing a new dictionary lookup in the next transformer block. Add a few tricks like residual connections, output projections, and repeat N times.

So yeah, the actual inference code is 50 lines of code, and the rest is large learned dictionaries to search in, with some transforms. So you're saying my one-liner program that consults a DB with 1 million rows is actually 1 million lines of code? Well, not quite.
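The "dictionary lookup in a few matmuls" description can be made concrete. Here is a minimal sketch of a single causal attention head in NumPy, with toy dimensions and random weights, purely for illustration:

```python
import numpy as np

def attention(x, Wq, Wk, Wv):
    """One attention head: three matmuls plus a softmax.
    x: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_head)."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv          # project into query/key/value space
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot-product similarity
    # causal mask: each position may only attend to itself and earlier positions
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                        # weighted mixture of value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                   # 4 tokens, toy dimensions
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = attention(x, Wq, Wk, Wv)
print(out.shape)                              # (4, 8)
```

The code really is this short; the argument on both sides is about whether the learned weight matrices, not the code, carry the interesting behavior.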

This trick, coupled with lots of prelearned templates, is enough to fool people into believing there's "there" there (the OP's post above). Just like ELIZA back in the day. Apparently this trick is enough to solve lots of problems, because lots of problems only require search in a known problem (template) space (also with reduced dimensionality). But it's still just a fancy search algorithm.

I think the whole thing about "emergent behavior" is that when a human is confronted with a huge prelearned concept space, it's so large they cannot digest what is actually happening, and they tend to ascribe magical properties to it like "intelligence" or "consciousness". For example, imagine a huge precreated IF..THEN table covering every possible question/answer pair a finite human might ask in their lifetime. It would appear to the human that there's intelligence, that there's "there" there. But at the end of the day it would be just a static table with nothing really interesting happening inside of it. A transformer is just a nice trick that compresses this huge IF..THEN table into a few hundred gigabytes.

>So? I can pick the least likely token every time. The result would be garbage, but that doesn't say anything about the model. The popular strategy is to randomly pick from the top n choices. What do you think is keeping thousands of tokens coherent and on point even with this strategy? Why don't you try sampling without a large language model to back it and see how well that goes for you

I was referring to the OP post's:

  there is no "there" there
It doesn't even "know" what the actual text continuation must be, strictly speaking. It just returns a list of probabilities that we must select. It can't select it itself. To go from "list of probabilities" to "chatbot" requires adding additional hardcoded code (no AI involved) that greatly influences how the chatbot behaves, feels. Imagine if an actual sentient being had a button: you press it, and suddenly Steven the sailor becomes a Chinese lady who discusses Confucius. Or starts saying random gibberish. There's no independent agency whatsoever. It's all a bunch of clever tricks.

>What do you think happens when you remove or corrupt arbitrary regions of the human brain? People can lose language, vision, memory, or reasoning, sometimes catastrophically.

In an actual brain, the structure of the connectome itself drives a lot of behavior. In an LLM, all connections are static and predefined. A brain is much more resistant to failure; in an LLM, changing a single hypersensitive neuron can lead to full model collapse. There are humans who live normal lives with a full hemisphere removed.


I get irritated when people act like they know what they're talking about when it's just nonsense they keep spitting out. I'm honestly sick of it. There's a fair amount of LLM interpretability research out there. If you're actually interested in knowing better, go read it. I'll even link what I find interesting. All this talk of lookup tables is nonsensical. You have no idea what you're talking about.

>It doesn't even "know" what the actual text continuation must be, strictly speaking. It just returns a list of probabilities that we must select. It can't select it itself. To go from "list of probabilities" to "chatbot" requires adding additional hardcoded code (no AI involved) that greatly influences how the chatbot behaves, feels. Imagine if an actual sentient being had a button: you press it, and suddenly Steven the sailor becomes a Chinese lady who discusses Confucius. Or starts saying random gibberish. There's no independent agency whatsoever. It's all a bunch of clever tricks.

You are not making any sense here. Producing a probability distribution over next tokens is the model’s decision procedure. Sampling is just the readout rule for turning that distribution into a concrete sequence. Yes, decoding choices affect style, creativity, determinism, and failure modes. That is true. It does not follow that the model is therefore “just tricks” or that the intelligence-like behavior lives outside the network.
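To make "sampling is just the readout rule" concrete, here is a toy sketch of the two decoding strategies mentioned in this thread (greedy and top-n); this is illustrative code, not any particular library's API:

```python
import numpy as np

def sample_next(logits, strategy="top_k", k=40, temperature=1.0, rng=None):
    """Turn the model's output distribution into one concrete token id.
    The network's 'decision' is the logits; this function is only the readout."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float) / temperature
    if strategy == "greedy":
        return int(np.argmax(logits))          # always the most likely token
    if strategy == "top_k":
        top = np.argsort(logits)[-k:]          # keep the k most likely tokens
        p = np.exp(logits[top] - logits[top].max())
        p /= p.sum()                           # renormalized softmax over top-k
        return int(rng.choice(top, p=p))
    raise ValueError(f"unknown strategy: {strategy}")

logits = [2.0, 1.0, 0.5, -1.0]                 # toy vocabulary of 4 tokens
print(sample_next(logits, strategy="greedy"))  # 0: the argmax
```

Either way, the entire ranking of candidates comes from the network; the sampler only decides how much of that ranking to trust.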

>In an actual brain, the structure of the connectome itself drives a lot of behavior. In an LLM, all connections are static and predefined. A brain is much more resistant to failure. In an LLM changing a single hypersensitive neuron can lead to a full model collapse. There are humans who live normal lives with a full hemisphere removed.

You are moving goalposts. Fact is: randomly corrupting a system damages it. This is not a meaningful test of whether a system is "truly intelligent." Random lesions to human cortex are also catastrophic. The hemispherectomy cases you mention involve surgical removal of diseased tissue with significant neural reorganization over time, not random weight corruption. That's not even a fair comparison.

LLMs are also deeply redundant. If they weren't, techniques like quantization or layer pruning wouldn't work.
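The redundancy point is easy to demonstrate: symmetric int8 quantization discards most of the precision in every weight, yet the output of a matmul barely moves. A toy demonstration with a random stand-in weight matrix (not a real model):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(256, 256))    # stand-in for a weight matrix

# symmetric int8 quantization: one shared scale for the whole matrix
scale = np.abs(w).max() / 127.0
w_q = np.round(w / scale).astype(np.int8)      # 4x smaller than float32
w_deq = w_q.astype(np.float32) * scale         # dequantize for compute

x = rng.normal(size=256)
err = np.linalg.norm(x @ w - x @ w_deq) / np.linalg.norm(x @ w)
print(f"relative output error: {err:.4f}")     # small (on the order of 1%)
```

Real quantization schemes are per-channel or per-group and more careful than this, but the principle is the same: the computation survives coarse corruption of every single weight.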


>With modern training techniques, RNNs (not just linear SSMs, potentially even vanilla LSTMs) can scale just as well as transformers or even better when it comes to enormous context lengths.

That's not true. Modern training techniques aren't enough. Vanilla RNNs with modern training techniques still scale poorly. You have to make some pretty big architectural divergences (throwing away recurrence during training) to get an RNN to scale well. None of the big labs seem to be bothered with hybrid approaches.


> That's not true. Modern training techniques aren't enough. Vanilla RNNs with modern training techniques still scale poorly. You have to make some pretty big architectural divergences (throwing away recurrency during training) to get a RNN to scale well.

SSMs move the non-linearity outside of the recurrence which enables parallelisation during training. It is trivial to do this architectural change with an LSTM (see the xLSTM paper). Linear RNNs are still RNNs.

But you can still keep the non-linearity by training with parallel Newton methods, which work on vanilla LSTMs and scale to billions of parameters.

> None of the big labs seem to be bothered with hybrid approaches.

Does Alibaba not count? Qwen3.5 models are the top performers in terms of small models as far as my tests and online benchmarks go.


>SSMs move the non-linearity outside of the recurrence which enables parallelisation during training. It is trivial to do this architectural change with an LSTM (see the xLSTM paper). Linear RNNs are still RNNs.

Removing the non-linearity from the recurrence path is exactly what constitutes a "pretty big architectural divergence." A linear RNN is an RNN in a structural sense, certainly, but functionally it strips out the non-linear state transitions that made traditional LSTMs so expressive, entirely to enable associative scans. The inductive bias is fundamentally altered. Calling that simply 'modern training techniques' is disingenuous at best.
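The divergence at issue here is concrete: a linear recurrence h_t = a_t * h_(t-1) + b_t composes associatively, so it can be evaluated with a parallel scan, while a non-linearity like tanh inside the recurrence destroys exactly that property. A toy sketch of the algebra (scan written sequentially for clarity; associativity is what lets real implementations evaluate it as a parallel reduction tree):

```python
import numpy as np

def sequential(a, b):
    """h_t = a_t * h_{t-1} + b_t, computed step by step (inherently serial)."""
    h, out = 0.0, []
    for at, bt in zip(a, b):
        h = at * h + bt
        out.append(h)
    return np.array(out)

def combine(l, r):
    """Associative composition of two affine maps h -> a*h + b.
    Applying l then r gives h -> (ra*la)*h + (ra*lb + rb).
    This composition only exists because the recurrence is linear;
    a tanh between steps would make it impossible."""
    (la, lb), (ra, rb) = l, r
    return (ra * la, ra * lb + rb)

def scan(a, b):
    """Inclusive scan over (a_t, b_t) pairs using the associative combine."""
    acc, out = (1.0, 0.0), []                  # identity map h -> h
    for pair in zip(a, b):
        acc = combine(acc, pair)
        out.append(acc[1])                     # state assuming h_0 = 0
    return np.array(out)

rng = np.random.default_rng(0)
a, b = rng.uniform(0.5, 1.0, 100), rng.normal(size=100)
assert np.allclose(sequential(a, b), scan(a, b))
```

SSM-style models exploit this with vector or matrix-valued states, but the trade is the same: parallel training in exchange for a purely linear state transition.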

>But you can still keep the non-linearity by training with parallel Newton methods, which work on vanilla LSTMs and scale to billions of parameters.

That does not scale anywhere near as well as Transformers in compute spend. It's paper/research novelty. Nobody will be doing this for production.

>Does Alibaba not count? Qwen3.5 models are the top performers in terms of small models as far as my tests and online benchmarks go.

I guess there's some misunderstanding here because Qwen is 100% a transformer, not a hybrid RNN/LSTM whatever.


> That does not scale anywhere near as well as Transformers in compute spend. It's paper/research novelty. Nobody will be doing this for production.

What exactly makes you so confident?

The world is not just labs that can afford billion dollar datacentres and selling access to SOTA LLMs at $30/Mtokens. Transformers are highly unsuitable for many applications for a variety of reasons and non-linear RNNs trained via parallel methods are an extremely attractive value proposition and will likely feature in production in the next products I work on.

> I guess there's some misunderstanding here because Qwen is 100% a transformer, not a hybrid RNN/LSTM whatever.

See the Qwen3.5 Huggingface description: https://huggingface.co/Qwen/Qwen3.5-27B > Efficient Hybrid Architecture: Gated Delta Networks combined with sparse Mixture-of-Experts deliver high-throughput inference with minimal latency and cost overhead.
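For context on what a gated delta network roughly is: a linear-recurrence associative memory whose state matrix is updated with a decay gate plus an error-correcting (delta) write. The following is a toy sketch of that idea only; the update form, variable names, and gating here are simplified assumptions, not the actual Qwen3.5 implementation:

```python
import numpy as np

def gated_delta_step(S, k, v, alpha, beta):
    """One step of a toy gated delta-rule memory.
    S: (d, d) state matrix; k, v: (d,) key/value; alpha: scalar decay gate
    in (0, 1]; beta: scalar write strength in (0, 1]."""
    S = alpha * S                       # gate: decay old associations
    pred = S @ k                        # what the memory currently returns for k
    S = S + beta * np.outer(v - pred, k)  # delta rule: correct the error
    return S

rng = np.random.default_rng(0)
d = 8
S = np.zeros((d, d))
k = rng.normal(size=d)
k /= np.linalg.norm(k)                  # unit-norm key
v = rng.normal(size=d)
S = gated_delta_step(S, k, v, alpha=1.0, beta=1.0)
print(np.allclose(S @ k, v))            # True: the write is recalled exactly
```

Because the state transition is linear in S, this family of recurrences can be trained with parallel scans, which is why such layers can be mixed with attention in "hybrid" stacks like the one the model card describes.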


It doesn't. We've not been able to prove humans have subjective experiences either. LLMs display emotions in the way that actually matters - functionally.

I am certain I have subjective experience.

>"Making up for" a poor score on one test with an excellent score on another would be the opposite of generality.

Really? This happens plenty with human testing. Humans aren't general?

The score is convoluted and messy. If the same score can say materially different things about capability then that's a bad scoring methodology.

I can't believe I have to spell this out but it seems critical thinking goes out the window when we start talking about machine capabilities.


Just because humans are usually tested in a particular way that allows them to make up for a lack of generality with an outstanding performance in their specialization doesn't mean that is a good way to test generalization itself.

Apparently someone here doesn't know how outliers affect a mean. Or, for that matter, have any clue about the purpose of the ARC-AGI benchmark.

For anyone who is interested in critical thinking, this paper describes the original motivation behind the ARC benchmarks:

https://arxiv.org/abs/1911.01547


>Apparently someone here doesn't know how outliers affect a mean.

If the concern is that easy questions distort the mean, then the obvious fix is to reduce the proportion of easy questions, not to invent a convoluted scoring method to compensate for them after the fact. Standardized testing has dealt with this issue for a long time, and there’s a reason most systems do not handle it the way ARC-AGI 3 does. Francois is not smarter than all those people, and certainly neither are you.

This shouldn't be hard to understand.


How do you define "easy question" for a potential alien intelligence? The solution, like most solutions when dealing with outliers, in my opinion, is to minimize the impact of outliers.


I mean, presumably that's what the preview testing stage would handle, right? It should be clear if there is a class of obviously easy questions. And if that's not clear, then it makes the scoring even worse.

And in some sense, all of these benchmarks are tied and biased for human utility.

I don't think ARC would be designed and scored the way it is if giving consideration for an alien intelligence was a primary concern. In that case, the entire benchmark itself is flawed and too concerned with human spatial priors.

There are many ways to deal with a problem. Not all of them are good. The scoring for 3 is just bad. It does too much and tells too much.

5% could mean it only answered a fraction of problems, or it answered all of them but with more game steps than the best human score. These are wildly different outcomes with wildly different implications. A scoring methodology that allows for such ambiguity is simply not a good one.



That score is in the arc technical paper [1]. It's the full benchmark score using this harness [2] (which is just open code with read, grep, bash tools).

This is already a solved benchmark. That's why the scoring is so convoluted and a self-proclaimed agent benchmark won't allow basic agent tools. ARC has always been a bit of a nothingburger of a benchmark, but this takes the cake.

[1] https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf

[2] https://blog.alexisfox.dev/arcagi3


> For example, in a variant of environment TR87, Opus 4.6 scores 0.0% with no harness and 97.1% with the Duke harness (12), yet in environment BP35, Opus 4.6 scores 0.0% under both configuration

This is with a harness that has been designed to tackle "a small set of public environments: ls20, ft09, and vc33" (of the arc-agi-3 challenge), yet it looks like it does not solve the full arc-agi-3 benchmark, just some of them.


The harness was designed with the preview, but no, it was still tested on the full public set in that environment. You can run the benchmark in different 'environments', though it's unclear what the difference between them is.

>We then tested the harnesses on the full public set (which researchers did not have access to at the time)


It may have been tested on the full set, but the score you quote is for a single game environment. Not the full public set. That fact is verbatim in what you responded to and vbarrielle quoted. It scored 97% in one game, and 0% in another game. The full prelude to what vbarrielle quoted, the last sentence of which you left out, was:

> We then tested the harnesses on the full public set (which researchers did not have access to at the time). We found extreme bimodal performance across the two sets, controlling for the same frontier model...

The harness only transfers to like-environments and the intelligence for those specific games is baked into the harness by the humans who coded it for this specific challenge.

The point of ARC-AGI is to test the intelligence of AI systems in novel, but simple, environments. Having a human give it more powerful tools in a harness defeats the purpose. You should go back and read the original ARC-AGI paper to see what this is about+. Are you upset about the benchmark because frontier LLM models do so poorly exhibiting the ability to generalize when the benchmarks are released?

+ https://arxiv.org/abs/1911.01547


> intelligence for those specific games is baked into the harness

This is your claim but the other commenter claims the harness consists only of generic tools. What's the reality?

I also encountered confusion about this exact issue in another subthread. I had thought that generic tooling was allowed but others believed the benchmark to be limited to ingesting the raw text directly from the API without access to any agent environment however generic it might be.


1) Pointing out what tools to use is part of the intelligence that LLMs aren't great at.

2) One of the tools is a path-finding algorithm, a big improvement/crutch over a regular LLM, which has no such capability.

You'd think if LLMs are intelligent they'd be able to determine that a path finding algorithm is necessary and have a sub agent code it up real quick. But apparently they just can't do that without humans stepping in to make it a standard tool for them.

Here's the paper on what they did for the Duke Harness:

https://blog.alexisfox.dev/arcagi3


>You'd think if LLMs are intelligent they'd be able to determine that a path finding algorithm is necessary and have a sub agent code it up real quick.

ARC 3 doesn't allow that, so.

>Here's the paper on what they did for the Duke Harness: https://blog.alexisfox.dev/arcagi3

Yeah, and the tools are general, not 'baked into the harness by the humans who coded it for this specific challenge.'


Adding a path-finding algorithm and environment transform tools to a supposed "AGI" sure does seem like cheating to me. The sad part is, it's a cheat that only works on environments where pathfinding is a major part. When it doesn't have those clues, it bombs on everything.

I guess you really want to love the current SOTA LLMs. It's a shame they're dumb af.

Have a great day.


>Adding a path finding algorithm and environment transform tools to a supposed "AGI", sure does seem like cheating to me.

You would need all that if you, a human, wanted any chance of solving this benchmark in the format LLMs are given. The funny thing about this benchmark is that we don't even know how solvable it is, because the baseline is tested with radically different inputs.

>I guess you really want to love the current SOTA LLMs. It's a shame they're dumb af.

I guess you really don't want to think critically. Yeah good day lol.


Really tired of you making up stuff about this. The baseline and entire benchmark evaluation is clearly defined, with a statistically sound number of participants for the baseline using the same consistent deterministic environments to perform evaluation. The fact you don't like where the "human performance" line was drawn or how the scale is derived is not the same as the benchmark being tested with "radically different inputs". Clearly you would rather hype AI than critically advance it. So I won't waste time with someone who is clearly not posting in good faith.

Byebye now.


Humans and LLMs are not seeing the benchmark in the same format. What's made up about that? Can you solve this in the JSON format?

Look man, don't reply if you don't want to.


>The point of ARC-AGI is to test the intelligence of AI systems in novel, but simple, environments.

The point is whatever Francois wants it to be.

>Having a human give it more powerful tools in a harness defeats the purpose.

Why does it defeat the purpose? Restricting the tools available is an arbitrary constraint. The Duke harness is a few basic tools. What's the problem? In what universe would any AI agent worth its salt not have access to read, grep, and bash? If his benchmark were as great and the difference as wide as he claimed, then it simply wouldn't matter if those tools were available. Francois removed access to tools because his benchmark falls apart with them. Simple as.

>You should go back and read the original ARC-AGI paper to see what this is about+.

>Are you upset about the benchmark because frontier LLM models do so poorly exhibiting the ability to generalize when the benchmarks are released?

I’m not upset about anything. I do not care about ARC, and I never have. I think it is a nothingburger of a benchmark: lots of grand claims about AGI, but very little predictive power or practical utility.

When models started climbing FrontierMath, that benchmark actually told us something useful: their mathematical capabilities were becoming materially stronger. And now state-of-the-art systems have helped with real research and even contributed to solving open problems. That is what a good benchmark is supposed to do.

ARC? It has zero utility on its own and manages to tell you nothing at the same time.

Unsaturated benchmarks matter because they help show where the state of the art actually is. The value is not “look, the score is low,” but whether the benchmark tells you something real and useful about capability. ARC has always struggled on that front, but 3 has taken that to a new level of useless.


An Open Code Instance with Read, Grep, Bash tools achieved human performance on the preview games

For the full benchmark, the ARC-AGI 3 paper confirms Opus 4.6 scored 97.1%.

https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf

I was wondering why the scoring for 3 was so convoluted and I'm starting to see why. This is a solved benchmark in any way that matters.


ARC has always had that problem but for this round, the score is just too convoluted to be meaningful. I want to know how well the models can solve the problem. I may want to know how 'efficient' they are, but really I don't care if they're solving it in reasonable clock time and/or cost. I certainly do not want them jumbled into one messy convoluted score.

'Reasoning steps' here is just arbitrary and meaningless. Not only is there no utility to it unlike the above 2 but it's just incredibly silly to me to think we should be directly comparing something like that with entities operating in wildly different substrates.

If I can't look at the score and immediately get a good idea of where things stand, then throw it away. 5% here could mean anything from 'solving only a tiny fraction of problems' to "solving everything correctly but with more 'reasoning steps' than the best human scores." Those are wildly different implications. What use is a score like that?


The measurement metric is in-game steps. Unlimited reasoning between steps is fine.

This makes sense to me. Most actions have some cost associated, and as another poster stated it's not interesting to let models brute-force a solution with millions of steps.


Same thing in this case. No utility, and just as arbitrary. None of the issues with the score change.

Models do not brute force solutions in that manner. If they did, we'd wait the lifetimes of several universes before we could expect a significant result.

Regardless, since there's a 5x step cutoff, 'brute forcing with millions of steps' was never on the table.


The metric is very similar to cost. It seems odd to justify one and not the other.


Cost has utility in the real world and this doesn't. That's the only reason I would tolerate thinking about cost, and even then, I would never bundle it into the same score as intelligence, because that's just silly.


>It proves you don't have AGI.

It doesn't prove anything of the sort. ARC-AGI has always been nothing special in that regard but this one really takes the cake. A 'human baseline' that isn't really a baseline and a scoring so convoluted a model could beat every game in reasonable time and still score well below 100. Really what are we doing here ?

That Francois had to do all this nonsense should tell you the state of where we are right now.


>But that does not mean that I have to think that the appearance of intelligence always is intelligence, or that an LLM/ Agent is doing what humans do.

You can think whatever you want, but an untestable distinction is an imaginary one.


First of all, that's not true. Not every position has to be empirically justified. I can reason about a position in all sorts of ways without testing. Here's an obvious example that requires no test at all:

1. Functional properties seem to arise from structural properties

2. Brains and LLMs have radically different structural properties

3. Two constructs with radically, fundamentally different structural properties are less likely to have identical functional properties

Therefore, my confidence in the belief that brains and LLMs should have identical functional properties is lowered by some amount, perhaps ever so slightly.

Not something I feel like fleshing out or defending, just an example of how I could reason about a position without testing it.

Second, I never said it wasn't testable.


Your reasoning may lower your confidence, but until it connects to observable differences, it is still at least partly a story you are telling yourself.

More importantly, the question is not whether LLMs work the same way human brains do. You may care about that, but many people do not. The relevant question is whether they exhibit the functional properties we care about. Saying “they are structurally different, therefore not really intelligent” is a lot like insisting planes are not really flying because they do not flap like birds.

And on your last point: in practice, it is not testable. There is no decisive intelligence test that sorts all humans into one bucket and all LLMs into another. So if your distinction cannot be cashed out behaviorally, functionally, or empirically, then it starts to look less like a serious difference and more like a metaphysical preference.

