
I presume that's what the parent post is trying to get at? Seeing if, given the cutting-edge scientific knowledge of the day, the LLM is able to synthesise it all into a workable theory of QM by making the necessary connections and (quantum...) leaps

Standing on the shoulders of giants, as it were



But that's not the OP's challenge, he said "if the model comes up with anything even remotely correct." The point is there were things already "remotely correct" out there in 1900. If the LLM finds them, it wouldn't "be quite a strong evidence that LLMs are a path to something bigger."


It's not the comment which is illogical, it's your (mis)interpretation of it. What I (and seemingly others) took it to mean is basically could an LLM do Einstein's job? Could it weave together all those loose threads into a coherent new way of understanding the physical world? If so, AGI can't be far behind.


This alone still wouldn't be a clear demonstration that AGI is around the corner. It's quite possible an LLM could've done Einstein's job, if Einstein's job was truly just synthesising already available information into a coherent new whole. (I couldn't say, I don't know enough of the physics landscape of the day to claim either way.)

It's still unclear whether this process could be merely continued, seeded only with new physical data, in order to keep progressing beyond that point, "forever", or at least for as long as we imagine humans will continue to go on making scientific progress.


Einstein is chosen in such contexts because he's the paradigmatic paradigm-shifter. Basically, what you're saying is: "I don't know enough history of science to confirm this incredibly high opinion of Einstein's achievements. It could just be that everyone's been wrong about him, and if I really got down and dirty and learned the facts at hand, I might even prove it." Einstein is chosen to avoid exactly this kind of nit-picking.


They can also choose Euler or Gauss.

These two are so far above everyone else in the mathematical world that most people would struggle for weeks or even months to understand something they did in a couple of minutes.

There's no "get down and dirty" shortcut with them =)


No, by saying this, I am not downplaying Einstein's sizeable achievements nor trying to imply everyone was wrong about him. He had an impressive breadth of knowledge and mathematical prowess, and there's no denying this.

However, what I'm saying is not mere nitpicking either. It is precisely because of my belief in Einstein's extraordinary abilities that I find it unconvincing that an LLM being able to recombine the extant written physics-related building blocks of 1900, with its practically infinite reading speed, necessarily demonstrates comparable capabilities to Einstein.

The essence of the question is this: would Einstein, having been granted eternal youth and a neverending source of data on physical phenomena, be able to innovate forever? Would an LLM?

My position is that even if an LLM is able to synthesise special relativity given 1900 knowledge, this doesn't necessarily mean that a positive answer to the first question implies a positive answer to the second.


I'm sorry, but 'not being surprised if LLMs can rederive relativity and QM from the facts available in 1900' is a pretty scalding take.

This would absolutely be very good evidence that models can actually come up with novel, paradigm-shifting ideas. It was absolutely not obvious at that time from the existing facts, and some crazy leaps of faith needed to be taken.

This is especially true for General Relativity, for which you had just a few mismatches in the measurements, like Mercury's precession, and where the theory almost entirely follows from thought experiments.


Isn't it an interesting question? Wouldn't you like to know the answer? I don't think anyone is claiming anything more than an interesting thought experiment.


This does make me think about Kuhn's concept of scientific revolutions and paradigms, and that paradigms are incommensurate with one another. Since new paradigms can't be proven or disproven by the rules of the old paradigm, if an LLM could independently discover paradigm shifts similar to moving from Newtonian gravity to general relativity, then we have empirical evidence of an LLM performing a feature of general intelligence.

However, you could also argue that it's actually empirical evidence that the move from 19th century physics to general relativity wasn't truly a paradigm shift -- you could have 'derived' it from previous data -- and that the LLM has actually proven something about structural similarities between those paradigms, not that it's demonstrating general intelligence...


His concept sounds odd. There will always be many hints of something yet to be discovered, simply because anything worth discovering has an influence on other things.

For instance spectroscopy enables one to look at the spectra emitted by another 'thing', perhaps the sun, and it turns out that there are little streaks within the spectra that correspond directly to various elements. This is how we're able to determine the elemental composition of things like the sun.

That connection between elements and the patterns in their spectra was discovered in the early 1800s. And those patterns are caused by quantum mechanical interactions and so it was perhaps one of the first big hints of quantum mechanics, yet it'd still be a century before we got to relativity, let alone quantum mechanics.


You should read it


I mean, "the pieces were already there" is true of everything? Einstein was synthesizing existing math and existing data is your point right?

But the whole question is whether or not something can do that synthesis!

And the "anyone who read all the right papers" thing - nobody actually reads all the papers. That's the bottleneck. LLMs don't have it. They will continue to not have it. Humans will continue to not be able to read faster than LLMs.

Even me, using a speech synthesizer at ~700 WPM.


> I mean, "the pieces were already there" is true of everything? Einstein was synthesizing existing math and existing data is your point right?

If it's true of everything, then surely having an LLM work iteratively on the pieces, along with being provided additional physical data, will lead to the discovery of everything?

If the answer is "no", then surely something is still missing.

> And the "anyone who read all the right papers" thing - nobody actually reads all the papers. That's the bottleneck. LLMs don't have it. They will continue to not have it. Humans will continue to not be able to read faster than LLMs.

I agree with this. This is a definitive advantage of LLMs.


Einstein is not AGI, nor the other way around.


AGI is human level intelligence, and the minimum bar is Einstein?


Who said anything of a minimum bar? "If so", not "Only if so".


Actually it's worse than that, the comment implied that Einstein wouldn't even qualify for AGI. But I thought the conversation was pedantic enough without my contribution ;)


I think the problem is the formulation "If so, AGI can't be far behind". I think that if a model were advanced enough such that it could do Einstein's job, that's it; that's AGI. Would it be ASI? Not necessarily, but that's another matter.


The phone in your pocket can perform arithmetic many orders of magnitude faster than any human, even the fringe autistic savant type. Yet it's still obviously not intelligent.

Excellence at any given task is not indicative of intelligence. I think we set these sort of false goalposts because we want something that sounds achievable but is just out of reach at one moment in time. For instance at one time it was believed that a computer playing chess at the level of a human would be proof of intelligence. Of course it sounds naive now, but it was genuinely believed. It ultimately not being so is not us moving the goalposts, so much as us setting artificially low goalposts to begin with.

So for instance what we're speaking of here is logical processing across natural language, yet human intelligence predates natural language. It poses a bit of a logical problem to then define intelligence as the logical processing of natural language.


The problem is that so far, SOTA generalist models are not excellent at just one particular task. They have a very wide range of tasks they are good at, and good scores on one particular benchmark correlate very strongly with good scores on almost all other benchmarks, even esoteric benchmarks that AI labs certainly didn't train against.

I'm sure, without any uncertainty, that any generalist model able to do what Einstein did would be AGI, as in, that model would be able to perform any cognitive task that an intelligent human being could complete in a reasonable amount of time (here "reasonable" depends on the task at hand; it could be minutes, hours, days, years, etc).


I see things rather differently. Here are a few points, in no particular order:

(1) - A major part of the challenge is in not being directed towards something. There was no external guidance for Einstein - he wasn't even a formal researcher at the time of his breakthroughs. An LLM might be able to be hand-held towards relativity, though I doubt it, but given the prompt 'hey, find something revolutionary' it's obviously never going to respond with anything relevant, even with substantially greater precision specifying field/subtopic/etc.

(2) - Logical processing of natural language remains one small aspect of intelligence. For example - humanity invented natural language from nothing. The concept of an LLM doing this is a nonstarter since they're dependent upon token prediction, yet we're speaking of starting with 0 tokens.

(3) - LLMs are, in many ways, very much like calculators. They can indeed achieve some quite impressive feats in specific domains, yet then they will completely hallucinate nonsense on relatively trivial queries, particularly on topics where there isn't extensive data to drive their token prediction. I don't entirely understand your extreme optimism towards LLMs given this proclivity for hallucination. Their ability to produce compelling nonsense makes them particularly tedious for using to do anything you don't already effectively know the answer to.


> I don't entirely understand your extreme optimism towards LLMs given this proclivity for hallucination

Simply because I don't see hallucinations as a permanent problem. I see that models keep improving more and more in this regard, and I don't see why the hallucination rate can't be arbitrarily reduced with further improvements to the architecture. When I ask Claude about obscure topics, it correctly replies "I don't know", where past models would have hallucinated an answer. When I use GPT 5.2-thinking for my ML research job, I pretty much never encounter hallucinations.


Hahah, well you working in the field probably explains your optimism more than your words! If you pretty much never encounter hallucinations with GPT then you're probably dealing with it on topics where there's less of a right or wrong answer. I encounter them literally every single time I start trying to work out a technical problem with it.


Well the "prompt" in this case would be Einstein's neurotype and all his life experiences. Might a bit long for the current context windows though ;)


LLMs don't make inferential leaps like that


I think it's not productive to just have the LLM sit like Mycroft in his armchair and, from there, return you an excellent expert opinion.

That's not how science works.

The LLM would have to propose experiments (which would have to be simulated), and then develop its theories from that.

Maybe there had been enough facts around to suggest a number of hypotheses, but the LLM in its current form won't be able to confirm them.


Yeah but... we still might not know whether it could do that because we were really close by 1900 or because the LLM is very smart.


What's the bar here? Does anyone say "we don't know if Einstein could do this because we were really close or because he was really smart?"

I by no means believe LLMs are general intelligence, and I've seen them produce a lot of garbage, but if they could produce these revolutionary theories from only <= year 1900 information and a prompt that is not ridiculously leading, that would be a really compelling demonstration of their power.


> Does anyone say "we don't know if Einstein could do this because we were really close or because he was really smart?"

It turns out my reading is somewhat topical. I've been reading Rhodes' "The Making of the Atomic Bomb" and one of the things he takes great pains to argue (I was not quite anticipating how much I'd be trying to recall my high school science classes to make sense of his account of various experiments) is that the development toward the atomic bomb was more or less inexorable, and if at any point someone had said "this is too far; let's stop here" there would have been others to take his place. So, maybe, to answer your question.


It’s been a while since I read it, but I recall Rhodes’ point being that once the fundamentals of fission in heavy elements were validated, making a working bomb was no longer primarily a question of science, but one of engineering.


Engineering began before they were done with the experimentation and theorizing part. But the US, the UK, France, Germany, the Soviets, and Japan all had nuclear weapons programs with different degrees of success.


> Does anyone say "we don't know if Einstein could do this because we were really close or because he was really smart?"

Yes. It is certainly a question whether Einstein was one of the smartest guys who ever lived, or whether all of his discoveries were already in the Zeitgeist and would have been discovered by someone else within ~5 years.


Both can be true?

Einstein was smart and put several disjointed things together. It's amazing that one person could do so much, from explaining Brownian motion to explaining the photoelectric effect.

But I think that all these would have happened within _years_ anyway.


> Does anyone say "we don't know if Einstein could do this because we were really close or because he was really smart?"

Kind of: how long would it have realistically taken for someone else (also really smart) to come up with the same thing if Einstein hadn't been there?


But you're not actually questioning whether he was "really smart". Which was what GP was questioning. Sure, you can try to quantify the level of smarts, but you can't call it a "stochastic parrot" anymore, just like you wouldn't respond to Einstein's achievements with, "Ah well, in the end I'm still not sure he's actually smart, like I am for example. Could just be that he's just dumbly but systematically going through all the options, working it out step by step, nothing I couldn't achieve (or even better, program a computer to do) if I'd put my mind to it."

I personally doubt that this would work. I don't think these systems can achieve truly ground-breaking, paradigm-shifting work. The homeworld of these systems is the corpus of text on which they were trained, in the same way as ours is physical reality. Their access to this reality is always secondary, already distorted by the imperfections of human knowledge.


Well, we know many watershed moments in history were more a matter of situation than the specific person - an individual genius might move things by a decade or two, but in general the difference is marginal. True bolt-out-of-the-blue developments are uncommon, though all the more impressive for that fact, I think.


Well, if one had enough time and resources, this would make for an interesting metric. Could it figure it out with a cut-off of 1900? If so, what about 1899? 1898? What context from the marginal year was key to the change in outcome?



