> But to even know what is more useful, it is crucial to have walked the walk.
I feel like people tend to forget that among the many things LLMs can do these days, “using a search engine” is among them. In fact, they use them better than the majority of people do!
The conversation people think they’re having here and the conversation that actually needs to be had are two entirely different conversations.
> I don’t know about you, but I wasn’t allowed to use calculators in my calculus classes precisely to learn the concepts properly. “Calculators are for those who know how to do it by hand” was something I heard a lot from my professors.
Suppose I never learned how to derive a function. I don’t even know what a function is. I have no idea how to make one, write one, or what it even does. So I start gathering knowledge:
- A function is some math that allows you to draw a picture of how a number develops if you do that math on it.
- A derivative is a function that you feed a function and a number into, and then it tells you something about what that function is doing to that number at that number.
- “What it’s doing” specifically means not the result of the math for that particular number, but the results for the immediate other numbers behind and in front of it.
- This can tell us about how the function works.
Now I go tell ClaudeGPTimini “hey, can you derive f(x) at 5 so that we can figure out where it came from and where it goes from there?”, and it gives me a result.
I’ve now ostensibly understood what a derivative does and what it’s used for, yet I have zero idea how to mathematically do it. Does that make any results I gain from this intuitive understanding any less valuable?
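For what it’s worth, the bullet-point understanding above is already enough to check a derivative numerically, without knowing a single symbolic rule. A minimal sketch (the function x² is my own stand-in, since the example never picks one, and the step size h is just an arbitrary small number):

```python
def f(x):
    # A stand-in function; the example above never names a concrete one.
    return x ** 2

def numerical_derivative(f, x, h=1e-6):
    # Central difference: compare the function's results for the
    # "immediate other numbers behind and in front of" x.
    return (f(x + h) - f(x - h)) / (2 * h)

print(numerical_derivative(f, 5))  # close to 10.0, the true slope of x^2 at x = 5
```

That is, of course, an approximation rather than the symbolic answer, but it captures exactly the “what is the function doing to the numbers around this one” intuition from the list.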
What I’ll give you is this: if I knew exactly how the math worked, then it would be far easier for me to instantly spot any errors ClaudeGPTimini produced. And the understanding of functions and derivatives outlined above may be simplistic in some places (intentionally so), in ways that may break it in certain edge cases. But that only matters if I take its output at face value.

If I get a general understanding of something and run a test with it, I’ll generally have some sort of hypothesis of what kind of result I’m expecting, given that my understanding is correct. If I know that a lot of unknown unknowns exist around a thing I’m working with, then I also know that unexpected results, as well as expected ones, require more thorough verification.

Science is what happens when you expect something, test something, and get a result - expected OR unexpected - and then systematically rule out that anything other than the thing you’re testing has had an effect on that result.
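To make that “hypothesis first, then verify” loop concrete: one crude way to vet a claimed derivative is to compare it against an independent finite-difference estimate. The function, the claimed values, and the tolerance here are all made-up illustrations, not output from any actual model:

```python
def check_claim(f, x, claimed, tol=1e-3, h=1e-6):
    # Independent estimate via central difference. A mismatch is exactly
    # the "unexpected result" that calls for more thorough verification.
    estimate = (f(x + h) - f(x - h)) / (2 * h)
    return abs(estimate - claimed) <= tol

print(check_claim(lambda x: x ** 2, 5, 10.0))  # True: claim matches the estimate
print(check_claim(lambda x: x ** 2, 5, 25.0))  # False: f(5) mistaken for f'(5)
```

A failed check doesn’t prove the model wrong, and a passed one doesn’t prove it right; it just tells you where to spend your verification effort.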
This is not a problem with LLMs. It’s a thing we should’ve started teaching in schools decades ago: how to understand that there are things you don’t understand. In my view, the vast majority of problems plaguing us as a species lie in this one fundamental skill that far too many people are simply never taught.
> I’ve now ostensibly understood what a derivative does and what it’s used for, yet I have zero idea how to mathematically do it. Does that make any results I gain from this intuitive understanding any less valuable?
From a science standpoint, I'd say whatever "results" you got are completely worthless.
> I’ll generally have some sort of hypothesis of what kind of result I’m expecting, given that my understanding is correct
And how do you know if your understanding is correct, if you are only taking what the LLM gives to you and you are not able to verify independently?
> Science is what happens when you expect something, test something, and get a result.
Right, but has any LLM come up with any hypothesis on its own? Has any AI said "given all this literature that I read, I'd expect <insert something completely out of the training data space>"?
Asking all of these questions after (allegedly) reading my entire comment either means you didn't pay attention, in which case I'm not going to spend any more effort responding; or you've completely missed the point, in which case I can probably save myself the effort anyway. In any case, if you're genuinely interested in answers to your questions instead of merely posturing, I suggest you re-read carefully and then make a better-faith attempt at engaging with it.
I'll leave these direct quotes from the comment as a hint:
> But that only matters if I take its output at face value. […] If I know that a lot of unknown unknowns exist around a thing I’m working with, then I also know that unexpected results, as well as expected ones, require more thorough verification.
The problem I have with your logic is that you are hedging your arguments so much that the whole point becomes meaningless.
If you are trying to argue that young aspiring scientists will be able to use LLMs to learn new concepts instead of doing the hard work themselves, then you also need to explain how they will be able to develop the skills to analyze and "run more thorough verification" INDEPENDENTLY of LLMs.
> then you also need to explain how they will be able to develop the skills to analyze and “run more thorough verification” INDEPENDENTLY of LLMs
I’m sure the students will manage. This is the exact same discussion we’ve all been through before, during the rise of Wikipedia, just wearing a new hat. The answer is “vet your sources, don’t trust unsourced claims.” The way they’ll develop the skills is the same way aspiring scientists and students have developed them throughout human history: by having good teachers teach them.
Here’s a very simple program I thought of off the top of my head in a minute or two. I’m sure people whose job it is to create educational content will be able to come up with something far better:
Design a small research project with as many LLM-tailored pitfalls as possible. It involves real measurements and real data, and the students may use their LLM to whatever extent they wish. Then, we compare results against the reference data, uncover the myriad ways in which LLMs can taint the data and the conclusions drawn from it, and then explore ways to mitigate them.
Probably not perfect and nitpickable to oblivion, but also not the hardest mental exercise I’ve ever subjected myself to.
Science did fine in a world where information took years or decades to travel the globe, people thought diseases were spread by evil mojo and we had a grand total of four liquids circulating inside our bodies, and scientists saying the wrong things were actively hunted down and silenced. It got there. It’ll do fine in a world where you can semantically search every single written source model trainers could get their hands on _and_ ground the results with references to tangible sources using the same natural language query.
> The answer is “vet your sources, don’t trust unsourced claims.”
This was already a problem for Wikipedia (articles being written which upon further investigation were based on nothing but Wikipedia itself). With LLMs themselves facilitating AI slop and plagiarism, this problem reaches a scale at which it becomes impossible to control.
> I’m sure the students will manage.
The problem with your hubris is that you are not going to be the one solely facing the fallout when this blows up.
I have yet to see a single substantive argument from you that isn’t some sort of paraphrase of “this is definitely going to blow up because the article says so and I agree”. Three times now you’ve asked me to provide you with a detailed pitch deck that contains every single solution for every single problem, while offering absolutely nothing yourself that couldn’t just as well have come from an LLM, for all the meaningful content it had.
I’ll get back to you as soon as you make an actual point that’s based in some sort of precedent, some sort of data.
Basically, stop indignantly demanding that I contribute and start doing it yourself. I’m tired of having to spell the same thing out for you in entire paragraphs over and over only to have you refuse to even make an attempt at comprehension, cherry pick two lines, and proceed to add absolutely nothing substantial.
All I'm saying is "I do not know what is the real upside on leaving the current practices in academia and education in favor of 'let the LLM guide you'". If it was my ass on the line, I would apply the precautionary principle and I wouldn't take any significant bets with my future around this.
You on the other hand are the one "being sure" about how everything will be fine, and of course there is no way for you to bring actual evidence, because all you have is conjecture. So, given there is no way for you to back up your argument with evidence, the next best thing you can do is to put some Skin In The Game: can you back up your beliefs with actions? Are you willing to take any substantial risk in case your bet doesn't pay off?
> “I do not know what is the real upside on leaving the current practices in academia and education in favor of ‘let the LLM guide you’”
The fact that you think “let the LLM guide you” is the argument I’ve been making tells me everything about how honestly you’ve engaged with it. I’m done here.
And the fact that you only shut up after being asked what you are willing to put on the line tells me how devoid of meaning your argument is, no matter what it is.
Being unable or unwilling to grasp how to use an LLM without delegating your entire thinking to it sounds like a you problem. Perhaps one day you can try engaging in some epistemic thought exercises; that might help. In more ways than just this, too.