Hacker News

What ChatGPT (and its cousins) expose is that the way humans have been taught in most schools - memorizing and regurgitating information - is now a commodity.

What humans bring to the table over ChatGPT is our ability to create new links between information, aka creativity. Teaching creativity, imo, will require a return to methods like those of Socrates and his contemporaries. I would rather this author were writing about how he is going to re-examine how he teaches rather than bemoaning that students can shortcut his current approach.



I don't think it is the same.

I didn't memorize Python syntax or the name of every function or how to do small things. I use Google for that. But I know what I need to do in the best possible way (at least that's what I'm paid for!). Should I set this variable here? Should this method be private? Should I design an interface or a public class? A dict or a dataclass?
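The dict-vs-dataclass question above is a real engineering tradeoff, not something you memorize. A minimal sketch (the `User` example is hypothetical, just to show the two shapes):

```python
from dataclasses import dataclass

# Option 1: a plain dict -- flexible, but a typo in a key only fails at runtime.
user_dict = {"name": "Ada", "active": True}

# Option 2: a dataclass -- fixed fields, type hints, and attribute access
# that linters, IDEs, and reviewers can actually check.
@dataclass
class User:
    name: str
    active: bool = True

user_dc = User(name="Ada")
print(user_dict["name"], user_dc.name)
```

The dict wins for ad-hoc, schema-less data; the dataclass wins once the shape is stable and other code depends on it. That judgment call is exactly what ChatGPT's output tends to skip over.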

That's what I have to decide as an engineer and where my value resides. If ChatGPT only replaced the memorization part, that would be OK, but it replaces a lot more than that, and the people using it never have to ask themselves the questions I mentioned above.

I had a bet with a friend: he has no knowledge of programming and was convinced he could make an online game (!) using only ChatGPT. He said one month would be enough. Of course, a few months have passed already and he is way off. He asks the questions that a non-programmer would ask, and what ChatGPT gives him back is not usable, not designed with the future in mind, not easily modifiable, etc. His code is a Frankenstein that won't do anything good.


> What humans bring to the table over ChatGPT is

Statements like these are premature. ChatGPT is three months old! This is a rapidly advancing field. The capabilities of these models are very likely to be radically different five years from now. Any conclusions drawn now about what value is uniquely human and out of reach for AI may be proven wrong quickly.


> The capabilities of these models are very likely to be radically different five years from now.

This is an article of faith. Specifically, it believes that at some point in the future the current paradigm will result in qualitatively different behaviors than the mimicry these systems (all GPT variants, "Attention is All You Need") currently perform.

Much ado has been made of the previous Symbolic AI researchers "moving the goalposts." In this criticism, it is the old guard of AI who are constantly bemoaning the current state of affairs as not real AI. But there is no actual goalpost moving: they said it wasn't real AI at the beginning, and they are saying it isn't real AI now. Whether or not the symbolists' model was real AI is irrelevant: when you bake in "this is a rapidly moving field" as a hand-wavy explanation for why this may result in AGI, you are the one implying a moving of the goalposts.

If it ever turns out that these models need to be qualitatively different, then it will be clear that attention is not in fact all you need. In that eventuality, I fully expect the new guard to hem and haw and find some tricky sophistry to explain why they were right all along, despite qualitative shifts unattributable to adding mountains of new data or connectionist trickery.


> connectionist trickery

I don't know, man, you kind of come off as having a chip on your shoulder about this. I'm not predicting AGI specifically here, and I'm not making any argument about symbolic vs connectionist AI at all. Maybe the model of the future is half symbolic! I'm just saying that asserting that you know exactly which things AI models aren't going to be able to do is pretty foolish at this point.


> know exactly what these models aren't going to be able to do is pretty foolish at this point

This can be said about absolutely any new technology, but that does not make it true. It's simply the inscrutability of these tensors that allows people to imagine the intelligence is in there somewhere. The original comment was about what humans "bring to the table" over ChatGPT specifically, and that's that they have real intelligence, not memorization.

As others have said, these models have been around for many years. Their core innovation is to feed them more and more data, like a dictionary, and compress it into the network architecture. This memorization has a limit and is actually the opposite of intelligence: intelligence, or creativity, can do more with less information. As per the original comment, this is what human intelligence and creativity are (currently) superior at, and what people should prioritize if they don't want to be replaced.


This is actually very accurate.


Agreed!

Anyone not answering with “IDK, but maybe…” is just wasting bandwidth.

This is Gen1 tech. Most of us are already shocked at how good it is, and it won't get worse.


You seem confident that it won't get worse, but it's only as good as its training data, which is the internet. What happens when the internet is filled with generic Gen1 output? I'm doubtful that copying averages can ever lead to anything other than increasing mediocrity.
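The "copying averages" worry can be illustrated with a toy loop. This is purely an analogy, not a model of actual training: repeatedly replacing a population of "works" with averages of random pairs collapses the variance toward zero.

```python
import random
import statistics

random.seed(0)
# Start with 1000 diverse "works", scored 0-100.
population = [random.uniform(0, 100) for _ in range(1000)]

for generation in range(20):
    # Each new "work" is just an average of two randomly chosen existing ones.
    population = [
        (random.choice(population) + random.choice(population)) / 2
        for _ in range(1000)
    ]

# Diversity has collapsed: everything has regressed toward the overall mean.
print(round(statistics.pstdev(population), 4))
```

Averaging two independent draws halves the variance each generation, so after 20 rounds the spread is roughly a thousandth of the original: uniform mediocrity.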


Isn't it like Gen3?


ChatGPT is still, at its heart, GPT technology (with some clever embedding of the running transcript acting as a recurring prompt), which is several years old at this point.
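That "running transcript as a recurring prompt" trick can be sketched in a few lines. Here `complete()` is a hypothetical stand-in for a real model call; the point is that the stateless model gets the whole conversation re-sent every turn.

```python
def complete(prompt: str) -> str:
    # Stand-in for an actual language-model call; just echoes context size.
    return f"(reply given {len(prompt)} chars of context)"

history = []  # (speaker, text) pairs -- the running transcript

def chat(user_message: str) -> str:
    history.append(("User", user_message))
    # Flatten the entire transcript back into one prompt each turn, so the
    # stateless model appears to "remember" earlier messages.
    prompt = "\n".join(f"{who}: {text}" for who, text in history) + "\nAssistant:"
    reply = complete(prompt)
    history.append(("Assistant", reply))
    return reply

chat("Hello!")
chat("What did I just say?")
print(len(history))  # 4 entries: two user turns, two replies
```

The "memory" is an illusion of prompt construction, which is why long conversations eventually fall off the edge of the context window.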


I went to uni 10 years ago and even then, I can't think of any classes that were just memorizing and regurgitating. You'd have to memorize fundamental concepts, but come exam time you were applying those concepts to new questions, not regurgitating anything. In high school a lot of exams were regurgitation, but I attribute that to teachers at that level not having the niche experience required to craft the clever "apply this theory" sort of questions that a domain expert at a university can. High school students are also responsible for a lot less theory learning on their own.


I think the problem with what you're saying is that you are not mindlessly agreeing with the straw-man, overly reductionist view of what higher education is that seems to be the majority view.


Creativity doesn’t exist in isolation. In order to be creative, and create unexpected connections, one first needs to know a lot of seemingly unrelated things.


> I would rather this author be writing about how he is going to re-examine how he teaches rather than bemoaning that students can shortcut his current approach.

He alludes to this: "The first solution is hard for lots of reasons, not least that the current funding model of post-secondary institutions, which does not prioritize the ratio of faculty-to-students necessary for ever more personalized or real-time assessment methods. Larger and larger classes make many of these good ideas impractical. Faculty have zero control over this, but by all means, please talk to our senior leadership. It would be great."

In other words, the Universities are pushing "on-line-all-the-time" because it's co$t effective.


> I would rather this author be writing about how he is going to re-examine how he teaches rather than bemoaning that students can shortcut his current approach.

Did you read the entire article? What you're asking for is exactly how he ends his discussion.


> What humans bring to the table over ChatGPT is our ability to create new links between information

I'm confident that "creativity" is a combination of:

1) reproduction errors (when we badly copy things and the wrong way to do it leads luckily to a better way to do it), and

2) systematically or by luck applying established and productive models from one context to another, unrelated context, and getting a useful result.

Just not a believer in some essential, formless creativity that generates something out of nothing.


I think that is poppycock. If you don't know anything, then how can you bring all this "creativity" to bear on a task of knowledge work? This whole line that all of higher education is just regurgitation, or of no value, seems fallacious. What is an example of not-regurgitation? Why would using Wikipedia or whatever be superior learning to what is done in university?


I agree, though I didn't find the author to be bemoaning students. Rather, they were writing a "state of the art" piece explaining how things currently happen and leaving it open for people to follow with thoughts about how to make meaningful changes to assessment style and curriculum in the face of ChatGPT.


> What humans bring to the table over ChatGPT is our ability to create new links between information, aka creativity

Maybe not, or not for long. Maybe AGI is coming within 20 years, and maybe human workers won’t have anything to bring to it afterwards.

Maybe this is the beginning of the downfall of the value of intellectual human workforce.


I grew up in the late 1980s and early 1990s and have a phenomenal memory. My dad used to tell me how valuable it would be once I grew up and got a job. It’s so funny how false that ended up being.

It’s much more beneficial socially because I can recall jokes that fit situations incredibly quickly and get a good laugh.


Schools haven't been focused on rote learning for eons. I don't know where you get that idea from.


> memorizing and regurgitating information - is now a commodity.

Google Search already did this! Connections have been the value for a very long time.


Absolutely this. Change the questions you're asking of your students. Harder to grade than option A, though, huh?


He even used ChatGPT to evaluate the approach of one of his students' homework. I don't understand his ignorance.



