Hacker News

A perspective from a non-professional who has been teaching himself to code:

My knowledge of exact functions is poor. I might know that I can use the Framer Motion library to animate on-page elements, but I have little to no understanding of the exact function needed to animate an object from, say, left-to-right on hover.

My normal workflow was to either read the documentation or search StackOverflow for answers. I would then have to rework the function to fit my current use case.

Now, I've been asking chatGPT directly to build the exact function for me.

So far, it's been a massive timesaver. I'll probably learn more if I dig through the documentation, but since I'm a hobbyist, not a professional, it's much more convenient for me to just get the information I need, without digging through StackOverflow or documentation.



FYI this is probably not a good habit if you're trying to teach yourself, rather than just trying to get some task done. Reading documentation and searching StackOverflow are genuinely useful skills that take practice to get good at. Asking chatGPT is equivalent to just asking a friend for the answer, which is fine if you want to be efficient but not ideal for learning.

Obviously this doesn't matter if we think chatGPT is so good that you'll never need to read documentation yourself, but I think this is one of those situations where you need to be an expert before you're allowed to break the rules. Without experience, you won't know if chatGPT is really giving you everything you'd get from reading the docs yourself, or only a small and potentially inaccurate slice.


ChatGPT generally goes into a lot of details about its decisions, and provides detailed explanations. You still have to fact-check it, or verify by running the code, because it will make mistakes, but if that happens you can say "Hey, this isn't quite right because ..., how do I actually do this" and it will usually figure it out.

As a software dev of 10 years, I've done the "googling and reading documentation" a fair bit, which is kind of like stumbling around in the dark and feeling around to get a sense of where things are. For some well-defined, well-documented things, using ChatGPT to do the same is like having an overconfident junior-to-intermediate dev to pair with who's familiar with a stack that I'm not. I still have to guide it a fair bit, and adjust my expectations to account for that overconfidence. But it can absolutely guide me as well, and teach me new things.


Yeah that makes sense, I'm just saying that if you didn't have 10 years of experience, you might not know how to guide it and might not notice when it doesn't seem quite right, and end up learning a lot less than you could.

It's a little bit like "you won't always have a calculator in your pocket". We do always have a calculator these days, but it's still useful to know how to do arithmetic so you can do things in your head, and notice if an answer doesn't make sense because you made a typo or something. Maybe in the future we'll all have chatGPT running locally on our phones or brain interfaces or whatever and be able to quickly train it on new datasets, but even then it will still be useful to know how to do things yourself.


I think that sometimes just copying and pasting from stack overflow is not much better than using chatGPT. But I agree with you about reading documentation. When you read the docs you build up a model of the system in your head. You can then play with this model in your head and come up with good solutions. This seems to be exactly what chatGPT can't do.

Also I'm senior and sometimes don't get to program for long periods of time. What I find is that when I don't program I get worse at solving higher-level problems. The important part of programming is not about knowing APIs etc. It is modeling a problem and its solution in a domain that forces you to be precise. For that reason I would say to junior developers: Keep programming. It will make you a better problem solver and it will make you better at the things that chatGPT can't do.


Even for Stack Overflow, knowing what to copy is a learned skill. Stuff like "this accepted answer with 10k upvotes is 8 years old, I should scroll down and see if there's a newer answer with the right way to do it in $latest version". Maybe chatGPT can handle that, I don't know, I just think that finding the right answer within the right forum post is an important learned skill and I wouldn't want new people to miss out on it. It carries over to many other areas of your life.


As an aside, programming is the lowest "willpower depletion" activity I've ever done, apart from producing (i.e. in a software DAW) music. I can program for hours without getting particularly tired or feeling like I need a break.

Have to say I'm a little jealous of people who get to do this for a living.


I understand that and I'm fine with it, especially since I'm using it for a hobby project, and mostly looking up non-core libraries that I'll likely not use often again (such as framer motion).

My point is that it's making newbies like me way more productive than we have any right to be.


It will be interesting if it replaces StackOverflow, considering that it probably trained itself on a lot of the questions and answers. On the one hand it's not much different from training on GitHub, or from how Google put translators out of business by using their translations. But it is a more direct connection that demonstrates how these guys are funneling the wealth generated by other people's work up to themselves. Before StackOverflow the state of questions and answers on the web was really bad and full of noise. They took a risk and put a lot of effort and engineering knowledge into building it.

What really annoys me is that it will probably further train itself on this text I'm writing now. I am writing it in the spirit of exchange with other similar people. Not in the spirit of some mechanical turk worker for OpenAI.


I agree and I think this is similar to some people's very legitimate objections to Stable Diffusion and DALL-E. When people put artwork up on the internet they were expecting a handful of human beings to draw some enjoyment and maybe inspiration from it. They were not expecting billions of identical robots to ingest it in a nanosecond and remember and build off of it for eternity.

Scale matters, and robot and human inspiration are not ethically equivalent even if you think they are mechanically equivalent.



