
I have also tried to do this and it didn't work as smoothly as you claim.

I don't think either of you is wrong; it just depends heavily on the complexity of the app and how familiar LLMs are with it.

E.g. rewriting a web scraper, CRUD backend or a build script? Sure, maybe. Rewriting a bootloader, compiler or GUI app? No chance.



It's funny seeing the goalposts move in real time.

"Yes, AI can make human sounding sentences, but can it play chess?"

"Well yes, it can play chess. But no computer can beat a human grandmaster at chess."

"Well it beat Kasperov - but it has no hope of beating a human at Go."

"Its funny - it can beat humans at go but still can't speak as well as a toddler."

"Alright it can write simple problems, but it introduces bugs in anything nontrivial, and it can't fix those bugs!"

I write bugs in anything nontrivial too! My human advantages are currently that I'm better at handling a large context, and I can iterate better than the computer can.

But - seriously, do you think innovation will stop here? Did the improvements ever stop? It seems like a pretty trivial engineering problem to hook an AI up to a compiler / runtime so it can iterate just like we can. Anthropic is clearly already starting to try that.
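
(To make that concrete: here's a minimal sketch of what such a generate-compile-retry loop could look like. Note that ask_llm is a hypothetical stub standing in for a real model call - the actual loop is just a compiler invocation whose error output gets fed back into the prompt.)

    import os
    import subprocess
    import tempfile

    def ask_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real model API call. It returns a
        # fixed, valid C program here just so the loop below is runnable.
        return '#include <stdio.h>\nint main(void){puts("hi");return 0;}\n'

    def iterate(prompt: str, max_rounds: int = 5) -> str | None:
        source = ask_llm(prompt)
        for _ in range(max_rounds):
            with tempfile.NamedTemporaryFile("w", suffix=".c",
                                             delete=False) as f:
                f.write(source)
                path = f.name
            # Compile the candidate and capture the diagnostics.
            result = subprocess.run(["cc", path, "-o", path + ".out"],
                                    capture_output=True, text=True)
            os.unlink(path)
            if result.returncode == 0:
                return source  # it compiles; hand it back to the human
            # Feed the compiler errors straight back to the model.
            source = ask_llm(prompt + "\n\nYour last attempt failed:\n"
                             + result.stderr)
        return None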

I agree with you, today. I used Claude to help translate some Rust code into TypeScript. I needed to go through the output with a fine-toothed comb to fix a lot of obvious bugs and clean up the output. But the improvement over what was possible with GPT-3.5 is totally insane.

At the current rate of change, I give it 5-10 years before we can ask ChatGPT to make a working compiler from scratch for a novel language.


You may appreciate this quote about constantly moving the goalposts for AI:

"There is superstition about creativity, and for that matter, about thinking in every sense, and it's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something - play good checkers, solve simple but relatively informal problems - there was a chorus of critics to say, but that's not thinking."

That's from 1979! https://simonwillison.net/2024/Sep/13/pamela-mccorduck-in-19...


I side with Roger Penrose on this one. I'm still not convinced it's "thinking", and don't expect I ever will be, any more than a book titled "I am Thinking" would convince me that it's thinking.


Separate thinking from consciousness. I.e., we have built machines which process data in a way similar to our thinking process. They are not conscious.


My point is that I don't accept the concept of unconscious thought. "Processing data similar to our thinking process" doesn't make it "thinking" to me, even if it comes to identical conclusions - just like it wouldn't be "thinking" to just read off a pre-recorded answer.

The idea of ChatGPT being asked to "think" just reminds me of Pozzo from Waiting for Godot.


Well you can't have a conversation with a book... I don't understand your comment.

> I'm still not convinced birds can fly any more than a rock shaped like a bird would convince me that it's flying.


I agree. Some people think Google is sentient I guess? Data retrieval and mangling is not all we do, luckily.


Why do you care if it's thinking or not?


I don't, in and of itself. I care that other people think that passing increasingly complicated tests of this sort is equivalent to greater proof of such "thought", and that the nay-sayers are "moving the goalposts" by proposing harder tests.

I don't propose harder tests myself, because it doesn't make sense within my philosophy about this. When those tests are passed, to me it doesn't prove that the AI proponents are right about their systems being intelligent; it proves that the test-setters were wrong about what intelligence entails.


> ... passing increasingly complicated tests of this sort is equivalent to greater proof of such "thought",

Nobody made any claim in this thread that modern AIs have thoughts.

What these (increasingly complicated) tests do is demonstrate the capacity to act intelligently. I.e., make choices which are aligned with some goal or reward function. Win at chess. Produce outputs indistinguishable from the training data. Whatever.

But you're right - I'm smuggling in a certain idea of what intelligence is. Something like: intelligence is the capacity to select actions (outputs) which maximise an externally defined reward function over time. (See also AIXI: https://en.wikipedia.org/wiki/AIXI ).
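
(A toy version of that definition, just to make it concrete - this is nothing like full AIXI, which also has to weigh expected reward over future time steps, but it shows how the same code counts as "intelligent" about whatever externally supplied reward function you hand it:)

    # An "agent" under this definition is just argmax over actions
    # against an externally supplied reward function.
    def act(actions, reward):
        return max(actions, key=reward)

    print(act(range(10), lambda a: -(a - 7) ** 2))           # -> 7
    print(act(["rock", "paper", "scissors"],
              {"rock": 0, "paper": 1, "scissors": 0}.get))   # -> paper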

> When those tests are passed, [..] to me it proves that the test-setters were wrong about what intelligence entails.

It might be helpful for you to define your terms if you're going to make claims like that. What does intelligence mean to you then? My best guess from your comment is something like "intelligence is whatever makes humans special". Which sounds like a useless definition to me.

Why does it matter if an AI has thoughts? AI-based systems, from MNIST solvers to Deep Blue to ChatGPT, have clearly gotten better at something. Whatever that something is, is very, very interesting.


>But you're right - I'm smuggling in a certain idea of what intelligence is.

Yes, you understand me. I simply come in with a different idea.

>AI-based systems, from MNIST solvers to Deep Blue to ChatGPT, have clearly gotten better at something. Whatever that something is, is very, very interesting.

Certainly the fact that the outputs look the way they do is interesting. It strongly suggests that our models of how neurons work are not only accurate, but that creating simulations according to those models has surprisingly useful applications (until something goes wrong; of course, humans also have an error rate, but human errors still seem fundamentally different in kind).


Modern neural networks have very little to do with their biological cousins. It makes a cute story, but it’s overclaimed. Transformers and convolution kernels think in very different ways from the human mind.


That gives me less reason to accept that it qualifies as "thinking".


Again, I don’t know of anyone, here or elsewhere, who claims ChatGPT thinks in the way we understand it in humans. I think our intuitions largely agree.


... Then why did I get so much pushback in this comment chain?


Is there anything that a non-human could do that would cause you to accept that it was thinking?


Of course. Animals demonstrate sapience, agency and will all the time.


So, if a machine demonstrated sapience, agency, and will, then you would grant that it could think?


Yes; but if you showed me a machine that you believed to be doing those things, given my current model, I wouldn't agree with you that it was.


You are saying that even if it did the same things animals do that you attribute to thinking, you would refuse to acknowledge it could be thinking?

Is there something particularly unique about biological circuits that allow thought, as opposed to electronic ones?


I believe so, yes. No, I can't explain what it is. (Because I think they're obvious follow-up questions: No, I don't consider myself particularly religious. Yes, I do believe in free will.)


… But you believe there’s something special about intelligence grounded in biology that can’t be true of intelligence grounded in silicon? That just sounds like magical thinking to me.


I agree. Thinking is clearly a compositional process and computers are Turing complete, so it seems like an impossibility to me. Unless you reach for some quantum microtubule woo...


> At the current rate of change, ...

We've seen that the rate of change went up hugely when LLMs came around. But the rate of change was much lower before that. It could also be much slower for the foreseeable future.

LLMs are only as good as their training materials. But a lot of what programmers do is not documented anywhere, it happens in their head, and it is in response to what they see around them, not in what they scrape from the web or books.

Maybe what is needed is for organizations to start producing materials for AI to learn from, rather than assuming that all they need is what they find on the web? How much of the effort to "train" AI is just letting them consume the web, and how much is consciously trying to create new learning materials for AI?


It could slow down again. We don’t know. But the people working at OpenAI seem to believe the models will keep improving for the foreseeable future. The “we’ll run out of training data” argument seems overblown.


> It's funny seeing the goalposts move in real time.

Another way to look at it is that we're refining our understanding of the capabilities of machine learning in real time. Otherwise one could make basically the same argument about any field that progresses - take our theories of gravity for example. Was Einstein moving the goalposts? Or was he building on previous work to ask deeper questions?

Set against the backdrop of extraordinary claims about the abilities of LLMs, I don't think it's unreasonable to continue pushing for evidence.


Yeah I totally agree with you. Lots of goalpost moving, and it is absolutely insane what it can do today and it will only improve.

It just can't translate the kinds of programs I write between languages on its own. Today.


Indeed, the constant goal shifting is tiresome.

I mean, we first put up a ladder and we could reach the peaches! Next, we put a ladder next to the apple tree and we could pluck those. Now, in their incessant goalpost moving, people said: great, now set up a ladder to the moon. There is no reason to assume this won’t work. None at all. People are just complaining and being angry at losing their fancy jobs.

More specifically: it cannot learn, because it has no concept of learning from first principles. There is no way out, not even a theoretical one.


Of course it can stop, once legislation catches up and forbids IP theft using a thinly disguised probabilistic and compressed database of other people's code.


> a thinly disguised probabilistic and compressed database of other people's code

Speaking as a software engineer, I feel seen.


You really think those laws are coming? That the US and Chinese governments will force AI companies to put the genie back in the bottle?

I think you're going to be very disappointed.



