
There’s no ideological battle here. The DARPA Grand Challenge for self-driving vehicles was first completed in 2005, and everybody thought we’d have self driving on the road within a decade.

20 years later that’s still not the case, because it turns out NN/ML can do some very impressive things at the 99% correct level. The other 1% ranges in severity from “weird lane change” to “a person riding a bicycle gets killed”.

GPT-3.5 was the DARPA grand challenge moment; we’re still years away from LLMs being reliable - and they may never be fully trustworthy.



> everybody thought we’d have self driving on the road within a decade.

This is just not true. My reaction to the second challenge race (not the first) in 2005 was, it was a 0-to-1 kind of moment and robocars were now coming, but the timescale was not at all clear. Yes you could find hype and blithe overoptimism, and it's convenient to round that off to "everybody" when that's the picture you want to paint.

> 20 years later that’s still not the case

Also false. Waymo is in public operation and expanding.


Waymo has limited service in one of the smallest “big” cities by geographic area in the United States. You can’t even get a Waymo in Mountain View.

Fact is Google will never break even on the investment and it’s more or less a white elephant. I don’t think it’s even accurate to call it a Beta product, at best it’s Alpha.


Have you been in one? It's pretty extraordinary as an actual passenger.


I’d give it a go if it were price-competitive with Uber/Lyft - I can’t think of a way a robotaxi would be worth a premium though.


> Fact is

... followed by speculation about the future.

> [not everywhere]

The standard you proposed was "on the road". In their service areas (more than "one", they've been in Phoenix for some time) anyone can install their app and get a ride.

I shouldn't have poked my nose in here, I was just kind of croggled to see someone answer "ideological battle" by bringing up another argument where they don't seem to care about facts.


That might have been your reaction but it wasn't the reaction of many hype-inclined analyst types. Tesla in particular has been promising "full self driving next year" for like a decade now.

And despite everything, Waymo is not quite there yet. It's able to handle certain areas at a limited scale. Amazing, yes, but it has not changed the reality of driving for 99.9% of the population. Soon it will, I'm sure, but not yet.


> they may never be fully trustworthy.

So? Neither are humans. Neither is google search. Chatgpt doesn't write bug free code, but neither do I.

The question isn't "when will it be perfect". The question is "when will it be useful?". Or, "When is it useful enough that you're not employable?"

I don't think it's so far away. Everyone I know with a spark in their eye has found weird and wonderful ways to make use of chatgpt & claude. I've used it to do system design, help with cooking, practice improv, write project proposals, teach me history, translate code, ... all sorts of things.

Yeah, the quality is lower than that of an expert human. But I don't need a 5 star chef to tell me how long to put potatoes in the oven, make suggestions for characters to play, or listen to me talk about encryption systems and make suggestions.

It's wildly useful today. Seriously, anyone who says otherwise hasn't tried it or doesn't understand how to make proper use of it. Between my GF and me, we average about 1-2 conversations with chatgpt per day. That number will only go up.


I find it very interesting that the primary rebuttals from the “converted” to people criticizing LLMs tend to be implicit suggestions that the critique is rooted in old-fashioned thinking.

That’s not remotely true. I am an expert, and it’s incredibly clear to me how bad LLMs are. I still use them heavily, but I don’t trust any output that doesn’t conform to my prior expert knowledge, and they are constantly wrong.

I think what is likely happening is that many people aren’t experts in anything, but the LLM makes them feel like they are, and they don’t want that feeling to go away, so they get irrationally defensive at cogent criticism of the technology.

And that’s all it is, a new technology with a lot of hype and a lot of promise, but it’s not proven, it’s not reliable, and I do think it is messing with people’s heads in a way that worries me greatly.


I don't think you understand the value proposition of chatgpt today.

For context, I'm an expert too. And I had the same experience as you. When I asked it questions about my area of expertise, it gave me a lot of vague, mutually contradictory, nonsensical and wrong answers.

The way I see it, ChatGPT is currently a B+ student at basically everything. It has broad knowledge of everything, but it's missing deep knowledge.

There are two aspects to that to think about: First, it's only a B+ student. It's not an expert. It doesn't know as much about family law as a family lawyer. It doesn't know as much about cardiology as a cardiologist. It doesn't know as much about the rust borrow checker as I do.

So LLMs can't (yet) replace senior engineers, specialist doctors, lawyers or 5 star chefs. When I get sick, I go to the doctor.

But it's also a B+ student at everything. It doesn't have depth, but it has more breadth of knowledge than any human who has ever lived. It knows more about cooking than I do. I asked it how to make crepes and the recipe it gave me was fantastic. It knows more about Australian tax law than I do. It knows more about the American Civil War than I do. It knows better than I do what kind of motor oil to buy for my car. Or the norms and taboos in posh British society.

For this kind of thing, I don't need an expert. And lots of questions I have in life - maybe most questions - are like that!

I brainstormed some software design with chatgpt voice mode the other day. I didn't need it to be an expert. I needed it to understand what I was saying and offer alternatives and make suggestions. It did great at that. The expert (me) was already in the room. But I don't have encyclopedic knowledge of every single popular library in cargo. ChatGPT can provide that. After talking for a while, I asked it to write example code using some popular rust crates to solve the problem we'd been talking about. I didn't use any of its code directly, but it saved me a massive amount of time getting started with my project.

You're right in a way. If you're thinking of chatgpt as an all-knowing expert, it certainly won't deliver that (at least not today). But the mistake is thinking it's useless as a result of its lack of expertise. There are thousands and thousands of tasks where "broad knowledge, available in your pocket" is valuable already.

If you can't think of ways to take advantage of what it already delivers, well, pity for you.


I literally said I do use it, often.

But just now had a fairly frequent failure mode: I asked it a question and it gave me a super detailed and complicated solution that a) didn’t work, and b) required serious refactoring and rewriting.

Went to Google, found a Stack Overflow answer, and it turns out I needed to change a single line of code, which was my suspicion all along.

Claude was the same, confidently telling me to rewrite a huge chunk of code when a single line was all that was needed.

In general Claude wants you to write a ton of unnecessary code, ChatGPT isn’t as bad, but neither writes great code.

The moral of the story is that I knew the gpt/claude solutions didn’t smell right, which is why I tried Google. If I didn’t have a nose for bad code smells I’d have done a lot of utterly stupid things, screwed up my code base, and still not have solved my problem.

At the end of the day I do use LLMs, but I’m experienced, so it’s a lot safer than it is for a non-experienced person. That’s the underlying problem.


Sure. I'm not disagreeing about any of that.

My point is that even now, you're only talking about using chatgpt / claude to help you do the thing you already know how to do (programming). You're right of course. It's not currently as good at programming as you are.

But so what? The benefit these chat bots provide is that they can lend expertise for "easy", common things that we happen to be untrained at. And inevitably, that's most things!

Like, ChatGPT is a better chef than I am. And a better diplomat. A better science fiction writer. A better vet. And so on. It's better at almost every field you could name.

Instead of taking advantage of the fields where it knows more than you, you're criticising it for being worse than you at your one special area (programming). No duh. That's not how it provides the most value.


Sorry my point isn’t clear: the risk is you are being confidently led astray in ways you may not understand.

It’s like false memories of events that never occurred, but false knowledge - you think you have learned something, but a non-trivial percent of it, that you have no way of knowing, is flat out wrong.

It’s not a “helpful B+ student” for most people, it’s a teacher, and people are learning from it. But they are learning subtly wrong things, all day, every day.

Over time, the mind becomes polluted with plausible fictions across all types of subjects.

The internet is best when it spreads knowledge, but I think something else is happening here, and I think it’s quite dangerous.


Ah, thank you for clarifying. Yes, I agree with this. Maybe it's like a B+ student confidently teaching the world what it knows.

The news has an equivalent: the Gell-Mann amnesia effect, where people read a newspaper article on a topic they're an expert on and realise the journalists are idiots. Then they suddenly forget they're idiots when they read the next article outside their expertise!

So yes, I agree that it's important to bear in mind that chatgpt will sometimes be confidently wrong.

But I counter with: usually, remarkably, it doesn't matter. The crepe recipe it gave produced delicious crepes. If it was a bad recipe I would have figured that out with my mouth pretty quickly. I asked it to brainstorm weird quirks for D&D characters to have, some of the ideas it came up with were fabulous. For a question like that, there isn't really such a thing as right and wrong anyway. I was writing rust code, and it clearly doesn't really understand borrowing. Some code it gives just doesn't compile.
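To make the borrowing point concrete, here's a sketch (my own illustration, not actual ChatGPT output) of the kind of error generated Rust often trips over:

```rust
fn main() {
    let mut items = vec![1, 2, 3];

    // Generated code often tries something like this, which does not compile:
    // let first = &items[0];   // immutable borrow of `items` starts here
    // items.push(4);           // error[E0502]: cannot borrow `items` as mutable
    // println!("{first}");     // ...because the immutable borrow is still live

    // A compiling version copies the value out before mutating:
    let first = items[0];
    items.push(4);
    println!("first = {first}, len = {}", items.len());
}
```

The nice thing about this failure mode is that it's self-revealing: the compiler rejects the bad version outright, so you can't silently ship it.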

I'll let you in on a secret: I couldn't remember the name of the gell-mann amnesia effect when I went to write this comment. A few minutes ago I asked chatgpt what it was called. But I googled it after chatgpt told me what it was called to make sure it got it right so I wouldn't look like an idiot.

I claim most questions I have in life are like that.

But there are certainly times when (1) it's difficult to know if an answer is correct or not and (2) believing an incorrect answer has large, negative consequences. For example: computer security. Building rocket ships. Research papers. Civil engineering. Law. Medicine. I really hope people aren't taking chatgpt's answers in those fields too seriously.

But for almost everything else, it simply doesn't matter that chatgpt is occasionally confidently wrong.

For example, if I ask it to write an email for me, I can proofread the email before sending it. The other day I asked it for scene suggestions in improv, and the suggestions were cheesy and bad. So I asked it again for better ones (less cheesy this time). I ask for CSS and the CSS doesn't quite work? I complain at it and it tries again. And so on. This is what chatgpt is good for today. It is insanely useful.


The problem, at least for me, is that I feel like the product offerings suggested to us in other comments (not Claude/ChatGPT, but the third party tools that are supposed to make the models better at code generation) either explicitly or implicitly market themselves as being vastly more capable than they are. Then, when I complain, it’s suggested that the models can’t be blamed (because they’re not experts) and that I’m using the tools incorrectly or have set my expectations too high.

It’s never the product or its marketing that’s at fault; only my own.

In my experience, the value proposition for ChatGPT lies in its ability to generate human language at a B+ level for the purposes of an interactive conversation; its ability to generate non-trivial code has proven to be terribly disappointing.



