Hacker News | krupan's comments

Maybe because they don't want to lose money even faster than Anthropic is?

Is that also why they allow their subscriptions to be used in OpenClaw and 3rd-party harnesses?

It has indeed been baffling. As I dig deeper into what developers are doing with AI, it's basically like what I did customizing and tweaking emacs when I was younger (and fine, I'll admit I still do it sometimes). They are having so much fun playing with these new tools that they aren't really noticing how little the new tools are actually helping them.

And then you find out the dipshit who didn't keep the comments up to date was you all along.

It wasn't.

Very soon, and at this point I'm not sure even that would cure the delusions of the few who practically worship LLMs


If you have to do all that, then what's the point of the AI? I'm joking, but I'm afraid many others say the same thing 100% seriously


As an article that was here recently claims, every verification you do in a chain increases the total time of your work by an order of magnitude. So, it's only worth optimizing any productive task if you already removed most verifications (toy numbers below).

Now, some people claim that you need to improve the reliability of your productive tasks so you can remove the verifications and be faster. Those people are, of course, a bunch of cowardly Luddites.
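
To put toy numbers on the article's claim (the 10x-per-verification factor is its assertion, not something I've measured):

    # Toy model: each chained verification multiplies total task
    # time by roughly an order of magnitude (the article's claim).
    def total_time(base_minutes, verifications):
        return base_minutes * 10 ** verifications

    print(total_time(1, 0))  # 1-minute task, no checks: 1 minute
    print(total_time(1, 3))  # same task behind 3 chained checks: 1000 minutes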


At least pre-LLM automation was written by a careful human whose job was on the line, and was deterministic.


It's more like: the LLM "hallucinated" (I hate that term) and automatically posted the information to the forum. It sounds like the human didn't get a chance to reason about it. At least, not the original human who asked the LLM for an answer.


I'm not in AI, but is what's happening here that it's building output from the long tail of its training data? Instead of branching down the more common probability paths, did something in this interaction send it into the data wilderness?

So I asked AI to give it a good name, and it said “statistical wandering” or “logical improv”.
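
For what it's worth, here's a minimal sketch of that long-tail intuition, assuming plain temperature sampling (the token list and logits are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up next-token distribution: "head" tokens first, "tail" tokens last.
    tokens = ["the", "a", "cat", "flange", "zugzwang"]
    logits = np.array([5.0, 3.0, 1.0, -2.0, -4.0])

    def sample(temperature):
        # Higher temperature flattens the distribution, so rare
        # long-tail tokens get picked more often.
        p = np.exp(logits / temperature)
        p /= p.sum()
        return rng.choice(tokens, p=p)

    print([sample(0.7) for _ in range(8)])  # mostly head tokens
    print([sample(3.0) for _ in range(8)])  # the tail starts showing up

Whether that's what actually happened here is anyone's guess, but it fits the "statistical wandering" framing.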


If you don't like "hallucinate", try "bullshit". [NB: bullshit is a technical term; see https://en.wikipedia.org/wiki/On_Bullshit]

https://www.psypost.org/scholars-ai-isnt-hallucinating-its-b...


That is my preferred term, but it seems to derail discussions that might have otherwise been productive (might... that's the hope I have).


Read TFA. It's not "Someone vibe coded too hard and leaked data"


I hit back, clicked the link again, and it let me through


"A human, however, might have done further testing and made a more complete judgment call before sharing the information"

Because a human would have been fired for posting something that incorrect and dangerous


But funny enough, the person who was responsible for setting up the bot will likely face no repercussions. In fact, they will probably be rewarded for transitioning their team's workflows to AI.


A machine doesn’t need food, leisure time, or vacations. It doesn’t care.

It also doesn’t care.


I mean, only if it leads to embarrassment right off the bat.

If there is a year or two between writing your security fuck-up and it being discovered, the likelihood of repercussions drops significantly.

