
Context is important. Maybe that was said in the first 3 minutes of the briefing, and then came 30 minutes about the shocks. I would not assume the briefing was that thorough.

If anything, this makes the study more revealing and terrifying.

Basically, under the ill guidance of authority, people can become real monsters. That is the conclusion I got from it, and it is now even worse.


> people can become real monsters

It's being consistently verified in real time if you track current events.


> So the greatest physics, maths, poetry and pop music are done by people in their 20s.

I can, just from gut feeling, agree about the pop music. About math, I would cite the example of Gilbert Strang, who wrote many books at an advanced age, including one at age 86, and other publications well into his 70s. Another example (well, not math, but CS): Donald Knuth. I do not know what the overall statistics look like, but writing good books, even textbooks, does not seem to be a teenager's thing.


Serre is known for being active in old age as well

I had an interesting conversation with a guy at work last week. We were discussing some unimportant matter. The guy has pretty high self-esteem, and even though he was arguing, in his own words, “out of belief and guess,” while I was telling him I knew for a fact what I was talking about, I had a hard time because he wouldn't accept what I was saying. At some point he left, and came back with “Gemini says I'm right! So, no more discussion.” I asked what exactly he had asked. He: “I have a colleague who is arguing X; I'm sure it's Y. Who is right?!”

Of course he was right! By a long shot. I asked Gemini the same thing, but as a very open-ended question, and it answered basically what I was saying.

LLMs are pretty dangerous in confirming your own distorted view of the world.


I agree with your conclusion, but that's by design. The goal is not to tell people the truth (how would they even do that). The goal is to give the answer that would have come from the training data if that question were asked. And the reality is that confirmation is part of life. You may even struggle to stay married if you don't learn to confirm your wife's perspectives.

> The goal is to give the answer that would have come from the training data if that question were asked.

Or, more cynically, the goal is to give you the answer that makes you use the product more. Fine-tuning really is diverging the model from what's in the training set and towards what users "prefer".


The loss function is based on predicting the response from the training data, or on subsequent RLHF. The goal is usually to make money. Not only does the training data contain a lot of "you're absolutely right" nonsense, but that goal tends to push even more of it in the RLHF step.
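To make the first part concrete, the pretraining objective is essentially cross-entropy on the next token: the model is penalized by the negative log of the probability it assigned to the token that actually came next. This is a toy sketch with a hypothetical three-token vocabulary and made-up probabilities, not any real model's numbers:

```python
import math

def cross_entropy(probs, target_index):
    # Negative log probability the model assigned to the token
    # that actually appeared next in the training data.
    return -math.log(probs[target_index])

# Hypothetical model output over a tiny three-token vocabulary.
probs = [0.7, 0.2, 0.1]
loss_good = cross_entropy(probs, 0)  # model favored the actual next token
loss_bad = cross_entropy(probs, 2)   # model gave it low probability
assert loss_bad > loss_good
```

Minimizing this loss over a corpus full of agreeable replies makes agreeable replies more probable; RLHF then shifts the distribution further towards whatever raters (or users) reward.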

> You may even struggle to stay married if you don't learn to confirm your wife's perspectives.

I don't dispute that, but man, that is some shitty marriage. Even rather submissive guys are not happy in such a setup, not at all. Remember, it's supposed to be for life, or divorce/breakup; nothing in between.

A lifelong situation like that... why don't folks do more due diligence on the most important aspect of long-term relationships: personality match? It's usually not rocket science: observe behavior in conflicts; don't desperately appease in situations where one is clearly not to blame. Masks fall off quickly in heated situations, when people are tired, and so on. It's not perfect, but it's pretty damn good and covers >95% of the scenarios.


Not everyone has a supportive family or the requisite childhood / life experiences to do “due diligence”.

All this, and yet, people are so angered by the term "stochastic parrot".

I use LLMs every day. I use Claude and Gemini; they're great. But they are very elaborate autocomplete engines. I'm not really shaking off that impression of them despite daily use.


It's weird. It's literally what they are. It's a gigantic mathematical function that takes input and assigns probabilities to tokens.

Maybe they can also be smart. I'm skeptical that the current LLM approach can lead to human-level intelligence, but I'm not ruling it out. If it did, then you'd have human-level intelligence in a very elaborate autocomplete. The two things aren't mutually exclusive.
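The "gigantic mathematical function that takes input and assigns probabilities to tokens" can be sketched in a few lines: softmax turns the model's raw scores into a probability distribution, and generation is just sampling from it. The vocabulary and logit values below are made up for illustration; a real model does this over tens of thousands of tokens:

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution that sums to 1.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0, rng=random):
    # Scale logits by temperature, then draw one token at random
    # according to the resulting probabilities.
    probs = softmax([x / temperature for x in logits])
    return rng.choices(vocab, weights=probs, k=1)[0]

# Hypothetical scores over a toy three-word vocabulary.
vocab = ["right", "wrong", "unclear"]
logits = [2.0, 0.5, 0.1]
print(sample_next_token(vocab, logits))
```

Whether repeating that loop at scale amounts to "intelligence" is exactly the open question in the comments above; the mechanism itself is this simple.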


People are hung up on what they “really” are. I think it matters more how they interact with the world. It doesn't matter whether they are really intelligent or not, if they act as if they are.

Totally agreed. Although the difference between sounding intelligent and being intelligent is proving to be a bit troublesome.

Yes, it is. But those distinctions are going to be a lot less relevant with robotics. It won't matter if it's impatient or just acting impatient. Feels slighted, or just acts as if it feels slighted. Afraid, or just acting afraid. For better or for worse, we are modeling AI after ourselves.

I am hearing this term for the first time but I love it. It is novel and creates a picture. Exactly what Scott Adams says about labels used for persuasion. I usually say "highly trained autocomplete" in discussions at work, but I am going to say "stochastic parrot" from now on.

Oh, OK. You should google the term to see where it comes from. It's from someone who is essentially an anti-LLM activist, and it was meant as a slur. That's likely why people consider it one, due to its origins.

You can't make a "slur" against software. It isn't a person, it doesn't have feelings.

"stochastic parrot" describes what an LLM does, that it (like a parrot) generates coherent human language without understanding its meaning.

Being offended on behalf of software is weird.


> Being offended on behalf of software is weird.

Yes. I can really recommend this essay of PG about that:

https://paulgraham.com/identity.html


> You may even struggle to stay married if you don't learn to confirm your wife's perspectives

Nope. You picked the wrong wife if that is the situation you find yourself in. My partner and I accept each other's perspectives even if we disagree. I would never date a woman who can't accept that different opinions exist and that we will both sometimes be wrong.


>And the reality is that confirmation is part of life.

Sycophantic agreement certainly is, as is lying, manipulation, abuse, gaslighting.

Those aren't the good parts of life.

Those aren't the parts I want the machine to do to people on a mass scale.

>You may even struggle to stay married if you don't learn to confirm your wife's perspectives.

Sorry what?

The important part is validating the way someone feels, not "confirming perspectives".

A feeling or a perspective can be valid ("I see where you're coming from, and it's entirely reasonable to feel that way"), even when the conclusion is incorrect ("however, here are the facts: ___. You might think ___ because ____, and that's reasonable. Still, this is how it is.")

You're doing nobody a favor by affirming they are correct in believing things that are verifiably, factually false.

There's a word for that.

It's lying.

When you're deliberately lying to keep someone in a relationship, that's manipulation.

When you're lying to affirm someone's false views, distorting their perception of reality - particularly when they have doubts, and you are affirming a falsehood, with intent to control their behavior (e.g. make them stay in a relationship when they'd otherwise leave) -

... - that, my friend, is gaslighting.

This is exactly what the machine was doing to the colleague who asked "which of us is right, me or the colleague that disagrees with me".

It doesn't provide any useful information, it reaffirms a falsehood, it distorts someone's reality and destroys trust in others, it destroys relationships with others, and encourages addiction — because it maximizes "engagement".

I.e., prevents someone from leaving.

That's abuse.

That, too is a part of life.

>I agree with your conclusion, but that's by design

All I did was name the phenomena we're talking about (lying, gaslighting, manipulation, abuse).

Anyone can verify the correctness of the labeling in this context.

I agree with your assertion, as well as that of the parent comment. And putting them together we have this:

LLM chatbots today are abusive by design.

This shit needs to be regulated, that's all. FDA and CPSC should get involved.


> “I have a colleague who is arguing X, I’m sure is Y. Who is right?!”

This is why I've turned off Claude/ChatGPT's ability to use other conversations as context. I allow memories (which I have to check/prune regularly) but not reading other conversations, there is just too high of a chance of poisoning or biasing the context.

Once I switched to a new chat to confirm an assumption, and the LLM said "Yes, and your error confirms that..." but I hadn't sent the error to that chat. At that point I had to turn it off; I open a new chat specifically to get "clean" context. I wish these platforms would give more tools to toggle that, and offer "private" chats (no memories, no system-prompt edits) as well (some do, I know).

Obviously, context poisoning from other chats is not what happened in your case, but it's in the same "class" of issue, "leading the witness". I think about "leading the witness" _constantly_ while using LLMs. I often will not give it all the context or all of what I'm thinking, I want to see if it independently gets to the same place. I _never_ say "I'm considering X" when presenting a problem because I've seen it latch onto my suggestion too hard, too often.


It's more like insufficient emotional control is very dangerous. It's nothing new but I guess LLMs highlighted that problem a bit.

I would like to have a legal advisor based on that. At least for a first question, without paying a lawyer.

It would help make the ad less distracting, in some cases.

The line is fine. Even though I use GPS a lot, I still try to keep my ability to interpret maps and find my way.

Same with calculators: even though today they are dirt cheap, they are not allowed in school, and being able to do math without one is a valuable skill.

So maybe there are two groups of things: one where by using it you lose nothing, and one where you lose some valuable ability.


It's quite difficult to tell exactly the extent your life depends on technology.

How different do you think your life would be if the combine harvester did not exist?


Combines have had modern AI image-recognition cameras (the same family of technology as an LLM) in the base model for a few years now.

> It makes me wonder if there was some LLM help

I would bet there was


In Italian: finestra

I can attest to that. I installed panels on my house; they are not enough to cover my electricity needs, let alone gas (heating). Even if they were enough to cover my electricity needs, the upfront cost was more than the equivalent of the next 20 years of bills.

To be fair, many of the costs are due to high demand (artificial, because the government mandates installation) and lots of work to integrate into the national grid. But as things stand right now, it is not economically convenient (at least where I live), and from what I have heard, in other places it is not much different.

