Hacker News | roncesvalles's comments

As an ex-vegetarian, I never understood the premise of the Impossible/Beyond stuff, because when they launched there was already a really good soy burger in the supermarket frozen aisle that had excellent macros, was priced reasonably, and tasted great.

I never thought the notion of "let's make the veggie burger taste like meat" made any sense.


Anyone who talks up pair programming has either never done it or only just started doing it last week.

My sense is that there is a narrow slice of software developers who genuinely do flourish in a pair programming environment. These are people who actually work through their thoughts better with another person in the loop. They get super excited about it, make the common mistake of "if it works for me, it will work for everybody," and shout it from the rooftops.

Then there are the people who program best in a fugue state and the idea of having to constantly break that to transform their thoughts into words and human interaction is anathema.

I say this as someone who just woke up in the wee hours of the morning, when nobody else is around, so I can get some work done (:


I like pair programming. Not every time, or even every day, but to shadow a junior for a few hours a week, or to work with another senior on a complex/new subject? It's fine.

I like pair programming for certain problems: things that are genuinely hard, pushing the boundaries of both participants' knowledge and abilities. In those scenarios, two minds can sometimes fill in each other's gaps much more efficiently than either can working alone.

>shift the reviews far to the left, and call them code design sessions instead, and you raise problems on dailys, and you pair programme through the gnarly bits

That's hell in one sentence.


Arguably the PM role only exists because SWEs don't want to do PM work, and the industry acquiesced to this because SWEs are in very short supply: if you could hire a layperson (sorry) to take a few hours of non-technical work off a SWE's plate, it was worth it.

In a (hypothetical; not quite there yet) world where SWEs are in surplus, there is no reason to have PMs.

The really eye-popping efficiency gains from LLM coding won't come from doing the coding faster but from consolidating the PM, SWE, and QA/SDET roles under the same person. Then you'll start seeing startup/indie level productivity-per-person inside large organizations. Imagine Google is like 50,000 Pieter Levels.


The concept of a large organization doesn’t even make sense in this model. How do you make decisions? How do you coordinate? What is Google when you have 50,000 individual silos?

Decisions are less costly. When a SWE can take 4 days to do what would have cost 6 months, the calculus of making sure you're doing the right thing before executing goes away.

FWIW I find LLMs to be excellent therapists.

The commercial solutions probably don't work because they don't use the best SOTA models, and/or they sully the context with all kinds of guardrails and role-playing nonsense. But if you just open a new chat window in your LLM of choice (set to the highest-thinking paid-tier model), it gives you truly excellent therapeutic advice.

In fact, in many ways the LLM therapist is actually better than the human one, because e.g. you can dump a huge, detailed rant into the chat and it will actually listen to (read) every word you said.


Please, please, please don’t make this mistake. It is not a therapist. At best, it might be a facsimile of a life coach, but it does not have your best interests in mind.

It is easy to convince and trivial to make obsequious.

That is not what a therapist does. There’s a reason they spend thousands of hours in training; that is not an exaggeration.

Humans are complex. An LLM cannot parse that level of complexity.


You seem to think therapists are only for those in dire straits. Yes, if you're at that point, definitely speak to a human. But there are many ordinary things for which "drop-in" therapist advice is also useful. For me: mild road rage, social anxiety, processing embarrassment from past events, etc.

The tools and reframing that LLMs have given me (Gemini 3.0/3.1 Pro) have been extremely effective and have genuinely improved my life. These things don't even cross the threshold to be worth the effort to find and speak to an actual therapist.


Which professional therapist does your Gemini 3.0/3.1 Pro model see?

Do you think I could use an AI therapist to become a more effective and much improved serial killer?


I never said therapists were only for those in crisis; that is a misreading of my argument entirely.

An LLM cannot parse the complexity of your situation. Period. It is literally incapable of doing that, because it does not have any idea what it is like to be human.

Therapy is not an objective science; it is, in many ways, subjective, and the therapeutic relationship is by far the most important part.

I am not saying LLMs are not useful for helping people parse their emotions or understand themselves better. But that is not therapy, in the same way that using an app built for CBT is not, in and of itself, therapy. It is one tool in a therapist’s toolbox, and will not be the right tool for all patients.

That doesn’t mean it isn’t helpful.

But an LLM is not a therapist. The fact that you can trivially convince it of things that are absolutely untrue is, to give one simple example, precisely why.


As you said earlier, therapists are (thoroughly) trained in how best to handle situations. Just 'being human' (and thus empathizing) may not be as big a part of the job as you seem to believe.

Training LLMs we can do.

Though it might be important for the patient to believe that the therapist is empathizing, so that may give AI therapy an inherent disadvantage (depending on the patient's view of AI).


Socialization with other humans has so many benefits for happiness, mental health, and longevity. Conversely, interaction with LLMs often leads to AI psychosis and harms mental health. IMO, this is pretty strong evidence that interaction with LLMs is not similar to socialization with real humans, and a pretty good indicator that LLM “therapy” is significantly less helpful or even harmful than human-driven therapy.

Precisely.

> Just 'being human' (and thus empathizing) may not be such a big part of the job as you seem to believe.

The word “just” is not in my comment anywhere. Being human is necessary, but not sufficient.

And no, you cannot train an LLM to be human.

An LLM is not a therapist. Please do not confuse the two.

You cannot train an LLM on how to be human.


While I agree with you, I also find that an LLM can help me organize my thoughts and reach realizations I just didn't get to on my own, because I hadn't verbalized what I was thinking and feeling. It's definitely not a substitute for human interaction and relationships, which are fulfilling in many, many ways LLMs are not, but LLMs can still be helpful as long as you exercise your critical thinking skills. My preference remains to talk to a friend, though.

EDIT: seems like you made the same point in a child comment.


Yeah, I agree with all of that. A friend built an “emotion aware” coach, and it is extremely useful to both of us.

But he still sees a therapist, regularly, because they are not the same and do not serve the same purpose. :)


I think prompt engineering is obsolete at this point, partly because it's very hard to do better than just directly stating what you want. Asking for too much tone modification, role-playing or output structuring from LLMs very clearly degrades the quality of the output.

"Prompt engineering" is a relic of the early hypothesis that how you talk to the LLM is gonna matter a lot.


You don't have to believe in it. You just have to believe someone else will believe in it and be willing to pay a higher price.

>Many of these posts come from people who were all in on crypto companies a few years ago.

This matches my observation exactly. There seems to be a certain "type" of person like this. And it's not just people looking for work.

My guess is they either have very weak critical thinking, a deeply cynical view of the world in which lies and exaggeration are the only way to make it, or something more pathological (narcissism, etc.).


The "type" is simply the get-rich-quick schemers.

I have a relative who was late to crypto, late to drop shipping, late to carbon credits, but is now absolutely all-in on AI as his ticket out. It honestly depresses the hell out of me trying to talk to him because everything is about money and getting rich.

People like this don't care about underlying technologies or learning past the most basic surface level of understanding.


There are actually cases where Java (the HotSpot JVM) runs faster than the same logic written in C/C++, because the JVM does dynamic analysis at runtime and selectively JIT-compiles hot code paths to machine code using that profile information.
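A minimal sketch of the kind of case meant here (class names are hypothetical): HotSpot profiles a virtual call site at runtime, observes that only one implementation is ever loaded, and can devirtualize and inline the call inside the hot loop, an optimization an ahead-of-time C/C++ compiler generally can't apply to an indirect call without profile-guided optimization.

```java
// Hypothetical example: a monomorphic virtual call in a hot loop.
// After enough iterations, HotSpot's JIT typically compiles
// totalArea() to machine code with s.area() inlined, because
// profiling shows the call site only ever sees Square.
interface Shape {
    double area();
}

final class Square implements Shape {
    private final double side;

    Square(double side) { this.side = side; }

    public double area() { return side * side; }
}

public class JitDemo {
    // Hot method: the interface call here is a candidate for
    // speculative devirtualization and inlining by the JIT.
    static double totalArea(Shape s, int n) {
        double total = 0.0;
        for (int i = 0; i < n; i++) {
            total += s.area();
        }
        return total;
    }

    public static void main(String[] args) {
        Shape s = new Square(2.0);
        System.out.println(totalArea(s, 100_000)); // prints 400000.0
    }
}
```

Whether this actually beats the equivalent C/C++ depends heavily on the workload; it's most plausible for allocation-heavy or polymorphic code, not tight numeric kernels.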

And the worst part is, these people don't even use the flagship thinking models; they use the default fast ones.
