Hacker News | nextos's comments

It should be better for the reasons explained in the article. Pure functions require no context to understand. If they are typed, it's even simpler. LLMs perform badly on code that has lots of state and complex semantics. Those are hard to track.

In fact, synthesis of pure Haskell powered by SAT/SMT (e.g. Hoogle, Djinn, and MagicHaskeller) was already of some utility prior to the advent of LLMs. Furthermore, pure functions are also easy to test given that type signatures can be used for property-based test generation.
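The point about deriving tests from type signatures can be sketched with a hand-rolled property-based check. This is a minimal illustration in Python (Haskell's QuickCheck or Python's `hypothesis` do this properly); the `merge` function and the generator are made up for the example:

```python
import random

def merge(xs, ys):
    """Pure function under test: merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    return out + xs[i:] + ys[j:]

def check_property(prop, gen, trials=200):
    """Run `prop` on random inputs from `gen`; return a counterexample or None."""
    for _ in range(trials):
        args = gen()
        if not prop(*args):
            return args
    return None

# Generator derived from the "signature": two sorted lists of ints.
gen = lambda: (sorted(random.sample(range(100), random.randint(0, 10))),
               sorted(random.sample(range(100), random.randint(0, 10))))

# Properties that follow from the specification alone; no hidden state to track.
is_sorted = lambda xs, ys: merge(xs, ys) == sorted(xs + ys)
length_ok = lambda xs, ys: len(merge(xs, ys)) == len(xs) + len(ys)

assert check_property(is_sorted, gen) is None
assert check_property(length_ok, gen) is None
```

Because the function is pure, the properties need no setup or mocking, which is exactly why this style of testing composes so well with synthesis.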

I think once all these components (LLMs, SAT/SMT, and lightweight formal methods) get combined, some interesting ways to build new software with a human-in-the-loop might emerge, yielding higher quality artifacts and/or enhancing productivity.


Wouldn't a fair counterargument be that LLMs have been trained on way less functional code, though?

Like, they are trained on a LOT of JS code -> good at JS. Way less functional code -> worse performance?


You can write functional-style code in many languages, as I have in JS and occasionally Python to great benefit.

For sure. I write functional-style code in C#, but it is not the same thing as writing OCaml or F#.

That's a very fair point. There are some publications showing lower performance for languages with less training data. I imagine it also applies to different paradigms. Most training code will be imperative and of lower quality.

I think LLMs are great at compression and information retrieval, but poor at reasoning. They seem to work well with popular languages like Python because they have been trained with a massive amount of real code. As demonstrated by several publications, on niche languages their performance is quite variable.

I used to find it better to shortcut the AI by asking it to write python to do a task. Claude 4.6 seems to do this without prompting.

Edit: working on a lot of legacy code that needs boring refactoring, which Claude is great at.


You have a point but current LLM architectures in particular are very fragile to data poisoning [1,2].

[1] https://www.anthropic.com/research/small-samples-poison

[2] https://arxiv.org/abs/2510.07192


Yes, there are quite a few anti-AI projects. https://old.reddit.com/r/badphilosophy/wiki/index

No idea why you're being downvoted. We can't yet even demonstrate that LLMs will withstand training on their own output as they pollute the Internet.

This is very impressive, top labs doing research often don't have experimental designs that are this elaborate. Was the TCR and BCR-seq you conducted helpful to design cell therapies, neoantigen vaccines, and monitor progress?

Given that you carry the HLA-B*27:05 allele, you might have been blessed by being predisposed to a better response. But probably you want to keep an eye on future autoimmunity issues. Talking from experience...


Thanks for the warning, I hope that it wasn't a personal experience for you.

Thanks for the compliment about the elaborate design. I think that when you make something for one or a few patients it is easier to be more elaborate, even with the same knowledge and equipment.

Maybe the TCR and BCR-seq was most helpful for mRNA design and effectiveness monitoring, but hopefully someone else on my team will answer that better.


The TCR sequencing has been helpful for downselecting TCRs for a TCR-based cell therapy, and for monitoring response to various immune therapies (including the vaccines).

Interesting, thanks for your replies.

You should consider publishing a patient case report somewhere, as I believe there are lots of valuable conclusions to be extracted from your work.




> the reason it has been limited to those cases is drug development, today, is constrained by commercialization.

That's a good observation, but I think it's an incomplete picture. Another important constraint is often regulatory inertia and historical baggage.

The UK pioneered small classical and adaptive trials using Bayesian methods, and there were some promising results. A lot of modern Bayesian methodology was, in fact, developed at the MRC BSU Cambridge with this goal in mind. For example, the probabilistic programming language BUGS (1989).

Given that most drugs fail, the industry is highly incentivized to use Bayesian methods to fail faster. These models allow for more rapid dose-finding and the ability to distinguish promising leads using interim data, which is vital given the massive cost of any trial, especially late-stage failures.
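The "fail faster" logic can be made concrete with a toy interim analysis. This is a hedged sketch, not any actual trial protocol: a conjugate Beta-Binomial posterior on the response rate, with a Monte Carlo estimate of the probability of beating a target rate, and an illustrative futility threshold I chose for the example:

```python
import random

def posterior_prob_exceeds(successes, n, target=0.3, prior=(1, 1), draws=20000):
    """Monte Carlo estimate of P(response rate > target | data) under a
    Beta(prior) prior with a Binomial likelihood (conjugate posterior)."""
    a = prior[0] + successes
    b = prior[1] + (n - successes)
    hits = sum(random.betavariate(a, b) > target for _ in range(draws))
    return hits / draws

random.seed(0)
# Interim look: 4 responders out of 30 patients enrolled so far.
p = posterior_prob_exceeds(4, 30)
# Illustrative futility rule: stop early if the posterior probability of
# beating the 30% target response rate has fallen below 5%.
stop_for_futility = p < 0.05
```

With 4/30 responders the posterior mass sits well below 30%, so the rule would stop the arm early rather than carry it through to an expensive late-stage failure.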

But for Bayesian methods to make a dent, they'd need to be applied to a large number of trials, and change doesn't happen overnight. Lots of big pharma players, e.g. GSK, are becoming interested in moving to Bayesian methods in order to leverage prior information and work better within small-data regimes.


You don't need an app if you don't want one.

On the command line, oathtool lets you calculate a TOTP.

But it's maybe a bit less secure if you use the same machine for logging in.
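For the curious, the whole TOTP computation fits in a few lines of Python standard library. This is an illustration of RFC 6238 (HMAC-SHA1, 30-second steps), not a substitute for an audited implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30, now=None):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if now is None else now) // period)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59s, 8 digits.
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, now=59) == "94287082"
```

The caveat above still applies: if the secret lives on the same machine you log in from, you've collapsed the second factor into the first.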


The problem with mRNA vaccines for cancer is effectiveness. Vaccines already work well for prevention of relapses in e.g. tumors that have been surgically removed.

They might also be great combined with early-stage detection via ctDNA.

But in late-stage patients, the effectiveness is limited because the host immune system is compromised.

Several landmark mRNA cancer vaccine trials by BioNTech and others have pointed in this direction.

In vivo reprogramming of T cells might be the next frontier. In fact, the BioNTech founders are moving to a new venture, but it's unclear what their thesis is.


CAR-T recipient here! It's been a cure for some bleak cancers. Very much a game changer, with seemingly a lot more to uncover about it before we move on to something else. Unfortunately for me, mine resides in bone, which is hard to traverse.

Yes, and the EU, due to this fragmentation, seems to be a fertile playground for all this unacceptable interference by foreign powers.

Actually, no. The decentralization of power means that it takes a lot more effort to subvert each country individually, rather than propping up a few candidates for the entire region like they do in the US.

They only need one country for veto rights.

The EU is perfectly capable of collaborating even when it can't reach full consensus or when it wants to include peripheral states without them becoming full members. See for example the Schengen area, Eurozone, European Economic Area, and more recently (and specifically to circumvent member state vetos) when the enhanced cooperation procedures were invoked to lend money to Ukraine.

Exactly, see what is happening in Hungary.

Controlling Hungary is enough to veto some support for Ukraine.


That’s true, but that fragmentation is also what limits the propagation of fractures. You can think of it as sandboxing.

A deal with foreign intelligence is a deal with the devil that comes with a lifetime of subservience. And subservience to foreign powers is a greater evil than your usual internal corruption. At least the locally corrupt in a democracy have some interest in things going somewhat well in their country. The foreign actors only care about theirs.


No because any attempt at interference would in that case trigger article 5 of NATO.
