> That is usually configurable at the terminal level
And if you use Emacs, it's configurable at the buffer level. [1] This lets me build a version of Iosevka where `~=` and `!=` each get ligaturized, but only in the major modes where they are meaningful, avoiding any confusion.
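For the curious, the per-mode side of that can be done with the third-party ligature.el package. This is only a rough sketch of the idea (the mode names and ligature lists here are illustrative, not necessarily how I set it up), and the active font still has to ship glyphs for these sequences:

```elisp
;; Rough sketch using the third-party ligature.el package
;; (needs Emacs 27+ built with HarfBuzz). Each major mode gets
;; its own list of character sequences to compose into ligatures.
(use-package ligature
  :config
  ;; Lua spells "not equal" as ~= ...
  (ligature-set-ligatures 'lua-mode '("~=" "<=" ">="))
  ;; ... while C-like modes use != instead.
  (ligature-set-ligatures 'c-mode '("!=" "<=" ">="))
  (global-ligature-mode t))
```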
No need to rely on app-specific configs. You can disable it globally in your fontconfig. For example, something along the following lines should disable ligatures in the Cascadia Code font, since (like most coding fonts) it implements them as the `calt` contextual-alternates OpenType feature:
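```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <!-- Sketch: turn off contextual alternates (calt), the feature
       Cascadia Code uses for its programming ligatures. -->
  <match target="font">
    <test name="family" compare="contains">
      <string>Cascadia Code</string>
    </test>
    <edit name="fontfeatures" mode="append">
      <string>calt off</string>
    </edit>
  </match>
</fontconfig>
```

(This would go in `~/.config/fontconfig/fonts.conf`, and it only affects applications that take their OpenType feature settings from fontconfig, e.g. GTK/Pango-based ones.)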
I'm not either. I think it may look "cool" visually, but when trying to work with code that has those in it, it feels odd: it reads like a single character even though it's not, and it just breaks the flow.
Because most of those who have commented so far dislike ligatures, I must present a counterpoint, to reduce the sampling bias.
Some people like ligatures and some do not, but this does not really matter, because any decent text editor or terminal emulator has a setting to enable or disable them.
Any good programming font must therefore include ligatures: that keeps both kinds of users happy, since those who like them can enable them and those who dislike them can leave them off.
I hate the straitjacket that ASCII forces on programming languages. It is the root cause of most of the ambiguous grammars that complicate the parsing of programming languages and increase the probability of bugs, and it has also forced the replacement of traditional mathematical symbols with less appropriate characters.
Using Unicode in source programs is the best solution, but when you have to use legacy programming languages in a professional setting, where a custom preprocessor would be frowned upon, fonts with ligatures are still an improvement over plain ASCII.
A coding font is supposed to help you distinguish between characters, not confuse them for each other. Also, ASCII ligatures usually look worse than the proper Unicode character they are supposed to emulate. The often indecisive form they take (glyphs rearranged to resemble a different character, but still composed of original glyph shapes; weird proportions and spacing due to the font maintaining the column width of the separate ASCII code points) creates a strong uncanny valley effect. I wouldn't mind having "≤", "≠" or "⇒" tokens in my source code, but half-measures just don't cut it.
The simplest refutation of your point of view is this: who or what is responsible if the submitted work is wrong?
It will always be the person’s, never the computer’s. Conveniently, AI always acts as if it has no skin in the game… because it literally and figuratively doesn’t… so people who treat it as if it does should be penalized.
You sound like someone who has literally zero understanding as to why that is a ridiculous comparison.
There are a thousand and one ways that I participate when building something with LLM assistance. Everything from ORIGINATING AN IDEA TO BEGIN WITH, to working on a thorough spec for it, to ensuring tests are actually valid, to asking for specific designs like hexagonal design, to specific things like benchmarks... literally ALL OF THE INITIATIVE IS MINE, AND ALL OF THE SUCCESS/FAILURE CONSEQUENCES ARE MINE, AND THAT IS ULTIMATELY ALL THAT MATTERS
Please head towards a different career if you now have a stupid and contrived excuse not to continue working with the machines, because you sound like a whining child
And you're not answering the question, because you know it would end your point: WHO OR WHAT IS RESPONSIBLE IF THE CODE SUCCEEDS OR FAILS?
I started working in the industry when you were able to buy a Lisp Machine new and have been studying AI even longer, and I’ve been very successful in it. I not only know what I’m talking about, I have the experience to back it up.
You sound like someone who’s deeply in denial about exactly how the LLM plagiarism machines work. You really do sound like a student defending themselves against a plagiarism charge by asserting that since they did the work of choosing the text to put into their essay and massaging the grammar so it fit, nobody should care where it came from.
By that definition, every single human who wrote a paper after reading a source document is a “plagiarism machine”
and I’m 53 and well remember Symbolics from freshman year at Cornell, in fact my application essay to it was about fuzzy logic (AI-tangential) and probably got me in, so I too am quite familiar
i’m also quite good at debate. the flaw in your logic is that plagiarism requires accountability and no machine can be accountable, only the human that used it, ergo, it is still the work of the human, because the human values, the human vets, the human initiates, and the human gains or loses based on the combined output, end of story; accelerated thought is still thought, and anyway, if a machine can replicate thought, then it wasn’t particularly original to begin with
You not realizing how ridiculous this is, is exactly why half of all devs are about to get left behind.
Like, this should be enshrined as the quintessential “they simply, obstinately, perilously, refused to get it” moment.
Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
> Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
Well that day doesn't appear to be coming any time soon. Even after years of supposed improvements, LLMs make mistakes so frequently that you can't trust anything they put out, which completely negates any time savings from not writing the code.
1) Most people still don't use TDD, which absolutely solves much of this.
2) Most people end up leaning too heavily on the LLM, which, well, blows up in their face.
3) Most people don't follow best practices or designs, which the LLM absolutely does NOT know about NOR does it default to.
4) Most people ask it to do too much and then get disappointed when it screws up.
Perfect example:
> you can't trust anything they put out
Yeah, that screams "missing TDD that you vetted" to me. I have yet to see it try to pass a test that I've vetted in anything but the correct way (at least in the past two months). Learn how to be a good dev first.
> no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
No one is going to care about anyone’s painstaking avoidance of chlorofluorocarbons if it takes ten times as long to style your hair with imperceptibly less ozone hole damage.
This is a non-argument. All of the cloud LLMs are going to move to things like micro-nuclear power. And the scientific advances AI might enable may also help avoid downstream problems from the carbon footprint.
Yep, that’s why my forks of all their libraries with bugs fixed, such as https://github.com/pmarreck/zigimg/commit/52c4b9a557d38fe1e1..., will never ever go back to upstream, just because an LLM did it. Lame, but oh well, their loss. Also, this is dumb because anyone who wants fixes like this will have to find a fork like mine that has them, which is an increased maintenance burden.
The PR doesn't disclose that "an LLM did it", so maybe the project allowed a violation of their policy by mistake. I guess they could revert the commit if they happen to see the submitter's HN comment.
Dunno, but a commenter already noted that some projects are starting to say "No LLM-generated PRs, but we'll accept your prompt", and another person answered that they'd seen that too.
Hugely unpopular opinion on HN, but I'd rather use code that is flawed while written by a human, versus code that has been generated by a LLM, even if it fixes bugs.
I'd gladly take a bug report, sure, but then I'd fix the issues myself. I'd never allow LLM code to be merged.
Because human errors are, well, human. And producing code that contains those errors is a human endeavor. It rests on years, even decades, of learning. Mistakes were made, experience was gained, skills were improved. Reasoning by humans is relatable.
Generating slop using LLMs takes seconds, has no human element, no work goes into it. Mistakes made by an LLM are excused without sincerity, without real learning, without consequence. I hate everything about that.
This is nonsense. There's plenty of work that goes into it. In fact, if no human work goes into it, then it is unlikely to pass human muster/judgment. It is just a tool for accelerated work, like literally every technological advance before it, but hey, you can go continue banging away at your loom making bespoke textiles, no one's gonna stop you.
For the parent, there's intangible value in knowing that it was written by a human. From what I read in your comment, you see code more as a means to an end. I think I understand where the parent is coming from. Writing code myself and accomplishing what I set out to build sometimes feels like a form of art, and knowing that I built it gives me a sense of accomplishment. And gives me energy. Writing code solely as a means to an end, or letting it be generated by some model, doesn't give that same energy.
This thinking has nothing to do with not caring about being a good teammate or the business. I've no idea why you put that on the same pile.
Sure, but back in reality no you’re not? No more than any other contributor?
If I want to use an auto-complete then I can, and I will? Restricting that is as regressive as a project trying to specify that I write code from a specific country or… standing on my head.
Sure, if they want me to add a “I’m writing this standing on my head” message in the PR then I will… but I’m not.
No, you can't. See, that's where you are just wrong: when you don't respect the boundaries set by an open source project you want to contribute to, you are a net negative.
Restricting this is their right, and it is not for you to attempt to overrule that right. Besides the fact that you do not foresee the consequences, it also makes you an asshole.
They're not asking for you to write standing on your head, they are asking for you to author your contributions yourself.
They are asking me to author my contributions in a way that they approve of. The essence of the request is the same as asking someone to author them whilst standing on their head.
Except they don’t, won’t and can’t control that: the very request is insulting.
I’ll make a change any way I choose, upright, sideways, using AI. My choice. Not theirs.
Their choice is to accept it or reject it based purely on the change itself, because that’s all there is.
If you’re going to lie and say there was no LLM involved, what else are you going to lie about? Copying code from another codebase with incompatible license terms, perhaps?
I would say people should be wary of any contributions whatsoever from a filthy fucking liar.
Nothing? Everything? Does it fucking matter? Assigning trust across a boundary like this is stupid, and that’s my point.
Oh, would you just accept my blatantly, verbatim copied-from-another-codebase-and-relicensed PR just because I said “I solemnly swear this is not blatantly, verbatim copied from another codebase and relicensed”?
That’s on you for stupidly assigning any trust to the author of the change. It’s the internet: nobody knows you’re a dog.
> Oh, would you just accept my blatantly, verbatim copied-from-another-codebase-and-relicensed PR just because I said “I solemnly swear this is not blatantly, verbatim copied from another codebase and relicensed”?
At that point you've proven intention, meaning you'll get the chance to argue your viewpoint in front of a judge.
Many major projects now require a signed DCO with a real name. That can be a nickname if you have a reasonable online presence under that name, but generally it has to identify you as an individual.
So you wouldn't sign it as "xXImADogOnTheInternet86Xx", but as "Tom Forbes (orf)".
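(Mechanically, the DCO sign-off is just a `Signed-off-by:` trailer that `git commit -s` appends from your configured name and email; the address below is a placeholder:)

```
$ git config user.name "Tom Forbes"
$ git config user.email "tom@example.com"   # placeholder address
$ git commit -s -m "Fix crash on empty input"
# The resulting commit message ends with:
#   Signed-off-by: Tom Forbes <tom@example.com>
```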
And even if there won't be direct legal consequences, it'd certainly affect your ability to contribute to this or other projects in the future.
I'm really struggling to understand why you would burn down a decade+ old reputation over this particular issue. Is this really the hill you wanted to die on?
It’s an abstract argument with one pretty clear point that you can’t seem to grasp: people lie, on the internet, all the time. Any system, policy or discussion that pretends this isn’t the case is worthless.
This is not an abstract argument, you are showing a willingness to do the wrong thing in spite of being told not to, repeatedly, by many other participants here. I see only two things here:
(1) you would lie
(2) you fundamentally don't understand the concept of consent
> "I’ll make a change any way I choose, upright, sideways, using AI. My choice. Not theirs."
The fact that other people would lie is beside the point: those other people would get the exact same treatment if found out. Whether or not they would be found out is moot; it is the act of lying and ignoring consent that makes this what it is: asshole behavior. By extension, anybody who practices this behavior is an asshole as well, and by extension of that, tying your own rep to people who behave like that makes you an asshole too, and I highly doubt that was your intention.
So now you've - over endless comments - shown that you fundamentally don't get this very important concept. Yes, people lie. But there are mechanisms for dealing with liars. Misrepresentation and fraud are serious things: lawsuits, fines, and in extreme cases jail, but on a more immediate level, ostracism. It makes you as a person into an undesirable. It also makes the world as a whole a worse place to live in, which is why such behavior is strongly discouraged, even if it is possible.
That's why we don't systematically go around clubbing old ladies over the head as a revenue model: not because we can't do it or because it would be acted upon by the law (that's for the few who don't get it), but because it is simply a bad thing to do. It is a matter of ethics. That's why if an open source project has a 'No AI' policy you either abide by the policy or you can expect massive backlash.
To think that you could do this, and even should do this to make the point, is as stupid as walking out and grabbing some old lady's handbag to prove that it can be done: you are hurting an innocent to prove your point, and it will cause a reaction that is at minimum proportional to what you did; worst case, you will be made an example of. This can be the proverbial career-ending move. If you are Elon-level rich and your inner asshole seeks a way out, then yes, you could probably do it. But for normal folks such behavior is highly discouraged. Actions usually have consequences.
Finally: open source is a massive gift to society. The whole reason you can use AI in the first place is because that gift got abused in a way that open source contributors did not anticipate. If you're going around polluting open source with AI contributions to effectively karma farm, you have to wonder why you are so intent on doing that. Is it your purpose to destroy open source? Or is it just because you enjoy destroying stuff in general? I don't see any other options; this is a pathology, and it would do you good to introspect on this for a bit instead of responding with yet another ill-conceived reply digging yourself in further. You've gone from 'mildly annoying' to 'wouldn't work with this person for any amount of money because they are a massive liability' in the space of 15 comments. I hope it was worth it to you.
This is a lot of words and I’m honestly not sure it’s worth reading. At a skim it seems naive at best, at worst a pretty stupid, pearl-clutching interpretation of the discussion.
> If you're going around polluting open source with AI contributions to effectively karma farm, you have to wonder why you are so intent on doing that. Is it your purpose to destroy open source? Or is it just because you enjoy destroying stuff in general? I don't see any other options; this is a pathology, and it would do you good to introspect on this for a bit instead of responding with yet another ill-conceived reply digging yourself in further
Just in case you misunderstood things (it’s easy when you get so upset about trivial arguments on the internet!), I don’t use AI when contributing to open source projects.
Thanks for the imaginary psychoanalysis though I guess.
You not only broke the site guidelines badly with this comment, you actually escalated how bad the thread was by quite a margin. Please don't do that.
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it. Note this one: "Don't feed egregious comments by replying; flag them instead."
Lying that you didn’t use an LLM when told that contributions made using LLMs are banned does indeed make you a sociopath. Whether you have also committed sexual assault is an independent axis, but when someone shows such blatant disregard for boundaries and consent, it does raise questions.
Instead of arguing for violating the boundaries of a "slow, bespoke" no-LLM project, you can simply start one that enjoys all the benefits of LLMs by NOT having that boundary. Very simple solution.
Their boundaries. If they don’t want to accept the code, cool. Nobody is forcing them to, and I respect that.
But if they can’t enforce their boundaries, because they can’t tell the difference between AI code and non-AI code without being told, then the boundaries they made up are unenforceable nonsense.
About as nonsense and enforceable as asking me to code upside down.
I'll make this blunt: if you're a guy then half the population is not capable of 'enforcing their boundaries' against you, more so if you count children. The problem you seem to have is thinking that if someone is not capable of enforcing their boundaries, they are not allowed to have those boundaries, and that it is your god-given right to do whatever the F* you want just because you can. That's not how the world works, nor is it how it should work.
Boundaries - of all kinds - are not unenforceable nonsense, they are rights that you willingly and knowingly violate.
This is such an easily refuted assertion. Tell me, if something is wrong with the submitted code, who or what is responsible? If it's not "the LLM", then your opinion makes zero sense. The responsible party is always a human; therefore the responsible party rightfully deserves the credit whether it succeeds or fails.
I am authoring my contributions, using Claude Code as a tool. It doesn't make me an asshole.
If the maintainers don't want to accept it, fine. Someone will eventually fork it, move it forward, and we all move on. The Uncles can continue to play in their no-AI playground and show each other how nice their code is.
The world is moving on from the "AI is bad" crowd.
Forking the code can be perfectly reasonable, with this or any other disagreement about policy. The main point of contention in this thread is whether you ought to lie about having used an LLM. I agree with Jacques: doing something like that would make you an asshole.
So is fraudulently claiming your code has a different author & copyright (you) than it actually has (whether that's someone else's code, or LLM-generated code).
You can, in fact, be pursued both civilly and criminally for fraud.
Your admissions here are enough that if you tried to contribute to any of my own Open Source projects, I would reject your contributions, and if I had accepted any prior ones I would pursue legal remedies.
I’d really like to know the specific legal remedies you’d pursue, assuming that I had contributed to one of your projects, based on this hacker news thread.
Can you stop LARPing and walk me through it? Please?
You stated that you will fraudulently misrepresent the origin of contributions you make to projects if you feel like it, and that nobody has any recourse. That’s you LARPing, by thinking there’s no recourse for fraud.
First of all, I don’t take anonymous or pseudonymous contributions to any of my projects, so if you had made any contributions I would have your real-world identity. That should tell you right away that recourse is possible.
Then, if I learned or had reasonable suspicion that your real-world identity mapped to Hacker News user “orf,” I would instruct my attorney to send a formal contributor agreement to you to sign within a certain period of time that certifies that you are indeed the sole author of all of the content you submitted to the project, and that you did not copy it from another codebase without proper attribution or license, or use an LLM to write it.
If you refused to sign such an agreement, or signed it and were discovered to be lying, I would file a lawsuit for the cost of having to remove your contributions for possible fraudulent misrepresentation of their origin, for the cost of having to hire one or more developers to recreate any important downstream work that depended upon your contributions using clean-room techniques, and for punitive damages to ensure you were dissuaded from making fraudulent misrepresentations in the future.
That’s not LARPing, that’s what any business will do in the event of a possible breach of contract. Just because many open source projects don’t have someone like me involved with the financial resources to pursue such a suit as far as necessary doesn’t mean that none do.
You’d send me a contributor agreement, after I’ve contributed, to retroactively ask if I used an LLM to write the code, and if I refused you’d then sue me for nebulous, ill-defined damages and for breaching a non-existent contract?
So in your head, I could contribute a change that introduces a bug and as a result you could sue me for the time it took you to fix it?
…
Are you OK?
I was hoping for something with a “I’m a big strong serious tough guy” vibe but that’s a bit much. However I guess you can file a civil case for practically anything in some countries, and if you’re retired/unemployed maybe writing this kind of internet police fan-fiction is considered fun?
Do another one, this time where it’s not thrown out as a clearly frivolous suit with no legal basis.
You broke the site guidelines repeatedly in this thread, including by crossing into all sorts of personal attacks. I realize that you were provoked, but you were also provoking.
We've actually been asking you not to do this for years. This is bad:
I'm not going to ban you for this episode because everyone goes on tilt sometimes. But if you'd please review https://news.ycombinator.com/newsguidelines.html and do what it takes to recalibrate so that you're using the site as intended going forward, we'd be grateful.
No, you’re still either being intentionally obtuse or unintentionally clueless.
A condition of making a contribution to one of my projects is that you haven’t used an LLM to create that contribution. By making a contribution, you are agreeing to this restriction, even without having any formal document signed.
If I then found out that you may have defrauded the project by lying about the origin of your contribution—say because you said openly and publicly “I would just lie about using an LLM”—then I would first give you a chance to declare that no, really, you didn’t commit fraud in these cases because even though you publicly said you would just lie, I’m betting that you wouldn’t lie in signing a multipage contract with specific penalties for breach.
If you wouldn’t sign that contract, then I would sue you to address the damage your fraud caused the project, which would include removing all of your contributions and anything depending upon them from not just the present codebase but the project history, as well as documenting and hiring someone from outside the project to clean-room recreate anything I deem important that did depend upon them.
These damages are not nebulous or ill-defined: Because of the untrustworthy provenance of your contributions, they *must* be removed, and they also taint anything dependent upon them.
In all of your replies on this topic you really sound like a teenager who hasn’t quite understood that your actions really can have consequences.
If you look into why it was historically very difficult to find GNU emacs code for older versions, it’s because of a situation exactly like this: Stallman just copied some code from Unipress (Gosling) emacs into GNU emacs, presumably thinking he could get away with the copyright violation. (He evidently hadn’t learned from getting smacked down for directly copying Symbolics code into the LMI codebase.) The end result is that FSF and mirrors had to stop distributing the versions of GNU emacs containing the Unipress-originated code.
This is not a LARP, this is stuff that actually happens in the software industry including in Open Source, and anyone involved in the industry needs to actually take it seriously because to do otherwise is to invite substantial liability.
You broke the site guidelines repeatedly in this thread, including by crossing into quite vicious personal attack. I realize that you were provoked, but you were also provoking.
I'm not going to ban you for this episode because everyone goes on tilt sometimes. But if you'd please review https://news.ycombinator.com/newsguidelines.html and do what it takes to recalibrate so that you're using the site as intended going forward, we'd be grateful.
Surely you know that you can't do this on HN. "sociopathic piece of shit [...] Do the world a favor and remove yourself" isn't just bannable, it's 100x what we'd ban an account for.
You've been a good user generally, so I'm going to put this down to the unfortunate circumstances of this thread, but please don't do it again.
Even before AI, getting a fix into an open source project required a certain level of time and effort. If you prefer to spend your time on other things, and you assume it will eventually get fixed by someone else, using an LLM to fix it just for yourself makes sense.