Copyleft is copyright used in a smart way. Nobody can take code under the GPL and make a _copy_ of it proprietary, because that would be a violation of copyright.
In the other thread you argued that AI output is not copyrighted.
Do you think I can take proprietary code and launder it through AI to get a non-copyrighted copy of it, then modify it to my needs? How can I obtain the proprietary code legally in the first place?
Exactly as someone else here said: in retrospect, he probably just wishes he had chosen a more permissive license, now that he has received the credit forever; he wants to have his cake and eat it too.
I would want to use a license that does not ask for credit; the only requirement would be that any further restrictions are not legally effective (except that, for practical reasons, relicensing under the GPL or AGPL is allowed, if you are able to follow all of the requirements of those licenses, in order to combine the code with software under those licenses).
The current social consensus is that copyright exists and that one can only use software under the conditions stated in its license. Thus, proprietary and copyleft software have the same protection.
Another possible consensus would be that copyright doesn't exist, and anyone can copy a proprietary or copyleft work and improve it. Nobody would be harmed in such a situation; the original author still has their copy. I would have no problem with that state of affairs - but it must be the same for everyone, not just FOSS.
If I release something as MIT or Apache, all I want is some credit, either for my own self-satisfaction or for resume fuel.
If a library I wrote was used by BigCo, then I could point to their license file and mention that in a job interview or something. If they have Claude generate something based on my code, they don't put it in their license file, I don't get the resume fuel, and my work goes unrewarded.
I have gone back and forth about how I feel about AI training on code, and whether I think it's "theft", but my point is that the original code being available is kind of missing the point.
Easy to check: try to speak with someone talking in a foreign language you don't know, and estimate what percentage of what they said you understood from tone of voice etc. I would guess it's less than 80%.
That's very easy and very wrong. Let's say you have a 100-page book. Page 1 contains fundamental knowledge that allows you to understand the rest of it. If you skip page 1, then you won't understand the other 99.
How much of the book will you understand if you only read page 1?
That then raises the question: what is a unit of communication?
If communication is 20% verbal and 80% nonverbal, and if communication is very nonlinear in understanding (as with your book example), how do we know what 1% of communication is? What does it mean, and how can we tell that the figure is correct, when our main or only way of detecting whether communication succeeded is through understanding or lack thereof?
But tonal information can be parsed without lexical understanding and vice versa.
Somebody cursing in French can still be interpreted as anger even if you don't understand French, and written profanity can still be interpreted as anger even if you didn't hear it spoken.
Tone and language do complement each other, but neither is a prerequisite for the other in the way your book analogy would suggest.
> but tonal information can be parsed without lexical understanding
Parsed, perhaps, but it's so context-sensitive that it's not useful, save for the extremes. The same tone of voice can have many meanings depending on what's actually being said, and yet another if you add context.
A better statement would be that no standard definition is possible for square root, sin, and so on with rationals, because rationals with unbounded denominators don't have an inherent precision limit---something like ulps in floating-point systems---and the maximum denominator wouldn't be a good choice for precision even when the denominator is limited. Every such function now has to be given an explicit denominator value or a similar strategy, which complicates both the interface and the implementation.
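As a minimal sketch of what that explicit-precision interface could look like (Python, using the standard fractions module; the function name and signature are made up for illustration):

from fractions import Fraction

def sqrt_rational(x, tolerance):
    # Approximate the square root of a non-negative rational x to within
    # the caller-supplied tolerance, using Newton's method on exact Fractions.
    x, tolerance = Fraction(x), Fraction(tolerance)
    guess = x if x > 1 else Fraction(1)
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2   # Newton step; stays an exact Fraction
    return guess

approx = sqrt_rational(2, Fraction(1, 10**12))
print(approx, float(approx))   # the denominator grows quickly as the tolerance shrinks

The caller has to pick the tolerance; there is no natural default the way ulps provide one for floats.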
Are you thinking about a specific library? You aren't the only person who commented this way. But the truth is that root, sin, and so on don't "work" with floats either. In fact, there are common ways to implement these functions by either using tables (which are approximate) or algebraic approximations (which give you... drum roll: rationals!)
But, really, there isn't any way (except symbolically) to represent transcendental functions in computers. It doesn't matter what kind of number you choose to do it.
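To illustrate the "algebraic approximations give you rationals" point, here is a truncated Taylor series for sin done with exact Fractions (the number of terms is an arbitrary choice, i.e. the precision still has to be picked somewhere):

from fractions import Fraction
from math import factorial

def sin_rational(x, terms=10):
    # Truncated Taylor series: sin(x) ~ sum of (-1)^k * x^(2k+1) / (2k+1)!
    x = Fraction(x)
    return sum(Fraction(-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range(terms))

print(sin_rational(Fraction(1, 2)))   # an exact rational, but only an approximation of sin(1/2)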
√2 with floating point is obviously the closest representable number. With fixed point it is obviously the closest representable number as well. With rationals, you need to arbitrarily limit the precision, and the point of using rationals was to have exact values.
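To make the √2 point concrete, a quick check in Python (just an illustration): the float result is itself a rational, namely the nearest representable value, not √2.

import math
from fractions import Fraction

closest_float = math.sqrt(2)
as_exact_rational = Fraction(closest_float)   # the exact value of the double
print(as_exact_rational)              # 6369051672525773/4503599627370496
print(as_exact_rational ** 2 == 2)    # False: it only approximates sqrt(2)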
I don't think that's a big deal though. You deliberately choose to use a rational system, because you understand the problem domain, which could greatly benefit from such a representation. If you throw a rational system at every math problem as a catch-all representation, then you are doing it wrong.
The fact that the Mastodon devs put the content in the HTML but refuse to show it tells me everything I need to know about Mastodon. It may even be worse than Twitter. Generally I will not go out of my way to use or view such links. I just close the tabs.
In my experience, Mastodon (and other relatives from its family) is much better than current Twitter, at least for non-logged-in users. On Mastodon, as long as you enable JavaScript, you can see the whole thread for the post, while Twitter (which also needs JavaScript) only shows the initial post; and if you go to an account's page, Mastodon shows the whole timeline in reverse-chronological order (which is generally what you want), while Twitter shows old posts in a random order.
I understand that POV, and within its context I agree. But to a non-JS user they (Mastodon and Twitter) are equivalent in functionality: blank pages telling you to execute random code. Very similar to phishing emails telling one to execute a random code attachment.
Twitter has to be bad by virtue of being a corporation driven by the profit motive, requiring JS so it can collect and sell user data. Mastodon doesn't have to be crap, since it is not required to generate maximum profit. It just chooses to be, as shown by the content being in the HTML but hidden. This makes Mastodon worse. Either it's malicious or, more likely, incompetent cargo-culting of corporate practices: devs unable to separate themselves from the use cases they have to develop for at their paid jobs, or lacking the knowledge to do so.
I think Silverblue comes preconfigured with the proprietary one, but I don't know how to check, and I also don't think it's possible to change it, since it's an immutable system.
It worked fine until that happened, which basically made it unusable, since I don't have an iGPU.
Maybe I'll try some other Fedora, or just go back to X11 on Arch, which worked well enough.
With proprietary software, how many times did a producer fix breakage or add a missing feature for you because you bought a licence (subscription)? How fast was it?
Nobody's done anything for me specifically, but if it's something multiple people need in their workflows, I've seen plenty of examples of things being added due to community demand. It's not fast, but nobody gave me a "do it yourself and open a pull request" attitude, which is one of my greatest gripes with open-source software.
Same for imperative languages with a "parameter list" style. In Python, with
def f(a, b): return a + b, a - b
def g(k, l): return k * l, k + l
you can't do
f(g(1, 2))    # TypeError: f() missing 1 required positional argument: 'b'
but have to use
f(*g(1, 2))
which is analogous to uncurry, but operates on a value rather than on a function.
TBH I can't name a language where such f(g(1,2)) would work.
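For what it's worth, you can package that value-level uncurry as a tiny helper (the name splat is made up here, just for illustration):

def splat(f):
    # Value-level uncurry: adapt f(a, b) so that it accepts a single tuple.
    return lambda args: f(*args)

def f(a, b): return a + b, a - b
def g(k, l): return k * l, k + l

print(splat(f)(g(1, 2)))   # same as f(*g(1, 2)) -> (5, -1)

With a helper like that, f(g(1, 2)) still doesn't work directly, but splat(f)(g(1, 2)) composes without the caller having to remember the star.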