Hacker News | gspr's comments

It also makes no sense! "Fuck this, it doesn't matter - but I'll happily spend effort communicating that to others, because apparently making others not care about something I don't care about is something I do care about." Wut?!

Well, I say it makes no sense. Alternatively, it makes a lot of sense, and these people actually just wanna destroy everything we hold dear :-(


Perhaps the current societal trajectory is destroying everything that they hold dear.

I mean, just look around you.


Then do something about it. Vote for better politicians. Donate money to causes that you think are important. If you think you can do it better, and this isn't meant to be facetious, run for political office.

Being fatalistic can be a great excuse not to do anything.


>Vote for better politicians.

I cannot. I can only vote for better politicians if they exist. That is without even going into the minefield of what counts as "better". My point is that I have no confidence whatsoever in any current politician in my state.

> Donate money to causes that you think are important.

I have no money.

> If you think you can do it better, and this isn't meant to be facetious, run for political office.

I have no money, no visibility, and no connections. Even if I were magically given tons of money, I would still need a strong network to attempt any real change, and that's before taking into account the strong networks already in place preventing it.

Telling random citizens "run for office" is facetious, whether you mean it or not.


> Telling random citizens "run for office" is facetious, whether you mean it or not.

Hard disagree. At least where I live, "random citizens" run for local office and succeed all the time.

Also, complaining that you "have no network" is a you problem, not a system problem. I'm truly sorry if you feel you have no friends, but you'll be better off at least trying to get some (independent of politics). And if that's something you've tried and failed at before, I do feel pity. But I don't think hope is lost for anyone. And even if it were lost, please don't actively spread the misery!


Don't spread the misery?? Wow, fucking thanks.

You are kind of proving my point. You are actively justifying doing literally nothing about what bothers you, while acting indignant and self-righteous about it.

To feel something. To resist something bad. To stand for what is right.

Do those sentiments mean nothing to you?


Well why not head for the front lines of Ukraine? Or Russia, depending on your preference.

This is such an incredibly imbecilic comment.

Listen to this guy: "because you don't take the ultimate risk for what you believe in, you are dumb for suggesting you should do anything whatsoever".

Go away. The world doesn't need your dark resignation.


Wasting your life fighting things that can't be fought is functionally equivalent to dying sooner.


> The underlying tension is that "you own the car" means something very different from "you own the software running the car."

How is this different from the 2000s, or the 90s, or even before, when the normal thing to do with commercial software was to purchase a license to use said software and a physical medium containing a copy? You'd also then not "own the software", but you owned the right to install a copy on your own computer and use it. That worked without having to hand over the keys to your own computer.

Sure, the physical delivery medium is gone, but that's just a detail. Why do we now think that just because we license software for use, we can't be in ultimate charge of our own devices?


In 1990 Ford couldn't turn off your Mustang because you plugged a TwEECer into the J3 port and screwed around with the tune. Best they could do was void your warranty and deny you further upgrades (i.e. tunes flashed as part of a recall or TSB).

These days, unauthorized access tends to cost you effective use of the hardware you bought: the hardware requires software features to work, and that software often phones home unnecessarily, so if the OEM toggles a field in a DB somewhere you lose access to back-up assist or whatever other fancy tech features that you (a) paid for and (b) don't strictly need to have phone-home dependencies to work, but do "because reasons".


I realize that that's the situation. I'm asking why we're accepting it. Especially on flimsy grounds like "we don't own the software".

Have a lawyer look up the Magnuson-Moss Warranty Act for you if that happens. What Ford can do is legally limited.

I've definitely experienced this on public transit in cities in several different countries here in Europe. It's not an everyday experience, but it definitely happens.


Yes, but that isn't a flight.


But you said people on busses and trains doing this get shut down. My experience is they don't.


I think "physical" refers to the fact that you initialize the state by pressing physical buttons. That's quite accurate.


Well keyboard also has physical buttons. And mouse.


> I have seen more benefit from it than harm.

Same. I, too, am sick of bloated code. But I use the quote as a reminder to myself: "look, the fact that you could spend the rest of the workday making this function run in linear instead of quadratic time doesn't mean you should – you have so many other tasks to tackle that it's better that you leave the suboptimal-but-obviously-correct implementation of this one little piece as-is for now, and return to it later if you need to".
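A toy sketch of that trade-off (hypothetical example, in Python): the quadratic version is obviously correct and fine to ship today; the linear rewrite can wait until profiling says it matters.

```python
def has_duplicate_quadratic(items):
    """Obviously correct, O(n^2): good enough to ship for now."""
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b:
                return True
    return False

def has_duplicate_linear(items):
    """The O(n) rewrite, worth doing later if this ever shows up hot."""
    seen = set()
    for a in items:
        if a in seen:
            return True
        seen.add(a)
    return False
```

The point isn't that the second version is hard to write; it's that the half-day you'd spend rewriting and re-verifying it is usually better spent elsewhere first.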


That is a great example of when the original quote works as intended.

I'm reacting to experiences where the software that emerges from a relatively large team effort can't really be made meaningfully faster because there are millions of tiny performance cuts - from the root to the fruit.


yes! See Rule 4

/* If we can cache this partial result, and guarantee that the cache stays coherent across updates, then average response time will converge on O(log N) instead of O(N). But first make the response pass all the unit tests today */
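A minimal sketch of that idea (all names hypothetical): memoize the partial result, and keep the plain computation around as the reference that the tests compare against.

```python
import functools

def partial_result(n):
    # The plain, obviously-correct O(N) computation.
    return sum(range(n))

@functools.lru_cache(maxsize=None)
def cached_partial_result(n):
    # The cached path. In a real system the hard part is keeping
    # this cache coherent across updates, which this sketch skips.
    return partial_result(n)
```

Before anyone relies on the speedup, assert that the cached path matches the uncached one; that's the "pass all the unit tests today" half of the comment.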


Because Thiel is plainly, and in his own words, a regressive threat to democracy.


I absolutely believe you on the facts, and it all sounds very disgusting, but here's what I don't understand: customers and staff alike no longer like the clinic. Won't that be a huge boon to competitors, essentially ruining the VC's investment?

I get that it's not so clean cut with something as equipment- and licensing heavy as the veterinarian sector. But I've heard the same story exemplified with pizza parlors instead. Won't all the good staff take all the loyal customers and go elsewhere very easily in that case?


Interesting! Is there anywhere a discussion around their refusal to include your fix?


See this, for example: https://groups.google.com/g/opensshunixdev/c/FVv_bK16ADM/m/R...

It boils down to using a Linux-specific API, though it's really BSD that is lacking support for a standard (RFC 5014).
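For context, the RFC 5014 knob is a per-socket option. A rough sketch (Linux-only; the constants are hard-coded from the kernel headers because Python's stdlib socket module doesn't expose them, and the function name is mine):

```python
import socket

# RFC 5014 source-address-selection constants from <linux/in6.h>.
# Assumption: Linux; BSDs generally lack this option.
IPV6_ADDR_PREFERENCES = 72
IPV6_PREFER_SRC_TMP = 0x0001     # prefer temporary (privacy) addresses
IPV6_PREFER_SRC_PUBLIC = 0x0002  # prefer public (stable) addresses

def prefer_stable_source(sock):
    """Ask the kernel to pick a stable public source address for
    this socket instead of a temporary privacy address."""
    sock.setsockopt(socket.IPPROTO_IPV6, IPV6_ADDR_PREFERENCES,
                    IPV6_PREFER_SRC_PUBLIC)
```

On a kernel without RFC 5014 support the `setsockopt` call simply raises `OSError`, which is roughly the portability problem the OpenSSH thread is about.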


It would also seem to break address privacy (usually not much of a concern if you authenticate yourself via SSH anyway, but still, it leaks your Ethernet or Wi-Fi interface's MAC address in many older setups).


This is a good argument for not making it the default, but it would be nice to have it as a command line switch.


Well, yes, but SSH is hardly ever anonymous, and this could simply be a CLI option.


Not anonymous, but it's pretty unexpected for different servers with potentially different identities for each to learn your MAC address (if you're using the default EUI-64 method for SLAAC).
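To illustrate how classic SLAAC leaks the MAC: the EUI-64 interface identifier is derived mechanically from the 48-bit hardware address, so it's trivially reversible. A sketch (the function name is mine):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the EUI-64 interface identifier that classic SLAAC
    embeds in the low 64 bits of an IPv6 address."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02           # flip the universal/local bit
    b[3:3] = b"\xff\xfe"   # splice ff:fe into the middle
    return ":".join(f"{(b[i] << 8) | b[i + 1]:x}" for i in range(0, 8, 2))
```

This is exactly why modern stacks default to stable privacy identifiers (RFC 7217) or temporary addresses instead of the EUI-64 scheme.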


> Training on copyleft licensed code is not a license violation. Any more than a person reading it is.

Some might hold that we've granted persons certain exemptions, on account of them being persons. We do not have to grant machines the same.

> In copyright terms, it's such an extreme transformative use that copyright no longer applies.

Has the model really performed an extreme transformation if it is able to produce the training data near-verbatim? Sure, it can also produce extremely transformed versions, but is that really relevant if it holds within it enough information for a (near-)verbatim reproduction?


>Has the model really performed an extreme transformation if it is able to produce the training data near-verbatim? Sure, it can also produce extremely transformed versions, but is that really relevant if it holds within it enough information for a (near-)verbatim reproduction?

I feel as though, from an information-theoretic standpoint, it can't be possible that an LLM (which is almost certainly <1 TB big) can contain any substantial verbatim portion of its training corpus, which includes audio, images, and videos.


> I feel as though, from an information-theoretic standpoint, it can't be possible that an LLM (which is almost certainly <1 TB big) can contain any substantial verbatim portion of its training corpus, which includes audio, images, and videos.

It doesn't need to for my argument to make sense. It's a problem if it reproduces a single copyrighted work (near)-verbatim. Which we have plenty of examples of.


Do we? Even when people attempt to jail break most models with 1000s of prompts they are only able to get a paragraph or two of well known copyrighted works and some blocks of paraphrased text, and that's with giving it a substantially leading question.


It surely doesn't matter how leading or contorted the prompt has to be if it shows that the model is encoding the copyrighted work verbatim, or nearly so.


It definitely does, which is why I said a substantial amount of verbatim material. If someone can recite the first paragraph of Harry Potter and the Sorcerer's Stone from memory, it surely doesn't mean they have memorized the entire book.


Of course not. But if the passage they can recite is long enough that it is copyrightable, then surely distributing a thing that (contortedly or not) can do said recitation is a form of redistribution of the work itself?


No. It is against their TOS to attempt to jailbreak their models. While I don't agree that the models can recite longer periods of verbatim copyrighted material, even if it could, the person who is at fault is the person subverting the system, not the creator of the system. If I steal a library book and make copies of it to distribute illegally, it wouldn't make sense to hold the library at fault for infringing on the book publisher's copyright.


This is an interesting take that I hadn't considered. Your analogy with a library break-in is good. I'll need to digest this. Thanks.


> We do not have to grant machines the same.

No we don't have to, but so far we do, because that's the most legally consistent. If you want to change that, you're going to need to pass new laws that may wind up radically redefining intellectual property.

> Has the model really performed an extreme transformation if it is able to produce the training data near-verbatim?

Of course it has, if the transformation is extreme, as it appears to be here. If I memorize the lyrics to a bunch of love songs, and then write my own love song where every line is new, nobody's going to successfully sue me just because I can sing a bunch of other songs from memory.

Also, it's not even remotely clear that the LLM can produce the training data near-verbatim. Generally it can't, unless it's something that it's been trained on with high levels of repetition.


I want to briefly pick at this:

> you're going to need to pass new laws that may wind up radically redefining intellectual property

You're correct that this is one route to resolving the situation, but I think it's reasonable to lean more strongly into the original intent of intellectual property laws to defend creative works as a manner to sustain yourself that would draw a pretty clear distinction between human creativity and reuse and LLMs.


> into the original intent of intellectual property laws to defend creative works as a manner to sustain yourself

But you're missing the other half of copyright law, which is the original intent to promote the public good.

That's why fair use exists, for the public good. And that's why the main legal argument behind LLM training is fair use -- that the resulting product doesn't compete directly with the originals, and is in the public good.

In other words, if you write an autobiography, you're not losing significant sales because people are asking an LLM about your life.


"Has the model really performed an extreme transformation if it is able to produce the training data near-verbatim?...."

So do 10 000 chimpanzees on typewriters.

