So, isn't this a rather longwinded way to say that a signature only extends to the scope of the message it contains?
It doesn't matter if I sign the word "yes" if you don't know what question is being asked. The signature needs to include the necessary context for the signature to be meaningful.
Lots of ways of doing that, and you definitely need to be thoughtful about redundant data and storage overhead, but the concept isn't tricky.
Hi, post author here. Agree that the idea isn't tricky, but it seems like many systems still get it wrong, and there wasn't an available system that had all the necessary features. I've tried many of them over the years -- XDR, JSON, Msgpack, Protobufs. When I sat down to write FOKS using protobufs, I found myself writing down "Context Strings" in a separate text file; there was no place for them to go in the IDL. I had worked on other systems where the same strategy was employed. I got to thinking: whenever you need to write down important program details somewhere that isn't compiled into the program (in this case, the list of "context strings"), you are inviting potentially serious bugs as the code and documentation drift apart, and it means the libraries or tools are inadequate.
I think this system is nice because it gives you compile-time guarantees that you can't sign without a domain separator, and you can't reuse a domain separator by accident. Also, I like the idea of generating these things randomly, since it's faster and scales better than any other alternative I could think of. And it even scales into some world where lots of different projects are using this system and sharing the same private keys (not a very likely world, I grant you).
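To make it concrete, here's a toy Python sketch (my own illustration here, not the actual FOKS code) of the basic discipline: every tag is computed over a length-prefixed context string plus the message, and authenticating without a context is simply refused, so the same key can never vouch for the same bytes under two different meanings.

import hmac
import hashlib

# Hypothetical helper, not FOKS itself: refuse to MAC anything without a
# domain separator, and length-prefix the separator so contexts can't collide.
def mac_with_context(key: bytes, context: bytes, message: bytes) -> bytes:
    if not context:
        raise ValueError("refusing to authenticate without a domain separator")
    framed = len(context).to_bytes(4, "big") + context + message
    return hmac.new(key, framed, hashlib.sha256).digest()

key = b"shared secret key"
vote_tag = mac_with_context(key, b"example.vote.v1", b"yes")
tos_tag = mac_with_context(key, b"example.accept-tos.v1", b"yes")
assert vote_tag != tos_tag  # same message, different contexts, different tags

(HMAC here just stands in for whatever signature scheme you're actually using; the point is the framing, not the primitive.)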
Also, the defining feature of capitalism is that it encloses what was previously common.
Land used not to be owned (feudal lordship was functionally different from private ownership). Then society shifted, land became private, and that was the beginning of rent. This is enclosure.
The whole concept of IP is to explicitly extend this process to ideas -- they are not free, they are owned, and I have to pay you to use them. This is also enclosure, precisely.
The "rent" in "rent-seeking" does not refer to "rent" it refers to "economic rent."
Totally different concept. But don't take my word for it:
> "Rent-seeking" is an attempt to obtain economic rent (i.e., the portion of income paid to a factor of production in excess of what is needed to keep it employed in its current use) by manipulating the social or political environment in which economic activities occur, rather than by creating new wealth.[0]
> In economics, economic rent is any payment to the owner of a factor of production in excess of the costs needed to bring that factor into production. [1]
This measures the ability of an LLM to succeed in a certain class of games. Sure, that could be a valuable metric of how powerful (or even how generally powerful) an LLM is.
Humans may or may not be good at the same class of games.
We know there exists a class of games (including most human games like checkers/chess/go) at which computers (not LLMs!) already vastly outpace humans.
So the argument for whether an LLM is "AGI" or not should not be whether an LLM does well on any given class of games, but whether that class of games is representative of "AGI" (however you define that).
Seems unlikely that this set of games yields a definition that's meaningful for any practical, philosophical, or business application?
It's to do with how the creators of ARC-AGI defined intelligence. Chollet has said he thinks intelligence is how well you can operate in situations you have not encountered before. ARC-AGI measures how well LLMs operate in those exact situations.
To an extent, yes. Discovering interdependent variables, and then hopefully modeling the system they form and navigating through it. If that's the case, then this is a simplistic version of it. How long until tests involve playing a modern Zelda with quests and sidequests?
"AGI" is a marketing term, and benchmarks like this only serve to promote relative performance improvements of "AI" tools. It doesn't mean that performance in common tasks actually improves, let alone that achieving 100% in this benchmark means that we've reached "AGI".
So there is a business application, but no practical or philosophical one.
This is bad in tech. But at least we are (relatively) well equipped to deal with it.
My partner teaches at a small college. These people are absolutely lost, with administration totally sold on the idea that "AI is the future" while lacking any kind of coherent theory about how to apply it to pedagogy.
Administrators are typically uncritically buying into the hype, professors are a mix of compliant and (understandably) completely belligerent to the idea.
Students are being told conflicting information -- in one class that "ChatGPT is cheating" and in the very next class that using AI is mandatory for a good grade.
My only gripe is how myopic the AI discussion on HN is. We barely talk about how it hits everyone else.
In the relocation industry, it's losing translators, relocation consultants and immigration lawyers a lot of work. Their cases are also getting tougher because people are getting false information from ChatGPT and arguing with them.
This problem is compounded by the lack of training data for that topic. I spent years surfacing that sort of information and putting it online, but with AI overviews killing the economics of running a website, it feels pointless.
I see such stories everywhere. People being replaced by something half as good but a tenth of the cost. It's putting everyone out of work and making everything worse.
That entirely depends on what you are buying. If you’re in need of a lawyer to keep you out of the bottom bunk, I’d happily spend a lot more for a little better.
It's fine to an extent, but it kills what happens in the other half.
You can feel it with AI-generated content and responses, in AI-generated art, customer service bots and vibe-coded software. This gradual worsening of everything won't lead to lower prices or a better experience, so it's not really a tradeoff.
I've been telling my curious/adrift relatives that it's a machine that takes a document and guesses what "usually" comes next based on other documents. You're not "chatting with it" so much as helping it construct a chat document.
The closer they can map their real problems to make-document-bigger, the better their results will be.
Alas, that alignment is nearly 100% when it comes to academic cheating.
The wild part is they’re having this reaction while using the most rigid and limited interfaces to the LLMs. Imagine when the capabilities of coding agents surface up to these professions. It’s already starting to happen with Claude Cowork. I swear if I see another presentation with that default theme…
This. As annoying as all sorts of 'safety features' are, the sheer amount of effort that goes into further restricting that on the corporate wrapper side makes the LLM nigh unusable. How can those kids even begin to get an idea of what it can do, when it seems like it's severely locked down?
Sure. In the instance I am aware of, SQL (and XML and a few others) files are explicitly verboten, but you can upload them as text and reference them that way; references to personal information like DOB immediately stop the inference with no clear error as to why, but referencing the same info any other way allows it to go on.
It is all small things, but none of those small things are captured anywhere, so whoever is on the other end has to 'discover' them through trial and error.
This is true even at large colleges. Better to cut faculty jobs to deal with a budget shortfall. Never mind that the football program can raise $200m with a dozen phone calls.
This is really interesting. I've been out of education for a long time, but I was wondering how they were dealing with the advent of AI. Are exams still a thing? Do people do coursework now that you can spew out competent sounding stuff in seconds?
I teach CS at a university in Spain. Most people here are in denial. It is obvious to me that we need to go back to grading based on in-person exams, but in our last university reform (which tried to copy the US/UK in many aspects) there was so much political posturing and indoctrination about exams being evil and coursework having to take the fore that now most people just can't admit the truth in front of their own eyes. And for those of us who do admit it, we have limited room to maneuver, because grading coursework is often a requirement that emanates from above and we can't fundamentally change it.
So in most courses nothing has changed in the way we grade. Suddenly coursework grades have gone up sharply. Anyone with working neurons knows why, but in the best case, nothing of consequence is done. In the worst case (fortunately uncommon), there are people trusting snake oil detectors and probably unfairly failing some students. Oh, and I forgot: there are also some people who are increasing the difficulty of the coursework in line with LLMs. Which I guess more or less makes sense... Except that if a student wants to learn without using them, they will suddenly find the assignments out of their league.
> Except that if a student wants to learn without using them
My son, a freshman at a major university in NYC, told his freshman English professor that he wanted to write his papers without using AI, and was told that this was "too advanced for a freshman English class" and that using AI was a requirement.
I don't understand what they think it is they're teaching? Will we teach kids to "read" by taking a photo of their bedtime story and hitting a button next?
One of the teaching methods is "look at the context, like pictures, and guess what the word is". One example I remember was thinking "pony" is "horse" due to association without being able to sound it out.
Better than those who just want to burn the system down with no real plan for what comes next, and unable to comprehend the inevitable bloodshed of the 'glorious revolution' that they crave.
You think you are describing the Bolsheviks, but your description is equally fitting for those who want to abolish human labor without providing people alternative ways to make a living.
And no, hand waving about "UBI" doesn't count unless they start actually doing the politics required to implement UBI.
There's a lot of bloodshed going on under the status quo. Why do you think people are 'unable to comprehend' it? Maybe they just want to reallocate it and aren't especially sympathetic to those who have avoided it up to now.
Do you comprehend the scale of the inevitable bloodshed that maintaining the status quo is bound to lead to? You don't do so any better than those you're chastising.
Most of them fried their brains with stimulants long ago. Thankfully for them, they no longer have to think. An LLM does it for them.
But it’s just the same idiots who were rabidly cheering the latest JavaScript framework a decade ago, NFTs, and all manner of ridiculous things that anyone with 2 working brain cells saw transparently through.
Not sure if you're being sarcastic or not, but I think this is actually good advice. It's great to be a free-thinker and question things, but I do think there is some (monetary) value in just not asking too many questions, but optimizing to be the best at whatever you're doing.
Edit: to give an example, I probably would have done better in school had I spent less time questioning the education system and more time just accepting it and trying to get good grades.
Yeah, succeed in the system, fuck everybody else. If the system is making the world a worse place, all the better, you can take advantage since you’re in the system. All that until you find yourself spat out by the system and get to experience what you’ve been part of with no recourse.
I want to agree, but there is the tension that in business code, what you pass as arguments is very often already named like the parameter, so having to indicate the parameter name in the call leads to a lot of redundancy. And if you’re using domain types judiciously, the types are typically also different, hence (in a statically-typed language) there is already a reduced risk of passing the wrong parameter.
Maybe there could be a rule that parameters have to be named only if their type doesn’t already disambiguate them and if there isn’t some concordance between the naming in the argument expression and the parameter, or something along those lines. But the ergonomics of that might be annoying as well.
This is an issue in Python but less so in languages like JavaScript that support "field name punning", where you pass named arguments via lightweight record construction syntax, and you don't need to duplicate a field name if it's the same as the local variable name you're using for that field's value.
That forces you to name the variable identically to the parameter. For example, you may want to call your variable `loggedInUser` when the fact that the user is logged in is important for the code’s logic, but then you can’t pass it as-is for a field that is only called `user`. Having to name the parameter leads to routinely having to write `foo: blaFoo` because just `blaFoo` wouldn’t match, or else to drop the informative `bla`. That’s part of the tension I was referring to.
I write plenty of business code, and I do not like even the possibility of a mistake like:
fn compute_thing(cost: whatever, num_widgets: whatever) -> Whatever;
let cost = …;
let num_widgets = …;
let result = compute_thing(num_widgets, cost);
(This could be in most any language, including Haskell or Lean, with slightly different syntax.)
One can prevent this very verbosely with the Builder pattern. Or one can use named parameters in languages that support them.
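For instance, a quick Python sketch of the same hypothetical compute_thing with keyword-only parameters (the bare * forces callers to name every argument, so the swap above becomes a loud error instead of a silent bug):

# Same hypothetical function, but the bare * makes both parameters keyword-only.
def compute_thing(*, cost: float, num_widgets: int) -> float:
    return cost * num_widgets

cost = 2.5
num_widgets = 10

result = compute_thing(cost=cost, num_widgets=num_widgets)  # fine
# compute_thing(num_widgets, cost)  # TypeError: takes 0 positional arguments

The cost=cost repetition is exactly the redundancy mentioned upthread, but the argument-swap mistake can no longer slip through silently.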
An interesting analogue is tensor math. In Einstein’s work, there were generally four dimensions and you probably wouldn’t lose track of which letter was which. In linear algebra, at least at the high school or early undergrad level, there are usually vectors and tensors and, well, that’s it. But in data crunching or modern ML, tensors have all kinds of cool axes, and for some reason we usually just identify them by whichever slot they happen to occupy in the input tensor. Some people try to creatively make this “type safe” by specializing on the length of the dimension, which is an incomplete solution at best. I would love to see adoption of some solution that gives these things explicit names and does not ever guess which axis is being referenced.
(I find 95% of ML code and a respectable fraction of papers and descriptions to be locally incomprehensible because you need to look somewhere else to figure out what on Earth A • B' actually means.)
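As one illustration of what I mean (a toy sketch of my own, not any established library's API), the axis names can travel with the array and be looked up by name instead of by slot:

import numpy as np

# Toy named-axis wrapper: one name per axis, operations refer to axes by name.
class NamedTensor:
    def __init__(self, data: np.ndarray, axes: tuple):
        assert data.ndim == len(axes), "one name per axis"
        self.data = data
        self.axes = tuple(axes)

    def axis(self, name: str) -> int:
        return self.axes.index(name)

    def sum_over(self, name: str) -> "NamedTensor":
        i = self.axis(name)
        return NamedTensor(self.data.sum(axis=i), self.axes[:i] + self.axes[i + 1:])

x = NamedTensor(np.zeros((32, 3, 64, 64)), ("batch", "channel", "height", "width"))
pooled = x.sum_over("channel")  # no need to remember that channel is slot 1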
OCaml has a neat little feature where it elides the parameter and variable name if they're the same:
let warn_user ~message = ... (* the ~ makes this a named parameter *)
let error = "fatal error!!" in
warn_user ~message:error; (* different names, have to specify both *)
let message = "fatal error!!" in
warn_user ~message; (* same names, elided *)
The elision doesn't always kick in, because sometimes you want the variable to have a different name, but in practice it kicks in a lot, and makes a real difference. In a way, cases when it doesn't kick in are also telling you something, because you're crossing some sort of context boundary where some value is called different things on either side.
I'm not speaking of burdens of proof about unfalsifiable statements.
I'm saying that I think this is an important enough question that I think we should seek real evidence in either direction, especially since apparently everyone already has a strong opinion (warranted or not.)
I agree with the thrust of this article, that norms and what we perceive as good or desirable extend considerably beyond the minimum established by law.
But a point that was not made strongly, which highlights this even more, is that this goes in every direction.
If this kind of reimplementation is legal, then I can take any permissive OSS and rebuild it as proprietary. I can take any proprietary software and rebuild it as permissive. I can take any proprietary software and rebuild it as my own proprietary software.
Either the law needs to catch up and prevent this kind of behavior, or we're going to enter an effectively post-copyright world with respect to software. Which ISN'T GOOD, because that will disincentivize any sort of open license at all, and companies will start protecting/obfuscating their APIs like trade secrets.
Companies can take open-source software and make a proprietary reimplementation. You can't take proprietary software and make an open-source GPL version.
I am absolutely certain that if you tried you would be sued into oblivion. But a big company screwing over open source is not even news anymore. In fact, I (still) believe that it being considered OK to use LLM code in proprietary projects, even though LLMs were trained on tons of GPL and AGPL or even unlicensed software, is an example of just that.
From a strictly legal perspective the two are equivalent. The fact that there are structural injustices in the system is true, but that's not a question that any answer to "what should be legal" can fix.
I've been thinking this for over two years, that's why I stopped contributing to open source at that time - my work was only gonna be exploited to make rich people richer regardless of the license.
Crazy that we're only now seeing a bunch of articles coming to the same conclusion.
I think copyright should still apply, but if it doesn't, we need new laws - ones which protect all human work, creative or not. Laws should serve and protect people, not algorithms and not corporations "owning" those algorithms.
I put owning in quotes because ownership should go to the people who did the work.
Buying/selling ownership of both companies and people's work should be illegal just like buying/selling whole humans is. Even if it took thousands of years to get here.
Money should not buy certain things because this is the root cause of inequality. Rich people are not getting richer at a faster rate by being more productive than everyone else but by "owning" other people's work and using it as leverage to extract even more from others.
Maybe LLM and mass unemployment of white collar workers will be the wakeup call needed for a reform. Or revolution.
Last time this happened was during the second industrial revolution and that's how communism got popular. We should do better this time because this is the last revolution which might be possible.