I think he's working at OpenAI now, so the priority would shift from an MVP that gets people excited to "make it actually reliable for a billion people".
It's not a derivative. It's just the tartrate salt of LSD. There is no pharmacological difference. It's like saying I got this new Magnesium Tartrate which is somehow different from the Magnesium Oxide / Citrate / Glycinate / whatever you are taking. The counterion might affect stability or absorption rate or similar, but the tartrate itself doesn't have an effect.
My bad, in my language it's "derivate". It's equivalent to the English "derivative":
"In chemistry, a derivative is a compound that is derived from a similar compound by a chemical reaction, or that can be imagined to arise from another compound, if one atom or group of atoms is replaced with another atom or group of atoms."
Willie Nelson is pretty sharp for his age. I compare him to the much younger President of the United States who blathers absolute nonsense constantly despite no known history of cannabis use and a claimed history of abstaining from all substances.
Funny how we now see AI go through developmental phases similar to what we see in young child development, in a weird, convoluted way. Strawberry spelling and the car wash aren't particularly intuitive as cognitive developmental stages, though.
E.g. the well-known mirror test [1], passed by kids from age 1.5-2.
Or object permanence [2]: by age 2, children know that things out of sight haven't disappeared from existence.
Also, strawberry spelling isn't a real test for current LLMs, as they have no concept of letters: they work on tokens, which may span several characters, including punctuation and numerals. To have any hope of getting that question right, either tokens would have to have the granularity of individual letters (massively ballooning model size and training time), or the LLM needs to be able to call out to an external tool that returns the result (and needs sufficient examples in the training data to prime that trigger to fire).
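A minimal sketch of why token granularity hides letters (toy vocabulary, not any real model's tokenizer; real BPE vocabularies are learned and much larger, but the effect is the same):

```python
# Hypothetical greedy longest-match tokenizer with a made-up vocab.
# "strawberry" comes out as two opaque tokens, so nothing downstream
# of the tokenizer ever sees the individual r's.
VOCAB = {"straw", "berry", "st", "raw", "ber", "ry",
         "s", "t", "r", "a", "w", "b", "e", "y"}

def tokenize(text, vocab=VOCAB):
    tokens = []
    i = 0
    while i < len(text):
        # take the longest vocab entry matching at position i
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

print(tokenize("strawberry"))      # ['straw', 'berry'] -- 2 tokens, letters invisible
# Counting the r's needs character-level access the model doesn't get:
print("strawberry".count("r"))     # 3
```

The model consumes the token IDs for `straw` and `berry`, not fourteen character slots, so "how many r's" has to be memorized from training data or delegated to a tool rather than read off the input.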
While that's true, the tokenizer is only half the problem. The important fault demonstrated here is that it doesn't _know_ it can't see the letters, and won't express this unless it has been trained or instructed to. "I can't see letters through the tokenizer" never appears in a corpus of human writing.