If by "the spirit" you only mean the bazaar model, then yes. But it is in the original spirit of free software: GNU preferred to keep development somewhat contained, even all those years ago.
This is really nice to know. I remember trying to compile pandoc to Wasm after finding out that GHC had Wasm support, hitting all kinds of problems, and then realising that there was no real way to post an issue to Haskell's GitLab repo without being pre-approved.
I guess now with LLMs, this makes more sense than ever, but it was a frustrating experience.
I found Geoffrey Hinton's hypothesis about LLMs interesting in this regard: they have to compress world knowledge into a few billion parameters, much denser than the human brain, so they have to be very good at analogies in order to achieve that compression.
I feel this has causality reversed. I'd say they are good at analogies because they have to compress well, which they do by encoding relationships in stupidly high-dimensional space.
Analogies could then sort of fall naturally out of this. It might really still be just the simple (yet profound) "King - Man + Woman = Queen" style vector math.
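A toy sketch of that vector arithmetic (the 4-dimensional vectors here are made up for illustration; real embedding models learn hundreds of dimensions from text):

```python
import numpy as np

# Made-up toy "embeddings"; a real model would learn these from data.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "man":   np.array([0.1, 0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
    "queen": np.array([0.9, 0.0, 0.9, 0.2]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction in the space.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king - man + woman" should land closest to "queen".
target = vecs["king"] - vecs["man"] + vecs["woman"]
best = max(vecs, key=lambda w: cosine(vecs[w], target))
print(best)  # queen
```

In a learned space the relationship "royal" and the relationship "gendered" end up as roughly consistent directions, which is why the subtraction/addition works at all.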
This is explained in more detail in the book "Human Being: reclaim 12 vital skills we’re losing to technology", which I think I found on HN a few months ago.
The first chapter goes into human navigation and gives this exact suggestion, locking North up, as a way to regain some of our lost navigational skills.
This seems really nice, and looks like something I have been wanting to exist for some time. I will definitely play with it when I have some time.
I know this is a personal project and maybe you didn't intend to make it public, but I think the README.md would benefit from a section about the actual product. I clicked on it wanting to learn more, but I have no time to test it for now.
Thanks for the feedback, I updated the README and included all the features.
There is also https://talimio.com, which I think shows the features in a better way visually.
I have been looking for the same thing, either from Meta's SAM 3 [1] model or from things like the OP.
There has been some research specifically in this area using what appear to be classic ML models [2], but it's unclear to me whether it can generalize to dances it has not been trained on.