I wonder how model competence and/or user preference on web development (that leaderboard) carries over to larger, more complex projects, or more generally to anything other than web development?
In addition to whatever they're exposed to as part of pre-training, it'd be interesting to know what kinds of coding tasks these models are being RL-trained on. Are things like web development and maybe Python/ML coding overemphasized, or are they also being trained on things like Linux/Windows/embedded development in different languages?
Would be great for ship tracks, too, which used to mitigate about 1/3 of the warming impact of maritime shipping, until the 2022 clean fuel standards were implemented.
Everyone learns that the Renaissance was sparked by the translation of Ancient Greek works.
But few know that the Renaissance itself was written in Latin, and has barely been translated. Less than 3% of pre-1700 books have been translated, and less than 30% have ever been scanned.
I'm working on a project to change that. Research blog at www.SecondRenaissance.ai. We are starting by scanning and translating thousands of books at the Embassy of the Free Mind in Amsterdam, a UNESCO-recognized rare book library.
We want to make ancient texts accessible to people and AI.
If this work resonates with you, please do reach out: Derek@ancientwisdomtrust.org
I can see you being right; I didn't make the connection with 19th- and 20th-century documents, and the comment felt disconnected from the thread. Either way, very cool project, worth a Show HN post.
I think we were all thinking that. Acoustic cavitation has also been proposed as a mechanism for enabling cold fusion: https://www.science.org/doi/10.1126/science.1067589