I like the declarative nature of it. It makes it so easy to debug anything, with "simple" introspection tools. I feel that many horrors will be created when we introduce control flow to CSS.
And this project kinda shows how far you can still go if you really want to :D
I'm not sure what you mean by "Socrates was proven dead wrong."
The study you linked doesn't show that people are becoming dumber because of LLMs. It just shows that when you offload a task to one of these tools, your brain engages less in that specific task, the same way it does when you use a calculator instead of doing complex calculations on paper, a spell-checker when writing, or a search engine instead of opening a book and searching. The real question is whether long-term cognitive capacity is reduced, and like I said before, this argument predates LLMs (all the way back to Socrates).
Also, take the study with a grain of salt: it's a small sample of only 54 participants, doing a single task, in a short-term study.
Personally, I believe LLMs just let us work at a higher level of abstraction.
The endgame is to produce AI that won't need any supervision by the time the current generation of experienced developers retires, or even sooner. I don't know if it will happen, but many are betting on it, and models are still improving; no flattening is visible yet.
This implies programming is done and there will be no further advancements.
And flattening is being seen, no? Recent advancements are mostly from RL, which has its own limitations (and tradeoffs). Are there more tricks after that?
Yeah, even the AI CEOs are admitting that pre-training scaling is over. They claim we can keep the party going with post-training scaling, which I personally find hard to believe, but I'm not really up to speed on those techniques.
I mean, maybe you can just keep an eye on what people are using the tools for and then monkey-patch your way to something sufficiently AGI. I'll believe it when we're all begging outside the data centers for bread.
[Based on the history of science and technology advancements since the Stone Age, I would place AGI at 200-500 years out at least. You have to wait decades after a new toy is released for everyone to realize everything they knew was wrong; then the academics get to work, then everyone gets complacent, then a new accidental discovery produces a new toy, and so on.]