So you are basically saying "it failed on some of my Rust tasks, and those other languages aren't even real programming languages, so it's useless".
I've used LLMs to generate quite a lot of Rust code. They can definitely run into issues sometimes. But complexity isn't really what determines whether they succeed; it's the stability (or lack thereof) of the language features involved, and how many examples were in the training data.
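To make the stability point concrete, here's a minimal sketch (my own illustration, assuming the still-unstable `try_blocks` feature as the example): the nightly-only form has comparatively few public examples, while the stable closure workaround shows up all over public code, so a model is far more likely to get the latter right.

    // Nightly only: needs `#![feature(try_blocks)]` at the crate root,
    // so comparatively few training examples exist:
    //
    //     let sum: Result<i32, std::num::ParseIntError> = try {
    //         "1".parse::<i32>()? + "2".parse::<i32>()?
    //     };

    // Stable workaround, heavily represented in public code: an
    // immediately-invoked closure so `?` has something to return from.
    fn main() {
        let sum: Result<i32, std::num::ParseIntError> =
            (|| Ok("1".parse::<i32>()? + "2".parse::<i32>()?))();
        println!("{sum:?}"); // prints: Ok(3)
    }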
I realize my comment comes across as dismissive in a way I didn't intend. I'm sorry for that; I didn't mean to belittle these programming tasks.
What I meant by complexity is not "a task that's difficult for a human to solve" but rather "a task for which the output can't be 90% copied from the training data".
Since frontend development, small scripts, and SQL queries tend to be very repetitive, LLMs are useful in those domains.
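For a sense of what I mean by repetitive, take a throwaway script like counting word frequencies (my own example, not anything specific from the thread): near-identical versions of this exist by the thousands in public repositories, so an LLM will reproduce it flawlessly.

    use std::collections::HashMap;
    use std::io::{self, BufRead};

    // Count word frequencies from stdin: the kind of small script
    // that has been written countless times in public code.
    fn main() {
        let stdin = io::stdin();
        let mut counts: HashMap<String, usize> = HashMap::new();
        for line in stdin.lock().lines() {
            for word in line.unwrap().split_whitespace() {
                *counts.entry(word.to_string()).or_insert(0) += 1;
            }
        }
        for (word, n) in &counts {
            println!("{n}\t{word}");
        }
    }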
As other comments in this thread suggested: If you're reinventing the wheel (but this time the wheel is yellow instead of blue), the LLM can help you get there much faster.
But if you're working on something that hasn't been done many times before, LLMs start struggling. A lot.
This doesn't mean LLMs aren't useful. (And I never suggested that.) The most common tasks are, by definition, the most common tasks. Therefore LLMs can help in many areas and are helpful to a lot of people.
But LLMs are highly specialized in that regard, and once you work on a task that doesn't fit this specialization, their usefulness drops sharply, sometimes all the way to useless.