Tell it to write code like a senior developer in your respective language, tell it to "write the answer in full with no omissions or code substitutions", tell it you'll tip based on performance, and write more intimate and detailed specs for your requests.
One of the most interesting things about current LLMs is all the "lore" building up around things like "tell it you'll tip based on performance" and other "prompt engineering" hacks that by their very nature nobody can explain; people just "know they work". It's evolving like the midwife remedies of history, some of which ended up being scientifically proven to work while others were pure snake oil. Just absolutely fascinating to me. In some far future there will be a chant against unseen "demons" that starts with "ignore all previous instructions."
I call this superstition, and I find it really frustrating. I'd much rather use prompting tricks that are proven to work and where I understand WHY they work.
I care less that such prompting hacks/tricks are consistently useful; I care more about why they work. These hacks feel like "old wives' tales" or, as others have mentioned, "superstitious".
If we can’t explain why or how a thing works, we’re going to continue to create things we don’t understand; relying upon our lucky charms when asking models to produce something new is undoubtedly going to result in reinforcement of the importance of those lucky charms. Feedback loops can be difficult to escape.
What I would expect is a lot of "non-idiomatic" Go code from LLMs (but eventually functional code, iff the LLM is driven by a competent developer), as it appears scripting languages like Python, SQL, Shell, etc. are their forte.
My experience with Python and Cursor could've been better, though. For example, when making ORM classes (boilerplate code by definition) for SQLAlchemy, the assistant proposed a change that included a new instantiation of a declarative base, practically splitting the metadata in two and therefore causing dependency problems between tables/classes. I had to stop for at least 20 minutes to find out where the problem was, as the one-and-a-half-line change was hidden in one of the files. Those are the kind of weird bugs I've seen LLMs commit in non-trivial applications: small and stupid, but hard to find.
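A minimal sketch of the bug described above (the class and table names here are hypothetical, not the actual project's): each call to `declarative_base()` creates its own `MetaData`, so a `ForeignKey` declared on one base can't be resolved against a table registered on the other, even though both tables exist in the same database.

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.exc import NoReferencedTableError
from sqlalchemy.orm import declarative_base

Base = declarative_base()       # the project's original base
ExtraBase = declarative_base()  # the second base the assistant slipped in

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)

class Order(ExtraBase):  # registered in a *different* MetaData than User
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)  # creates only "users"

# The split is invisible at the class level but obvious in the metadata:
print("users" in ExtraBase.metadata.tables)  # False — metadata divided in two

try:
    ExtraBase.metadata.create_all(engine)
except NoReferencedTableError as e:
    # SQLAlchemy can't resolve the FK target inside ExtraBase's MetaData,
    # even though the "users" table already exists in the database.
    print("dependency error:", e)
```

Because the extra `declarative_base()` call looks like ordinary boilerplate, the resulting `NoReferencedTableError` (or silently missing tables in `create_all`) surfaces far from the line that caused it.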
But what do I know really. I consider myself a skeptic, but LLMs continue to surprise me everyday.
Since mid-2023, I've yet to have an issue.