Thank you! This is exactly the kind of information I’ve been searching for.
That said, I feel like there’s a mutual-exclusivity problem between ‘Start with the "Whole" Picture’ and ‘Break Down the Process’.
For example, how does this from your first suggestion:
> explain the end-state you want (even simple features like error handling or specific libraries). If you have ANY achieved method or inclusion for what the end-state should include, you write it out clearly.
not contradict this from your second suggestion:
> Instead of expecting it to design an entire feature, ask for components step-by-step
Additionally, you said:
> There is a very decent chance that you have to do this across multiple new chats, after 3-5 iterations in, the AI will most likely crash and burn. At that point, you open a new chat, paste in the whole working codebase, and picked up from where you left off in the last chat. You have to do this A LOT.
But IME, by the time the model chokes on one chat, the codebase is already large enough that pasting the whole thing into another chat typically hits context-window limits. Perhaps, for the kinds of projects I typically work on, a good RAG tool would offer better results?
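To make the RAG idea concrete, here is a toy sketch of the retrieval step: instead of pasting the whole codebase into a fresh chat, a tool indexes it and sends only the chunks relevant to the current ask. Real tools use embedding models for similarity; this uses simple token overlap purely as an illustration, and the file contents are made up.

```python
def tokenize(text: str) -> set:
    """Crude tokenizer: lowercase, split on whitespace."""
    return set(text.lower().split())

def top_k_chunks(query: str, chunks: list, k: int = 3) -> list:
    """Rank code chunks by token overlap with the query (a stand-in
    for embedding similarity) and return the k most relevant."""
    q = tokenize(query)
    scored = sorted(chunks, key=lambda c: len(q & tokenize(c)), reverse=True)
    return scored[:k]

# Hypothetical codebase snippets standing in for indexed files.
codebase = [
    "def parse_invoice(path): ...  # invoice parsing and validation",
    "def send_email(to, body): ...  # SMTP wrapper",
    "def retry(fn, attempts=3): ...  # generic retry helper",
]
relevant = top_k_chunks("fix a bug in invoice parsing", codebase, k=2)
```

Only `relevant` would then be pasted into the new chat, keeping the prompt well under the context limit regardless of how large the project grows.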
To be clear, right now I’m only discussing my difficulties with the chatbots offered by the model providers—which, for me, is mostly Claude but also a bit of ChatGPT; my experience with Copilot is outdated so it probably deserves another look, and I’ve not yet tried some of the third-party, code-centric apps like aider or cursor that have previously been suggested, though I will soon.
As for your recommended hacks, these look helpful; thank you! The only part I find odd is your inclusion of “Write the code in full with no omissions or code comment-blocks or 'GO-HERE' substitutions”. I myself feel I get far better results when I ask the model to 1) write full code for the methods that are likely to be the kinds of generic CS logic a junior would know, 2) write stubs for the business logic, and 3) implement the more complex business logic myself, manually. IOW, and IME, they’re really good at writing boilerplate and at generating or reasoning about junior-level CS logic. That’s indeed helpful to me, but it’s a far cry from the “ChatGPT can write entire apps with minimal effort” hype I keep seeing, and it’s only marginally better, IME at least, than the inline-completion and automatic boilerplate features that have been included in the IDEs I’ve used for over a decade.
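To illustrate the split I mean, here is a hypothetical sketch (names and signatures are invented for the example): the generic, junior-level plumbing is written in full, while the business logic is requested as a stub to be implemented by hand.

```python
from typing import List

def chunk(items: List[dict], size: int) -> List[List[dict]]:
    """Generic CS logic: split a list into fixed-size batches.
    Safe to have the model write this in full."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def apply_discount_policy(order: dict) -> dict:
    """Business logic: ask the model for a stub only, then fill in
    the real domain rules yourself."""
    raise NotImplementedError("domain-specific pricing rules go here")
```

The batching helper is the sort of thing the model reliably gets right; the discount policy is where it tends to hallucinate requirements, so it stays a stub.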
> It's a labor of love and it's things you learn over time, and it won't happen if you don't put the work in.
Indeed. I do love playing with this stuff and learning more. Thank you again for sharing your knowledge!
> I wrote all of this haphazardly in a Google Doc. GPT-4 organized it for me cleanly.
I am regularly impressed at how well these models behave when asked to summarize a document or even when asked to expand a set of my notes into something more coherent; it’s truly remarkable!