>> "How would you organize these LLM quirks, ontologically speaking? I have this notion that the better path is to identify what kinds of things are emerging and prompt to do those things better; accept it as something LLMs are going to do and treat it as something to improve on instead of something to eliminate."
The output improves a bit over blind prompting once the results are applied. Here's the gist:
1. Compression artifacts — the model encoding structure implicitly
2. Attention-economy mimicry — the model trained on engagement-optimized writing
3. False epistemic confidence — the model performing knowledge it doesn't have
4. Affective prosthetics — the model simulating emotional register it can't inhabit
5. Mechanical coherence substitutes — the model managing the problem of continuity
Spot corrections are too spotty. Working at a higher level — naming the category of problem rather than patching individual instances — seems to work better.
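The "go higher level" idea can be sketched as a system prompt that names each quirk category and asks the model to improve the tendency rather than suppress it. The five categories come from the taxonomy above; the per-category guidance strings and the function name are my own illustrative wording, not a prescribed prompt.

```python
# Sketch: render the quirk taxonomy into one high-level instruction block,
# instead of spot-correcting individual sentences after the fact.
# Category names are from the taxonomy; guidance wording is hypothetical.

QUIRK_GUIDANCE = {
    "compression artifacts": "make implicit structure explicit (headings, transitions)",
    "attention-economy mimicry": "trade hooks and punch for plain topic sentences",
    "false epistemic confidence": "mark uncertain claims as uncertain",
    "affective prosthetics": "drop simulated emotional register unless asked for it",
    "mechanical coherence substitutes": "earn continuity through argument, not connective tissue",
}

def build_system_prompt(guidance: dict) -> str:
    """Turn the taxonomy into a single improve-don't-eliminate instruction block."""
    lines = [
        "These tendencies will appear in your writing. "
        "Improve them; don't just suppress them:"
    ]
    for quirk, fix in guidance.items():
        lines.append(f"- {quirk}: {fix}")
    return "\n".join(lines)
```

The block would then be prepended to (or merged into) whatever task prompt follows, so corrections ride along with every generation instead of being patched in afterward.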