But that's a reason you should expect it to stop working soon, just like older tricks such as "my grandmother will die". If you have a universal 'blind' prompt which can increase performance a little bit... the AI labs can just toss that into the training loop to teach the model to do it automatically, whatever 'it' was, like 'trying harder' or 'writing down a useful idea'. And then the prompt stops working, because the next generations do it by default.
(This also suggests that you should expect them to generally be bad at judging novel self-generated prompts/skills - if they could judge those, they would already be using them! There is a generator-verifier gap, but it is already exploited heavily during post-training, and there is not much low-hanging fruit left there.)
> But that's a reason you should expect it to stop working soon
I agree. (And it seems like it already stopped working, if I understood others here correctly.)
But again if I understood others here correctly, an academic paper like this would necessarily be studying models that are well behind the leading edge at time of publication. My argument is that the study authors shouldn't be faulted for investigating something that currently seems unlikely to work, because at the time of investigation it would have seemed much more likely to work.
> I'm not convinced that it possibly could have. It takes time for papers to get published and the LLM world is moving rather quickly.
The paper was submitted to arXiv on 13th February, and we're here reading it, less than a week later.
But we don't have to assume. The list of models is right there in the paper, on page 5:
> We select seven frontier models: GPT-5.2 (OpenAI), Claude Opus 4.5, Claude Opus 4.6, Claude Sonnet 4.5, Claude Haiku 4.5 (Anthropic), Gemini 3 Pro, and Gemini 3 Flash (Google). All models use temperature 0 for deterministic sampling.
Have there not been previous iterations of these tools where such techniques were actually effective?