Tree Search Distillation for Language Models Using PPO (ayushtambde.com)
67 points by at2005 12 hours ago | 5 comments



Why is almost every RL paper done on Qwen-2.5? That decreases its credibility.

> One might note that MCTS uses more inference compute on a per-sample basis than GRPO: of course it performs better

This part confused me; it sounded like they were only doing the MCTS at train time, then using GRPO to distill the MCTS policy into the model weights. So wouldn’t the model still have the same inference cost?


Ah, I meant that MCTS uses more inference-time compute (than GRPO) to produce a training sample
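
To make that concrete, here's a rough counting sketch. The group size and simulation count below are made-up illustrative numbers, not figures from the post:

    # Back-of-envelope: inference cost amortised over the training
    # samples each method emits. All numbers here are illustrative.

    def per_sample_cost(total_rollouts: int, samples_emitted: int) -> float:
        # Rollouts spent divided by training samples produced.
        return total_rollouts / samples_emitted

    if __name__ == "__main__":
        G, SIMS = 8, 64  # assumed group size / MCTS simulation count
        # GRPO: G rollouts in, G scored training samples out.
        print("GRPO:", per_sample_cost(G, G), "rollouts/sample")
        # MCTS: SIMS rollouts in, one searched trajectory out.
        print("MCTS:", per_sample_cost(SIMS, 1), "rollouts/sample")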

great write-up (and effort!! ;))

what are your thoughts on MCTS for coding?

this can/must be paired with a smart execution harness to optimise rollout and rollback of execution paths and system state (toy sketch at the end of this comment).

does this change the calculus for optimal post-training?
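
Something like the sketch below is what I have in mind. It's a toy file-copy snapshot harness; the class name, timeout, and snapshot strategy are all my own placeholders, and a real harness would more likely use containers or a copy-on-write filesystem:

    # Toy harness: snapshot the sandbox before exploring a branch,
    # roll back when the search abandons it.
    import shutil
    import subprocess
    import tempfile
    from pathlib import Path

    class SandboxHarness:
        def __init__(self, workdir: Path):
            self.workdir = workdir
            self._snapshots: list[Path] = []

        def snapshot(self) -> None:
            # Copy the working tree so this branch can be undone later.
            snap = Path(tempfile.mkdtemp(prefix="mcts-snap-"))
            shutil.copytree(self.workdir, snap, dirs_exist_ok=True)
            self._snapshots.append(snap)

        def rollback(self) -> None:
            # Restore the most recent snapshot, discarding branch edits.
            snap = self._snapshots.pop()
            shutil.rmtree(self.workdir)
            shutil.copytree(snap, self.workdir)
            shutil.rmtree(snap)

        def run(self, cmd: list[str]) -> subprocess.CompletedProcess:
            # Execute one candidate action (e.g. the test suite) in the
            # sandbox; the search scores the branch from the result.
            return subprocess.run(cmd, cwd=self.workdir,
                                  capture_output=True, text=True,
                                  timeout=60)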


Great post! I wonder why MCTS is not more popular as a test-time compute harness. Did you compare the performance of MCTS (without distillation) against other methods (e.g. best-of-N) at the same compute budget?
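
For reference, the best-of-N baseline I mean is just this; `generate` and `score` are stand-ins for the model's sampler and a verifier, nothing here is from the post:

    # Best-of-N at a fixed budget: spend all N rollouts on independent
    # samples and keep the best, vs. MCTS reallocating the same budget
    # toward promising branches.
    import random
    from typing import Callable

    def best_of_n(prompt: str,
                  generate: Callable[[str], str],
                  score: Callable[[str, str], float],
                  budget: int = 64) -> str:
        candidates = [generate(prompt) for _ in range(budget)]
        return max(candidates, key=lambda c: score(prompt, c))

    if __name__ == "__main__":
        # Stub sampler/verifier so the sketch runs end to end.
        gen = lambda p: f"{p} -> answer {random.randint(0, 99)}"
        verify = lambda p, c: float(c.endswith("7"))  # arbitrary stub
        print(best_of_n("2+2=?", gen, verify, budget=16))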


