
The streamed execution idea is novel to me. What's its significance?

I have been working on something with a similar goal:

https://github.com/livetemplate/tinkerdown




The significance is responsiveness — instead of waiting for the LLM to finish generating the entire code block before anything happens, each statement executes as soon as it's complete. So API calls start, UIs render, and errors surface while the LLM is still streaming tokens.
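To make that concrete, here is a minimal sketch of statement-by-statement streamed execution. All names (`fakeLlmStream`, `runStreamed`) are illustrative, not the project's actual API, and the statement-boundary detection is deliberately naive (a real implementation would use an incremental parser):

```typescript
// Simulates an LLM emitting a code block one character at a time.
async function* fakeLlmStream(): AsyncGenerator<string> {
  const code = 'log("skeleton");\nlog("section 1");\nlog("section 2");\n';
  for (const ch of code) yield ch;
}

async function runStreamed(
  stream: AsyncGenerator<string>,
  exec: (stmt: string) => void,
): Promise<void> {
  let buffer = "";
  for await (const chunk of stream) {
    buffer += chunk;
    // Naive statement boundary: a semicolon followed by a newline.
    let end: number;
    while ((end = buffer.indexOf(";\n")) !== -1) {
      // Runs immediately, while later tokens are still streaming in.
      exec(buffer.slice(0, end + 1).trim());
      buffer = buffer.slice(end + 2);
    }
  }
}

const executed: string[] = [];
await runStreamed(fakeLlmStream(), (stmt) => executed.push(stmt));
console.log(executed); // each statement was handed off as soon as it completed
```

The point is that `exec` fires for the first statement long before the last token arrives, so side effects like API calls or rendering start early.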

Combined with a slot mechanism, complex UIs build up progressively — a skeleton appears first, then each section fills in as the LLM generates it.
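A rough sketch of what a slot mechanism like that could look like (the class and slot names here are made up for illustration, not the actual implementation): the skeleton renders immediately with placeholders, and each slot fills in as its section is generated.

```typescript
type SlotName = "header" | "chart" | "details";

class SlottedView {
  private slots = new Map<SlotName, string>();

  constructor(private order: SlotName[]) {}

  fill(name: SlotName, content: string): void {
    this.slots.set(name, content);
  }

  render(): string {
    // Unfilled slots show a placeholder, so the user sees the structure right away.
    return this.order
      .map((n) => this.slots.get(n) ?? `[loading ${n}...]`)
      .join("\n");
  }
}

const view = new SlottedView(["header", "chart", "details"]);
console.log(view.render());      // skeleton: all three placeholders
view.fill("header", "Sales Q3"); // first section streams in
console.log(view.render());      // header filled, the rest still loading
```

Each `fill` call triggers a re-render, so the UI upgrades progressively instead of appearing all at once at the end.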

I wrote a deeper dive on how the streaming execution works technically: https://fabian-kuebler.com/posts/streaming-ts-execution/


That is super cool. Sorry to be nitpicky, but I'd really like to understand your mental model: I didn't get from the blog why the user waiting for a functional UI is a problem. Isn't the partially streamed UI non-functional?

I can see the value in early user verification, and maybe in interrupting the LLM so it doesn't proceed down an invalid path, but I guess this is customer-facing, so that's not as valuable.

"In interactive assistants, that latency makes or breaks the experience." Why? Because the user might just jump off?



Maybe I am a bit overdramatic ;) For me this is mostly about user experience. If the agent creates a complex mini app, the user might have to wait 30 seconds. That's 30 seconds without feedback. It's much nicer to see information appearing right away, especially if that information is helpful. Also, the UI can already be functional even if it's not 100% complete!

No, that makes sense. Waiting for feedback might lead to churn. Pretty cool idea.

Add a video or a live demo; there's still too much friction in this README.

Always Show then Ask.


And I meant to say: tinkerdown looks pretty cool!


