If so, it would be great to provide more models through OpenRouter.
This looks interesting but not enough to make me go through the trouble of setting up a separate account, funding it, etc.
For smaller startups, it's easier to go through one provider (OpenRouter) instead of having the hassle of managing different endpoints and accounts. You might get access to many more users that way.
Mid-to-large companies might want to go directly to the source (you) if they really want to optimize the last mile, but even that is debatable for many.
Hey @nnx & @hazelnut, good question, but no, we're not IonStream on OpenRouter.
The purpose of IonRouter is to let people publicly see the speed of our engine firsthand. It makes the sales pipeline a lot easier when a prospect can just go try it themselves before committing. Signup is low friction ($10 minimum to load, and we preload $0.10) so you can test right away.
That said, we do plan to offer this as a usage-based service within our own cloud. We own every layer of the stack: inference engine, GPU orchestration, scheduling, routing, billing, all of it. No third-party inference runtime, no off-the-shelf serving framework. So there's no reason for us to go through a middleman.
This looks very interesting, but I wonder how the rewrite approach is going to impact long-term maintenance and porting changes _back_ from Tree Sitter.
Since you mention WASM-readiness, did you consider using the official Tree Sitter WASM builds, nicely packaged with wazero (a pure-Go WASM runtime)?
It might help with staying in sync with upstream over the long term and, while probably a bit slower, it has nice security and GC advantages too.
Hmm no, because in the case of purchasing alcohol the ID check is 1:1, in time and in space, and it's ephemeral (unless the clerk has an extreme photographic memory).
In the case of an online ID check, even with nice-looking privacy terms, there is no guarantee that your ID won't be stored forever and/or re-analyzed many times, cross-checked with other services, or, worse, leaked.
Really interesting. I’m currently using https://github.com/fastschema/qjs but would love the lower-level control that your reactor and Go library provide.
Could loop_once have been called run_microtask, if I understand the “loop” boundary correctly?
Is there a way to be more granular in execution? Like running a single “basic block” (up to a jump), or running until the next function call?
I agree. It’s the enshittification of the internet. Luckily we still have infrastructure providers with more sensible offerings. We don’t have to use AWS, GCP, etc.