


How we manage agentic memory is going to be key to scaling the future of agents. I'm curious how this scales with larger datasets. Say an agent has to keep 5 parallel conversations across 5 different social media accounts, and each conversation is 10,000 messages long. How would it manage parsing through huge DBs like that? Or is it more for recent context?

Also, let's say an agent runs thousands of times. Would each of those runs become part of the version history?

I'm particularly interested in how parsing through agent context would work!


Great questions.

On scaling: appends don't create versions; only updates and deletes do. So for your 10k-message conversations, uc.get() is O(n) reads. Standard database scaling. The versioning overhead only kicks in when you're actually mutating context, and even then we handle the optimization so you don't have to think about it.

On version history: each agent run doesn't create a version. Versions are created when you update or delete a message. So if your agent appends 1000 messages across 1000 runs, that's just 1000 appends. No version explosion.

Time travel (rewinding to a specific point) is also O(n). This was my main personal bottleneck when deploying B2C agents, so the API is heavily optimized for it.

For your 5 accounts x 5 conversations setup, you'd have 25 separate contexts, each scaling independently. Parse through them however you want: filter by metadata, or retrieve by timestamp or index.
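A minimal in-memory sketch of the semantics described above (appends are plain writes; only mutations snapshot state for rewind). `VersionedContext` is illustrative, not the UltraContext implementation:

```typescript
// Toy model: appends never snapshot, so 1000 runs that only append
// produce zero versions. Updates snapshot the prior state first,
// which is what makes O(n) rewind possible.
type Message = { content: string };

class VersionedContext {
  messages: Message[] = [];
  versions: Message[][] = []; // snapshots, taken only on mutation

  append(msg: Message): void {
    // No snapshot: appending never creates a version.
    this.messages.push(msg);
  }

  update(index: number, msg: Message): void {
    // Mutations snapshot the current state first, enabling rewind.
    this.versions.push(this.messages.map(m => ({ ...m })));
    this.messages[index] = msg;
  }

  rewind(version: number): Message[] {
    // Restore the state as it was before the (version+1)-th mutation.
    return this.versions[version].map(m => ({ ...m }));
  }
}

const ctx = new VersionedContext();
for (let i = 0; i < 1000; i++) ctx.append({ content: `msg ${i}` }); // 1000 appends, 0 versions
ctx.update(0, { content: "edited" }); // first mutation creates the first version
```

The point of the sketch: version count tracks mutations, not runs, so an append-only agent never pays versioning overhead.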


I don't get it. Why wouldn't I just use a Vector DB (Pinecone/Weaviate) for this? Just retrieve relevant chunks and stick them in the prompt. Feels like you're reinventing the wheel here.


Great question. They solve different problems.

Vector DBs are great for retrieval: "find relevant chunks to add to the prompt." But they're not designed for state management. When you update a message, there's no version history. When something breaks, you can't rewind to see exactly what the agent saw.

UltraContext manages the structured context (conversation history, tool calls, system prompts) with git semantics. Fork, rewind, merge. You can't git revert a vector embedding.

They're complementary. You'd use a vector DB to decide what goes in the context, and UltraContext to manage what's already there.
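A rough sketch of that division of labor. `searchVectors` is a hypothetical stand-in for a Pinecone/Weaviate query, and the plain array stands in for the managed context; neither is a real API:

```typescript
// Retrieval decides what enters the prompt; the context store keeps the
// versioned record of what the agent actually saw. Illustrative only.
type Chunk = { id: string; text: string; score: number };

function searchVectors(query: string, corpus: Chunk[], topK: number): Chunk[] {
  // Stand-in for a real vector query: sort by a precomputed score.
  return [...corpus].sort((a, b) => b.score - a.score).slice(0, topK);
}

const corpus: Chunk[] = [
  { id: "a", text: "refund policy details", score: 0.91 },
  { id: "b", text: "shipping times", score: 0.42 },
  { id: "c", text: "warranty terms", score: 0.77 },
];

// 1) The vector DB picks the relevant chunks...
const relevant = searchVectors("refunds", corpus, 2);

// 2) ...and the context store records what was actually injected
//    (in UltraContext this would be an append to the context).
const contextStore: { role: string; content: string }[] = [];
for (const chunk of relevant) {
  contextStore.push({ role: "retrieval", content: chunk.text });
}
```

Same query later against a changed index can return different chunks; the context store is what tells you which ones the agent saw at the time.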


Adding an API call between my Agent and the LLM seems like it would kill latency. How much overhead does this add vs just managing the list locally?


~20ms of overhead. It's built on Cloudflare's edge, so it's fast globally. The value isn't speed; it's that you stop rebuilding context infrastructure and get versioning and debugging for free.


I've always wondered (for this, Portkey, etc.): why not have a parallel option that fires an extra request instead of MITM-ing the LLM call?


You can fire them in parallel for simple cases. The issue is when you have multi-agent setups. If context isn't persisted before a sub-agent reads it, you get stale state. Single source of truth matters when agents are reading and writing to the same context.

For single-agent flows, parallel works fine.
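A toy model of the ordering issue. `BufferedStore` is illustrative: the explicit `flush` stands in for network persistence completing, and readers only ever see persisted state:

```typescript
// Fire-and-forget writes can leave a sub-agent reading stale state;
// waiting for persistence before handing off gives a consistent view.
class BufferedStore {
  private persisted: string[] = [];
  private inflight: string[] = [];

  append(msg: string): void {
    // Write issued but not yet persisted (the "parallel" request in flight).
    this.inflight.push(msg);
  }

  flush(): void {
    // Persistence completes: in-flight writes become visible to readers.
    this.persisted.push(...this.inflight);
    this.inflight = [];
  }

  read(): string[] {
    return [...this.persisted]; // readers only see persisted state
  }
}

const store = new BufferedStore();
store.append("parent agent output");
const staleView = store.read(); // sub-agent reads too early: sees nothing
store.flush();                  // persistence completes
const freshView = store.read(); // consistent view: one message
```

In a single-agent flow nothing reads the store mid-run, so the stale window is harmless; with sub-agents, handing off before the flush is exactly the bug.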


This may be interesting for agent orchestration. Can I use it to make multiple agents interact and save this context as a separate tree?


Yes. That's a core use case. You can either use a separate tree or even nest contexts. It's very powerful for sub-agents like the Explore and Task subagents built into Claude Code.

On UltraContext, you'd do it like this:

  // Shared context all agents read/write to
  await uc.append(sharedCtx, { agent: 'planner', content: '...' })
  await uc.append(sharedCtx, { agent: 'researcher', content: '...' })

  // Or fork into separate branches per agent
  const branch = await uc.create({ from: sharedCtx, at: 5 })

Schema-free, so you can tag messages by agent, track provenance in metadata, and branch/merge however you want. Full history on everything.


Lots of comments here from new accounts.


What about privacy? Does this imply storing messages in your layer?


Yes, messages are stored on our infrastructure.

For sensitive use cases, I'm exploring a few options: client-side encryption, data anonymization, or a local-first deployment where context never leaves your infra. Not there yet, but it's on the roadmap.

What's your use case? Happy to chat about what privacy guarantees you'd need.
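For the client-side encryption option mentioned above, a sketch of the shape it could take: encrypt content before it leaves your infra, so the storage layer only ever sees ciphertext. This uses Node's built-in crypto and is illustrative, not a proposed UltraContext API (a real scheme would also need per-message IVs and key management):

```typescript
// Client-side AES-256-GCM: the key stays on your side; only ciphertext
// (plus IV and auth tag) would ever be appended to the hosted context.
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

const key = randomBytes(32); // held client-side, never sent to the store
const iv = randomBytes(12);  // one message here; real use needs a fresh IV per message

function encrypt(plaintext: string): { ciphertext: Buffer; tag: Buffer } {
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { ciphertext, tag: cipher.getAuthTag() };
}

function decrypt(ciphertext: Buffer, tag: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

// Only the ciphertext would be appended to the context store.
const { ciphertext, tag } = encrypt("user's private message");
const roundTrip = decrypt(ciphertext, tag);
```

The trade-off: the hosted layer can still version and rewind opaque blobs, but anything that needs to read content (search, filtering by text) has to happen client-side.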



