This is interesting but don't you worry that you're competing with entire companies (e.g. Anthropic) and thus it's a losing battle? Since you're re-implementing a bunch of stuff they either do in their harness or have decided it was better not to do?
I think it's worth remembering that for any offering like that, it necessarily needs to be ~one-size-fits-all, while what you come up with doesn't need to be.
They're solving a different problem than you. So I think it's very plausible that you could come up with something that, for your use case, performs considerably better than their "defaults".
Personally I don't see aipack's pro@coder and other approaches (claude code, cursor, copilot, etc.) as competitors anymore. I use both approaches to solve different problems. I keep using the agentic solutions (claude code style) for more operational tasks, a bit like "smart interfaces to the terminal", and pro@coder for coding / engineering tasks where I need much tighter control over long-running work sessions.
TUIs built today should be usable by AI agents. I'm not sure exactly what it looks like but I'm imagining that every UI view has an associated CLI command that can yield precisely that view. Maybe like formally structured breadcrumbs, or maybe like Emacs "keyboard macros".
I've found agents effective using GUI apps with nothing but the ability to take screenshots and send mouse and keyboard commands. I imagine they'd work even better with a TUI, even if it's not designed with agents in mind at all.
For local inference, sure, but we simply lack the computing power to train them on all the images and HTML content available on the internet and in books. That will happen sometime in the future, though.
Ah right, sorry, you were making a much more interesting point than my reply! I read "UI development" and jumped to the conclusion that the point was just about inference-time modify-test cycles. Yes, agreed, if they trained on images, or even better (?) on (code, image) or (code-delta, image-delta) pairs, they would surely be better at UI development.
> Markdown is already beautiful. We don't render it. We don't preview it. You read it raw, the way it was meant to be.
I don't want to be inflammatory or shallowly dismissive of other people's opinions. But I find this puritanical view surprising when we're talking about presenting markdown for reading by humans.
Take markdown links for example. In a terminal those should surely be rendered as OSC8 hyperlinks where supported: that gives actual link functionality, as well as being much more readable.
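For the curious, here's a minimal sketch of what an OSC 8 hyperlink looks like on the wire (the URL and label are just placeholders): escape sequences bracket the visible text, so a supporting terminal renders "example" as a clickable link.

```shell
# OSC 8 hyperlink: ESC ] 8 ; ; URL ST  label  ESC ] 8 ; ; ST
# (ST, the string terminator, is ESC \)
url="https://example.com"
label="example"
link=$(printf '\033]8;;%s\033\\%s\033]8;;\033\\' "$url" "$label")
printf '%s\n' "$link"
```

In terminals without OSC 8 support, only the label is shown, so this degrades gracefully.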
Or take markdown code blocks; to me it seems clear that they should be rendered with syntax highlighting, probably in a box or against a slightly different background color to set them off from the rest of the document. Triple backticks are for machines, not humans, surely? I don't think they're beautiful.
I don't know the history / lore of what is CommonMark vs non-standard add-ons etc. But GitHub supports things like <details> tags; clearly it's no good just rendering that as plain text. A browser renders it well; I'm not sure how you would in a terminal.
Similarly tables should surely at least have padding added so that each column has constant width as you look down the rows, but promising to output it raw wouldn't do that since markdown itself has no such requirement. Which gets at my overall point: markdown is a format for capturing richer document data while writing; this should be rendered for humans to read.
Agreed. I want my h1s to be larger than my h2s. That visual distinction is how I parse data faster. Flat Markdown with no formatting feels like it's missing the point of Markdown.
And are they really proposing that we ought to read italics and *bold* like this?
Edit: Oops. Looks like HN has formatted bold/italics for me. Italics should be bracketed with one asterisk and bold bracketed with two asterisks.
Most of what the article says is true regarding coding agents, but articles like this are making a big mistake: they're completely forgetting that agentic applications aren't all claude code.
We're entering an era where many organisations will have agentic loops running in their own backends.

There's a spectrum of constraint that can be applied to these apps -- at one end, claude code running unsandboxed on your laptop with all permissions off, able to cook up anything it wants with bash and whatever CLIs and markdown skill documents are available; at the other end, an agentic loop running in the backend of a bank or other traditionally conservative "enterprise"/corporate organisation.

Engineering teams working in that latter category are going to want to expose their own networked services to the agentic app, but they're going to want to do so in a controlled manner. And a JSON-RPC API with clearly defined single-purpose tool-calling endpoints is far, far closer to what they're looking for than the ability for the agent to do wtf it wants by using bash to script its own invocation of executables.
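As a hedged sketch of what such a single-purpose endpoint might look like (the host, method name, and parameter names are all hypothetical), the agent runtime would issue a JSON-RPC 2.0 request whose method and params schema are defined and validated server-side:

```shell
# Hypothetical JSON-RPC 2.0 request for one narrowly scoped tool.
# The server only accepts this method with this params shape; the
# agent can't stray outside it.
req='{"jsonrpc":"2.0","id":1,"method":"get_account_balance","params":{"account":"acct_123"}}'
printf '%s\n' "$req"

# The actual call (commented out; the endpoint is made up):
# curl -s -X POST https://payments.internal/rpc \
#   -H 'Content-Type: application/json' \
#   -d "$req"
```

The constraint lives in the contract, not in the transport: each endpoint does one thing, and the backend can authenticate, authorize, and audit every call.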
Sure, but it's pretty trivial to generate a CLI application that talks to that API.
That's how I let agents access my database too. Letting them access psql is a recipe for disaster, but a CLI executable that contains the credentials, and provides access to a number of predefined queries and commands? That's pretty convenient.
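A minimal sketch of that pattern (the name "dbtool", the credentials, and the queries are all made up): the wrapper embeds the connection string and exposes only a fixed menu of named queries.

```shell
# Hypothetical "dbtool" wrapper: credentials stay inside the script,
# and the agent can only pick from predefined query names.
DB_URL="postgres://agent_readonly:s3cret@localhost:5432/app"  # illustrative

run_query() {
  case "$1" in
    open-orders)
      psql "$DB_URL" -c "SELECT id, status FROM orders WHERE status = 'open';"
      ;;
    recent-signups)
      psql "$DB_URL" -c "SELECT email, created_at FROM users ORDER BY created_at DESC LIMIT 20;"
      ;;
    *)
      echo "unknown command: $1" >&2
      return 1
      ;;
  esac
}

# Usage (from the agent's Bash tool): dbtool open-orders
```

Anything outside the predefined arms fails closed, which is the point: the agent gets convenience, not raw SQL.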
Yes. But are you letting your agent make the decision of when and how to call that CLI? And presumably you're invoking it via the Bash tool. In which case your agent is free to write ad-hoc bash orchestration around your CLI calls. And perhaps you don't just have one such CLI but rather N for N different services.
And so we've arrived at the world of ad-hoc on-the-fly bash scripting that teams writing backend agentic applications in more "traditional"/conservative companies are not going to want.
Don't get me wrong, it's great for claude-code-type local computer automation use cases -- I do the same as you there.
The main problem with many IT and security people at many tech companies is that they communicate in a way that betrays their belief that they are superior to their colleagues.
"unlock innovators" is a very mild example; perhaps you shouldn't be a jailor in your metaphors?
I find it interesting that you latched on their jailor metaphor, but had nothing to say about their core goal: protecting my privacy.
I'm okay with the people in charge of building on top of my private information being jailed by very strict, mean sounding, actually-higher-than-you people whose only goal is protecting my information.
Quite frankly, if you changed any word of that, they'd probably be impotent and my data would be toast.
A bit crude, maybe a bit hurt and angry, but has some truth in it.
A few things help a lot (for BOTH sides - which is weird to say as the two sides should be US vs Threat Actors, but anyway):
1. Detach your identity from your ideas or work. You're not your work. An idea is just a passerby thought that you grabbed out of thin air, you can let it go the same way you grabbed it.
2. Always look for opportunities to create a dialogue. Learn from anyone and anything. Elevate everyone around you.
3. Instead of constantly looking for reasons why you're right, go with "why am I wrong?" It breaks tunnel vision faster than anything else.
Asking questions isn't an attack. Criticizing a design or implementation isn't criticizing you.
Outside of applied math/stats, physics, etc., not that many scientists use LaTeX. I'm not saying it's not useful, just that I don't think many scientists will feel like a LaTeX-based product is intended for them.
Economists definitely use LaTeX, but as a field, it's at the intersection of applied math and social sciences so your point stands. I also know some Data Scientists in the industry who do.
> the worst case scenario for a rebase gone wrong is that you delete your local clone and start over. That’s it. Your remote fork still exists.
This is absolute nonsense. You commit your work, and make a "backup" branch pointing at the same commit as your branch. The worst case is you reset back to your backup.
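A self-contained sketch of that workflow, run in a throwaway repo so it's reproducible end to end (branch and commit names are illustrative):

```shell
# Set up a scratch repo for the demonstration.
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "work in progress"

# 1. Before rebasing, park a backup branch on the current commit.
git branch backup-before-rebase

# 2. Simulate a rebase gone wrong by moving the branch somewhere bad.
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "botched rebase result"

# 3. Worst case: hard-reset back to the backup. No re-clone needed.
git reset -q --hard backup-before-rebase
```

Even without the explicit branch, `git reflog` usually offers the same safety net, since the pre-rebase commit stays reachable for a while.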
The article focused on the local stdio MCP tools used by coding / computer automation agents like claude code and cursor, but missed the fact that we will need protocols for AI agents to call networked services, including async interactions with long-running operations.