Hacker News | ianm218's comments

Total addressable market

Can you expand on this? You definitely don’t need 6 months for a note taking app to be usable; it’s more that you need to compete with the state of the art, right?

I'd argue you need between 6 minutes and 6 years.

It depends entirely on what you want. You can literally write a JavaScript one-liner that makes a <textarea>, stores the content back in the URL, and works serverless on pretty much any platform with a web browser.
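A minimal sketch of that idea (hypothetical, nothing product-specific): the note lives in the URL fragment, so a static page is the whole app.

```javascript
// Sketch of a serverless note "app": the note's text is kept in the URL hash.
// These two functions are the entire persistence layer.
const encodeNote = (text) => "#" + encodeURIComponent(text);
const decodeNote = (hash) => decodeURIComponent(hash.replace(/^#/, ""));

// In a browser you would wire it up roughly like this (not run here):
//   const ta = document.body.appendChild(document.createElement("textarea"));
//   ta.value = decodeNote(location.hash);
//   ta.oninput = () => { location.hash = encodeNote(ta.value); };

console.log(decodeNote(encodeNote("hello, notes & URLs")));
```

Bookmarking or sharing the URL is the "save" and "sync" story, which is also exactly where the must-have list below starts to bite.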

You can also write a note taking app that will be federated yet private, that will have its own scripting language, etc. I mean, you can yak-shave your way into writing your own OS, or even designing your own CPU for it.

So... I'm not sure that metric, time, means much without proper context, including who does it. Regardless of the tooling used, it's quite different if you are a professional developer, designer, fullstack dev, prototypist, PM, marketer, writer, etc.


> Can you expand on this?

Sure. Does your note taking app support formatting? You don't need it today; you will need it at some point. Images? Same.

Does it handle file corruption etc.? No? Then it's pretty much useless.

Does it work across devices? In the modern world, again, it is pretty much useless without that.

It works across devices? Then it needs hosting. If it is hosted, it needs auth, and it needs backups.

You can go on forever.

The bar for even a very minimal note taking app that you will actually use is very high; with other software it is even higher.

And this is not even state of the art; these are must-haves.


Obsidian is super popular and is generally local-first and device-specific.

And even so, if you're starting a note taking app, most of those problems, like file corruption and image support, are largely solved. There is also the benefit of being able to reference tons of open source implementations.

I think one month to a Notion-like app that is prod-ready, if you just need auth + markdown + images + standard text editing.


> If we end up in a place where the craft truly is dead, then congratulations, your value probably just dropped to zero

I think the craft is going to die, and I am not thrilled about it. I don't feel like there is a contradiction there.


There's no contradiction, but if/when it happens, being "not thrilled" will overflow off the bottom of your list of concerns.

I’m not rooting for OpenAI, but OpenRouter is a very self-selecting group. Most API users of Anthropic or OpenAI would just go through the normal API.


This was my initial understanding, but if you want AI agents to do complex multi-step workflows, e.g. making data pipelines, they just do so much better with MCP.

After I got the MCP working in my case, the performance difference was dramatic.


Yeah this is just straight up nonsense.

Its ability to shuffle around data and use bash and do so in interesting ways far outstrips its ability to deal with MCPs.

Also remember to properly name your cli tools and add a `use <mytool> --help for doing x` in your AGENTS.md, but that is all you need.
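As a sketch (tool name hypothetical), that AGENTS.md entry can be as small as:

```markdown
## CLI tools
- `pipeline-tool`: use `pipeline-tool --help` for creating and validating data pipelines
```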

Maybe you're stuck on some bloated frontend harness?


> Yeah this is just straight up nonsense.

I was just sharing my experience; I'm not sure what you mean. Just an n=1 data point.

From first principles I 100% agree, and yes, I was using a CLI tool I made with typer that has a super clear --help plus documentation that was supposed to guide multi-step workflows. I just got much better performance when I tried MCP. I asked Claude Code to explain the diff:

> why does our MCP onboarding get better performance than using objapi to make these pipelines? Like I can see the performance is better but it doesn't intuitively make sense to me why an MCP does better than an API for the "create a pipeline" workflow

It's not MCP-the-protocol vs API-the-protocol. They hit the same backend. The difference is who the interface was designed for.

The CLI is a human interface that Claude happens to use. Every objapi pb call means:

- Spawning a new Python process (imports, config load, HTTP setup)
- Constructing a shell command string (escaping SQL in shell args is brutal)
- Parsing Rich-formatted table output back into structured data
- Running 5-10 separate commands to piece together the current state (conn list, sync list, schema classes, etc.)

The MCP server is an LLM interface by design. The wins are specific:

1. onboard://workspace-state resource — one call gives Claude the full picture: connections, syncs, object classes, relations, what exists, what's missing. With the CLI, Claude runs a half-dozen commands and mentally joins the output.
2. Bundled operations — explore_connection returns tables AND their columns, PKs, FKs in one response. The CLI equivalent is conn tables → pick table → conn preview for each. Fewer round-trips = fewer places for the LLM to lose the thread.
3. Structured in, structured out — MCP tools take JSON params, return JSON. No shell escaping, no parsing human-formatted tables. When Claude needs to pass a SQL string with quotes and newlines through objapi pb node add sql --sql "...", things break in creative ways.
4. Tool descriptions as documentation — the MCP tool descriptions are written to teach an LLM the workflow. The CLI --help is written for humans who already know the concepts.
5. Persistent connection — the MCP server keeps one ObjectsClient alive across all calls. The CLI boots a new Python process per command.

So the answer is: same API underneath, but the MCP server eliminates the shell-string-parsing impedance mismatch and gives Claude the right abstractions (fewer, chunkier operations with full context) instead of making it pretend to be a human at a terminal.

For context I was working on a visual data pipeline builder and was giving it the same API that is used in the frontend - it was doing very poorly with the API.


I have never had a problem using CLI tools instead of MCP. If you add a little list of the available tools to the context, it's nearly the same thing, with the added benefit of e.g. being able to chain multiple together in one tool call.


Not doubting you, just sharing my experience: I got dramatically better results with MCP for multi-step workflows that involve feedback from SQL compilers. The right harness with the right tools around the API calls could probably match that performance, but it was easier for me to stop fighting it.


Did you test actually having command line tools that give you the same interface as the MCPs? Because that is generally what people are recommending as the alternative: not letting the agent grapple with <random tool> that is returning poorly structured data.

If your options are a "compileSQL" MCP tool and a "compileSQL" CLI tool that both return the same data as JSON, the agent will know how to e.g. chain jq, head, and grep to extract a subset from the latter in one step, but will need multiple steps with the MCP tool.

The effect compounds. E.g. let's say you have a "generateQuery" tool vs. CLI. In the CLI case, you might get it piping the output from one through assorted operations and then straight into the other. I'm sure the agents will eventually support creating pipelines of MCP tools as well, but you can get those benefits today if you have the agents write CLIs instead of bothering with MCP servers.
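A sketch of what such a CLI can look like, with a hypothetical "compile-sql" tool (name, behavior, and the trivial "compiler" are all made up for illustration): the point is only that it takes a plain argument and prints JSON, so an agent can chain it with jq/head/grep in one bash step.

```javascript
#!/usr/bin/env node
// Hypothetical "compile-sql" CLI: the same operation an MCP tool might expose,
// but as a process that prints JSON, so it composes with other tools in a pipe.

// Stand-in for a real backend call; here it just checks the statement type.
function compileSQL(query) {
  const ok = /^\s*select\b/i.test(query);
  return ok
    ? { ok: true, query, errors: [] }
    : { ok: false, query, errors: ["only SELECT statements supported in this sketch"] };
}

// CLI entry point: `compile-sql "<query>"` → one JSON object on stdout.
const query = process.argv[2] ?? "";
console.log(JSON.stringify(compileSQL(query)));
```

An agent can then run e.g. `compile-sql "select 1" | jq '.errors'`, or feed one tool's JSON straight into the next, which is the compounding effect described above.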

I've, for that matter, had to replace MCP servers with scripts that Claude one-shot, because the MCP servers lacked functionality... It's much more flexible.


Isn’t there something off about calling predictions about the future, ones that aren’t possible with current tech, "hype"? Like, people predicted AI agents would be this huge change; they were called hype since earlier models were so unreliable, and now they are mostly right, as AI agents work like a mid-level engineer. And clearly superhuman in some areas.


> AI agents work like a mid-level engineer

They do not.

> And clearly super human in some areas.

Sure, if you think calculators or bicycles are "superhuman technology".

Lay off the hype pills.


> They do not.

Do you have anything to back this up? This seems like a shallow dismissal. Claude Code is mostly used to ship Claude Code and Claude Cowork, which are at multi-billion ARR. I use Claude Code to ship technically deep dev tools for myself, for example here: https://github.com/ianm199/bubble-analysis. I am a decent engineer, and I wouldn't otherwise have the time or expertise to ship that.


> Sure, if you think calculators or bicycles are "superhuman technology".

Uh, yes they are? That's why they were revolutionary technologies!

It's hard to see how a bike that isn't superhuman would even make sense. Being superhuman in at least some aspect really seems like the bare minimum for a technology to be worth adopting.


By "superhuman" the LLM cultists mean "the singularity", "brain in a chip", "eternal life via digitization" and the rest of the claptrap.

Don't let them off the hook.


Started working on an application to make it easy to see which parcels in NYC are upzoned by the City of Yes[1] changes that were passed last year.

I started off trying to make it a service to help people who are interested in ADUs get connected with architects/contractors, but spent a lot of time working on the interactive map to explore related ideas. The site is at buildbound.xyz and the map at buildbound.xyz/map. Right now, for example, it's very hard to tell if your site qualifies for the TOD upzoning portion of the City of Yes, so maybe there is room to crunch those kinds of numbers and provide it as a public service.

Trying to decide whether to keep going down the ADU route in NYC, even though the market is really early here; expand to NY State/California, where the ADU market is a bit further along; or keep doubling down on making the best interactive zoning/land use map in NYC and see if there is any product-market fit to be found.

[1]https://www.nyc.gov/content/planning/pages/our-work/plans/ci...


I'm in the same boat as some of the other commenters using Claude Code, but I have found it at least a 2x in routine backend API development. Most updates to our existing APIs are on the order of "add one more partner integration following the same interface here and add tests with the new response data". So it is pretty easy to hand that to Claude Code: tell it where to put the new code, tell it how to test, and let it iterate on the tests. Something that may have taken a full afternoon or more gets done much faster, and often with a lot more test coverage.

