properbrew's comments

Note that this looks to have been from a few days ago:

> 23 March 2026


> I'm about to embark on some months of cold-calling (it will be brutal)

This is something I need to do, have no knowledge of, and it is definitely going to be harder than building the thing.

> it would be cool to record the calls and feed them to AI for some simple/crude auto-summary which automatically pulls out rejection reason, concerns, interesting points etc

This is very close to the software I've been building over the last few months: offline note transcription with summary generation in a nicely formatted PDF/Docx using local models. Codex is available if you don't want to do the inference on your laptop (I will sort Claude out soon as well).

If it sounds like something of interest, feel free to send me a message. More than happy to send you a 30-day trial version.


This is something that interests me a lot. In my own experience, ChatGPT is awesome at troubleshooting; it's given me terminal commands that are perfect and use the exact flags needed to identify and then fix the problem.

Why is there this massive disparity in experience? Is it the automatic routing that ChatGPT's auto mode is doing? Does it just so happen that I've hit all the "common" issues (one was flashing an ESP32 to play around with WiFi motion detection - https://github.com/francescopace/espectre)? Even then, I just don't get this "ChatGPT is shit" output that even the author is seeing.


The author uses the free version of ChatGPT. They fail to mention that anywhere, but you can see it from the screenshots.

And they don’t provide the prompt, so you can’t really verify if a proper model has the same issues.


> to do great things in the context of the right orchestration and division of effort

I think this has always been the case. People regularly do not believe that I built and released an (albeit basic - check the release date: https://play.google.com/store/apps/details?id=com.blazingban...) Android app using GPT-3.5. What took me a week or two of wrangling and orchestrating the LLM, picking and choosing what to specifically work on, can now be done in a single prompt to Codex telling it to use subagents and worktrees.


> We define an MAU as an authenticated Pinterest user who visits our website, opens our mobile application or interacts with Pinterest through one of our browser or site extensions, such as the Save button, at least once during the 30-day period ending on the date of measurement.

I wonder if we're going to get an MAHU (Monthly Active Human Users) stat in the future.


Looks simple but very useful. This came across my screen at just the right time; I have a few ideas that could do with something like this.


Glad it came at the right time. Honestly, even though the tool seems simple, I've been thinking of using it not only for raw access control, but also to evaluate how different models behave with risky tasks - e.g. giving a model access to a malicious tool but guarding it with 'deny'.

I'm curious to hear what your workflow ideas are.


I meant simple in a good way.

I'm utilising codex within an application for some simple inference stuff in read-only mode, but this gives me a way to easily wrap it in some guardrails.
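The deny-guard idea discussed above can be sketched as a small policy wrapper around tool calls. This is a hypothetical illustration only - `ToolGuard` and its policy format are made up for the sketch, not the actual tool's API:

```python
# Hypothetical sketch of guarding risky tools with an allow/deny policy.
# `ToolGuard` and the policy shape are invented for illustration; the real
# tool discussed in this thread may work differently.

class ToolGuard:
    """Wraps callable tools and enforces a per-tool allow/deny policy."""

    def __init__(self, policy):
        # policy maps tool name -> "allow" or "deny"; default is deny
        self.policy = policy
        self.tools = {}

    def register(self, name, fn):
        self.tools[name] = fn

    def call(self, name, *args, **kwargs):
        if self.policy.get(name, "deny") != "allow":
            # Denied tools are reported as refusals rather than executed,
            # which lets you observe how a model reacts to the rejection.
            return {"status": "denied", "tool": name}
        return {"status": "ok", "result": self.tools[name](*args, **kwargs)}


guard = ToolGuard(policy={"read_file": "allow", "delete_file": "deny"})
guard.register("read_file", lambda path: f"contents of {path}")
guard.register("delete_file", lambda path: f"deleted {path}")

print(guard.call("read_file", "notes.txt"))    # runs the tool
print(guard.call("delete_file", "notes.txt"))  # blocked by policy
```

Defaulting unknown tools to "deny" mirrors the read-only-plus-guardrails setup described above: anything not explicitly allowed is refused.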


I don't think it's the case that no one liked it; there's just too much going on, so it probably never came across the right eyes.

I linked one of my projects in a post and it got some really good responses. I did a bit more work on it and posted a Show HN thinking a few people might be interested, but it got 0 traction.

I even made it a point to go on the new Show HN page and check out some people's projects (how can I expect anyone to check mine out if I'm not doing the same), and it is hard to keep up.

I have another app that I've been working on for the past 3 months, and whilst I want to do a Show HN to discuss how I built it and the moments I was banging my head against the wall working on a bug, I sadly wonder if there's any point.


I feel the same way, wondering if there’s a point.

That said, I shared a cool post I came across (not a Show HN) about drone hacking. It got basically 0 traction. Not even 10 hours later, someone else shared the exact same link and got 100+ upvotes and decent discussion, which seems to point to a giant element of luck and the right eyes seeing the thing shared.


If you're looking for free STT, you can use Whistle across Windows/Mac/Linux and Android (iOS releasing soon):

https://blazingbanana.com/work/whistle


I was using a custom skill to spawn subagents, but it looks like the `/experimental` feature in codex-cli has the SubAgent setting (https://github.com/openai/codex/issues/2604#issuecomment-387...)


Yes, I was using that, but the prompt given to the agents is not correct. Codex sends the first prompt to the first agent and the second prompt to the second agent, but the second prompt references the first one, which is completely incorrect.

