codingdave's comments

> if AI is such a game changing platform

Again, you need to question the premise. Perhaps all the sales pitches and hype you heard simply weren't true?

In reality, many organizations have already implemented the AI-based improvements to their systems that they need. That work is done, and people are enjoying it. The AI vendors want to take it farther. Some coders want to take it farther. Some leaders are pushing it due to FOMO. But "the masses" do not want more. Step outside of the tech silos, and you'll find that most people do not want more AI than we already have.


It sounds like you re-invented SendGrid, but are charging more money for it.

Did you do competitive research, and can you articulate what you do that is different from the other mail platforms, and why that justifies the higher pricing?


OP was just asking for help to learn the skills, dude. They weren't saying they expected to start having solo shows in prestigious galleries or anything.

Lotus Domino Designer, lol.

I am riding this tech into the ground and have been working since 2008, off and on, to shut down anyone who is using it and migrate them to modern platforms. And still getting contracts to do so! I have done your standard modern SaaS gigs as well, but these days I'm finding that shutting down legacy tech is enjoyable work, while playing the startup/SaaS game is not.


It is about as important as the other low-code solutions that have been floating around since the 80s. AI doesn't change the big picture, which is that when you empower non-technical people to create apps, they will do so. 9 out of 10 won't work well. Of the 10% that do, 9 out of 10 will be over-fit to the team that built it and not grow, even while solving a problem for that team. For the 1% that work, are broadly useful, and grow... they will fail at scale, and a professional team will then need to come in and smooth out the rough edges.

AI is a new tool to walk that same path. Maybe it will let people go farther before needing help, maybe not. But if you are trying to run a low-code platform, your focus should be at least partially on that last step of the path - how do you help people take their work farther before needing to call for help?


Fun - I wrote something similar with static HTML and vanilla JS many years ago - but it is always cool to see people bringing their own flavor to these kinds of projects.

Why would a standard git flow not work? They can vibe code whatever they want, just review it before it is merged.

> I genuinely believed the safeguards Claude Code had built for me would be adequate and it was a serious miscalculation on my part.

I know I should have a better response than just maniacal laughter, but I really don't. This is what those of us who have not gone all-in on LLM coding have been saying... no matter how much it can write functional code, it still fails utterly at non-code concerns. In this case, security. It is insane that someone who worked on the product did not get that.


I fail to understand why people anthropomorphize LLMs. They are word calculators. Sure, impressive ones. Useful ones. But still just calculators. So it should be self-evident that limiting their output will change their output. It may be interesting to note how wide those changes are, but that is all it is.

Also, original title is: "Umwelt Engineering: Designing the Cognitive Worlds of Linguistic Agents". HN frowns on editorializing titles. From the guidelines: "Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize."

https://news.ycombinator.com/newsguidelines.html


Fair point on the title. I've emailed the mods to see if they can update the title.

I don't think it's anthropomorphizing to study how vocabulary constraints change reasoning quality. The paper doesn't claim LLMs think. It measures accuracy on tasks with known correct answers under different constraints and finds structured patterns.

"Limiting output changes output" is true but undersells what's happening. If you removed random words from a calculator's input language you'd expect degraded or noisy results. Instead, removing possessive "to have" (so the model can't say "the argument has a flaw" and has to say "the argument fails because...") improves ethical reasoning by 19pp across all three models. Removing "to be" helps Gemini by 42pp on that same task but collapses GPT-4o-mini by 27pp on a different one. The cross-model correlation is r=-0.75, meaning the same restriction systematically helps one model and hurts another.

That's not just different output. The restrictions are forcing different reasoning paths depending on the task and the model. Why specific vocabulary removals produce specific, predictable accuracy changes is the question. Running a 15,600-trial follow-up now to dig into it further.
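
Roughly, the core measurement is just per-restriction accuracy deltas, then a correlation of those deltas across models. A minimal sketch of that shape (ask_model and grade are hypothetical placeholders, not the paper's actual harness):

    # Sketch only: ask_model() and grade() stand in for whatever LLM client
    # and answer-checking you use; this is not the paper's real code.
    import statistics

    def accuracy(model, items, banned_words, ask_model, grade):
        # items is a list of (prompt, known_correct_answer) pairs
        hits = sum(grade(ask_model(model, prompt, banned_words), answer)
                   for prompt, answer in items)
        return hits / len(items)

    def effect_pp(model, items, banned_words, ask_model, grade):
        # Accuracy change in percentage points vs. the unrestricted baseline.
        base = accuracy(model, items, [], ask_model, grade)
        restricted = accuracy(model, items, banned_words, ask_model, grade)
        return 100 * (restricted - base)

    def pearson(xs, ys):
        # Correlate one model's per-restriction effects against another's;
        # the r = -0.75 cross-model figure is this kind of number.
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

If the same restriction gives a positive effect_pp on one model and a negative one on another across tasks, that is exactly the pattern that produces a strongly negative Pearson r.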


Oh, are automated job aggregators a thing again? We used to get a few posted every week. I thought everyone had given up on them because of the fake job thing, the inability to get through the AI screeners, etc.

I guess it being AI-driven is... not the slightest bit different than prior automation techniques. Except that it apparently is going to squash all content down into one wall of text on every job. As someone already pointed out the last time you posted this.

