Hacker News | spondyl's comments

Well, I assume this is all just generated with Claude Code, right? Whether there was much back and forth with the LLM is a valid question, and there's nothing wrong with generating websites (I do it too for some side projects). Claude loves generating websites with a particular style of serif font. We also saw this with https://tboteproject.com/timeline/ and I've just generally seen it in various designs that coworkers have spit out over months of using Claude defaults.

I guess I just find it weird because all the signals are messed up so whenever I see these sorts of layouts, I feel like I'm looking at the average where I don't think "gorgeous and interesting" at all. Instead, I'm forced to think "I should be skeptical of this based on the presentation because it presents as high quality but this may be hiding someone who is not actually aware of what they're presenting in any depth" as the author may have just shoved in a prompt and let it spin.

There's actually a similarly designed website (font weights, font styles etc) here in New Zealand (https://nzoilwatch.com/) where at a glance, it might seem like some overloaded professional-backed thing but instead it's just some guy who may or may not know anything about oil at all, yet people are linking it around the place like some sort of authoritative resource.

I would have way less of an issue if people just put their names by things and disclosed their LLM usage (which again, is fine) rather than giving the potentially false impression to unequipped people that the information presented is actually as accurate and trustworthy as the polish would suggest.


I really wish I had that clout-chasing gene - it doesn't even occur to me until I see someone else do it.

I'm serious. The hype chasing clearly matters.

Things like this: https://github.com/instructkr/claw-code I mean, OK, serious people put in years of effort for 100 of those stars ...

It's continually wild how irrelevant hard, effortful, careful work is.

I think that's the game. Get up, look at the headlines, figure out how you can exploit them with vibe coding, do some hyphy project and repeat.

Maybe some lobster themed bullshit between openclaw and the claudecode leak.

I'm not being a cynic here, I'm just telling you what I'm going to do tomorrow.


We do need "hard effortful careful work" to keep planes flying, electrical grids running and medical devices safe. It's very relevant but very undervalued by our current economy.

That was the leaked code, and now it's just some random dude's harness, btw. He swapped it out and did a sloppy find and replace of "claude" to "claw".

It's sloppy work

Does not matter. Sloppiness is unimportant.


here's my attempt: https://github.com/kristopolous/Claudette

My shit's always too complicated. Let's see.


This website has "Curation assisted by AI." at the bottom.

Personally, I don't think I will be putting any such disclaimers or disclosures on my work, unless I deem it relevant to the functionality.


Here are some relevant excerpts from an October 2025 article[1]:

> In a message to GitHub’s staff, CTO Vladimir Fedorov notes that GitHub is constrained on capacity in its Virginia data center. “It’s existential for us to keep up with the demands of AI and Copilot, which are changing how people use GitHub,” he writes.

> The plan, he writes, is for GitHub to completely move out of its own data centers in 24 months. “This means we have 18 months to execute (with a 6 month buffer),” Fedorov’s memo says. He acknowledges that since any migration of this scope will have to run in parallel on both the new and old infrastructure for at least six months, the team realistically needs to get this work done in the next 12 months.

If you consider that six month parallel window to have started from the time of the October memo (written presumably at the start of October), then that puts us currently or past the point where they would have cut off their old DC and defaulted to Azure only.
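For what it's worth, the month arithmetic implied by the memo can be sketched like this. The start date is an assumption on my part (the article only gives month-level figures, so I'm pinning everything to the 1st of October 2025):

```python
from datetime import date

# Assumed anchor: the memo was written at the start of October 2025.
memo = date(2025, 10, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping to the 1st."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

full_exit = add_months(memo, 24)       # "completely move out ... in 24 months"
execute_by = add_months(memo, 18)      # "18 months to execute (with a 6 month buffer)"
realistic = add_months(memo, 12)       # 18 months minus 6 months of parallel running
parallel_cutoff = add_months(memo, 6)  # earliest end of the six-month parallel window

print(parallel_cutoff)  # 2026-04-01: old DC could be cut off from here onward
```

Under that (assumed) anchor, the earliest cutover lands around April 2026, which is what makes the timing line up with the recent instability.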

Whether plans or timelines changed, I have no idea of course but the above does make for a convenient timeline that would explain the recent instability. Of course, it could also just be symptomatic of increased AI usage generally and the same problems might have surfaced at a software level regardless of whether they were in a DC or on Azure.

Putting that nuance aside, personally I like the idea that Azure is simply a giant pile of shit operated by a corporation with no taste.

[1]: https://thenewstack.io/github-will-prioritize-migrating-to-a...


>It’s existential for us to keep up with the demands of AI and Copilot

If by chance the CTO reads this: as a user of GitHub, I would find it really existential if GitHub continued functioning as a reliable hub for git workflows (hence the name). I have the strong suspicion that nobody except the shareholders gives a lick about Copilot or 'AI' if it makes the core service the site was designed for unusable.


AI and Copilot increase the load on git workflows.

>We are absolutely ramming AI and Copilot down people's throats

>We do not have enough capacity for AI and Copilot, basic functionality is falling apart

Is this sanity or something other than sanity?


You’re not supposed to do the math. You’re supposed to nod and say “oh, yes, that makes sense.”

Agree. I do not give a cat's whisker about AI for source control. 0.0%. Nada. Nothing.

For GitHub to remain profitable they have to appease those shareholders you mentioned.

Why? What is the correlation between profit and shareholder sentiment (besides the fact that shareholders want said profits)? They don't really influence the operation of the business meaningfully.

Growth chart gotta go up. Only chumps run a business that makes a steady return.

Sure, but I think it's the wrong way around. Appeasing shareholders doesn't make you profitable, being profitable appeases shareholders. I think there is a wealth of evidence that appeasing shareholders actually impedes profits overall.

Incorrect. They need to appease/trick/threaten/etc those that are paying for their services. Shareholders just demand they do so at the greatest (often short term) rate.

I'm not explicitly authorised by my employer to speak about this stuff, but I think it's valuable to share some observations that go beyond "it's good for me", so here's a relatively unfiltered take on what I've seen so far.

Internally, we have a closed beta for what is basically a hosted Claude Code harness. It's ideal for scheduled jobs or async jobs that benefit from large amounts of context.

At a glance, it seems similar to Uber's Minion concept, although we weren't aware of that until recently. I think a lot of people have converged on the same thing.

Having scheduled roundups of things (what did I post in Slack? what did I PR on GitHub? etc.) is a nice quality-of-life improvement. I also have some daily tasks like "Find a subtle cloud spend that would otherwise go unnoticed", "Investigate an unresolved hotfix from one repo and provide the backstory" and "Find a CI pipeline that has been failing 10 times in a row and suggest a fix".
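For anyone curious what wiring prompts like those into a scheduler might look like, here's a minimal sketch. The `claude -p` headless invocation is a real Claude Code CLI feature; the task list, output format choice, and everything else here is illustrative rather than our actual harness:

```python
# Sketch of a scheduled "roundup" job for a hosted agent harness.
# The prompts mirror the examples above; only the `claude -p` headless
# flag is real Claude Code CLI, the rest is made up for illustration.
import shlex

DAILY_TASKS = [
    "Summarise what I posted in Slack and what PRs I opened on GitHub today",
    "Find a subtle cloud spend that would otherwise go unnoticed",
    "Find a CI pipeline that has been failing 10 times in a row and suggest a fix",
]

def headless_command(prompt: str, output_format: str = "json") -> str:
    """Build a non-interactive Claude Code invocation suitable for cron."""
    return f"claude -p {shlex.quote(prompt)} --output-format {output_format}"

# A scheduler (cron, CI, or the hosted harness) would run each of these.
commands = [headless_command(task) for task in DAILY_TASKS]
```

The point being that once the invocation is a plain shell command, any existing scheduler can own the cadence and the harness only has to own the context.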

I work in the platform space so your mileage may vary of course. More interesting to me are the second order effects beyond my own experience:

- Hints of engineering-adjacent roles (i.e. technical support) who are now empowered to try to generate large PRs implementing unscoped/ill-defined new internal services, because they don't have the background to know what is "good" or "bad". These sorts of people have always existed, as you get folks on the edge of technical-adjacent roles who aspire to become fully fledged developers without an internal support mechanism, but now the barrier is a little lower.

- PR review fatigue: As a Platform Engineer, I already get tagged on acres of PRs but the velocity of PRs has increased so my inbox is still flooded with merged PRs, not that it was ever a good signal anyway.

- First hints of technical folk who progressed off the tools being encouraged to fix those long-standing issues that are simple in their minds, even though reality has shifted a lot since. Generally LLMs are pretty good at surfacing this once they check how things actually are, but LLMs don't "know" what your mental model is when you frame a question.

- Coworkers defaulting to asking LLMs about niche queries instead of asking others. There are a few queries I've seen where the answer from an LLM is fine but it lacks the historical part that makes many things make sense. As an example off the top of my head, websites often have subdomains not for any good present reason but just because, back in the day, you could only have like 6 XHR connections to a domain or whatever it was. LLMs probably aren't going to surface that sort of context, which takes a topic from "Was this person just a complexity lover?" to "Ah, they were working around the constraints at the time".

- Obviously security is a forever battle. I think we're more security minded than most but the reality is that I don't think any of this can be 100% secure as long as it has internet access in any form, even "read only".

- A temptation to churn out side quests. When I first got started, I would tend to do work after hours but I've definitely trailed off and am back to normal now. Personally I like shipping stuff compared to programming for the sake of it but even then, I think eventually you just normalise and the new "speed" starts to feel slow again

- Privileged users generating and self-merging PRs. We have one project where most everyone has force merge and because it's internal only, we've been doing that paired with automated PR reviews. It works fairly well because we discuss most changes in person before actioning them but there are now a couple historical users who have that same permission contributing from other timezones. Waking up to a changed mental model that hasn't been discussed definitely won't scale and we're going to need to lock this down.

- Signal degradation for PRs: We have a few PRs I've seen that provide this whole post-hoc rationalisation of what the PR does and what the problem is. You go to the source input and it's someone writing something like "X isn't working? Can you fix it?". It's really hard to infer intent and capability from a PR as a result. Often the changes are even quite good, but that's not a reflection of the author. To be fair, the alternative might have been that internal user just giving up and never communicating that there was an issue, so I can't say this is strictly a negative.
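To make the subdomain example from the "niche queries" point concrete: here's a rough sketch of the old "domain sharding" workaround, with made-up hostnames. Browsers capped concurrent HTTP/1.1 connections per hostname (commonly around 6), so sites spread static assets across several subdomains to download more in parallel; HTTP/2 multiplexing later made this an anti-pattern:

```python
# Sketch of HTTP/1.1-era domain sharding. Hostnames are illustrative.
# Assigning each asset path to a shard deterministically (rather than
# randomly) keeps browser and CDN caches warm across page loads.
import zlib

SHARDS = ["assets1.example.com", "assets2.example.com", "assets3.example.com"]

def shard_for(path: str) -> str:
    """Deterministically pin an asset path to one shard hostname."""
    return SHARDS[zlib.crc32(path.encode()) % len(SHARDS)]

urls = [f"https://{shard_for(p)}{p}" for p in ["/css/site.css", "/js/app.js"]]
```

That's exactly the kind of historical "why" an LLM answer tends to omit when someone asks why a site has a pile of asset subdomains today.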

All of the above are things that are actively discussed internally, even if they're not immediately obvious, so I think we're quite healthy in that sense. This stuff is bound to happen regardless; I'm sure most orgs will probably just paper over it or simply have no mechanism to identify it. I can only imagine what fresh hells exist in Silicon Valley, where I don't think most people are equipped to be good stewards or even consider basic ethics.

Overall, I'm not really negative or positive. There is definitely value to be found but I think there will probably be a reckoning where LLMs have temporarily given a hall pass to go faster than the support structures can keep up with. That probably looks like going from starting with a prompt for some work to moving tasks back into ticket trackers, doing pre-work to figure out the scope of the problem etc. Again, entirely different constraints and concerns with Platform BAU than product work.

Actually, I should probably rephrase that a little: I'm mostly positive on pure inference while mostly negative on training costs and other societal impacts. I don't believe we'll get to everyone running Gas Town/The Wasteland, nor do I think we should aspire to. I like iterating with an agent back and forth locally, and I think just heavily automating stuff with no oversight is bound to fail, in the same way that large corporations get bloated and collapse under their own weight.


I think Claude Code's frontend design is quite a fan of serif fonts from what I've seen in the past.

They did disclose AI usage which is good: https://github.com/ahgraber/stopsloppypasta?tab=readme-ov-fi...


This is a "productionisation" of the same content discussed here: https://news.ycombinator.com/item?id=47362528

I would caution readers to do their due diligence, as the presentation may be fancy but that should not immediately translate into a signal of quality in itself, given the author has disclosed using Claude Code for a chunk of this work.

While I won't outright discount the findings (as there is "too much" to reasonably verify), there are a few oddities around the source repo such as errors where Claude has tried to access sources, been denied and then noted as much or where it has seemingly fetched incorrect files and tried to interpret them (https://github.com/upper-up/meta-lobbying-and-other-findings...)

I am not under the immediate impression that the author has done thorough due diligence rather than just offloading that to readers by saying "You can just check the sources yourself"


Also not really a fan of the 'what they're hiding from you' tone it takes (even if that's the subject), like saying that because a website was made less than 100 days before a bill was signed it was a '77-day pipeline' to the bill (which jumped out as a dramatized rephrasing and not present in the original Reddit post).

It also doesn't inline link sources, like the Bloomberg article it mentions (this[1]). A more impartial voice and linked citations to allow quick reference would raise fewer red flags, even if the goal is worthwhile.

[1] https://www.bloomberg.com/news/articles/2025-07-25/meta-clas...


This is effectively a duplicate of this post: https://news.ycombinator.com/item?id=47362528

I would also encourage taking a critical look at the underlying investigation, as it seems mostly LLM generated without a huge amount of manual due diligence.


Ah, sorry, I missed that. Comments moved thither now. Thanks!


I also submitted https://news.ycombinator.com/item?id=47370954 because it was pointed out to me that a Reddit submission about the same story on r/linux had been taken down. If there was LLM content I suppose that might at least partially explain a moderator decision there... ?


No, the mods did not make a decision. It got flagged by an auto moderator bot, because of mass flagging. The mass flagging seems to be a brigade that happened on prior posts in that same subreddit discussing this topic of age verification. I don’t have any definite evidence, but it seems odd that a topic that is so relevant to that community would be flagged, so I assume it is a coordinated attack.


I’ve moderated on Reddit before - a mass report bot on r/linux specifically for age verification is too strangely niche. Also automod doesn’t remove flagged posts, unless it has been set up to do it.

It’s also very definitely AI generated, and makes several claims and implications. Users may have reported it as well.

I would hesitate to assume coordinated behavior at this stage.


Automod literally posted a message saying it removed it due to mass reporting of the post.


Thank you! Didn’t know that, and it changed my position


Maybe it’s a dupe but I think it’s an important topic to discuss. And even if it is mostly LLM generated, that doesn’t mean it is completely invalid. Some of the major points around Meta’s lobbying, and Anthropic’s donations, are seemingly valid.


Drop an email to the mods about both points! They can fix the dupe and may have an interest in the LLM point as well.


I came here to say that this is pretty much my view having poked around a little bit as well.

This file does not exactly fill me with confidence: https://github.com/upper-up/meta-lobbying-and-other-findings...

In one part of the report, there seems to be this implicit assumption that Linux and Horizon OS (Meta's VR OS) are somehow comparable and that Meta will be better equipped than Linux if age verification is required.

It doesn't explicitly say "This will allow Horizon OS to become the de facto OS and Linux will die out" but that seems to be the impression I'm getting, which uhh... would make zero sense.

More broadly, this entire report (and others like it) is extremely annoying in that I've seen some Reddit comments either taking "lots of text" as a signal of quality or asking "Does anyone have proof that these claims are inaccurate?", which is

a) Of course entirely backwards as far as burden of proof

b) Not even the right rubric, because it's not facts versus lies, it's manufactured intent/correlations versus real-life intent/correlations (i.e. bullshit versus not)

All of this could be factually true without Meta being smart enough to play 5D chess


>taking "lots of text" as a signal of quality

Or of authority, when they're not equipped to evaluate the data first-hand.

The Gish gallop technique in debate overwhelms opponents with so many arguments that they're unable to address them all before the time limit. Reports presented like this are functionally that, but against reader comprehension and attention.

Similarly, being the first, loudest, or only voice is unreasonably effective at establishing a perception of authority, where being unchallenged is tantamount to correctness. This also goes both ways; censorship in media, for instance, can be used to promote narratives by silencing competing views, like platforms selectively amplifying certain topics to frame them as more proven and widely supported than they might actually be.

It's unfortunate that inexpert execution often positions well-meaning and potentially correct arguments to be discredited and derided by prepared opponents before their merits can be established. In this case, it may be true that Meta may have organized a well-coordinated shadow campaign for legislation using technically legal channels, but I'm sure they've anticipated this at some point, or are relying on the inertia of the system and initial buy-in to force the course.


I was curious about this claim and dug up this article from (as far as I understand it), Israel's version of The Economist

https://www.calcalistech.com/ctechnews/article/hjggcekq11g


The name “Calcalist” is indeed a play on “Economist” (it is not a proper Hebrew word, but fuses the Hebrew word for economy, “calcala”, with the English suffix for a profession, “-ist”).

However, it is just an expanded version of Ynet’s business/economy section, and Ynet is probably the closest equivalent to USA Today or The Sun.


Is it etymologically related to "calculate" or is it a coincidence?


Seems to be a coincidence - the Hebrew word comes from the Bible (old testament), and means "the feeding, and generally providing of needs".

The English word comes from "calculus", meaning, apparently, pebble, because original counting was done with pebbles.

(I had to look both up. Thanks for asking)


How can a word come from the Bible? It must have existed before the Bible in order to have a meaning inside of it. Or did you mean to write it came from Aramaic?


Hebrew is a reconstructed language. Whilst some roots will predate the Torah, most won't.

Several words, like the infamous "shibboleth" won't be inherited, or their meanings may wildly differ.


I mean that it already appears in the Bible, in old Hebrew (which is close to, but isn’t exactly Aramaic), with the meaning “to feed and provide” - and I did not find any documentation about how it formed (or came into) Hebrew.

Which means, of course, that it was already in use before the Bible was canonicalized.


This post raises a few flags in my mind that it was at least partly generated by an LLM. That isn't to suggest that this editor doesn't/won't exist, that the editor uses LLM-generated code (which is not a slight) or that the claims are not truthful.

The main things that jump out are the inconsistency in writing style (sometimes doing all lowercase and no punctuation) but then the brief rundown is all perfect spelling and grammar with em-dashes.

The "Not just" parts stick out like "Not just play them back — edit them" as well as "This isn’t a proof of concept or a weekend project. It’s a real authoring environment."

Anyway, best of luck to the author with their project!



> This document has been through ten editing passes and it still has tells in it.

The big one it missed: the headers are mostly "The [Noun:0.9|Adjective:0.1] [Noun]". LLMs (maybe just Claude?) love these. Every heading sounds like it could be a Robert Ludlum novel (The Listicle Instinct, The Empathy Performance, The Prometheus Deception).


> This document was written by an LLM (Claude) and then iteratively de-LLMed by that same LLM under instruction from a human, in a conversation that went roughly like this

This is hilarious.


I don't like lists like these, as I sometimes use half of the "signs" in my writing. And it would be trivial to feed that list to an LLM and tell it to avoid that style.


You’d think; that was why I made the list. Unfortunately, it doesn’t actually work. It rewrites to exclude one tell and includes three other ones.


Interesting, I guess I will try that out as well.


Huh. This page claims "This website requires JavaScript." at the top, yet I can read everything fine. TFA on the other hand is blank without JavaScript.


Not quite everything. Some of the page doesn't load and not all of the functionality works but it is nice enough to let you try to view the parts you can rather than either force it to not load or act like the entire page loaded fine.


Hmm. I wonder why a GAN can't remove these tells?


>That isn't to suggest that this editor doesn't/won't exist, that the editor uses LLM-generated code (which is not a sleight) or that the claims are not truthful.

If you look at the icons of the tools in the image, they appear to have been generated using an LLM. So yeah, it's probably vibecoded a lot. It would be cool if the author reported how much and how it was used, but I don't think Newgrounds would like it much.


Oh neat, I had come across the headless client yesterday (and submitted a now-fixed bug report for it after running into some issues).

Before opening HN this morning and seeing this post, I actually wrote a post about how I'm experimentally using headless to publish my blog: https://utf9k.net/blog/obsidian-headless/

Well, that post was my experiment but I'll be looking forward to trying it out going forward.

There are of course many alternatives and I'm sure this workflow may have its pains but for now, it feels like a lot less friction between actually writing and having it published.

I've used plain Git for many years, of course, and I've also tried other Rube Goldberg machines such as various Git-inside-Obsidian plugins, but there's always just a bunch of "stuff" between writing and putting it online.

