> SCION right now provides the backbone for the Swiss financial network moving 200 billion CHF each day
This is a meaningless benchmark - for a small group of trusted big enterprises with insurance policies and mutually signed contracts you could've just as well used OSPF with zero filters.
The benchmark would be adoption by an actual large number of parties that don't/can't talk to each other, spread across the world, with a large chunk of them being malicious or incompetent to the point of being effectively malicious.
I'm not claiming that this shows SCION can replace the respective parts of the network stack right now, and you're right that at a global scale this is still an unproven technology. But I would argue that a technology has a certain level of maturity / is not "snake oil" if it is deployed in a heavily regulated and comparatively conservative sector such as banking.
I gotta say some of the proposed use cases are things no one is looking/asking for. One I recall was having a network decide to reach another network by avoiding countries that aren't carbon neutral (which could take longer hops and use more infra / more energy...). It feels like they're trying to say they're the green/environmentally friendly protocol.
Why does a routing protocol matter for the banking sector? With proper encryption, the route the packets of transaction data take should not matter at all.
Aren't heavily regulated sectors the ones where you usually encounter snake oil? Useless WAFs and other security snake oil products, Microsoft 'collaboration' jank like Teams and Sharepoint, MitM proxies, etc.?
It really doesn’t matter anymore. I’m saying this as a person who used to care about it. It does what it’s generally supposed to do, and it has users. Two things that matter in this day and age.
Users stick around on inertia until a failure costs them money or face. A leaked map file won't sink a tool on its own, but it does strip away the story that you can ship sloppy JS build output into prod and still ask people to trust your security model.
'It works' is a low bar. If that's the bar you set you are one bad incident away from finding out who stayed for the product and who stayed because switching felt annoying.
“It works and it’s doing what it’s supposed to do” encompasses the idea that it’s also not doing what it’s not supposed to do.
Also “one bad incident away” never works in practice. The last two decades have shown that people will use the tools that get the job done, no matter what kind of privacy leaks or destructive things those tools have done to the user.
It may be economically effective but such heartless, buggy software is a drain to use. I care about that delta, and yes this can be extrapolated to other industries.
Genuinely I have no idea what you mean by buggy. Sure there are some problems here and there, but my personal threshold for “buggy” is much higher. I guess, for a lot of other people as well, given the uptake and usage.
Two weeks ago typing became super laggy. It was totally unusable.
Last week I had to reinstall Claude Desktop because every time I opened it, it just hung.
This week I am sometimes opening it and getting a blank screen. It eventually works after I open it a few times.
And of course there are people complaining that somehow they're blowing their 5 hour token budget in 5 messages.
It's really buggy.
There's only so long their model will be their advantage before they all become very similar, and then the difference will be how reliable the tools are.
Right now the Claude Code code quality seems extremely low.
And those bugs were semi-fixed and people are still using it. So the speed of fixes is there.
I can’t comment on Claude Desktop, sorry. Personally haven’t used it much.
The token usage looks like it is intentional.
And I agree about the underlying model being the moat. If something marginally better comes up, people will switch to it (myself included). But for now it’s doing the job, despite all the hiccups, code quality issues, etc.
This is the dumbest take there is about vibe coding. Claiming that managing complexity in a codebase doesn't matter anymore. I can't imagine that a competent engineer would come to the conclusion that managing complexity doesn't matter anymore. There is actually some evidence that coding agents struggle the same way humans do as the complexity of the system increases [0].
I agree, there is obviously “complete burning trash” and then there’s this. The Ant team has got a system going where they can still extend the codebase. When the time comes, I’m assuming they’d be able to rewrite it, as the feature set would be more solid, and assuming they’ve been adding tests as well.
Reverse-engineering through tests has never been easier, which could collapse the complexity and clean up the code.
All software that’s popular has hundreds or thousands of issues filed against it. It’s not an objective indication of anything other than people having issues to report and a willingness and ability to report the issue.
It doesn’t mean every issue is valid, that it contains a suggestion that can be implemented, that it can be addressed immediately, etc. The issue list might not be curated, either, resulting in a garbage heap.
For what one anecdote is worth: through casual use I've found a handful of annoying UI bugs in Claude Code, and all of them were already reported on the bug tracker and either still open, or auto-closed without a real resolution.
Do compilers care whether the assembly they generate looks good? We will soon reach that state with all production code. LLMs will be the compiler, and today's human-written code will be replaced by LLM-generated "assembly" that is kinda sorta human readable.
The team has been extremely open about how it has been vibe coded from day 1. Given the insane amount of releases, I don’t think it would be possible without it.
It’s not a particularly sophisticated tool. I’d put my money on one experienced engineer being able to achieve the same functionality in 3-6 months (even without the vibe coding).
The same functionality can be copied over in a week most likely. The moat is experimentation and new feature releases with the underlying model. An engineer would not be able to experiment with the same speed.
I don't really care about the code being an unmaintainable mess, but as a user there are some odd choices in the flow which feel like they could benefit from human judgement.
It's dogfooding the entire concept of vibe coding and honestly, that is a good thing. Obviously they care about that stuff, but if your ethos is "always vibe code" then a lot of the fixes become model & prompting changes to get the thing to act like a better coder / agent / sysadmin / whatever.
It's impressive how fast vibe coders seem to flip-flop between "AI can write better code than you, there's no reason to write code yourself anymore; if you do, you're stuck in the past" and "AI writes bad code but I don't care about quality and neither should you; if you care, you're stuck in the past".
I hope this leak can at least help silence the former. If you're going to flood the world with slop, at least own up to it.
1. Randomly peeking at process.argv and process.env all around. Other weird layering violations, too.
2. Tons of repeat code, eg. multiple ad-hoc implementations of hash functions / PRNGs.
3. Almost no high-level comments about structure - I assume all that lives in some CLAUDE.md instead.
It's implicit state that's also untyped - it's just a String -> String map without any canonical single source of truth about what environment variables are consulted, when, why and in what form.
Such state should be strongly typed, have a canonical source of truth (which can then also be reused to document the environment variables the code supports, and eg. allow reading the same options from configs, flags, etc.), and then be explicitly passed to the functions that need it, eg. as function arguments or members of an associated instance.
This makes it easier to reason about the code (the caller will know that some module changes its functionality based on some state variable). It also makes it easier to test (both from the mechanical point of view of having to set environment variables which is gnarly, and from the point of view of once again knowing that the code changes its behaviour based on some state/option and both cases should probably be tested).
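Roughly what I have in mind, as a minimal sketch (all the names and variables here are hypothetical, not taken from the leaked source): read process.env once, validate it into a typed object, and pass that object to whatever needs it.

```ts
interface CliConfig {
  apiBaseUrl: string;
  verbose: boolean;
  maxRetries: number;
}

// The only place in the codebase that is allowed to touch process.env.
// It doubles as documentation of which variables the tool supports.
function loadConfigFromEnv(env: NodeJS.ProcessEnv = process.env): CliConfig {
  return {
    apiBaseUrl: env.API_BASE_URL ?? "https://api.example.com",
    verbose: env.VERBOSE === "1",
    maxRetries: Number(env.MAX_RETRIES ?? "3"),
  };
}

// Downstream code receives the config explicitly, so the caller can see that
// behaviour depends on it, and tests can pass a literal config object instead
// of mutating the environment.
async function withRetries<T>(operation: () => Promise<T>, config: CliConfig): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= config.maxRetries; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      if (config.verbose) console.error(`attempt ${attempt} failed:`, err);
    }
  }
  throw lastError;
}
```

In a test you'd just call `withRetries(op, { apiBaseUrl: "https://example.test", verbose: false, maxRetries: 1 })` rather than fiddling with environment variables.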
That's exactly why access to global mutable state should be limited to as small a surface area as possible, so 99% of code can be locally deterministic and side-effect free, only using values that are passed into it. That makes testing easier too.
Environment variables can change while the process is running and are not memory safe (though I suspect Node tries to wrap access with a lock). Meaning if you check a variable at point A, enter a branch, and check it again at point B, it's not guaranteed that it will have the same value. This can cause you to enter "impossible conditions".
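Even setting aside whether anything outside the process can touch them, process.env is an ordinary mutable object inside a Node process, so anything the first branch calls into can rewrite it before the second read. A contrived sketch of what I mean (FEATURE_FLAG and loadPlugins are made up, nothing to do with the actual code):

```ts
// Hypothetical third-party setup code that "helpfully" rewrites the env.
function loadPlugins(): void {
  process.env.FEATURE_FLAG = "off";
}

// Re-reading the env after calling other code can contradict the first check.
function riskyFlow(): string {
  if (process.env.FEATURE_FLAG === "on") {
    loadPlugins(); // mutates process.env behind our back
    if (process.env.FEATURE_FLAG === "on") {
      return "both checks agreed";
    }
    return "\"impossible\" state: the flag changed between the two checks";
  }
  return "flag off";
}

// Safer: snapshot the value once and branch on the local copy, so the decision
// stays consistent for the whole call even if the env changes underneath.
function saferFlow(): string {
  const flagOn = process.env.FEATURE_FLAG === "on";
  if (!flagOn) return "flag off";
  loadPlugins();
  return "still treating the flag as on, consistently";
}
```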
Wait, is it expected for them to be able to change? According to this SO answer [0] it's only really possible through GDB or "nasty hacks" as there's no API for it.
I’m not strongly opinionated, especially with such a short function, but in general early return makes it so you don’t need to keep the whole function body in your head to understand the logic. Often it saves you having to read the whole function body too.
But you can achieve a similar effect by keeping your functions small, in which case I think both styles are roughly equivalent.
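For what it's worth, here's a contrived example of the two shapes side by side (made up for illustration, not from the leaked code):

```ts
interface User {
  name?: string;
  active: boolean;
}

// Nested style: you carry "which branch am I in?" all the way to the end.
function describeNested(user: User | undefined): string {
  if (user) {
    if (user.active) {
      return `active user ${user.name ?? "unknown"}`;
    } else {
      return "inactive user";
    }
  } else {
    return "no user";
  }
}

// Early-return style: each guard discards a case, so by the last line
// you already know you're looking at an active user.
function describeEarlyReturn(user: User | undefined): string {
  if (!user) return "no user";
  if (!user.active) return "inactive user";
  return `active user ${user.name ?? "unknown"}`;
}
```

At this size the two read about the same, which is the point about keeping functions small.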
useCanUseTool.tsx looks special, maybe it's codegen'ed or copy 'n pasted? `_c` as an import name, no comments, use of promises instead of async functions. Or maybe it's just bad vibing...
Maybe, I do suspect _some_ parts are codegen or source map artifacts.
But if you take a look at another file, for example `useTypeahead`, you'd see that even with a few codegen / source-map artifacts, the core logic and behavior is still just a big bowl of soup.
Code quality no longer carries the same weight as it did pre-LLMs. It used to matter because humans were the ones reading/writing it, so you had to optimize for readability and maintainability. But these days what matters is that the AI can work with it and you can reliably test it. Obviously you don’t want code quality to go totally down the drain, but there is a fine balance.
Optimize for consistency and a well thought out architecture, but let the gnarly looking function remain a gnarly function until it breaks and has to be refactored. Treat the functions as black boxes.
Personally the only time I open my IDE to look at code, it’s because I’m looking at something mission critical or very nuanced. For the remainder I trust my agent to deliver acceptable results.
But the man's argument is that since he sees something a given way, then it's the truth. What people are doing in return is showing that he can only do so because of who he is.
What's wild is that with a few minutes of manual editing it would give exponential return. For instance, a lead sentence in your section saying "here's why X" that was already described by your subheading is unnecessary and could have been wholly removed.
That’s pretty presumptive of how obviously the author could improve it. As someone who writes a lot of docs, I find feedback and preferences vary wildly. They may just as well have made it “worse” to your preferences by hand editing it more.
Isn’t it? I mean, a 12 stage pipeline has a very specific meaning to me in this area, and is not a new way of describing something. The release notes description sounds like a multi-stage pipeline.
Do you know this kind of area and are commenting on the code?
You can pick only the parts that you need and aren't exposed to a supply chain attack. You can also easily adapt the code to your needs, especially as those needs change.
Yeah, anyone who says 'the government should be run like a company' has likely never worked in a large corporation. It's full of meaningless work, bullshit jobs and red tape.
Works fine on my end. The HTTPS URL gives a 301 permanent redirect to HTTP, and then I ordered some boner pills and put in my social security number to confirm.
Apparently it's not on by default, but all of my browsers have it enabled and also warn me whenever a site does not support HTTPS (and require me to explicitly click through to the unencrypted connection).
A client-side option to force HTTPS might still be useful though. But I can imagine at least some enterprise webapps that would die horribly if you tried this.