Hacker News | foltik's comments

Agree that just being hand-written doesn’t imply quality, but based on my priors, if something obviously looks like vibe-code it’s probably low quality.

Most of the vibe-code I’ve seen so far appears functional to the point that people will defend it, but if you take a closer look it’s a massively overcomplicated rat’s nest that would be difficult for a human to extend or maintain. Of course you could just use more AI, but that would only further amplify these problems.


Why doesn’t Linux just add a Kconfig option that enables TCP_NODELAY system-wide? It could be enabled by default on modern distros.

Looks like there’s a sysctl option on BSD/macOS, but on Linux it must be done at the application level?
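For reference, the per-application opt-in is a one-line socket option. A minimal Python sketch:

```python
import socket

# Disable Nagle's algorithm on a single socket. On Linux there is no
# system-wide toggle; each application opts in per socket like this.
def make_nodelay_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s

s = make_nodelay_socket()
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # nonzero when set
s.close()
```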

Perhaps you could set up an iptables rule to add the bit.

I have a Python script [0] which builds and statically links my toolbox (fish, neovim, tmux, rg/fd/sd, etc.) into a self-contained --prefix which can be rsynced to any machine.

It has an activate script which sets PATH, XDG_CONFIG_HOME, XDG_DATA_HOME, and friends. This way everything runs out of that single dir and doesn’t pollute the remote.

My ssh RemoteCommand then just checks for and calls the activate script if it exists. I get dropped into a nice shell with all my config and tools wherever I go, without disturbing others’ configs or system packages.

[0] https://github.com/foltik/dots
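For illustration, the activate step amounts to something like this (a hypothetical sketch; the directory names and layout are made up, not the actual script's):

```python
import os

# Hypothetical sketch of what "activating" the prefix does: prepend its
# bin/ to PATH and point the XDG base directories inside it, so every tool
# reads config and data from the self-contained dir instead of the remote
# user's home.
def activate(prefix, env):
    env = dict(env)
    env["PATH"] = os.path.join(prefix, "bin") + os.pathsep + env.get("PATH", "")
    env["XDG_CONFIG_HOME"] = os.path.join(prefix, "config")
    env["XDG_DATA_HOME"] = os.path.join(prefix, "data")
    env["XDG_STATE_HOME"] = os.path.join(prefix, "state")
    env["XDG_CACHE_HOME"] = os.path.join(prefix, "cache")
    return env

env = activate("/tmp/toolbox", {"PATH": "/usr/bin"})
print(env["PATH"])
```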


Is this available somewhere? I'm curious to see how this works.

Published a minimal version and added a link! This implements everything I mentioned except for static linking, so YMMV depending on your C/CXX toolchain and installed packages.

Thank you!

Oh the irony.

27µs and 1µs are both an eternity, and definitely not SOTA for IPC. The fastest possible way to do IPC is with a shared-memory-resident SPSC queue.

The actual (one-way cross-core) latency on modern CPUs varies by quite a lot [0], but a good rule of thumb is 100ns + 0.1ns per byte.

This measures the time for core A to write one or more cache lines to a shared memory region, and core B to read them. The latency is determined by the time it takes for the cache coherence protocol to transfer the cache lines between cores, which shows up as a number of L3 cache misses.

Interestingly, at the hardware level, in-process vs inter-process is irrelevant. What matters is the physical location of the cores which are communicating. This repo has some great visualizations and latency numbers for many different CPUs, as well as a benchmark you can run yourself:

[0] https://github.com/nviennot/core-to-core-latency


I was really asking what "IPC" means in this context. If you can just share a mapping, yes it's going to be quite fast. If you need to wait for approval to come back, it's going to take more time. If you can't share a memory segment, even more time.

No idea what this vibe code is doing, but two processes on the same machine can always share a mapping, though maybe your PL of choice is incapable. There aren’t many libraries that make it easy either. If it’s not two processes on the same machine I wouldn’t really call it IPC.

Of course a round trip will take more time, but it’s not meaningfully different from two one-way transfers; you can just double the numbers I gave. Generally it’s better to organize a system as a pipeline if you can, though, rather than ping-ponging cache lines back and forth doing a bunch of RPC.


> space stops being rare air — and becomes infrastructure

This reeks of AI slop. Plenty of “it’s not just X, it’s Y” in there too.


Could you say more about which extensions you’re referring to? I’ve often heard this take, but found details vague and practical comparisons hard to find.


Dynamic rendering, timeline semaphores, upcoming guaranteed optimality of general image layouts, just to name a few.

The last one has profound effects for concurrency, because it means you don’t have to serialize texture reads between SAMPLED and STORAGE.


Not the same commenter, but I’d guess: enabling some features for bindless textures and also vk 1.3 dynamic rendering to skip renderpass and framebuffer juggling


In my experience it’s common for large intro level classes. While I personally never liked these policies, I do think it’s beneficial to the average student to incentivize attendance. Think 18 year olds who aren’t able to self regulate or fully understand the consequences until it’s too late. A “pick yourself up by your bootstraps” mentality just hurts the average quality of education.


I was curious and read through the paper you linked. Here's my shot at rational thinking. A few things stood out:

1. Arbitrary prior

In the peer-review notes on p.26, a reviewer questions the basis of their bayesian prior: "they never clearly wrote down ... that the theoretical GZ effect size would be "Z/sqrt(N) = 0.1"

The authors reply: "The use of this prior in the Bayesian meta-analysis is an arbitrary choice based on the overall frequentist meta-analysis, and the previous meta-analyses e.g. Storm & Tressoldi, 2010."

That's a problem because a Bayesian prior represents your initial belief about the true effect before looking at the current data. It's supposed to come from independent evidence or theoretical reasoning. Using the same dataset, or past meta-analyses of the same studies, to set the prior is circular reasoning. In other words, they assumed from the start that the true effect size was roughly 0.1, then unsurprisingly "found" an effect size around 0.08–0.1.
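To make the circularity concrete, a conjugate normal-normal update shows how a prior centered on 0.1 drags the posterior toward 0.1 (the spread parameters below are my own illustrative choices, not from the paper):

```python
from math import sqrt

# Illustrative conjugate normal-normal update. Only the 0.1 prior center
# and the 0.08 observed estimate come from the paper; the standard
# deviations are assumed for illustration.
def posterior(prior_mean, prior_sd, obs_mean, obs_se):
    w_prior = 1.0 / prior_sd**2           # precision of the prior
    w_obs = 1.0 / obs_se**2               # precision of the observed estimate
    mean = (w_prior * prior_mean + w_obs * obs_mean) / (w_prior + w_obs)
    sd = sqrt(1.0 / (w_prior + w_obs))
    return mean, sd

# Prior centered on 0.1 (the "arbitrary choice"), observed pooled effect 0.08:
mean, sd = posterior(0.1, 0.05, 0.08, 0.02)
print(round(mean, 3))  # posterior lands between 0.08 and 0.1 by construction
```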

2. Publication bias

On p. 10, the authors admit that "for publication bias to attenuate (to "explain away") the observed overall effect size, affirmative results would need to be at least four-fold more likely to be published than non-affirmative results."

A modest 4x preference to publish positive results would erase the significance.

They do claim "the similarity of effect size between the two levels of peer-review add further support to the hypothesis that the 'file drawer' is empty"

But that's faulty reasoning: publication bias concerns which studies get published at all; comparing conferences vs. journals only looks at already-published work.

Additionally, their own inclusion criteria are "peer reviewed and not peer-reviewed studies e.g., published in proceedings excluding dissertations." They explicitly removed dissertations and other gray literature, the most common sources of null findings, further worsening the likely publication bias in their dataset.
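A toy simulation (all parameters assumed, purely illustrative) shows how a 4x publication preference alone manufactures a positive pooled effect out of a true null:

```python
import random
from math import sqrt

# Toy simulation: many studies of an exactly-null effect, where
# "affirmative" results (one-sided z > 1.645) are four times as likely
# to be published as non-affirmative ones. All parameters are assumed.
random.seed(0)
N_PER_STUDY = 50
published = []
for _ in range(20000):
    z = random.gauss(0, 1)                 # true effect is exactly zero
    p_publish = 0.8 if z > 1.645 else 0.2  # the 4x publication preference
    if random.random() < p_publish:
        published.append(z / sqrt(N_PER_STUDY))  # same z/sqrt(N) scale
pooled = sum(published) / len(published)
print(round(pooled, 3))                    # a spurious positive "effect"
```

Under these assumptions the pooled estimate of the "published" studies comes out around 0.03–0.04 — the same order of magnitude as the reported 0.08 — from pure selection on a null effect.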

3. My analysis

With the already tiny effect size they report of Z/sqrt(N) = 0.08 (CI .04-.12) on p.1 and p.7, the above issues are significant. An arbitrary prior and a modest, unacknowledged publication bias could easily turn a negligible signal into an apparently "statistically significant" effect. And because the median statistical power of their dataset on p.10 is only 0.088, nearly all included studies were too weak to detect any real effect even if one existed. In that regime, small analytic or publication biases dominate the outcome.
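As a sanity check on that power figure (N = 50 per study is my assumption, not a number from the paper):

```python
from math import erf, sqrt

def phi(x):
    # Standard normal CDF.
    return 0.5 * (1 + erf(x / sqrt(2)))

# Power of a two-sided z-test at alpha = 0.05 against the reported effect
# z/sqrt(N) = 0.08, for an assumed typical study size of N = 50. The test
# statistic is then N(0.08 * sqrt(50), 1), and we reject when |Z| > 1.96.
shift = 0.08 * sqrt(50)
power = phi(-1.96 + shift) + phi(-1.96 - shift)
print(round(power, 3))
```

This lands at roughly 0.09, consistent with the median power of 0.088 the paper reports: even if the claimed effect were real, a typical included study would detect it less than one time in ten.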

Under more careful scrutiny, what looks like evidence for psi is just the echo of their own assumptions amplified by selective visibility.


Ritualistic “magical thinking” stays the same regardless of outcomes or new information. Science does the exact opposite - predictive power determines what’s true. Nobody said your alien hypothesis is impossible; just that it’s highly implausible. No predictions, no evidence, no way to test it.


Your assessment of "magical thinking" being impervious to criticism funnily applies just the same to the attitude exhibited here regarding "fringed" ideas like "telepathy". The "Telepathy Tapes" are "new information", people's attitudes stay the same regardless.

"Predictive power" isn't the source of truth in science, evidence for that attribute is. Given even only a hint of such evidence, scientists are supposed to work in order to acquire more, not to ignore the hint because that work would inconvenience them.

You claim that the "alien hypothesis" is implausible, but that claim would require solid arguments in its favor, and those don't exist. You rather argue from ignorance, but absence of evidence isn't evidence of absence.

Again, your pretense of "no predictions, no evidence, no way to test it" is simply counter-factual. You argue from ignorance. (To reiterate, evidence isn't the same as "proof")


The “Telepathy Tapes” aren’t new information. They repeat a setup already tested under controlled conditions: facilitators know the answers and guide participants through non-telepathic cues, usually without realizing it. When those cues are blocked, the “telepathy” disappears. Scientists did the rational thing, tried to replicate the effect, and it failed.

Absence of evidence isn’t proof of absence, but when every controlled test comes up empty, that’s the result. You might as well call a magician’s card trick new evidence for magic.


You invoke magic when you pretend those "tests" were somehow "proof" instead of merely evidence against the claim.

Argument from authority is no valid scientific approach, and neither is putting up a straw man (your claim of how the supposed effect came to be). Just because that's how you imagine the "trick" might work doesn't mean it's what's actually happening. Just because the result (dis)pleases you doesn't mean the experiment was done (in)correctly.


I am not invoking magic. When proper controls are added, the effect disappears. That is probabilistic evidence against the claim, not proof of anything. Just the outcome of repeated tests.

No one is appealing to authority. The experiments are public, the methods transparent, and the results reproducible. If there is a better design, describe it.

Facilitator cueing is not a guess or straw man. It has been directly measured in controlled studies, and when those cues are removed, performance drops to chance. That is what the data shows.

You say tests are not proof, which is true, but repeated failure still counts. You call cueing a straw man, though it has been measured directly. Is there any outcome that would convince you the effect isn’t there? If not then this isn’t a discussion about evidence anymore.

