Hacker News | perching_aix's comments

Which is heartbreaking (and I'd argue misleading too), but not the whole story.

You can only issue takedowns in relation to material that you hold the copyright over. At least one of these sites I know for a fact routinely scrubs FAKKU-licensed content, and abides by takedown requests.


Moving to a Germany based host of all places, after being legally harassed over copyright, doesn't strike me as a particularly good idea. Aren't the local courts infamous for being awful to deal with?

Author must clearly never use porn sites like xvideos or PornHub, if they think YouTube's search is what "barely works".

> japan is going to shit because of all the immigrants

> japan is better than pussy thanks to it being strict on immigrants

pick whichever is convenient


This should be the top comment :)

> A world where nobody has to work sounds horrific.

Why?


Because then the only way to obtain power and status is to wage war.

The author keeps repeating this idea of a "dedicated internet connection" (DIA), and it kind of just irks me. Not because the author is wrong in how they use it, but because I find the term itself misleading, and I hate to see it continue to poison the common discourse.

A dedicated last-mile connection gives you a dedicated link to your ISP’s edge network, not a dedicated path across the internet. You won’t compete with your immediate neighbors on a shared access network anymore, sure, but you’re still sharing the ISP core, peering links, and transit links with the rest of their customers.

In practice this usually works well enough, because ISPs engineer their core and peering capacity with low over-subscription, especially for business and DIA customers. So you can often push near line rate anyway, but not because you have a truly reserved slice of the internet. A Switzerland-sized country would need like petabit-scale connectivity to provide actually dedicated 25G links (or even just 1G links) to everyone.
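To put the petabit figure in perspective, here's a rough back-of-the-envelope sketch (the subscriber count and oversubscription ratio are illustrative assumptions, not figures from the comment):

```python
# Rough sizing of what "truly dedicated" links would require nationally.
# Assumed numbers: ~4 million subscriber lines (Switzerland-ish), 25 Gb/s each.
subscribers = 4_000_000
line_rate_bps = 25e9  # 25 Gb/s per subscriber

# Non-blocking core: every line pushing full rate simultaneously.
dedicated_bps = subscribers * line_rate_bps
print(f"non-blocking core: {dedicated_bps / 1e15:.0f} Pb/s")  # 100 Pb/s

# What ISPs actually engineer: an oversubscribed core, e.g. 50:1.
oversubscription = 50
engineered_bps = dedicated_bps / oversubscription
print(f"50:1 oversubscribed core: {engineered_bps / 1e15:.0f} Pb/s")  # 2 Pb/s
```

The two orders of magnitude between those numbers is exactly the gap between "dedicated last mile" and "dedicated path across the internet".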


Very true. The bottleneck isn't going to be the last mile nearly all the time. In any case, it's clear we're arguing with a ragebait article and a bunch of others who have basically no understanding of how the Internet (or networks in general) works.

This is the way. You can also just decide to not accept contributions and feedback in general.

Further downside is that your project will have a harder time becoming popular, and being popular is secretly (or not so secretly) the motivation for many in open source.

Will be a lot more honest though.


Empathy goes both ways. You can recognize them being unfair while still appreciating their reasons for being unfair.

People seem to have this notion that there's some theoretical possible world where everything is completely moral, and we're just failing to get there. But that is not true. You get locally moral and globally moral arrangements, and they're not necessarily going to mesh. It's just like any other large system.

The guy can be justified from his perspective, and people can be justified in distancing themselves from him. That's life. Having a reason for something is, moreover, the bare minimum, not the endgame.


Entertaining perspective - I thought that the whole "it's not an outage it's a (horizontal or vertical) degradation" thing was exclusive to web services, but thinking about it, I guess it does apply even in cases like this.

Could you substantiate that a bit more? I don't see what'd be hard to understand about it at a skim.

Is it an arithmetic average of relative error over the given range? Because if so, it can be misleading, and potentially a bad metric for ranking alternatives (though the HTML report includes a graph over the input range, which is quite nice, so I'm talking only about the accuracy number).

In the limit, an alternative with 10x better accuracy when x>10^150 and 10x worse in 1<x<10^150 would rank higher :) but more generally, not all inputs are equally important.

Furthermore, floats have underflow to 0 and overflow to infinity, which screw all this up because it can lead to infinite relative error.
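A minimal sketch of that failure mode (the `rel_err` helper is just illustrative): a naive hypotenuse computation overflows to infinity even though the true result is representable, which makes the relative error itself infinite.

```python
import math

def rel_err(approx: float, exact: float) -> float:
    """Relative error; blows up when the computed value overflows."""
    return abs(approx - exact) / abs(exact)

x, y = 1e200, 1e200
naive = math.sqrt(x * x + y * y)  # x*x overflows to inf, so the result is inf
careful = math.hypot(x, y)        # rescales internally, stays finite

print(naive)                      # inf
print(careful)                    # ~1.414e200
print(rel_err(naive, careful))    # inf: overflow becomes infinite relative error
```

Any averaging scheme has to decide what to do with these infinities, or a single overflowing input dominates the whole score.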

Because of this you have some of the funny cases reported elsewhere in this thread :p

I'm not sure what would be a better approach though. Weigh the scores with a normal distribution around 0? Around 1? Exponents around 0?


It's documented here, but yes, it's an average of something similar to (but not exactly the same as) relative error: https://herbie.uwplse.org/doc/latest/error.html

It's true that averages can be misleading, but we encourage users to think about it instead as a percentage of inputs. In practice the error distribution is very bimodal, the two modes being "basically fine" (a few ulps of error) and "garbage" (usually 0 instead of some actual value).
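For reference, "a few ulps" of distance can be measured by reinterpreting the doubles' bit patterns as integers, a common trick (this sketch, which is my illustration rather than Herbie's actual metric, only handles values of the same sign):

```python
import struct

def ulps_apart(a: float, b: float) -> int:
    """Distance in units-in-the-last-place between two same-sign doubles.
    IEEE 754 doubles of the same sign order the same way as their bit
    patterns, so the integer difference counts the representable values
    between them."""
    to_bits = lambda f: struct.unpack("<q", struct.pack("<d", f))[0]
    return abs(to_bits(a) - to_bits(b))

print(ulps_apart(1.0, 1.0 + 2**-52))  # 1: these are adjacent doubles
print(ulps_apart(0.1 + 0.2, 0.3))     # tiny: the classic sum is "basically fine"
```

On this scale the "basically fine" mode is single digits, while the "garbage" mode is astronomically large.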


