
How about a bank account and a credit or debit card tied to it?


I am not anal and would not put those into the same category as Car As A Service.


I think legislation is the only way to fix this :(

Require the owner's/user's consent to connect. If no consent, no connection. The device's offline capabilities must still work, and it must not nag you or otherwise use dark patterns to force consent out of you.

That would be a relatively minor extension to the GDPR.


Does yours beep if you don't open the lid within a minute of it turning off?

Mine does and that is super annoying. Annoying enough that I'm thinking of removing the buzzer or just buying another microwave. I do like the simple dual dial design though.


> Finally, dynamic loading isn't even the right surface for messing with the behavior of existing binaries.

Then what is? Dropping a custom library next to a binary tends to be way easier than modifying the binary.


I agree that shared libraries are mostly unneeded complexity, as far as free & open source programs are concerned.

I kinda like how they open up opportunities for fixing or tweaking the behavior of broken proprietary binary-only garbage without requiring you to RE and patch a massive binary blob. Of course, in a better world, that garbage wouldn't exist and we'd fix the source instead.
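To make that concrete, here's a minimal sketch of the kind of interposer library I mean, assuming Linux/glibc; the binary name and the LICENSE_SERVER variable are made up for illustration:

```c
// preload.c: a minimal LD_PRELOAD interposer (hypothetical example).
// Build: gcc -shared -fPIC -o preload.so preload.c -ldl
// Run:   LD_PRELOAD=./preload.so ./some-proprietary-binary
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdlib.h>
#include <string.h>

// Override getenv(): lie to the binary about one variable and forward
// every other lookup to the real libc implementation via RTLD_NEXT.
char *getenv(const char *name)
{
    static char *(*real_getenv)(const char *) = NULL;
    if (!real_getenv)
        real_getenv = (char *(*)(const char *))dlsym(RTLD_NEXT, "getenv");

    if (strcmp(name, "LICENSE_SERVER") == 0)
        return "127.0.0.1";  // hypothetical: point the binary at a local stub

    return real_getenv(name);
}
```

Because the dynamic linker resolves symbols in preload order, the binary picks up the override without a single byte of it being patched.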


It's not that hard.

It just requires time, and I would be happy to spend that time (I love writing, whether docs or just thoughts). But as long as there's no ticket for it (approved by a stakeholder and assigned to me by the micromanager), I can't clock hours on it, and if I'm not clocking hours on a ticket, I'll eventually get an angry call.

It's not just lack of incentives, it's disincentives.


I don't think I'm ever going to approve of any sort of automated copyright claim system, but if Google wanted to make it one bit fair, they should apply the same three-strikes concept to the accounts that make false claims and ban them.


Too easy for bad actors to create multiple accounts or otherwise game this system. I think this would end up hurting genuine claimants of actual pirated works more than it would help the wrongly accused.


If you're a genuine claimant, you have little to worry about because you don't make false claims.


That's putting a lot of trust in YouTube that they'll find every claim you make to be valid. Someone could make a dozen accounts with a dozen duplicate/similar versions of your song. You dispute all of them, are found in the right on 9 claims and in the wrong on 3, your account gets 3 wrong-accusation strikes, and your channel gets banned while the other person then marks one of those knockoffs as the official one.


For personal projects my hard limit is 80 cols. For work stuff, mandated by the employer, we do something like 120 cols with clang-format. Four or five columns of code side by side, so I can see hundreds of lines of code at once. I still prefer not to waste too much vertical space on fluff.

It doesn't sound like you understand the difference between block-structured code with function call syntax and assembly. Thanks for the attempted ad hominem, btw ;-)

> Anyways, you might think you're making a point by repeating "it's all opinions anyways", but that doesn't mean your single opinion is equal to everyone else.

You keep missing the point. Did I say "that's just nasty and bad habits" about your preferred style? Nope!

All I'm saying is that it'd be nice if people stopped presenting their preference as objectively good style and declaring that everything else is nasty and wrong. We don't need to share the same preferred color either.

The first comment in this thread would've been a lot more palatable if it just said "I don't like this style." That would also give the comment the right weight: it is just an opinion after all.

I don't think popularity of opinion should matter for discourse. Of course I know I'm in the minority, and that's why I think it's all that much more important to state my opinion. Or do you want to live in an echo chamber where only the popular opinion gets said and everything else is painted nasty and wrong and downvoted to hell? Well, that is pretty much what HN is today. I hope you're not contributing to that.


A 2GB Pi is $55 on Amazon; the 4GB version is $62. That'd be $440+ for the eight Pis.

Ryzen 2700 launched at $300 (and tapered down to ~$200) and would pretty much run circles around such a Pi cluster.

Just saying. These Pi clusters can be a cool and fun thing to build but if you're looking for compute power, you'd be better served by a mid-range desktop.


Those 8 pis have all their hardware, though. The Ryzen needs all the rest of the hardware to finish off the computer.


Well, he also needed to add a cooler and a bunch of tubes & eight mounts to get coolant running on each Pi, eight ethernet cables, eight power cables, a power supply with enough outputs, a 16-port ethernet switch, and (correct me if I'm wrong) eight SD cards, if only to store a bootloader that can do pxe.

So eight Pis alone do not make a complete cluster, just as one Ryzen alone does not make a complete computer. And eight times "cheap" can turn out to be quite expensive, depending on how cheap "cheap" actually is.

I haven't paid much attention to component prices recently but my rule of thumb for budget builds is to start with $80 for each component that isn't a GPU or CPU. Go with stock cooler, grab 80 for PSU, 80 for mobo, 80 for RAM, 80 for storage, see where you end up. In the same ballpark, but the real computer will pack a whole lot more punch (even if you had less RAM total).


Jeff Geerling did a blog post[0] and video (series) on this very thing a while ago that goes into some of the hardware and costs. Whilst the costs can be non-negligible, it looks like a hella lot of fun!

[0] https://www.jeffgeerling.com/blog/2020/raspberry-pi-cluster-...


Don’t forget this quote from that page:

> It's slightly more cost-effective and usually more power-efficient to build or buy a small NUC ("Next Unit of Computing") machine that has more raw CPU performance, more RAM, a fast SSD, and more expansion capabilities.

But is building some VMs to simulate a cluster on an NUC fun?

I would say, "No." Well, not as much as building a cluster of Raspberry Pis!

---

You don’t build a RPi cluster for speed or cost or efficiency. You do it because it’s fun. And that’s okay.


The 2GB Pi starts at $35 at actual Raspberry Pi distributors, the 4GB at $55, although there will probably be added shipping cost.


Yeah, nice. I just assumed Amazon is representative of what most people would end up paying for them.

For my region, the actual distributors start at 44 EUR for 2GB and 64 EUR for 4GB.


Would it though? The cluster has more cores (each Pi 4 is a quad-core), so for certain workloads it seems like it could realistically beat a single 8-core with higher clocks.

As always, a benchmark is the only thing that will prove it.


It absolutely would. These cores are in a completely different class.

If you care about benchmarks, a Pi4 will score around 200 points (1 core) or 550 points (4 core) at 1.5GHz on GB5. At 2GHz you can reach around 700 points multi-core.

Ryzen 2700 will easily go above 6000 points in multi-core bench without overclocking.

So even in a theoretical, embarrassingly parallel workload with minimal sharing between cluster nodes, the Ryzen will be faster than eight Pis. It's not even a contest if you also need some I/O, shared memory, or heavy SIMD.


Where are you getting these "points" from? Without a real-world benchmark I don't see why you're so confident about this.

The video in the OP showed 8 pis, and assuming 4 cores each that's 32 cores total.

A raytracing benchmark would be interesting because you could divide the work up-front and not have to worry about communication between nodes in the cluster, and each node doesn't need that much memory.

Maybe the Ryzen 2700 beats the 8-Pi cluster, but clearly there's some number X of Pis that will beat the Ryzen. Maybe X = 10 Pis, or 20, or 200? I don't know, but it could also just be 8. There's no way to know for sure without a benchmark.


GB5 = Geekbench 5. It is a real-world benchmark.


In terms of compute per dollar, yes. In terms of compute per joule, no.


I don't think that's true either. You can get 8-core desktop Ryzen parts that have a TDP of 65 watts. I have a passively cooled 8-core Ryzen system that uses a 240 watt power supply.

Also: by the time you've wired up 8 raspis you end up using quite a bit of power just to connect them all together with a switch.

Raspberry Pi 4s need a maximum of 15 watts each. So 120 watts just for the computers. Even if you discount the power consumption of the switch, my 240 watt Ryzen computer is still going to beat that joule-for-joule.

Edit: one more thing, that 240 watt system also powers a 75 watt GPU, so it's definitely more wattage than really required for the CPU alone.


You're calculating the Raspberry Pi 4's power consumption based on the recommended USB power supply current rating. The actual expected load power consumption without peripherals is 1/5 to 1/3 of that (3-5 W rather than 15 W).

https://www.raspberrypi.org/documentation/hardware/raspberry...

https://www.pidramble.com/wiki/benchmarks/power-consumption

https://raspi.tv/2019/how-much-power-does-the-pi4b-use-power...


You can just undervolt the Ryzen and it will still run circles around the Raspberry Pi while consuming less power, if you are into that sort of thing.


That is not true. The energy efficiency will go down if you downclock, because the static-to-dynamic power ratio will increase.


The static-to-dynamic power ratio isn't efficiency, though (and static power in modern desktop chips is tiny compared to dynamic power). Dynamic power is not linearly related to processing speed (in fact it's much worse). If you downclock and undervolt a Ryzen processor, power drops much faster than speed (e.g. from stock, if you drop performance by ~20% you might get a ~40% power decrease). Obviously at a certain point it will start to get worse again, but most chips are not at peak performance/watt at their stock settings, because raw performance and performance/cost also matter.


That scaling has a limited range before static power becomes dominant. Compute efficiency is compute / energy (total power * time). Total power includes static power. An SBC pulls far less from the wall than an x86 desktop could ever hope to when calculating the first million digits of pi.

As an example: even if I halted my desktop at 0 MHz and it still magically took the same time to calculate the first million digits of pi as a raspberry pi, it still would be using far more power.


The static power ratio is increasing on modern process nodes, to the point where a "rush to idle" strategy starts to make sense: you can power down subsystems in idle states, but you can't do that if you merely lower the processing speed. The tradeoff would of course be different in a chip architecturally designed from the ground up for low-performance use, but that would come from things other than just a lower frequency on the same chip.


> Theoretically the combined horsepower of all these [4] Pi's should stack up and deliver better performance than writing code on my M1.

If that is the case, then M1 should be the slowest CPU Apple has used in the past 10 years.


The M1 is good at both ops/watt and ops/second. It will probably still beat the Pi cluster by having a tighter Pareto curve.

