
> You mean, you reimage? That is the slow step, you reimage, and plug the new server. Wait a bit, and your service has one more server.

No. When you /stop/ an EC2 instance and /start/ it again, it moves to different hardware. You do not need to reimage. AWS itself requests this when they have a hardware failure and need to move customers off a host so they can decommission it: they ask you to stop/start the instance, and if you have not done it by the due date, they do it for you.
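As a rough sketch of what that stop/start cycle looks like from the AWS CLI (the instance ID is a placeholder; note that a /reboot/ stays on the same host, only a full stop followed by a start gets you moved):

```shell
# Placeholder instance ID -- substitute your own.
ID=i-0123456789abcdef0

# Stop the instance and wait until it is fully stopped.
aws ec2 stop-instances --instance-ids "$ID"
aws ec2 wait instance-stopped --instance-ids "$ID"

# Starting it again typically places it on a different physical host.
aws ec2 start-instances --instance-ids "$ID"
aws ec2 wait instance-running --instance-ids "$ID"
```

Caveat: instance-store volumes are lost on stop, and the public IP changes unless you use an Elastic IP.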

> You take the disk out and plug a new one. You don't turn things off because of a disk.

If you have a storage array sure. But if you're getting bare metal hosting from a provider, you're not always getting hot swappable storage arrays.

> No doubt, those are costly. They are also rare (disk failure is less rare, but still rare).

It was one example; obviously there are many different hardware failures that can occur.



> If you have a storage array sure. But if you're getting bare metal hosting from a provider, you're not always getting hot swappable storage arrays.

If you have any server-grade hardware bought in the last 20 years or so, it will have its drives in hot-swappable bays. If you then choose not to set it up in RAID, that's just incompetence.

> If you have a storage array sure. But if you're getting bare metal hosting from a provider, you're not always getting hot swappable storage arrays.

If you're getting bare metal hosting from anywhere, including your own colo, you have failover and the ability to order replacements while your system is still running. This is only an issue if your architecture is fundamentally flawed, in which case you're likely to mess things up whether you're on bare metal or in a cloud.




