> For you to lose your OS image with a dedicated server only takes your HDD to die.
Only if you don't have a backup.
Why in the world do you think anyone would store their only copy of an OS image on a single server?
For systems I set up, to start with, the OS is mostly immutable: it boots and updates transparently to match a master image. If a server gets destroyed, we just image a new one. The applications all run in containers, built from images stored on replicated file servers. If those are destroyed, we just re-deploy on a different server (in fact, automatic redeployment is trivial; a sketch of what that can look like follows).
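As a rough illustration of how small that redeployment loop can be, here's a minimal sketch in Python. The host names, image name, and container name are hypothetical placeholders, and it shells out to the stock docker CLI over ssh rather than assuming any particular orchestrator:

```python
# Minimal redeploy sketch: if the container is gone or not running on
# one host, pull the image from the registry and start it on another.
# Hosts, image, and container name are hypothetical placeholders.
import subprocess

HOSTS = ["app1.example.com", "app2.example.com"]  # assumed hostnames
IMAGE = "registry.example.com/myapp:stable"       # assumed image

def is_running(host: str) -> bool:
    """True if the 'myapp' container is running on the given host."""
    result = subprocess.run(
        ["ssh", host, "docker", "ps", "--filter", "name=myapp",
         "--filter", "status=running", "--quiet"],
        capture_output=True, text=True,
    )
    return bool(result.stdout.strip())

def deploy(host: str) -> None:
    """Pull the master image and (re)start the container on a host."""
    subprocess.run(["ssh", host, "docker", "pull", IMAGE], check=True)
    subprocess.run(["ssh", host, "docker", "rm", "-f", "myapp"], check=False)
    subprocess.run(
        ["ssh", host, "docker", "run", "-d", "--name", "myapp",
         "--restart", "unless-stopped", IMAGE],
        check=True,
    )

if __name__ == "__main__":
    # Redeploy on the first host that doesn't have a running instance.
    for host in HOSTS:
        if not is_running(host):
            deploy(host)
            break
```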
Only the application data is unique to the running servers, and that needs backing up just as much whether the containers run in a cloud environment or locally. Again, it's trivial to put automation in place for backing up and restoring that data (sketched below). Been there, done that many times.
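For example, the backup automation can be as small as a cron-driven script like this sketch. The data directory and destination hosts are hypothetical, and it assumes plain tar plus scp to two independently operated locations; substitute whatever dump tooling your data actually needs:

```python
# Backup sketch: archive the application data directory and copy it to
# two independent off-site destinations. Paths and hosts are hypothetical.
import subprocess
from datetime import datetime, timezone

DATA_DIR = "/srv/myapp/data"                       # assumed data directory
DESTINATIONS = [                                   # assumed, independently
    "backup@dc1-backup.example.com:/backups/",     # operated destinations
    "backup@dc2-backup.example.com:/backups/",
]

def make_archive() -> str:
    """Create a timestamped tarball of the data directory."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = f"/tmp/myapp-data-{stamp}.tar.gz"
    subprocess.run(["tar", "-czf", archive, "-C", DATA_DIR, "."], check=True)
    return archive

if __name__ == "__main__":
    archive = make_archive()
    for dest in DESTINATIONS:
        # A failed copy must fail loudly: a backup that silently didn't
        # happen is worse than no backup at all.
        subprocess.run(["scp", archive, dest], check=True)
```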
> For you to lose your OS image on EC2 (where you made a snapshot of your volume in one-click) would take a lot of shitstorm to happen at AWS -- as I presume that they backup across sites.
For me to lose my data on any bare metal system I've run, multiple servers in at least two different data centres operated by different companies would need to fail at the same time. This is not hard to arrange, and it's a one-off setup. You then need to test your backups, just as you do with AWS: an untested backup is not a backup (see the restore-test sketch below).
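A restore test doesn't need to be elaborate. Here's a minimal sketch of the idea: restore the newest archive into a scratch directory and run a sanity check against it. The backup path and the check itself are hypothetical placeholders for whatever "known good" means for your data:

```python
# Restore-test sketch: an untested backup is not a backup. Restore the
# newest archive to a scratch dir and verify it. Paths are hypothetical.
import glob
import os
import subprocess
import tempfile

BACKUP_GLOB = "/backups/myapp-data-*.tar.gz"  # assumed backup location

def latest_backup() -> str:
    backups = sorted(glob.glob(BACKUP_GLOB))
    if not backups:
        raise SystemExit("no backups found -- that IS the test failing")
    return backups[-1]

def restore_and_verify(archive: str) -> None:
    with tempfile.TemporaryDirectory() as scratch:
        subprocess.run(["tar", "-xzf", archive, "-C", scratch], check=True)
        # Hypothetical sanity check: the restored tree must contain the
        # file the application can't live without.
        if not os.path.exists(os.path.join(scratch, "app.db")):
            raise SystemExit(f"restore test FAILED for {archive}")
    print(f"restore test passed for {archive}")

if __name__ == "__main__":
    restore_and_verify(latest_backup())
```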
But your assumption about failure scenarios is also flawed. You need to protect against disgruntled employees, hackers, and bugs as well, not just hardware failure. If your backups are protected by the same credentials and security boundary as your main setup, you don't have a backup (one common way to separate the two is sketched below).
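One common way to get that separation is a pull model: the backup host fetches data from production using read-only access, and production holds no credentials for the backup host, so compromising production can't destroy the backups. A minimal sketch, with hypothetical hosts, key path, and directories:

```python
# Pull-model sketch, run ON the backup host (not on production): it
# pulls data with a read-only key, so a compromised production server
# has no credentials with which to reach -- or delete -- the backups.
# Hosts, key path, and directories are hypothetical placeholders.
import subprocess
from datetime import datetime, timezone

SOURCE = "readonly@app1.example.com:/srv/myapp/data/"  # assumed source
READ_ONLY_KEY = "/etc/backup/id_readonly"              # assumed key

if __name__ == "__main__":
    # Keep each pull in its own dated directory so a bug or attacker
    # corrupting today's data can't overwrite yesterday's copy.
    dest = "/backups/" + datetime.now(timezone.utc).strftime("%Y-%m-%d")
    subprocess.run(
        ["rsync", "-a", "-e", f"ssh -i {READ_ONLY_KEY}", SOURCE, dest],
        check=True,
    )
```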
EC2 is great when you can justify the cost, but it does not remove the need for a proper backup policy and processes for testing those backups.