I was the sysadmin for a small ISP about 15 years ago. I was on vacation about 1,500 miles away when I called to check in; the owner got on the phone and said, "I deleted the entire /bin directory on [the primary web server], is that bad?" I told him not to touch it until I got home, and whatever he did, do NOT shut it off!
Thankfully we had two machines which were virtually identical OS-wise (RedHat 6 if memory serves). I was able to get everything put back from the twin machine and keep everybody happy.
Thankfully that server kept running with relatively little issue the entire time even with all those core OS files gone. I don't think any customers were at all aware.
Open processes will hold open their files. So long as it's an 'rm' that you've run (which merely removes directory entries) and not a destructive action on the disk contents themselves, it's often possible for things to continue in a startlingly unaffected manner. Though not always.
How often the system needs to invoke the deleted files afresh (say, spawning new processes out of /bin) largely determines whether you get away with it.
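A quick way to see this behavior in action (a minimal sketch; the temp file just stands in for a deleted binary):

```shell
#!/bin/sh
# Demonstrates that a deleted-but-open file stays readable: 'rm'
# only removes the directory entry; the inode survives until the
# last open descriptor on it is closed.
f=$(mktemp)
echo "still here" > "$f"
exec 3< "$f"          # hold the file open on descriptor 3
rm "$f"               # directory entry gone...
[ ! -e "$f" ] && echo "entry removed"
cat <&3               # ...but the contents remain readable
exec 3<&-             # closing the descriptor finally frees the inode
```

This is the same mechanism that kept the web server's long-running daemons happy: they already held their files open, so the missing directory entries didn't matter until something needed to start fresh.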
This is why every system I administer has 'rm' aliased to 'rm -i' (along with 'cp' and 'mv' just in case). I believe this is the default on RHEL/CentOS boxes. Certainly it is for root, but it should be for every user. Sure, it can be a pain sometimes to have to confirm, but at least you get the chance... unless you add '-f'.
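For reference, the aliases in question are just a few lines in a shell startup file (a sketch of the idea; file locations vary by distro):

```shell
# In ~/.bashrc (or a file under /etc/profile.d/ for all users) --
# make the destructive trio prompt before acting:
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Caveats: 'rm -f' (or an escaped \rm) still bypasses the prompt,
# and aliases only apply to interactive shells, not to scripts.
```

The caveat in the comments is worth stressing: this guards interactive slips, not scripted ones.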
This is why every system I administer has 'rm' aliased to 'rm -i' (along with 'cp' and 'mv' just in case).
Glad I'm not the only one. :-)
However, that is rather a specific case, albeit a common one. I have lost count of how many times I've seen even very experienced sysadmins do something disastrous by accident that is entirely due to the poor usability of some Linux shell or other command line-driven software with a similar design style and culture.
I have seen someone nuke an entire system, with a shell script that failed at string interpolation and literally did an 'rm -rf /', after I explicitly warned them of the danger and they thought they'd guarded against it. That person was a very capable sysadmin with many years of experience, but expecting anyone to never make a mistake with that kind of system is like expecting a similarly experienced programmer to write bug-free code with nothing but an 80x25 terminal window and a line editor.
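That failure mode is easy to sketch. All names below are hypothetical; the point is that an empty variable turns a scoped delete into a root-level one. (Modern GNU rm refuses a bare 'rm -rf /' unless given --no-preserve-root, but variants like "$DIR"/* remain just as dangerous.)

```shell
#!/bin/sh
# DANGEROUS pattern: if STAGING_DIR is unset or empty, this
# expands to 'rm -rf /':
#
#   rm -rf "$STAGING_DIR/"
#
# Two guards that make the script die instead of wiping the disk:
set -u                        # error out on any reference to an unset variable
STAGING_DIR=$(mktemp -d)      # hypothetical work directory
# ${VAR:?msg} aborts with a message if VAR is unset or empty,
# so the rm below can never see an empty expansion:
rm -rf "${STAGING_DIR:?refusing to run with empty STAGING_DIR}"
echo "cleaned up"
```

Even with both guards, the broader point stands: a single unquoted or unchecked expansion is all it takes, which is why "guarded against it" is so hard to verify by inspection.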
Nothing makes you appreciate "don't miss" like deleting /etc on a live system. For a good few weeks after that I nearly introduced a peer review process to my own shell.
That being said, there's certainly something to the fact that this one event did more to cure me of being fast and loose with destructive commands than years of being told (or telling myself) to be careful. (That something likely being that I'm a slow learner.)