In case you are not aware, here is the link.
Never use 'rm -rf'. Simple as that. Or alternatively, back up more often.
That a healthy dose of paranoia and OCD is a virtue.
EDIT: Also I saw this mentioned elsewhere, but having an actual physical checklist is vital.
Ouch! Backups are only good if you test them from time to time.
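The point above ("backups are only good if you test them") can be automated. Below is a hedged sketch of a restore check, assuming a plain tar.gz backup; the function name, paths, and format are illustrative assumptions, not GitLab's actual backup tooling:

```shell
# Sketch: restore the dump into a scratch directory and verify the restore
# actually produced files, rather than trusting that the backup job exited 0.
verify_backup() {
    backup="${1:?usage: verify_backup <backup.tar.gz>}"
    scratch="$(mktemp -d)"

    # The archive must be readable end to end.
    gzip -t "$backup" || return 1

    # A restore must actually produce files, not just succeed silently.
    tar -xzf "$backup" -C "$scratch" || return 1
    count="$(find "$scratch" -type f | wc -l)"
    rm -rf "$scratch"

    [ "$count" -gt 0 ] || { echo "restore produced no files" >&2; return 1; }
    echo "OK: restored $count files from $backup"
}
```

Run from cron against last night's dump, a non-zero exit is your early warning that the backup is garbage before you actually need it.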
One of the developers on our team accidentally deleted our collaborative projects and got a stern warning from our project lead. Luckily we were able to restore the data from one of the GitLab backups (oh, the irony). Since then we use the gvfs-trash command whenever we have to delete anything on our Ubuntu server, since it moves files to the trash instead of deleting them. You can write a bash script or cron job to periodically empty the trash if disk space is an issue.
Moral of the story: learn from your mistakes and use safe delete commands instead of misusing rm -rf.
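The safe-delete-plus-periodic-cleanup approach described above can be sketched in portable shell. The function name, trash path, and 30-day retention below are assumptions for illustration; on a GNOME desktop you would just call `gvfs-trash` (or its newer name, `gio trash`) directly:

```shell
# Minimal "safe rm": move files into a trash directory instead of unlinking
# them, so an accidental delete is recoverable until the trash is emptied.
TRASH_DIR="${TRASH_DIR:-$HOME/.local/share/safe-trash}"

safe_rm() {
    mkdir -p "$TRASH_DIR"
    for path in "$@"; do
        # Timestamp suffix avoids collisions when the same name is
        # deleted more than once.
        mv -- "$path" "$TRASH_DIR/$(basename "$path").$(date +%s%N)"
    done
}

# A cron line like this could reclaim disk space by purging trashed
# files older than 30 days:
# 0 3 * * * find "$HOME/.local/share/safe-trash" -mindepth 1 -mtime +30 -delete
```

Aliasing this over `rm` in interactive shells gives the same protection the commenter gets from gvfs-trash, at the cost of having to empty the trash yourself.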
But we have to appreciate the efforts of gitlab to provide such a nice platform for developers and remember that at the end of the day it is still run by developers who might make mistakes just like all of us.
Always have a beta environment, and make sure your beta is exactly like your prod. It's very important to have an environment where you can test your product e2e before you take it to prod.
On a lighter note, be wary (very wary) of the rm -rf that caused the GitLab chaos.
That GitLab tackled it very professionally. You could consider a mirror in case it happens again. A great feature would be the ability to self-host a replication server.
I considered switching to GitHub, but decided not to. I'm convinced they'll improve their backup strategies.
Ujjwal Kanth, Search @ Unbxd
IMO, restricted environment access. A production machine should only be modified/manipulated by bots.