Is the strategic implementation of DevOps really necessary?
With properly implemented DevOps, enterprises can deploy apps and services at a high rate. According to an UpGuard survey, 63 percent of respondents saw an increase in both the frequency and the quality of their software deployments. Without proper implementation, however, a faulty DevOps rollout can cause catastrophic harm to your organization.
According to Gartner, through 2022, 75% of DevOps initiatives will fail to meet expectations. There are plenty of horror stories about enterprises that have faced severe consequences as a result of DevOps failures. Let's look at three case studies and see what went wrong.
Knight Capital
In 2012, Knight Capital, a real-time stock trading firm, suffered an indescribable nightmare due to a failed deployment: a single botched release cost the company $440 million. Knight used an in-house application called SMARS to handle buy orders in the stock market. The application's codebase contained numerous outdated components, including Power Peg, an obsolete feature that had been lying dormant for years. The new code repurposed a flag that had previously activated Power Peg, and because the deployment did not reach every server, the dormant functionality was accidentally triggered, generating billions of dollars in unintended buy orders within 45 minutes. To make matters worse, automated emails flagged the problem as an urgent system alert, but the staff overlooked them.
Lessons learned:
- Automation is a powerful tool, but it must be deployed and verified carefully; an unchecked automated rollout can turn a small mistake into a disaster.
- To avoid conflicting interactions, old processes and dead features must be removed before new code is introduced (see the sketch below).
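To make that failure mode concrete, here is a minimal, hypothetical Python sketch (the function and flag names are illustrative, not Knight Capital's actual code) of how repurposing an old feature flag can silently reactivate dormant logic on a server that missed the new deployment:

```python
# Hypothetical sketch of the Knight Capital failure mode: reusing an old
# feature flag reactivates dormant code on a server that missed the rollout.

def power_peg_order_loop(order):
    # Obsolete test logic: keeps sending child orders without tracking fills.
    print(f"POWER PEG: repeatedly buying {order!r} (runaway orders!)")

def rlp_route(order):
    print(f"RLP: routing {order!r} through the new program")

def route_normally(order):
    print(f"Routing {order!r} normally")

# Old build -- still live on one server after a partial deployment.
def handle_order_old(order, flags):
    if flags.get("special_routing"):   # flag originally meant "enable Power Peg"
        power_peg_order_loop(order)    # dormant code path, never deleted
    else:
        route_normally(order)

# New build -- reuses the SAME flag for brand-new behavior.
def handle_order_new(order, flags):
    if flags.get("special_routing"):   # flag now means "enable RLP routing"
        rlp_route(order)
    else:
        route_normally(order)

# Operations enables the flag for the new feature...
flags = {"special_routing": True}
handle_order_new("AAPL x100", flags)   # behaves as intended
handle_order_old("AAPL x100", flags)   # runaway Power Peg orders
```

Deleting the dead `power_peg_order_loop` path before shipping the new build would have made the stale server fail loudly instead of trading blindly.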
GitLab
On 31 January 2017, GitLab suffered a massive service outage, caused by the accidental deletion of production data during database maintenance. In most cases, a simple backup restore would solve such a problem in a matter of minutes. To GitLab's dismay, however, none of its backups worked in the hour of need, because the backup procedures had never been thoroughly tested. As a result, hours of production data were lost for good. For tech businesses everywhere, this was a massive wake-up call.
Lessons learned:
- Backups only work if they are continuously monitored and regularly restore-tested.
- Make a habit of automating your backup and restoration pipelines, and run them at least once a day to make sure everything is in place (a sketch follows below).
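As one way to put that habit into practice, here is a minimal Python sketch of a daily restore test. It assumes a PostgreSQL setup with a hypothetical production database `app_prod`, a disposable database `restore_check`, and a `users` table; adapt the commands to your own stack:

```python
#!/usr/bin/env python3
"""Daily backup-and-restore verification sketch. The point is the pattern --
back up, restore somewhere disposable, verify -- not these exact commands."""

import subprocess
import sys
from datetime import date

BACKUP_FILE = f"/var/backups/app_prod-{date.today()}.dump"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)   # raise if any step fails

try:
    # 1. Take the backup (custom format so pg_restore can read it).
    run(["pg_dump", "--format=custom", "--file", BACKUP_FILE, "app_prod"])

    # 2. Restore into a disposable scratch database, never production.
    run(["dropdb", "--if-exists", "restore_check"])
    run(["createdb", "restore_check"])
    run(["pg_restore", "--dbname", "restore_check", BACKUP_FILE])

    # 3. Sanity-check the restored data, e.g. row counts on a key table.
    run(["psql", "--dbname", "restore_check",
         "--command", "SELECT count(*) FROM users;"])
except subprocess.CalledProcessError as exc:
    # Alert loudly: an untested backup is not a backup.
    print(f"BACKUP VERIFICATION FAILED: {exc}", file=sys.stderr)
    sys.exit(1)

print("Backup verified: restore succeeded and data is queryable.")
```

Wire a script like this into a daily cron job or CI schedule so that a broken backup pages someone long before you need it.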
Workflowy
Workflowy is a simple productivity tool that suffered performance issues while decomposing a single giant database into a network of smaller databases. The problem surfaced while the Workflowy team was making architectural improvements to cope with its growing user base: the decomposition slowed queries and prevented users from accessing their data. On investigation, the team found that the migration consumed too many resources, which caused all of the performance problems.
Lessons learned:
- Decomposing a database is a resource-intensive process; on a live system it can degrade performance or even cause outages.
- Avoid running slow queries against the database while carrying out such an operation on a live site (see the throttled-batching sketch below).
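One common way to keep a live site responsive during such a migration is to copy data in small, throttled batches rather than in one long-running query. Below is a minimal Python sketch using SQLite and a hypothetical `items(id, data)` table; the keyset-pagination-plus-sleep pattern is the point, not the specific schema:

```python
import sqlite3
import time

BATCH_SIZE = 500        # small batches keep each individual query fast
PAUSE_SECONDS = 0.1     # breathing room so live traffic isn't starved

def migrate_in_batches(src_path, dst_path):
    """Copy rows from a big 'items' table into a new database in small,
    throttled batches (hypothetical schema: items(id, data))."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    dst.execute(
        "CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, data TEXT)"
    )
    last_id = 0
    while True:
        # Keyset pagination: each batch is an indexed range scan,
        # never a full-table "slow query".
        rows = src.execute(
            "SELECT id, data FROM items WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, BATCH_SIZE),
        ).fetchall()
        if not rows:
            break
        dst.executemany("INSERT INTO items (id, data) VALUES (?, ?)", rows)
        dst.commit()
        last_id = rows[-1][0]
        time.sleep(PAUSE_SECONDS)  # throttle so production queries keep flowing
    src.close()
    dst.close()
```

Because each batch commits independently, the migration can also be paused or resumed at `last_id` if the live site starts to struggle.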
Conclusion
There's no denying that properly implemented DevOps can benefit your company, but there are countless ways for things to go wrong. Whether the weak point is a database, infrastructure, a cloud provider, or legacy code, your company can suffer irreparable harm without a strong implementation plan in place. In this blog, we have walked through three nightmare case studies that resulted from failed DevOps practices. Do any of these stories strike a chord with you? We'd love to hear about your own adventures!