Definition - What does System Design mean? System design defines a system's components, its interfaces, and the data that flows through the system. It is meant to satisfy a business's or organization's specific needs and requirements by engineering a coherent, well-running system. In other words, system design implies a systematic approach to designing an application. It may use a bottom-up or top-down approach; either way, the process is organized. It considers all the related components of the application being created - from the architecture to the required software, right down to the data and how it travels and transforms as it moves through the system.
Essential things to always remember:
- System Design is not Product Design
- System design is purely an engineering task, with support from other members of the product team
- System Design does not involve writing any code
- It is OK to assume the programming stacks and languages that will be used at the end of the design.
Why is understanding System Design important? In today's digital and technology era, we are surrounded by applications across many sectors - social media, streaming platforms, eCommerce, and other services - that serve billions of customers daily. At peak demand, they may be handling millions of requests per second. Bright engineers at top tech companies build these applications and platforms.
Understanding System Design fundamentals helps you understand what it takes to build and scale an application from an architectural standpoint.
Who needs to understand System Design Fundamentals?
- Software Engineers (junior to the most senior)
- Engineering Managers
- Software Project Managers
- Product Managers
- Solution Architects
- CTOs and
- Technical Founders
Importance of System Design knowledge to a Product Manager: System design involves identifying data sources and the nature and type of data available. For example, to design a car-booking application, you need inputs such as time of arrival, estimated time to the location, and a rating system. Knowing what kind of data is available and who supplies it means the system can be designed with all the relevant factors in mind. This information helps product managers plan and understand their products in better detail.
Concepts to understand to crack System Design and Architecture:
- Load Balancing
- Caching
Database: A database is a collection of information organized so that it can be easily accessed, managed, and updated. Computer databases typically contain aggregations of data records or files containing information about transactions or interactions with specific customers.
SQL vs NoSQL Databases and the ACID, BASE, and CAP Theorems: SQL - A relational (SQL) database is a collection of data items organized in tables. ACID is a set of properties of relational database transactions. A transaction generally represents any change in a database.
- Atomicity - Each transaction is all or nothing; if one part fails, the entire transaction fails.
- Consistency - Any transaction will bring the database from one valid state to another.
- Isolation - Executing transactions concurrently has the same results as if the transactions were executed serially.
- Durability - Once a transaction has been committed, it will remain so.
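To make atomicity concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table, account names, and the simulated failure are invented for illustration; the point is that when any statement in a transaction fails, the whole transaction rolls back and the database stays in a valid state.

```python
import sqlite3

# In-memory database with a simple accounts table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move money between accounts in a single transaction."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src),
            )
            # Simulate a failure partway through: the debit above is rolled
            # back, so the database never ends up half-updated (atomicity).
            if amount > 100:
                raise ValueError("insufficient funds")
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst),
            )
    except ValueError:
        pass  # the transaction was rolled back

transfer(conn, "alice", "bob", 30)   # succeeds: both rows change together
transfer(conn, "alice", "bob", 999)  # fails: neither row changes

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 70, 'bob': 80}
```

The `with conn:` block is what gives us all-or-nothing behavior: sqlite3 commits when the block exits normally and rolls back if an exception escapes it.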
NoSQL - A NoSQL database is a collection of data items represented in a key-value store, document store, wide column store, or graph database. Data is denormalized, and joins are generally done in the application code. BASE (Basically Available, Soft state, Eventual consistency) is often used to describe the properties of NoSQL databases. BASE chooses availability over consistency.
In addition to choosing between SQL and NoSQL, it is helpful to understand which type of NoSQL database best fits your use case(s). We'll review key-value stores, document stores, wide column stores, and graph databases in the next section.
Load Balancing explained in detail: Load balancing means efficiently distributing network traffic across multiple computers to balance out the load and prevent any hotspots. Load balancers distribute incoming client requests to computing resources such as servers and databases, and in each case return the response from the computing resource to the appropriate client. Load balancers are effective at:
- Preventing requests from going to unhealthy servers - they keep track of servers that are not functional and avoid sending requests to those machines
- Preventing overloading of resources
- Helping to eliminate a single point of failure
Additional benefits include:
- SSL termination - Decrypt incoming requests and encrypt server responses, so backend servers do not have to perform these potentially expensive operations.
- Session persistence - Issue cookies and route a specific client's requests to the same instance, if the web apps do not keep track of sessions themselves.
The load balancer has become an essential component of any large-scale system, as it helps balance loads across multiple machines. As systems become more complex and popular and traffic volume surges, load balancers act as a traffic cop, routing loads systematically and preventing uneven loads and performance issues.
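The two behaviors described above - rotating requests across servers and skipping unhealthy ones - can be sketched in a few lines. This is a toy round-robin balancer, not a production implementation; the server names are invented, and real load balancers probe health actively rather than being told.

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: rotates requests across healthy servers only."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.unhealthy = set()
        self._cycle = itertools.cycle(self.servers)

    def mark_unhealthy(self, server):
        self.unhealthy.add(server)

    def mark_healthy(self, server):
        self.unhealthy.discard(server)

    def next_server(self):
        # Skip servers known to be down, so no request ever reaches them.
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server not in self.unhealthy:
                return server
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["app1", "app2", "app3"])
lb.mark_unhealthy("app2")
print([lb.next_server() for _ in range(4)])  # ['app1', 'app3', 'app1', 'app3']
```

Notice that once `app2` is marked unhealthy, traffic is split evenly between the remaining servers, and the single point of failure a lone server would represent is gone.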
Additional information: Frontend Layer: The most common use case for load balancers is at the frontend of the application, between client-side and server-side interaction. This increases the number of requests the system can serve. SSL termination is also done here to save the CPU cycles of the application servers.
What does SSL mean? Secure Sockets Layer (SSL) is a standard security technology for establishing an encrypted link between servers and clients.
Application Layer: Load balancers are placed between the web servers that take the requests and the application servers that do CPU-intensive tasks. This helps utilize the application servers properly without overloading them.
Persistence Layer: Load balancers are placed between the Application Layer and the Persistence Layer to serve more data requests without overloading the database servers.
THE NEXT EPISODE WILL FOCUS ON CACHING: (A cache is like short-term memory: it has a limited amount of space, but it is typically faster than the original data source. In other words, caching is a technique that stores copies of frequently used application data in a layer of smaller, faster memory to improve data retrieval times, throughput, and compute costs.)
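As a small preview, the "copies of frequently used data in faster memory" idea can be sketched like this. The slow lookup function and key names are hypothetical stand-ins for a real database query; real caches also bound their size and evict old entries, which this sketch omits.

```python
import time

def slow_db_lookup(key):
    """Stand-in for the original (slow) data source, e.g. a database query."""
    time.sleep(0.01)
    return key.upper()

cache = {}  # the "short-term memory": small and fast, but limited

def cached_lookup(key):
    if key in cache:                 # cache hit: served from fast memory
        return cache[key]
    value = slow_db_lookup(key)      # cache miss: fall through to the source
    cache[key] = value               # store a copy for next time
    return value

print(cached_lookup("user:1"))  # miss -> pays the slow-source cost
print(cached_lookup("user:1"))  # hit  -> served instantly from the cache
```

The second call never touches the slow source at all, which is exactly the retrieval-time and throughput win caching is after.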