@shade
Senior Software Engineer @ Razorpay
This is a fantastic, deep-dive article, Saravana! I particularly appreciate your clear explanation of why audit logging is a distributed-systems challenge and why it's necessary to move beyond simple relational databases.

I have a key point regarding your solution for event ordering using a Redis-based monotonic counter at the Ingestion Service layer: applying the counter on the server side fundamentally fails to solve the causal ordering problem you described. As you noted, network calls are unpredictable:

1. A user's Event A (e.g., Login) is generated on the client's server.
2. The same user's Event B (e.g., View Record) is generated immediately afterwards on the same server.
3. Due to network congestion or routing, the API call for Event B might arrive at your Ingestion Service before the API call for Event A.

If your Ingestion Service assigns the monotonic counter upon arrival, Event B will be stamped with N and Event A with N+1. Your audit log will then incorrectly show the user viewing a record before they logged in, regardless of how sophisticated the ordering mechanism is.

The monotonic counter (or even Lamport/vector clocks) needs to be generated as close as possible to the source of the action, the client's server, to truly reflect the "happened-before" relationship. For a true global sequence, the ideal place for the initial monotonic sequence number is within the Client SDK/Middleware, where the events are first generated in a strict local order. While this adds complexity to the client-side implementation (e.g., persisting counters across client crashes and ensuring non-duplication), it's the only way to capture the reliable local ordering before unpredictable network factors corrupt the sequence.

A potential architectural refinement would be to:

1. Generate a local, monotonic counter (or a Lamport timestamp) on the client server/SDK for each event.
2. Send this client-generated value with the payload to the Ingestion Service.
3. Have the Ingestion Service use this client-provided sequence number as the primary ordering key, not the server-arrival time or a server-generated counter.

This preserves the true order of events as they occurred within the originating system, which is the ultimate requirement for regulatory and forensic accountability.
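To make the refinement concrete, here is a minimal sketch of the idea: the client SDK stamps each event with a locally monotonic sequence number before it leaves the process, and the ingestion side orders by that client-provided sequence rather than arrival time. All names here (`AuditEvent`, `ClientAuditSdk`, `orderEvents`) are illustrative, not from the article, and a real SDK would persist the counter across crashes.

```typescript
interface AuditEvent {
  clientId: string;
  clientSeq: number; // assigned at the source, before any network hop
  action: string;
}

class ClientAuditSdk {
  // In production this counter would be persisted durably so it
  // survives client crashes and never repeats.
  private seq = 0;

  constructor(private clientId: string) {}

  record(action: string): AuditEvent {
    // Incrementing before the network call captures the true
    // happened-before order on this client.
    return { clientId: this.clientId, clientSeq: ++this.seq, action };
  }
}

// Ingestion side: sort by (clientId, clientSeq), so events that
// arrived out of order are still stored in causal order.
function orderEvents(events: AuditEvent[]): AuditEvent[] {
  return [...events].sort((a, b) =>
    a.clientId === b.clientId
      ? a.clientSeq - b.clientSeq
      : a.clientId.localeCompare(b.clientId)
  );
}

// Simulate the scenario above: Login is generated first,
// but View Record arrives at the ingestion service first.
const sdk = new ClientAuditSdk("tenant-1");
const login = sdk.record("Login");
const view = sdk.record("View Record");
const stored = orderEvents([view, login]);
console.log(stored.map((e) => e.action)); // Login comes first despite arriving second
```

Note that this only guarantees a total order per client; ordering across independent clients still requires something like Lamport timestamps exchanged on communication, which the counter alone does not provide.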
Dumb Components

Dumb components, also called presentational components, rarely have state to manage, since their only job is to display data in the DOM. Any basic UI element can be considered a dumb component: buttons, tabs, switches, etc.

Smart Components

Smart components, also called container components, act as a data warehouse or data store: their job is to provide data and behavior to the dumb components, and therefore they do have state to manage.

You can read more about them in detail here: https://www.shade.codes/dumb-components-and-smart-components/
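A framework-agnostic sketch of the split described above (the names `renderButton` and `CounterContainer` are illustrative, not from the linked post): the dumb component is a pure function of its props, while the smart/container component owns the state and hands data and behavior down.

```typescript
interface ButtonProps {
  label: string;
  onClick: () => void;
}

// Dumb / presentational: no state of its own, just renders what it is given.
function renderButton({ label }: ButtonProps): string {
  return `<button>${label}</button>`;
}

// Smart / container: holds the state and supplies both data (the label)
// and behavior (the click handler) to the dumb component.
class CounterContainer {
  private count = 0;

  private increment = () => {
    this.count++;
  };

  render(): string {
    return renderButton({
      label: `Clicked ${this.count} times`,
      onClick: this.increment,
    });
  }
}

const container = new CounterContainer();
console.log(container.render()); // <button>Clicked 0 times</button>
```

The payoff of this split is that the dumb component is trivially reusable and testable, while all state management stays concentrated in the container.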