
How does caching performance scale over time?

Phillip Chan
Jan 22, 2019

Hello, I'm just learning about caching (Redis on top of Mongoose in Node) and am impressed with the benefits of caching for faster data retrieval. My understanding is that a cache will usually be faster than the database because of a combination of: 1) the data is stored in RAM rather than on disk, and 2) there will usually be fewer cached documents than actual documents. If set up well, it seems the cache would always be faster than directly accessing the database.
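For context, here's the pattern I have in mind, sketched minimally. A plain `Map` stands in for Redis, and `fetchFromDb` is a hypothetical stand-in for a Mongoose query; neither is the real API, just an illustration of cache-aside reads:

```javascript
// Minimal cache-aside sketch. A Map stands in for Redis, and
// fetchFromDb is a hypothetical stand-in for a Mongoose query.
const cache = new Map();
let dbReads = 0; // count trips to the "database" to show the cache working

function fetchFromDb(id) {
  dbReads++;
  return { _id: id, title: `Document ${id}` }; // pretend query result
}

function getDocument(id) {
  const key = `doc:${id}`;
  if (cache.has(key)) return cache.get(key); // cache hit: no db trip
  const doc = fetchFromDb(id);               // cache miss: query the db...
  cache.set(key, doc);                       // ...then store it for next time
  return doc;
}

getDocument(42); // miss -> one db read
getDocument(42); // hit  -> served from memory, dbReads stays at 1
```

With a real Redis client the `Map` operations would be `GET`/`SET` calls, but the shape of the question is the same: when does maintaining that key space stop paying for itself?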

I'm trying to extrapolate the caching benefits over time and am wondering: are there situations where, due to the way the cache is set up, the cache will actually be slower than directly accessing the database?

For instance, let's say we have a db of 1,000 documents, and we set up the Redis key-value pairs to be so specific that the number of Redis entries ends up larger than the number of db documents (say 10,000). Or another example would be a cache that is just really old, so the Redis collection is now much larger than the number of documents in the db.
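The first scenario is easy to make concrete. If keys are derived from the full query shape (filter, sort, page) rather than from document ids, the cache fans out multiplicatively. This sketch is illustrative only; the key format and the 2-sorts-by-5-pages grid are made-up numbers chosen to reproduce the 1,000-to-10,000 ratio above:

```javascript
// Illustrative only: keys derived from query shape, not document ids.
const docs = Array.from({ length: 1000 }, (_, i) => ({ _id: i }));
const cache = new Map();

// Simulate caching every distinct (doc, sort, page) combination.
const sorts = ['asc', 'desc'];
const pages = [1, 2, 3, 4, 5];
for (const doc of docs) {
  for (const sort of sorts) {
    for (const page of pages) {
      cache.set(`q:${doc._id}:${sort}:${page}`, doc);
    }
  }
}

console.log(docs.length);  // 1000 documents
console.log(cache.size);   // 10000 cache entries
```

Worth noting: each individual lookup in a hash-based store stays O(1) regardless of entry count, so the cost of key explosion shows up mainly as memory pressure (and evictions once you hit a memory limit) rather than slower reads, though I'd welcome correction on that.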

Has anyone come across that situation and determined that caching was actually slower than db access? Or is it fair to say that if you set up the caching rules correctly, caching will be faster than direct db access 99% of the time? Help me understand architecturally how caching performance scales over time and what factors contribute to a successful caching implementation.

Thanks!