Which one will be faster? And in a distributed environment with multiple Node.js servers, which is the better way to do caching?
We are no longer caching our API responses, because cache invalidation ("cache busting") is still an unsolved problem for us. We have tried Varnish Cache, Redis, and Couchbase (CouchDB + Membase). Varnish was the fastest, Redis was the easiest, and Couchbase had the best feature set.
But we couldn't solve the problem of purging cached entries across a distributed network. After purging entries, we experienced strange cache hits and misses, regardless of whether the cache was still cold or already warm. Users reported all kinds of unreproducible data issues, which made us decide to pull the plug on API response cache stores.
Sure, an 87% correct hit rate with 13% false positive hits/misses is still better than no caching at all for a bunch of web apps. But for financial systems, even 1% wrong data is by far 1% too much wrong data.
I suggest starting with Redis: it was the easiest to install and configure, and most tutorials these days cover it.
Edit: We do use ETags for caching JSON responses in the browser/client. That means each client accessing the same URI still causes a recomputation on the backend, but each client caches the response locally.
I think Redis will be much faster. Another benefit is that you can access Redis from the other servers (make sure to secure Redis), whereas accessing a remote file system is hard.
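The shared-cache usage can be sketched as a read-through wrapper with a TTL (a minimal sketch; a local Map stands in for a Redis client here, and in production every Node.js server would point the same client at one Redis host instead):

```javascript
// Read-through cache sketch with a TTL. The Map stands in for a shared
// Redis instance reachable from all servers.
const store = new Map(); // key -> { value, expiresAt }

let computations = 0; // counts how often the expensive work actually runs

function cached(key, ttlMs, compute) {
  const hit = store.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
  const value = compute(); // cache miss: do the expensive work
  store.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

const load = () => { computations++; return 'payload'; };

// The second call within the TTL skips the expensive compute entirely.
cached('/api/report', 5000, load);
cached('/api/report', 5000, load);
console.log(computations); // 1
```

With a real Redis client, `store.get`/`store.set` become network calls and the TTL becomes an expiry on the key, which is what makes the cache shared across servers.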
NGINX for static pages? Then Node.js passing through to Redis + Couchbase for APIs?
Modulus.io article on supercharging your site via NGINX and Node.js pass-through: blog.modulus.io/supercharge-your-nodejs-applicati…
Does anyone know of good articles, resources, or repos on test scenarios using these on AWS or elsewhere?
Thanks in advance.
I'd choose Redis or Couchbase over .json files. You just need to secure Redis or Couchbase.
Denny Trebbin
Lead Fullstack Developer. Experimenting with bleeding-edge tech. Irregularly DJ. Hobby drone pilot. Amateur photographer.
Jan Vladimir Mostert
Idea Incubator
You can always set up a RAM disk on your Linux server and then simply have an API that handles the reading/writing to and from "disk". Then, by adding a load balancer in front of your caching API, you can easily load balance it.
By storing it in memory / on a ramdisk, you're effectively doing what Redis / Memcached already gives you out of the box.