I have deployed Redis on my Linux machine. When I do INFO I see that used_memory is 90MB whereas used_memory_rss is just 2.7MB. I would like to know the difference between the two and why is there a huge gap?
Thanks!
used_memory: Total number of bytes allocated by Redis using its allocator (either standard libc, jemalloc, or an alternative allocator such as tcmalloc).

used_memory_rss: Number of bytes that Redis allocated as seen by the operating system (a.k.a. resident set size). This is the number reported by tools such as top(1) and ps(1).

If I'm reading this correctly, the rss figure is the external view (what the OS sees the whole process using), while used_memory is internal to Redis (what its allocator has handed out to hold your data, which doesn't include the rest of the running process).
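To make the two counters concrete, here is a small Python sketch that pulls both fields out of `INFO memory` output and computes the ratio between them (Redis itself reports this as mem_fragmentation_ratio). The sample text below is made-up illustrative output matching the asker's sizes, not a capture from a real instance:

```python
# Parse the "# Memory" section of Redis INFO output and compare the
# allocator's view (used_memory) with the OS view (used_memory_rss).
# SAMPLE_INFO is illustrative, not real output from any instance.

SAMPLE_INFO = """\
# Memory
used_memory:94371840
used_memory_human:90.00M
used_memory_rss:2831155
used_memory_rss_human:2.70M
"""

def parse_info(text):
    """Turn 'key:value' INFO lines into a dict, skipping comment lines."""
    fields = {}
    for line in text.splitlines():
        if line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        fields[key] = value
    return fields

fields = parse_info(SAMPLE_INFO)
used = int(fields["used_memory"])      # bytes the allocator handed to Redis
rss = int(fields["used_memory_rss"])   # bytes the OS charges to the process
ratio = rss / used                     # what Redis calls mem_fragmentation_ratio

print(f"used_memory={used} used_memory_rss={rss} ratio={ratio:.2f}")
```

With real output you would feed in the result of `redis-cli INFO memory` instead of the sample string. A ratio well above 1 means the OS holds more pages than Redis is using (fragmentation); well below 1 means the allocator thinks it has more memory than is resident, which typically points at swapping.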
Jan Vladimir Mostert
Idea Incubator
As far as I know, Redis allocates memory in blocks / pages; if only some of the keys in a block are freed, the block stays allocated in the OS. To free the whole block, every key in it needs to be freed, so the RSS can stay high even after deletes — that is memory fragmentation. Note, though, that your numbers are the other way around: Redis claims 90MB (used_memory) while the OS only sees 2.7MB (used_memory_rss), and when RSS is far below used_memory that usually means the OS has swapped part of Redis's memory out to disk rather than fragmentation.
Quote:
Jemalloc (and several other libraries) allocate arenas of sizes larger than an original request, then use that arena to satisfy similarly-sized requests until the arena is all used. As memory is freed, portions of the arena are re-used as requested. If an arena is 100% empty, Jemalloc will free it to the operating system. If an arena is 1% used, Jemalloc cannot free it to the operating system.

From the official Redis docs:
Redis will not always free up (return) memory to the OS when keys are removed. This is not something special about Redis, but it is how most malloc() implementations work. For example if you fill an instance with 5GB worth of data, and then remove the equivalent of 2GB of data, the Resident Set Size (also known as the RSS, which is the number of memory pages consumed by the process) will probably still be around 5GB, even if Redis will claim that the user memory is around 3GB. This happens because the underlying allocator can't easily release the memory. For example often most of the removed keys were allocated in the same pages as the other keys that still exist.

redis.io/topics/memory-optimization
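Plugging numbers into the ratio the docs are describing (used_memory_rss divided by used_memory) shows how the docs' scenario and the asker's INFO output point in opposite directions — simple arithmetic, no Redis required:

```python
# Fragmentation ratio (used_memory_rss / used_memory) for the two
# scenarios discussed above. Plain unit conversions and division.

GIB = 1024 ** 3
MIB = 1024 ** 2

# The docs' example: ~5 GB still resident after deleting down to ~3 GB
# of live data -> ratio well above 1, the fragmentation signature.
docs_ratio = (5 * GIB) / (3 * GIB)
print(f"docs example ratio: {docs_ratio:.2f}")

# The asker's INFO output: 2.7 MB resident vs 90 MB claimed by the
# allocator -> ratio well below 1, which points at swapping instead.
asker_ratio = (2.7 * MIB) / (90 * MIB)
print(f"asker ratio: {asker_ratio:.2f}")
```

The 5GB/3GB case gives a ratio of about 1.67 (fragmented), while 2.7MB/90MB gives about 0.03 — the interpretation of a ratio below 1 as swapping is from the same Redis docs.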
90MB is quite small, but it would be interesting to see what solutions are available for fixing Redis memory fragmentation other than restarting Redis. My guess is that most people simply restart Redis when they hit fragmentation issues, but I'm sure there are better options — do a bit of Googling and post back here once you find something that looks promising.