
Memory usage in Linux

 ·  🎃 kr0m

Memory management in Linux systems is a complicated process, and the figures reported by the operating system's own tools are easy to misinterpret.

First, a few concepts must be clear in order to understand the output the operating system shows:

  • Page: The basic memory block Linux uses for memory management; a typical size in Linux is 4096 bytes.
  • Physical memory: Real RAM, pure hardware.
  • Virtual memory: The address space presented to a process, which believes it has a continuous space isolated from the rest of memory.
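
The page size on a given machine can be checked directly; 4096 bytes is what most x86_64 systems report, although other architectures may use larger pages:

```shell
# Print the memory page size used by the kernel, in bytes
getconf PAGESIZE
```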

In theory, a process would load all the libraries it needs to function as soon as it starts, but in reality this is not done. Only what is necessary to start is loaded; the libraries are merely mapped into the process's virtual memory. When the process first touches a page that is not yet in RAM, the MMU (Memory Management Unit) raises a page fault, the kernel pauses the process, loads the page into RAM, and updates the process's page tables so its virtual addresses point to the data.

ps aux offers us several figures about RAM usage:

  • VSZ (virtual memory size): The memory the process would need if it had to load ALL the libraries it was linked against.
  • RSS (Resident Set Size): The physical memory actually used by the process, but note that libraries shared between processes are counted in full for each of them.
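
Both values can be inspected for a single process; for example, for the current shell ($$ expands to the shell's own PID):

```shell
# Show VSZ and RSS (both reported in KiB) for the current shell process
ps -o pid,vsz,rss,comm -p $$
```

VSZ will normally be much larger than RSS, precisely because it counts every mapped library whether or not its pages are resident.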

There is another value that can be useful to us:

  • PSS (Proportional Set Size): The memory used by the program's own core (the minimum it needs to start, i.e. its unshared memory) plus an n-th part of the memory used by shared libraries.

If a library is shared between 3 processes, each process's PSS will show what its core occupies + (the size of the library / 3).
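On reasonably modern kernels (4.14+), the PSS of a process can also be read straight from /proc without extra tools, via the smaps_rollup file; for instance, for the current shell:

```shell
# Sum the kernel's Pss lines (values in KiB) for the current shell process
awk '/^Pss:/ {sum += $2} END {print sum " kB"}' /proc/$$/smaps_rollup
```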

There is a tool that takes this into account: smem.

We install the dependencies:

emerge -av dev-python/matplotlib

We clone it with Mercurial:

emerge -av dev-vcs/mercurial
cd /usr/src
hg clone
cd smem
python2.7 smem

 PID User Command                       Swap    USS    PSS    RSS
2254 XX   ck-launch-session dbus-laun      0    144    177    876

NOTE: In case the Mercurial repo is gone, here is version 1.4.

Another aspect to consider is memory fragmentation:

  • Internal: memory can only be allocated in multiples of a fixed alignment, such as 4, 8, or 16 bytes. If you ask for 23 bytes, the allocator hands you 24, and the extra byte is wasted.
  • External: free memory is split into non-contiguous gaps. If we have 3 free fragments of 2 bytes each and an app needs 3 contiguous bytes, the allocation fails even though 6 bytes are free in total, because no single gap is big enough.
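
The internal-fragmentation rounding is easy to reproduce with shell arithmetic; as a sketch, assuming an 8-byte allocation granularity, a request for 23 bytes gets rounded up like this:

```shell
# Round a request up to the next multiple of the alignment:
# granted = (request + align - 1) / align * align (integer division)
request=23
align=8
granted=$(( (request + align - 1) / align * align ))
wasted=$(( granted - request ))
echo "granted=$granted wasted=$wasted"   # granted=24 wasted=1
```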

For example, in Redis there is often a discrepancy between RSS and used_memory: RSS reports the RAM the Redis process occupies, fragmentation gaps included, while used_memory reports the RAM that is actually useful to Redis.
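Redis exposes this discrepancy directly as mem_fragmentation_ratio (RSS / used_memory) in the output of `redis-cli info memory`. As a sketch with made-up numbers (not taken from a real instance), an RSS of 1,200,000 bytes against a used_memory of 1,000,000 bytes would give:

```shell
# Hypothetical sample values, in bytes
rss=1200000
used_memory=1000000
# Ratio > 1 means the process holds more RAM than Redis finds useful (fragmentation)
awk -v r="$rss" -v u="$used_memory" 'BEGIN { printf "mem_fragmentation_ratio=%.2f\n", r / u }'
```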

Mitigating external fragmentation:

A process's memory is divided into two main regions, the stack and the heap. This split is semi-dynamic, meaning it can be tuned through certain OS configuration parameters.

If a variable is larger than a primitive type, malloc must be used; instead of being stored on the stack, the data goes on the heap, and a pointer remains on the stack pointing to the heap address where the data lives.

Working this way, several pointers can sit on the stack pointing to several heap addresses that together hold a variable's data, which helps avoid external fragmentation.

Roughly speaking, the stack is the maximum amount of RAM the OS will let us use for primitive variables, while the heap can grow into the rest of the RAM.

The size of the stack can be adjusted in the OS according to the software that will be run:

  • Smaller –> few but very large variables
  • Larger –> many small variables

NOTE: Variables on the stack are faster to access, since there is no pointer to dereference before reaching the data; the value is read directly.

To define the size of the stack in Linux:

ulimit -a

stack size (kbytes, -s) 10240
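
The limit can be changed with ulimit -s; the change affects the current shell and its children, so it is often applied in a subshell or just before launching the program. Lowering the soft limit (which never requires privileges) looks like this:

```shell
# Lower the soft stack limit to 4096 KiB inside a subshell, then print it back
(ulimit -s 4096; ulimit -s)
```

Raising it back above the hard limit is only possible for root, so a value chosen here should match the software that will be run.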
If you liked the article, you can treat me to a RedBull here