Pacman ran out of RAM (is that possible?)

Hello there,

I’m not here to report a problem (I guess?), but to tell a little story that made me think.

Today, I tried to install a terminal-based WhatsApp client named nchat, available on the AUR. Naturally, I simply ran `yay -S nchat` in my terminal, and it started building everything. The build had 600+ steps, so it took a while for a small utility weighing a bunch of MBs, but so be it. I have a widget on my desktop showing RAM and CPU usage, and I could see that my processor was working at 100% most of the time (which is normal, since it was compiling). The strange part came when I looked at the RAM: my Lenovo PC has 16 GB, with no other programs open, and even so RAM usage was nearly 90%.

Then, at more or less step [590/630], the whole PC froze, RAM usage jumped to 100%, and the installation finally showed an error (which you can see in the screenshot above).

In a desperate attempt to brute-force it, I set up a 16 GB swap file (bringing my total "memory" to 32 GB, "magically"), and this finally allowed the installation to finish. Even then, I saw the swap file more than 10% used: this means the simple `yay -S nchat` command needed nearly 20 GB of memory to complete!
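For anyone who wants to reproduce the workaround: this is the standard way to create and activate a swap file on Arch-based systems (the 16 GB size matches what I used; adjust to taste, and run as root):

```shell
fallocate -l 16G /swapfile   # reserve 16 GiB of disk space
chmod 600 /swapfile          # restrict permissions (swapon requires this)
mkswap /swapfile             # format the file as swap space
swapon /swapfile             # activate it immediately

# To keep it across reboots, add this line to /etc/fstab:
# /swapfile none swap defaults 0 0
```

Deactivate it later with `swapoff /swapfile` and delete the file if you no longer need it.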

How is this possible? Did I do something wrong? Is it my configuration, or some missing flag during the installation? Or did I make the only reasonable choice (which I'm skeptical about)?

Depending on what you are compiling, it can take a lot of RAM. I've seen compiles use most of 64 GB.


Update: the answer was in the project's README on GitHub.

8. Build fails with c++: fatal error: Killed signal terminated program cc1plus?

This often means that OOM killer has terminated the compilation due to the system running out of free RAM.

If the system has less than 4 GB RAM, please refer to Building on Low Memory Systems.

If the system has 4 GB RAM or more, the problem can occur if parallelism is set too high, which is likely to be encountered when installing from the Arch Linux AUR package. A workaround for the AUR package is to manually restrict the maximum number of parallel jobs to available RAM in GB divided by 4. For example, a system with 8 GB would then need to use at most 8 / 4 = 2 jobs:

CMAKE_BUILD_PARALLEL_LEVEL=2 yay -S nchat

Alternatively one can Build from Source using the make.sh script, which sets parallel job count based on the system capabilities.
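The "RAM in GB divided by 4" rule from the README can also be computed automatically. A minimal sketch (the divisor of 4 comes from the README; the rest is just standard shell reading `/proc/meminfo`):

```shell
# MemTotal in /proc/meminfo is reported in kB; convert to whole GiB.
mem_gb=$(awk '/^MemTotal:/ {printf "%d", $2 / 1048576}' /proc/meminfo)

# README rule: parallel jobs = RAM in GB / 4, with a floor of 1 job.
jobs=$(( mem_gb / 4 ))
[ "$jobs" -lt 1 ] && jobs=1

echo "Using $jobs parallel job(s)"
# Then build with:
#   CMAKE_BUILD_PARALLEL_LEVEL=$jobs yay -S nchat
```

On my 16 GB machine this would have picked 4 jobs instead of one job per CPU thread, which is presumably why the build blew past my RAM.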

So, apparently it's a well-known problem with the AUR package. And as @MyNameIsRichard pointed out, compilation can take a huge amount of RAM.

So, the solution was: RTFM. It was just a bit strange to hit a problem like that in an operation that seemed so simple.

This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.