Entropy poolsize hard-coded?

I noticed in systemd-analyze blame that the systemd-random-seed.service takes way less time than before. It used to be 2 seconds for me and now is ~100 ms.

When I checked my entropy_avail it was 256 and this didn’t change no matter how much I mashed my keyboard or moved my mouse. No issues with boot time though.
Output of cat /proc/sys/kernel/random/poolsize is 256. So I did a search on the web and found this recent message:
https://lkml.org/lkml/2022/5/27/324

Does this mean that, for whatever reason, the entropy pool size is hard-coded in the kernel to 256? Or did I mess something up (which I have a habit of doing) and that is why the entropy is stuck at 256?
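For anyone who wants to reproduce the checks above, here is a minimal sketch in Python that just reads the same proc files mentioned in the post (paths are standard on Linux; the values you see will depend on your kernel version):

```python
# Read the kernel RNG pool size and the current entropy estimate.
# On kernels with the reworked RNG (5.18+) both typically read 256 (bits).
from pathlib import Path

RANDOM_DIR = Path("/proc/sys/kernel/random")

poolsize = int((RANDOM_DIR / "poolsize").read_text())
entropy_avail = int((RANDOM_DIR / "entropy_avail").read_text())

print(f"poolsize={poolsize} entropy_avail={entropy_avail}")
```

This is just the programmatic equivalent of the `cat` commands in the thread.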


Thanks for posting this!
I have also been wondering about this but didn’t get around to posting and asking.
I have the same entropy on a quite recent, not-at-all-tweaked Arch (btw) install:

cat /proc/sys/kernel/random/poolsize 
256
systemd-analyze | grep seed
17ms systemd-random-seed.service

On my ArcoLinux installation the poolsize is also set to 256, but on my Manjaro installation, which is always lagging behind, it is 3000+ :thinking:
The message I posted is way too technical for me to understand, so hopefully someone else can make some sense out of it.

EDIT: Yes, I still have a Manjaro installation, my first “Arch based” distro, which I find difficult to replace…


Do you have something like haveged or rng-tools installed on your Manjaro?

https://wiki.archlinux.org/title/Haveged

Nope, no haveged, no rng kernel parameters, nothing. Also no poolsize set in /etc/sysctl.d/99-sysctl.conf or /etc/sysctl.conf.


If I understand this correctly, you still have the same collection of random bits that makes the generator’s seed unpredictable. What changed is what is then done with those collected bits to produce random numbers.
Before, they were fed into an LFSR-based (linear feedback shift register) mixer, where the size of the input matched the size of the output: with N random bits you can get at most 2^N - 1 states.
Now the random bits are fed into a hash function (BLAKE2s), which always produces a 256-bit output. Modern processors are well optimised for computing such hashes, so this should give both better performance and better “unpredictability” of the generated numbers.

But then again, I could be completely wrong. What do I know about how this witchcraft works.
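The fixed-size-output point can be illustrated in Python: `hashlib` ships BLAKE2s, the same hash family the reworked kernel RNG uses, and no matter how much input you feed it, the digest is always 256 bits. This is only an illustration of the idea, not kernel code:

```python
# Demonstrate that a BLAKE2s digest is 256 bits regardless of input size.
import hashlib
import os

small = os.urandom(8)      # 64 bits of input
large = os.urandom(4096)   # 32768 bits of input

d_small = hashlib.blake2s(small).digest()
d_large = hashlib.blake2s(large).digest()

# Both digests are 32 bytes = 256 bits.
print(len(d_small) * 8, len(d_large) * 8)  # 256 256
```

So the 256 you see in poolsize is the size of the hash output, not a cap on how much entropy gets collected and mixed in.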


Your explanation makes sense and is pretty much what I distilled from the message.