Warning when updating mkinitcpio 29-1

On 1: The change from "..." to (...) is simply a change from a string to an array.
IMO yes, you can and should change it, as the manpage also refers to arrays.
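For example (a sketch only; the hook list shown is just illustrative, take the values from your own config), the old string form and the new array form in /etc/mkinitcpio.conf look like this:

# old string syntax
HOOKS="base udev autodetect modconf block filesystems keyboard fsck"

# new array syntax
HOOKS=(base udev autodetect modconf block filesystems keyboard fsck)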

2 Likes

Yeah, level 19 is slow and a bad default… The whole point of zstd is to be fast for such operations :expressionless:
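If you want something faster than the default level, mkinitcpio passes COMPRESSION_OPTIONS straight to the compressor; a sketch, with the level and thread count only as example values, not a recommendation:

# /etc/mkinitcpio.conf
COMPRESSION="zstd"
# example: a lower (faster) compression level and all available threads
COMPRESSION_OPTIONS=(-3 -T0)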

2 Likes

The only thing that changed in my files is: #COMPRESSION="zstd"
This has been added and is also commented out.

Would it suffice to change the .conf.new file's name to .conf and remove the old one?

If it wasn’t for this topic I would have done nothing…

Can anyone remember when the syntax moved away from "" to ()?
I changed it in my conf, hoping to be surprised by fewer pacnews this way :wink:
On the other hand, "" still remains for the compression setting anyway?

It seems to me that it doesn’t matter whether a string or an array is used.
There is a function called arrayize_config which has:

arrayize_config() {
    set -f
    [[ ${MODULES@a} != *a* ]] && MODULES=($MODULES)
    [[ ${BINARIES@a} != *a* ]] && BINARIES=($BINARIES)
    [[ ${FILES@a} != *a* ]] && FILES=($FILES)
    [[ ${HOOKS@a} != *a* ]] && HOOKS=($HOOKS)
    [[ ${COMPRESSION_OPTIONS@a} != *a* ]] && COMPRESSION_OPTIONS=($COMPRESSION_OPTIONS)
    set +f
}

and which is called in mkinitcpio.
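A quick way to see what it does in practice (hypothetical snippet, just for illustration): a space-separated string gets split into an array, using the same test and assignment as above:

HOOKS="base udev autodetect"                # string, as in older configs
[[ ${HOOKS@a} != *a* ]] && HOOKS=($HOOKS)   # no array attribute yet, so split it
declare -p HOOKS
# declare -a HOOKS=([0]="base" [1]="udev" [2]="autodetect")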

2 Likes

Thanks, good to know.
Changing the syntax uniformly wouldn't be a bad idea anyway, would it?

:exploding_head:

Yep!

3 Likes

Just to be safe, you can also rename the original to mkinitcpio.conf.bak and then rename mkinitcpio.conf.pacnew to mkinitcpio.conf. This way, should an issue arise, you can revert the change.
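Roughly like this (a sketch; adjust the file names to whatever pacman actually created on your system):

sudo mv /etc/mkinitcpio.conf /etc/mkinitcpio.conf.bak
sudo mv /etc/mkinitcpio.conf.pacnew /etc/mkinitcpio.conf
sudo mkinitcpio -P    # regenerate the initramfs images with the new config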

4 Likes

And if I didn't do anything, the old file would be used until the end of time?
As if there had been no update at all?

1 Like

This will always happen on any rolling-release distribution - configuration files will change over time.

For example, there was a period where Network Manager was changing its internal configuration storage (somewhere around 1.16->1.20) and you had to periodically remove and re-add connections to get them to work.

The other way to avoid this sort of thing is to reinstall periodically and start fresh each time.

2 Likes

I understand the compression reduction, but why did you switch from all cores to two cores?

1 Like

It would be enough for me if I knew how to deal with it in the future.

Because in my tests there was barely any difference in compression speed, maybe because the file being compressed was too small to benefit from many threads. I've seen similar behaviour with xz in the past.
YMMV :slight_smile:

2 Likes

You will; after you handle a few pacnew files, you’ll be a pro at it.

Compare the differences between the pacnew and the original file; merge any changes that you have specifically made in the old file into the new (sometimes it’s simpler/easier to merge the other way, too - and sometimes you can simply replace/overwrite the old file with the new one). You can use various tools to compare the files; meld is a popular one.

A lot of times, I tend to compare the files, then manually make the edits.
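For example (assuming the pacnew sits next to the original, which is where pacman leaves it):

# plain unified diff in the terminal
diff -u /etc/mkinitcpio.conf /etc/mkinitcpio.conf.pacnew

# or side by side in a GUI
meld /etc/mkinitcpio.conf /etc/mkinitcpio.conf.pacnew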

Note that there are a few files that you should always leave alone unless there are specific instructions to merge with a pacnew. These files include:

passwd
shadow
gshadow
group

(and probably a couple more that I can’t think of offhand)

The pacnews for the above files will basically just be empty defaults; the ones on your computer have your user and password information, so you never want to overwrite them. Unless specifically instructed (via the Arch news page) to do otherwise, the best action for these pacnew files is to simply delete them.
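To find any leftover pacnew/pacsave files in the first place, something like this works (pacdiff comes from the pacman-contrib package):

# list all .pacnew / .pacsave files without touching them
pacdiff --output

# or, without pacman-contrib
find /etc \( -name '*.pacnew' -o -name '*.pacsave' \) 2>/dev/null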

You can always ask about any pacnew files that you get here on the forum. Also, see @BONK’s post above. But after a while, you’ll get used to them.

11 Likes

Pretty good selection; sometimes people learn the hard way, so it's very important to note :joy:

6 Likes

@jonathon: Tested again on this 2C/4T laptop, using the uncompressed vmlinux kernel image from the Linux kernel, size ~47 MB:

-13 -T1 → 3.27 seconds
-13 -T2 → 1.99 seconds
-13 -T3 → 2.02 seconds
-13 -T4 → 2.01 seconds

As you can see, using more threads doesn't necessarily result in improved speed.
It might look different with larger files and a higher compression ratio.
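For reference, this kind of measurement can be reproduced with something along these lines (file name and level are just the ones mentioned above; your numbers will differ):

# time zstd at level 13 with varying thread counts, discarding the output
for t in 1 2 3 4; do
    echo "-T$t:"
    time zstd -13 -T"$t" -c vmlinux > /dev/null
done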

3 Likes

That looks like more of an issue with your CPU’s hyper-threading than with using more threads. :wink:

But more seriously, point taken: there's no easy way to distinguish between real cores and hyper-threaded ones. $(($(nproc)/2)) isn't even useful, as you could have a 2- or 4-core CPU without hyper-threading.
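If you really need the number of physical cores rather than nproc's thread count, lscpu can provide it (a sketch; the exact output format may vary between systems):

# count unique physical cores, ignoring SMT siblings
lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l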

2 Likes

I can reproduce this on a Ryzen 3900X.
Everything from -T2 onwards results in about the same time :wink:
One could now try the same with SMT disabled… but I'm too lazy.

3 Likes