zstd compression algorithm and the dethroned old king xz

Here are the comparative numbers reported by the Arch devs, on which they based their decision to use this fast but resource-hungry compression tool.  XZ still wins on size and loses on time, while zstd is a huge loser in memory use while compressing; decompression is comparable and equally fast.  Zstd also relies heavily on very current, powerful, server-grade machines to deliver its speed benefit and make up for what it lacks in compression quality.  Compression software should primarily be judged on its ability to compress, and by that measure zstd fails miserably against this 45 year old trusty switchblade called xz.  So we can conclude that Arch has an abundance of computing/building/packaging apparatus, with truckloads of spare RAM to process many packages in parallel.
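If you want to sanity-check the size/time trade-off yourself, here is a minimal sketch using Python's standard-library lzma module (the xz format).  Comparing against zstd directly would additionally require the third-party zstandard package, so only the xz side is shown; the synthetic input data is an assumption to keep the sketch self-contained, and on your own files the ratios will differ.

```python
import lzma
import time

# Synthetic, highly compressible stand-in for a real package payload.
data = b"the quick brown fox jumps over the lazy dog\n" * 50_000

for preset in (0, 6, 9):  # xz presets: fastest ... default ... maximum
    start = time.perf_counter()
    compressed = lzma.compress(data, preset=preset)
    elapsed = time.perf_counter() - start
    ratio = len(compressed) / len(data)
    print(f"xz -{preset}: {len(compressed):>8} bytes "
          f"({ratio:.2%} of original) in {elapsed:.3f}s")
```

Higher presets trade time and memory for size, which is exactly the axis the Arch test data is arguing about.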

Arch comparison test ZSTD vs XZ

My article (a link to it) was removed from r/linux yesterday for no good reason, despite being 100% Linux-related material, and when I complained I was permanently banned from posting there.

https://www.reddit.com/r/linux/comments/ejn5c5/arch_2020_welcomes_its_little_brothers_and/

In case you are wondering, I was reporting that Arch had nearly silently started using this Facebook compression algorithm for packaging, and here is their own test data supporting that decision:

https://lists.archlinux.org/pipermail/arch-dev-public/2019-March/029520.html

 

A different set of tests covering more compression utilities

From this article, which makes a broader (many compression tools compared) but more in-depth comparison (unlike the tests run above, which were tailored to Arch's use case and strive to make zstd look good), we isolated two tables covering xz and zstd.

The next test was with a much larger file/archive, this time using Linux 5.1-rc5.

Note column 4, the % of CPU utilized (on a 4th-gen i7, an 8-thread machine): the speed is due to multithreading.  So on a single- or dual-core (single-thread) machine, expect the effect to be roughly the inverse of the thread count: 2 s on 8 threads becomes about 16 s on a single core, and inversely for MB/s, so 10 MB/s single-threaded takes the same time as 80 MB/s with all threads.  On a lesser machine than the testers', don't expect the speeds to be as spectacular.  And we can't tell what zstd's RAM-use deficiency would look like on a single- or dual-core machine.
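The back-of-envelope scaling above can be written down in a few lines of Python.  Note this assumes, optimistically, perfectly linear scaling with thread count; real compressors rarely achieve that, so treat the results as an upper bound on the slowdown estimate, not a measurement.

```python
# Naive linear-scaling estimate: time grows, and throughput shrinks,
# in proportion to the ratio of thread counts.
def scale_time(measured_seconds, measured_threads, target_threads):
    """Estimate wall-clock time on target_threads from a run on measured_threads."""
    return measured_seconds * measured_threads / target_threads

def scale_throughput(measured_mb_s, measured_threads, target_threads):
    """Estimate MB/s on target_threads from a run on measured_threads."""
    return measured_mb_s * target_threads / measured_threads

# The example from the text: 2 s on 8 threads -> ~16 s on one thread,
# and 80 MB/s on 8 threads -> ~10 MB/s on one thread.
print(scale_time(2, 8, 1))         # 16.0
print(scale_throughput(80, 8, 1))  # 10.0
```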

About “free space to distribute software”

The mirror space and bandwidth to distribute those compressed packages are paid for by others (by us, in most cases, via public university servers).  While the Arch devs and their powerful build machines are relieved of the burden of packaging time, and of the tons of additional memory that compression requires (which, if I interpret it correctly, is the price of the multi-threading ability), the increase in package size and in the bandwidth to distribute it falls on the users and the mirrors feeding them.

On the question of why both r/linux and r/archlinux blocked content on the xz/zstd change:

In a late announcement on archlinux.org news, eight days after the shift took effect, and AFTER our articles and the banned posts on r/linux and r/archlinux, they made the following statement to cover their “posteriors”.

Don’t make it personal to the r/linux and r/archlinux moderators.  This is the real reflection of the status of Linux and its evolution.  A year or two ago Google took the NSA’s Speck cryptography algorithm and pushed Linux to adopt it.  Linux did.  And many distros left it enabled, to be used by unsuspecting users.  A popular outcry was met by a silent decision to dump it eventually, so whining and cursing eventually works; or, in the case of Linus, should I call it whistle-blowing?  I think it was around 4.17–4.18 that Linux included Speck.  Arch switched it off after several other distributions had already done so, but still included the code in the kernel.

So it is not Linux alone, not r/linux, not Arch Linux; it is a problematic decision-making fashion across most of the Linux world.  What I find even more problematic is the passive audience of “customers” who refrain from getting involved.  They just care about their “free as in beer” software filling the empty cells of disk space on their PCs.  I would recommend that more people get involved and influence the decisions being made, and not allow Large Multinational Corporations to keep making all the decisions about their software, corroding the nature of open and free software and the freedom of users/sysadmins to choose their tools.

Based on Fedora’s and Arch’s decisions to switch package-compression tools, many more distributions will try to “catch up to the trend” without judgement or further research.  Those limited by economic realities, who rely on cheaper, older machines in their build networks, will soon discover the burdens of using a tool like zstd, quite apart from our value judgement to reject it based on its origins rather than its performance.

Like your mommy told you when you were young, don’t accept candy from a stranger, or a needle from a cheap pusher!  And Facebook is and will always be a stranger to the real world of open and free software, not to mention an offense to our intelligence to present itself as a well-meaning contributor.

 

Enough?  We will add more data and sources as they come in from friends and activists against the hydra of corporatism and domination.