Garuda Linux is an Arch based distribution that provides an easy GUI installation of an Arch system with the widely used Calamares installer. But unlike the numerous other distributions that offer such an easy installation of an Arch system and some custom convenience tools, such as a welcome application, Garuda Linux is more ambitious -- offering much more, such as installation on a Btrfs filesystem with a fully configured system snapshot and rollback capability and distribution-developed GUI tools for system administration -- making it far more than the typical Arch based distribution. Its focus is to provide a high performance Linux distribution -- to that end using the Zen kernel and including other optimizations -- that is also beautiful.
This article reviews the 210621 ISO release of Garuda's Dragonized (stylized as Dr460nized) edition which features the Plasma desktop.
Garuda Linux differentiates itself from other Arch based distributions, and Linux distributions generally, with the following characteristics:
It is also differentiated from Arch based distributions, and especially from Arch itself, by being opinionated, not offering choice in certain aspects of the installed system. For example, as discussed in this article and the installation supplement, it only allows the Btrfs filesystem during installation -- probably to simplify user support -- and, less importantly, does not provide a light theme version of its beautiful dark theme, despite requests from users in its forum.
This article specifically reviews the KDE Dragonized Edition (sometimes stylized as Dr460nized) -- one of the three highly customized Dragonized variants featuring the Plasma desktop out of a total of four Plasma editions and nine others that feature other desktop environments or window managers.
The KDE Dragonized Edition is extensively modified from the vanilla Plasma desktop compared to the slight customizations offered by other distributions. The most radical of the customizations is the replacement of the typical Plasmashell configuration of a single bottom panel encapsulating a task switcher, system tray, and launcher widget with a configuration that uses a top panel rendered by Latte in panel mode that contains the current Plasma default Application Launcher, as well as the Window Buttons, Window Title, Window AppMenu, the Plasma System Tray, and the Event Calendar widgets. Garuda's Plasmashell also includes a bottom Latte dock. The top panel is reminiscent of Unity in its look and behavior, while the overall look of the configuration is very similar to macOS. The configuration of the GUI shell happens to be similar to how I customize Plasma on any distribution, but I achieve the look of the top panel with a native Plasmashell application menu panel and the Active Window Control and Global Menu widgets.
Other modifications to Plasmashell include additions to the screen edge gesture configurations, such as a left edge gesture to switch from the first desktop, named "Desktop", to the second of two preconfigured desktops, named "Offscreen". The developers enable many animations (or Desktop Effects) provided by the Plasma window manager, KWin, including, for example, the Cover Switch ALT + TAB task switching animation, Zoom Desktop when logging in, Magic Lamp when minimizing windows, and Wobbly Windows, among others, mimicking the macOS animations.
The appearance is defined by a modified Sweet global theme named Sweetified -- available in dark mode only -- a matching Kvantum application style theme, BeautyLine icons, and a Sugar Candy based SDDM theme. The effort to enhance the beauty of the distribution extends to the GRUB theme and GRUB splash screen, which was improved between the 210406 and 210621 ISO releases, changing the already unique theme to an even more impressive one. The look of Garuda is illustrated in the following set of screenshots.
In fact, the aesthetic qualities are what prompted me to try Garuda Linux. At the time, I had grown tired of the Plasma desktop on the Fedora, Manjaro, and EndeavourOS installations on my secondary laptop, so I tried Budgie and Cinnamon for a change, but I couldn't be away from the flexibility and power of Plasma for more than a few minutes, so I looked around on the Internet and discovered Garuda. I found the theming of the Plasma desktop to be fresh and very aesthetically appealing, so I installed it using the 210406 ISO release over Fedora. The installation was not without its problems, but after I resolved the issues, I was so immediately impressed by the distribution that I installed it on my primary laptop.
I later installed the more recent 210621 ISO on my primary laptop in order to have a more recent impression of installing and using the latest Garuda for this review. Since the first installation, I have used Garuda almost exclusively on both laptops. In that time I have found that the distribution does achieve its goals of being fast -- see Garuda Linux Benchmark Comparison Versus Solus for objective measures of its speed -- and beautiful, but it is not without problems and disappointments.
My experience with Garuda Linux, its features, and my impression of those features are described in the rather lengthy Review section below (followed by the Recommendation), but for those who find the Review portion of the article too long, a summary is provided in this section. Also, articles that supplement the review are available:
Also, for those interested in installing a pure Arch system on a Btrfs filesystem and configuring system snapshot and rollback capability with Snapper, with all its capabilities intact, similar to openSUSE's configuration -- and not with the simplified Btrfs subvolume layout suggested in the Arch wiki that is incompatible with Snapper's advanced features -- see An Arch Linux Installation on a Btrfs Filesystem with Snapper for System Snapshots and Rollbacks.
The summary of the review:
When selecting a distribution for installation on my primary laptop, the most important considerations are reliability and quality. These characteristics are usually found in mature distributions with corporate backing, of which I prefer openSUSE. I also have a high opinion of Arch and Debian, which, while not having the corporate backing that provides resources for developing a distribution with high quality and reliability, have large communities and mature distributions. I made an exception for Garuda because I was intrigued by the look of the distribution and the possibility of an easily installed Arch system on a Btrfs filesystem with snapshot and rollback capability working out of the box.
Garuda greatly benefits from the design of its Arch base, from the simple and transparent configuration of Pacman, which Garuda uses to add its own mirrors and repositories, to the systemd unit file style syntax of Pacman hooks, which allows integration of Btrfs/Timeshift snapshot management into package management transactions. In the areas of the distribution which were added by Garuda, it is clear that the distribution is an experimental, hobbyist distribution developed by Linux enthusiasts who make a distribution that suits their needs and sensibilities, while at the same time learning from the act of creating and experimenting with their distribution. The developers characterize themselves and the distribution as such in various Garuda forum threads, such as the one titled 'zram-generator' incorrectly listed as hard dependency for 'garuda-common-settings'.
This nature of the distribution resulted in numerous changes during the past four months -- such as the fundamental change in the login shell from fish to Bash, presumably to avoid the difficulties described in this review of using fish as the login shell -- which would not have been necessary if the distribution had been more mature. The distribution's nature as a hobbyist experimental project could also be the underlying cause of the unreliability of some of the Garuda tools and of their unsophisticated implementation, in that -- while they are in theory useful, and look good -- the GUI controls simply open a visible terminal and execute a command in it. In certain cases, activating multiple GUI controls causes them to conflict with each other and fail without notification to the user. The distribution's nature as one primarily developed for the developers themselves may be the underlying cause of some of the unreliability. It seems that the developers do not test the distribution on machines and use cases different from theirs, which I assume are desktop computers with only one OS installed. Perhaps if the developers tested the distribution on computers with multiple distributions installed, the issue with the Garuda Boot Options would have been caught.
For me the most serious flaw was not the unreliability of the Garuda tools, but the disregard of the type of platform on which the distribution was being installed and the automatic selection of defaults more appropriate for desktop systems than laptops. A much lesser issue was the lack of a light version of the Plasma theme, which I found to be necessary for working in bright environments, despite the insistence of one of the developers that a dark theme is always better for the eyes. I assume that these aspects of the distribution with which I was not happy are exactly the characteristics Garuda's target audience -- gamers using desktops, and the developers themselves -- prefer.
Despite the issues, it is also clear that the developers are very ambitious and passionate about their distribution and open source software generally, and have a grand vision for their distribution which they pursue with dedication. This is evidenced by their provision of a distribution with a Btrfs filesystem and preconfigured system snapshots and rollbacks, a configuration that has thus far only been provided by the commercial SUSE Linux Enterprise and its community counterpart openSUSE. The desire for a comprehensive system administration GUI is also a reflection of their vision.
I am sure that in time the reliability and the backend implementation of the Garuda GUI tools will improve. And the issues that result from my not being the target audience are already being addressed within the GUI, which allows disabling the default options desired by gamers on desktops and enabling the more balanced options desirable for laptop workstation users. My issues with the distribution notwithstanding, I am glad I tried Garuda, as I have learned more than I would have with some more typical distributions. For example, in examining how the performance options set by performance-tweaks work, I learned how the systemd-tmpfiles system works. I have also been exposed to more Linux tools than I would have been with typical distributions -- such as the fish shell, zram, and many of the other items mentioned in the review -- inspiring me to configure my future Linux installations differently. I will continue to use Garuda on my secondary laptop, but it will soon be replaced with Fedora 35 on my primary laptop, where a pure Arch system installed using the process documented in An Arch Linux Installation on a Btrfs Filesystem with Snapper for System Snapshots and Rollbacks and an openSUSE Tumbleweed installation also currently have a permanent home.
The pre-installation experience and the actual installation are described in Garuda Linux Review [KDE Dragonized (D460nized),210621] Supplement: Pre-Installation and Garuda Linux Review [KDE Dragonized (D460nized),210621] Supplement: Installation, respectively. But some notable items from those supplementary articles:
Garuda Linux Dragonized Edition is the most customized Plasma desktop environment of any distribution I have seen since Plasma 5 was released six years ago. Its primary change from the default upstream Plasma is not limited to the replacement of a single panel at the bottom of the screen with an application menu panel at the top of the screen and a dock at the bottom; instead of using the panels and dock-like panels provided natively by Plasmashell, it uses those provided by Latte.
While the shell configuration is similar to what I like in a Plasma desktop, the choice of a Latte rendered top panel instead of a native Plasmashell panel may not be a good one, at least in terms of performance and stability, if not in control of the appearance. The effect is identical to what can be achieved with the native panel but introduces instability in two ways: in certain instances the interaction between the Latte panel and Plasmashell is not as seamless as when using the native panel; and Latte Dock is not the most stable software, and as Garuda uses development versions instead of the stable release, this may be particularly so.
The problems with Latte and Plasma include:
Other significant customizations by Garuda to the Plasma experience include the effort to better integrate GTK applications into the Plasma configuration chosen by Garuda. One example of this is that the GTK based Firedragon browser -- forked from Librewolf, which is itself a fork of Firefox -- provides the native KDE Frameworks/Qt file dialog when saving files from a webpage.[1]
Unfortunately, other GTK applications -- I tried Firefox, GIMP, and Inkscape -- do not use the same native KDE Frameworks/Qt file dialogs. This may be understandable for GIMP and Inkscape, which may offer application-specific options during save/open, but not for Firefox, which should have the same requirements and limitations as Firedragon in terms of file operations.
Another effort at improved GTK application integration in Garuda's Plasma is the enablement of GTK applications' menus in the top panel's Window AppMenu widget, alongside those of KDE/Qt applications. Both Inkscape and GIMP application menus appear in the Window AppMenu widget of the top panel instead of in the applications' windows. One of these GTK applications (I don't remember which) also had its menu rendered in the Global Menu widget in my installation of openSUSE Tumbleweed, until a very recent update broke that feature there. But in Garuda the menus of these applications continue to be rendered in the top panel.
However, this feature is still somewhat inconsistent. At some early point in my time with Garuda, Firedragon had its menu drawn in the top panel but no longer does. Other important GTK applications also do not have their menus rendered in the top panel. Most notable of these is Firefox, which, while not installed by default, does incorporate other Garuda customizations but not one that allows its menu in the top panel.
As mentioned before, the look of Garuda Linux Dragonized Edition is defined by the Sweetified theme, a matching Kvantum application style, and the BeautyLine icons. This combination looks very good but is only usable in darker environments. On my primary laptop, when working in a bright room, even with the screen brightness set at 100%, it is extremely difficult to read any text displayed on the screen.
Unfortunately, when users request a light theme in the Garuda forum, the developers insist that the dark theme looks better and that it is actually easier on the eyes. Unlike me and the users making the request, the developers must work (or game) in darkened rooms or have very bright screens.
The provided system icons are also not visible in some applications such as Inkscape and even in Qt (but not KDE Frameworks) based applications such as TeXStudio.
Garuda Linux has adopted what some view as the filesystem destined to be the default Linux filesystem, Btrfs -- as did openSUSE many years ago and Fedora very recently -- as its default file system. Among the benefits of Btrfs compared to the file system that is generally considered the current de facto standard, ext4, is the ability to create snapshots of the system, i.e., to duplicate certain configured branches of the filesystem hierarchy in order to preserve their state. The snapshots enable booting into an earlier version of the system and rolling back the system to a previous state.
The images in the following set show the GRUB menu and the Btrfs snapshot related items. The first image shows the main GRUB menu, which has as one of its items "Garuda Linux snapshots", which when selected lists the available snapshots (Image 2). When one of the available snapshots is selected, another menu appears that allows selection of the kernel to load, if more than one is available.
The Garuda Linux implementation of the Btrfs filesystem works reliably in conjunction with the Timeshift GUI backup program to create snapshots during each system update and in conjunction with GRUB to make the created snapshots available to boot from the GRUB menu. Potential users concerned about the reliability of the Btrfs filesystem should not be. I have been using Btrfs on openSUSE Tumbleweed for almost three years without a problem. I also used it for a time earlier in its history as openSUSE's default filesystem, when -- although it was impressively able to roll back the system, even after large changes -- it did have issues, such that my installation became unusable because the area reserved for metadata in the filesystem filled up, or the snapshots used all available space. Since then openSUSE has added utility services that maintain the filesystem and has also made changes to limit the number of snapshots that are kept, eliminating the early issues. It seems other distributions like Garuda have benefited from openSUSE's work, as these issues are not present.
(It should be noted that, as the filesystem does require more space than traditional filesystems, more storage space should be allocated to Btrfs partitions. I typically allocate twice as much storage space for Btrfs partitions as I do for ext4 partitions, possibly the reason that my more recent, but longer duration, use of Btrfs in openSUSE has been more reliable than my earlier initial experience.)
While Garuda's configuration of the Btrfs filesystem works reliably, some users who have used openSUSE with Btrfs might prefer a different set of subvolumes, and different branches of the filesystem hierarchy included in the subvolumes, than those configured by Garuda. The configuration of subvolumes and the mount point of the main subvolume differs from that recommended by openSUSE -- whose layout is designed, perhaps, for compatibility with Snapper, the program it uses to manage snapshots -- arguably the authority on Btrfs use in Linux, based on its long history with the filesystem.
Also, the Timeshift GUI provided by Garuda (see next subsection), while very easy to use, is very limited compared to openSUSE's Snapper related YaST modules, which, among their other powerful features, allow users to browse within snapshots and view specific changes in files between different snapshots, at a level of detail provided by diff, using a built-in Snapper capability.
The Btrfs filesystem is more complex than ext4, not just in the details of its implementation as a filesystem, but in system administration. For a general description of the filesystem see the Wikipedia article, for an accessible tutorial that illustrates how subvolumes and snapshots are used, see this tutorial by an Arch user, and for all of the details see the official documentation, especially the sysadmin guide. But a description of its configuration in Garuda is provided below.
Although the Btrfs filesystem has its own set of tools that includes creating and managing snapshots (see man btrfs.8), snapshots are typically managed by an external program such as Snapper, the default tool for managing snapshots in openSUSE. Garuda Linux uses the simpler Timeshift program to manage snapshots. Timeshift is capable of making snapshots of subvolumes specifically named /@ and /@home.
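For readers unfamiliar with the built-in tools, the following is a minimal sketch of creating and listing a read-only snapshot by hand with the btrfs command -- this is not how Timeshift operates internally, and the destination path is hypothetical:

# Create a read-only snapshot of the subvolume mounted at /,
# placed in a (hypothetical) directory on the same Btrfs filesystem.
sudo btrfs subvolume snapshot -r / /mnt/btrfs-root/@manual-snapshot

# List all subvolumes on the filesystem, including snapshots.
sudo btrfs subvolume list /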
The subvolumes created by the installer when formatting the partition with the Btrfs filesystem, and mounted through /etc/fstab are shown below.
╭─brook@g5 in ~
╰─λ mount | grep /dev/nvme0n1p11
/dev/nvme0n1p11 on / type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=302,subvol=/@)
/dev/nvme0n1p11 on /root type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=257,subvol=/@root)
/dev/nvme0n1p11 on /srv type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=258,subvol=/@srv)
/dev/nvme0n1p11 on /var/cache type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=259,subvol=/@cache)
/dev/nvme0n1p11 on /var/log type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=260,subvol=/@log)
/dev/nvme0n1p11 on /var/tmp type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=261,subvol=/@tmp)
The /@ subvolume, mounted at /, is the one that encapsulates the parts of the filesystem important for system rollback, such as /etc and /usr, and is one of the two specifically named subvolumes of which Timeshift can create backups. The other parts of the filesystem hierarchy that should not be included in snapshots are encapsulated in the subvolumes /@root, /@srv, /@cache, /@log, and /@tmp. The paths in the filesystem hierarchy used by the kernel, such as /proc, /dev, /sys, etc., because of their special relationship to the kernel, can not be included in any subvolume.
The other specifically named subvolume that can be snapshotted by Timeshift is /@home, used to contain the /home directory, but this subvolume is not created by the Garuda installer (at least if /home is on a separate partition) and the Timeshift setting to include /@home has no effect. The rationale for not creating a /@home subvolume is presumably the same as that in openSUSE: changes in the /home directory and other directories that contain user data, for example, website data on a server -- typically in /srv -- should not be rolled back. But unlike Garuda, in openSUSE, when /home is on a separate partition, it is not formatted as a Btrfs filesystem, and when it is in the same partition, a separate subvolume that is not included in snapshots can be created for it by the installer, if specified by the user.
Whenever the Timeshift application has been started in the current session, or a package management transaction has started Timeshift, or access to it is otherwise attempted, the main top-level subvolume is also mounted by the systemd unit run-timeshift-backup.mount. The last line of the following listing, showing the output of the same command as in the previous listing, displays this subvolume, which was not present in the previous listing.
╭─brook@g5 in ~ took 6ms
╰─λ mount | grep /dev/nvme0n1p11
/dev/nvme0n1p11 on / type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=302,subvol=/@)
/dev/nvme0n1p11 on /root type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=257,subvol=/@root)
/dev/nvme0n1p11 on /srv type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=258,subvol=/@srv)
/dev/nvme0n1p11 on /var/cache type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=259,subvol=/@cache)
/dev/nvme0n1p11 on /var/log type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=260,subvol=/@log)
/dev/nvme0n1p11 on /var/tmp type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=261,subvol=/@tmp)
/dev/nvme0n1p11 on /run/timeshift/backup type btrfs (rw,relatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=5,subvol=/)
╭─brook@g5 in ~ took 5ms
╰─λ
Accessing the mount point of this top-level subvolume (subvolid=5) -- which is always created automatically when a Btrfs filesystem is created and contains all other subvolumes subsequently created -- is one way of accessing all subvolumes and the files in snapshots, which are also, technically, subvolumes. The subvolumes appear as directories within this path, alongside ordinary directories. The following listing shows the contents of /run/timeshift/backup. The subvolumes represented in this directory are those shown as mounted in the previous listing.
╭─brook@g5 in ~ took 3ms
[🔴] × ls -l /run/timeshift/backup
drwxr-xr-x - root 22 Jul 22:01 @
drwxr-xr-x - root 15 Jul 23:23 @cache
drwxr-xr-x - root 17 Aug 19:37 @log
drwxr-x--- - root 16 Aug 20:41 @root
drwxr-xr-x - root 21 Jun 05:03 @srv
drwxrwxrwt - root 17 Aug 19:37 @tmp
drwxr-xr-x - root 17 Aug 19:32 timeshift-btrfs
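The same top-level subvolume can also be mounted manually, independently of Timeshift, which can be convenient when inspecting snapshots. A minimal sketch, using the device from the listings above and a hypothetical mount point:

# Mount the top-level subvolume (subvolid=5) at a hypothetical mount point.
sudo mkdir -p /mnt/btrfs-root
sudo mount -o subvolid=5 /dev/nvme0n1p11 /mnt/btrfs-root

# The listing should resemble the one above: @, @cache, @log, ..., timeshift-btrfs.
ls /mnt/btrfs-root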
The timeshift-btrfs directory is not a subvolume, but only an ordinary directory that contains all of the snapshots created during package updates and by Timeshift scheduled snapshots organized by the type of snapshot. The following listing shows the contents of the directory.
╭─brook@g5 in ~ took 16ms
╰─λ ls -l /run/timeshift/backup/timeshift-btrfs/
drwxr-xr-x - root 17 Aug 19:32 snapshots
drwxr-xr-x - root 17 Aug 19:32 snapshots-boot
drwxr-xr-x - root 17 Aug 19:32 snapshots-daily
drwxr-xr-x - root 17 Aug 19:32 snapshots-hourly
drwxr-xr-x - root 17 Aug 19:32 snapshots-monthly
drwxr-xr-x - root 17 Aug 19:32 snapshots-ondemand
drwxr-xr-x - root 17 Aug 19:32 snapshots-weekly
╭─brook@g5 in ~ took 2ms
╰─λ ls -l /run/timeshift/backup/timeshift-btrfs/snapshots
drwxr-xr-x - root 18 Aug 00:00 2021-08-12_12-29-53
drwxr-xr-x - root 18 Aug 00:00 2021-08-12_13-05-41
drwxr-xr-x - root 18 Aug 00:00 2021-08-12_14-32-25
drwxr-xr-x - root 18 Aug 00:00 2021-08-13_19-11-58
drwxr-xr-x - root 18 Aug 00:00 2021-08-15_19-00-01
drwxr-xr-x - root 18 Aug 00:00 2021-08-16_19-00-01
drwxr-xr-x - root 18 Aug 00:00 2021-08-17_18-55-31
drwxr-xr-x - root 18 Aug 00:00 2021-08-17_19-32-34
Whenever a snapshot of the /@ subvolume (the subvolume mounted at / in the filesystem hierarchy, which encapsulates the parts of the filesystem hierarchy important for snapshotting) is created, it is copied to an appropriately named -- indicating the snapshot creation date and time -- subdirectory of one of these directories, within another directory named @, i.e., /run/timeshift/backup/timeshift-btrfs/snapshots/2021-08-13_19-11-58/@. And whenever the system is rolled back, one of these snapshots is essentially copied back to the /@ subvolume.
The scheduled snapshots created by Timeshift and stored in the directories indicating schedule are actually symlinks to snapshots saved in the snapshots directory.
╭─brook@g5 in ~ took 3ms
╰─λ ls -l /run/timeshift/backup/timeshift-btrfs/snapshots-daily/
lrwxrwxrwx 32 root 17 Aug 19:32 2021-08-12_13-05-41 -> ../snapshots/2021-08-12_13-05-41
lrwxrwxrwx 32 root 17 Aug 19:32 2021-08-15_19-00-01 -> ../snapshots/2021-08-15_19-00-01
lrwxrwxrwx 32 root 17 Aug 19:32 2021-08-16_19-00-01 -> ../snapshots/2021-08-16_19-00-01
lrwxrwxrwx 32 root 17 Aug 19:32 2021-08-17_18-55-31 -> ../snapshots/2021-08-17_18-55-31
╭─brook@g5 in ~ took 2ms
╰─λ
The snapshot chosen from the GRUB menu -- or the default one, if a snapshot is not explicitly selected -- activates one of the snapshots contained in the subdirectories of /run/timeshift/backup/timeshift-btrfs/snapshots, which are named after the snapshot creation time.
Snapshots are created in two ways in Garuda: either through the Timeshift GUI, which creates snapshots on demand and automatically at intervals specified in Timeshift settings, or during system upgrades by the /usr/bin/timeshift-autosnap script, which is activated by /usr/share/libalpm/hooks/00-timeshift-autosnap.hook, a pacman hook, before the package management transaction. Immediately after the new snapshot is created, a GRUB update is performed that adds the new snapshot to the existing snapshots in the GRUB menu, through a pacman hook that ultimately leads to the sourcing of the GRUB configuration file /etc/default/grub-btrfs/config by grub-mkconfig. If the kernel is updated during the upgrade, GRUB is updated again during post transaction processing, where the new snapshot is detected again.
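Since pacman hooks use a systemd unit like syntax, the following sketch shows what a pre-transaction hook of this kind typically looks like (see alpm-hooks(5)); the actual hook shipped by Garuda in /usr/share/libalpm/hooks/00-timeshift-autosnap.hook may differ in its triggers and options:

[Trigger]
Operation = Install
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Creating Timeshift snapshot before upgrade...
When = PreTransaction
Exec = /usr/bin/timeshift-autosnap
AbortOnFail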
At the next boot after the upgrade, the snapshot created during the upgrade becomes available under the Garuda Linux Snapshots GRUB menu entry. The following listing shows the parts of the pacman output relevant to the snapshot creation, after packages have been downloaded and verified. (The line "First run mode (config file not found)" appears if the Timeshift GUI has not yet been executed and the user has not answered the prompt to configure it during first execution.)
:: Running pre-transaction hooks...
(1/6) Creating Timeshift snapshot before upgrade...
First run mode (config file not found)
Selected default snapshot type: BTRFS
Using system disk as snapshot device for creating snapshots in BTRFS mode
Mounted '/dev/nvme0n1p11' at '/run/timeshift/backup'
Creating new backup...(BTRFS)
Saving to device: /dev/nvme0n1p11, mounted at path: /run/timeshift/backup
Created directory: /run/timeshift/backup/timeshift-btrfs/snapshots/2021-08-11_14-45-00
Created subvolume snapshot: /run/timeshift/backup/timeshift-btrfs/snapshots/2021-08-11_14-45-00/@
Created control file: /run/timeshift/backup/timeshift-btrfs/snapshots/2021-08-11_14-45-00/info.json
BTRFS Snapshot saved successfully (0s)
Tagged snapshot '2021-08-11_14-45-00': ondemand
------------------------------------------------------------------------------
Generating grub configuration file ...
Found theme: /usr/share/grub/themes/garuda-dr460nized/theme.txt
Found linux image: /boot/vmlinuz-linux-zen
Found initrd image: /boot/intel-ucode.img /boot/initramfs-linux-zen.img
Found fallback initrd image(s) in /boot: intel-ucode.img initramfs-linux-zen-fallback.img
Found linux image: /boot/vmlinuz-linux-lts
Found initrd image: /boot/intel-ucode.img /boot/initramfs-linux-lts.img
Found fallback initrd image(s) in /boot: intel-ucode.img initramfs-linux-lts-fallback.img
Warning: os-prober will be executed to detect other bootable partitions.
Its output will be used to detect bootable binaries on them and create new boot entries.
Found Windows Boot Manager on /dev/nvme0n1p1@/EFI/Microsoft/Boot/bootmgfw.efi
Found Arch Linux on /dev/nvme0n1p10
Found Manjaro Linux (20.2.1) on /dev/nvme0n1p7
Found Solus (4.3) on /dev/nvme0n1p8
Found openSUSE Tumbleweed on /dev/nvme0n1p9
Adding boot menu entry for UEFI Firmware Settings ...
Detecting snapshots ...
Info: Separate boot partition not detected
Found snapshot: 2021-08-11 14:45:00 | timeshift-btrfs/snapshots/2021-08-11_14-45-00/@
Found snapshot: 2021-07-29 23:59:48 | timeshift-btrfs/snapshots/2021-07-29_23-59-48/@
Found snapshot: 2021-07-21 15:46:23 | timeshift-btrfs/snapshots/2021-07-21_15-46-23/@
Found snapshot: 2021-07-21 14:54:41 | timeshift-btrfs/snapshots/2021-07-21_14-54-41/@
Found snapshot: 2021-07-15 13:50:55 | timeshift-btrfs/snapshots/2021-07-15_13-50-55/@
Found 5 snapshot(s)
... truncated ...
(15/34) GRUB update after transactions...
Generating grub configuration file ...
Found theme: /usr/share/grub/themes/garuda-dr460nized/theme.txt
Found linux image: /boot/vmlinuz-linux-zen
Found initrd image: /boot/intel-ucode.img /boot/initramfs-linux-zen.img
Found fallback initrd image(s) in /boot: intel-ucode.img initramfs-linux-zen-fallback.img
Found linux image: /boot/vmlinuz-linux-lts
Found initrd image: /boot/intel-ucode.img /boot/initramfs-linux-lts.img
Found fallback initrd image(s) in /boot: intel-ucode.img initramfs-linux-lts-fallback.img
Warning: os-prober will be executed to detect other bootable partitions.
Its output will be used to detect bootable binaries on them and create new boot entries.
Found Windows Boot Manager on /dev/nvme0n1p1@/EFI/Microsoft/Boot/bootmgfw.efi
Found Arch Linux on /dev/nvme0n1p10
Found Manjaro Linux (20.2.1) on /dev/nvme0n1p7
Found Solus (4.3) on /dev/nvme0n1p8
Found openSUSE Tumbleweed on /dev/nvme0n1p9
Adding boot menu entry for UEFI Firmware Settings ...
Detecting snapshots ...
Info: Separate boot partition not detected
Found snapshot: 2021-08-11 14:45:00 | timeshift-btrfs/snapshots/2021-08-11_14-45-00/@
Found snapshot: 2021-07-29 23:59:48 | timeshift-btrfs/snapshots/2021-07-29_23-59-48/@
Found snapshot: 2021-07-21 15:46:23 | timeshift-btrfs/snapshots/2021-07-21_15-46-23/@
Found snapshot: 2021-07-21 14:54:41 | timeshift-btrfs/snapshots/2021-07-21_14-54-41/@
Found snapshot: 2021-07-15 13:50:55 | timeshift-btrfs/snapshots/2021-07-15_13-50-55/@
Found 5 snapshot(s)
The screenshot below provides one last illustration of the structure of the Garuda Linux Btrfs configuration, represented by the output of btrfs-list. It shows the main Btrfs subvolume mounted at /run/timeshift/backup and each of the subvolumes and their mountpoints, including the subvolume that is snapshotted, @. It also shows all of the preserved snapshots of @. The output seems to reflect (in lines 4-5 of the output) one detail of subvolumes and snapshots: snapshots are subvolumes that share metadata and data with the subvolumes from which they were created.
Some Linux distributions provide Timeshift for making backups of certain branches of the root filesystem hierarchy using one of the available backends, rsync, but not the other backend, btrfs. But because Garuda forces use of the Btrfs file system, the filesystem's advanced snapshotting capability can be used by Timeshift to manage rolling back the core parts of the filesystem hierarchy to a previously created bootable snapshot.
This snapshotting and rollback capability was previously only provided by openSUSE. (Fedora also started using Btrfs by default, but automatic snapshotting by either Snapper or Timeshift does not seem to be configured automatically.) This capability is one of the unique and attractive features of Garuda.
I used the capability repeatedly on my first installation from the 210406 ISO on my secondary laptop when the first update broke the system (for reasons mentioned above). The software is very simple and easy to use compared to openSUSE's command line snapper tool or the YaST module for managing snapshots and rollback, although the openSUSE system is much more advanced: it allows, for example, granular browsing of snapshot contents and viewing differences between files of different snapshots from within the YaST module and the snapper command line program. A properly configured Btrfs/Snapper system also allows a rollback to be initiated from a running system, such that on next boot the system has been reverted to a previous state, as shown in Testing Snapper Rollbacks: Part II of An Arch Linux Installation on a Btrfs Filesystem with Snapper for System Snapshots and Rollbacks.
The following screenshots show the tool in use after booting into one of the snapshots, where the first five images show the initial configuration steps upon first execution of the GUI.
The only problem that resulted from a system restore was that an attempt to generate the prompt in the terminal panel in Dolphin produced an error, because the fish function being accessed in fish's directory in /usr/share was not available after the restore. Updating the system after the rollback resolved this issue.
One of the interesting choices made by the Garuda developers is to use the fish shell. In installations from the 210406 ISO release, both the login shell and the interactive shell used by Konsole were set to fish. In the installation from the 210621 ISO, the login shell was changed to Bash but the interactive shell -- the one used in Konsole -- remained fish. This change was a good decision: fish, by design, is meant to be an interactive shell, is not POSIX compliant, and has very different syntax from Bourne-like shells such as Bash, and many applications and other components in a GNU/Linux system rely on the way Bash and its related shells work and use the configuration files sourced by those shells for setting their environment variables. It is also not the shell the majority of Linux users are accustomed to using.
I assume the change was necessitated by problems arising from these characteristics, and possibly because it alienated users. In briefly investigating how fish works as a login shell, I myself did not appreciate its use as a login shell because of the disadvantages mentioned above.
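For users who want to make this choice themselves, the login shell is a per-user setting that can be inspected and changed with standard tools; a brief sketch (the target shell must be listed in /etc/shells):

# Show the login shell currently recorded in /etc/passwd for the current user.
getent passwd "$USER" | cut -d: -f7

# Change the login shell, for example back to fish.
chsh -s /usr/bin/fish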
The process of setting environment variables illustrates the drastic difference between the two shells. The process begins similarly for both (and any other) shells; for virtual terminal console logins the systemd service getty@.service executes agetty which in turn executes login. This command, among other things, starts the shell (login shell) specified for the user in /etc/passwd, and depending on its configuration, sets basic environment variables such as HOME, USER, SHELL, and the initial default PATH for the user which can be modified later in the process through configuration files.
In Bash and related shells the configuration files read -- where the PATH can be modified and other environment variables set -- are the system-wide /etc/profile, which sources other files in /etc/profile.d, and then the first one found of ~/.bash_profile, ~/.bash_login, and ~/.profile. For non-login shells the configuration file is ~/.bashrc. In fish, the system-wide configuration read by all fish instances, whether the login shell started by login or an interactive shell started by a terminal emulator like Konsole, is /etc/fish/config.fish, or alternatively files in /etc/fish/conf.d/ with a .fish extension. Because all fish instances read these files, environment variable settings intended only for login shells must be wrapped by
if status is-login
    ...
end
and similarly for only interactive shells by
if status is-interactive
    ...
end
That the same file is read by both login and non-login shells and that the same file is used to set environment variables for both login and interactive shells, with the test for interactive or login shell, is a difference between fish and Bash and similar shells. Also, the test shows the very different syntax of the fish language.
Per-user configuration and variable setting is performed in ~/.config/fish/config.fish which is read by all of the user's fish instances, and similarly to the system-wide configuration, in files with a .fish extension in ~/.config/fish/conf.d. The contents of the files in ~/.config/fish/conf.d can override those in /etc/fish/conf.d/.
Another location containing fish configuration files used by all fish instances is /usr/share/fish, containing, among other items, a subdirectory, /usr/share/fish/vendor_conf.d, where third-party vendors can install their configurations which set variables. This is equivalent to /etc/profile.d/ for Bash and similar shells. The Arch Perl package, for example, installs a fish configuration file in this directory for use by fish, but installs sh and csh configuration files in /etc/profile.d/ for use by Bourne-like shells, such as Bash, and the C shell.
The way that environment variables are set is also different. In Bash and similar shells a variable is created and set, either in a terminal or in one of the files read when it starts, with an assignment as in
variable-name=variable-value
To make the variable an environment variable, passed to the processes started from the shell, the export command is used as in:
export variable-name
A shorthand can also be used to combine the assignment and the export. Placing the setting and the exporting in one of the files read when the shell is executed makes it persistent across boots.
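The combined form, assignment and export on one line, looks like this:

export variable-name=variable-value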
In fish a set command -- not to be confused with the set builtin in Bash-like shells with POSIX compatibility, which does something different -- is used to set a variable, as in:
set variable-name variable-value
An example of the use of the set command is in the fish configuration file installed by the Arch Perl package, /usr/share/fish/vendor_conf.d/perlbin.fish, shown in the following listing. The file contains logic applicable to login shells that adds various directories containing Perl executables to the PATH variable. Because the logic is contained within an if conditional block that tests the status of the shell as a login shell, it modifies the default PATH environment variable set during login, whether set by the login process described previously for virtual terminal consoles or by the process described below for logging into graphical environments.
# Set path to perl scriptdirs if they exist
# https://wiki.archlinux.org/index.php/Perl_Policy#Binaries_and_scripts
if status --is-login
    for perldir in /usr/bin/site_perl /usr/bin/vendor_perl /usr/bin/core_perl
        if test -d $perldir; and not contains $perldir $PATH
            set PATH $PATH $perldir
        end
    end
end
The options --local (-l), --global (-g), and --universal (-U) to the set command specify that the created variable has one of three scopes: local, global, or universal, respectively. Local variables apply only to the current block, global variables can exist outside the current block, and universal variables are shared between all fish instances in the user's current session and are made persistent across boots.

Another important option is the --export (-x) option, which causes the variable to be exported to the shell's child processes, making it available to subsequent commands executed in the same shell instance. So, using

set -x variable-name variable-value

in the fish per-user configuration file is the equivalent of

export variable-name=variable-value

in ~/.bashrc, unless the set is inside some block, such as a function, in which case

set -gx variable-name variable-value

would be necessary. Using

set -Ux variable-name variable-value

in a fish instance running in a terminal will cause the variable to behave like an environment variable set and exported in ~/.profile.
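As a concrete illustration of the universal scope, the following can be entered in any running fish instance (the variable name is hypothetical):

# Create a universal, exported variable.
set -Ux MY_EDITOR nvim

# It is visible to child processes of every fish instance
# and persists across reboots without any edit to config.fish.
env | grep MY_EDITOR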
That the fish shell is not well suited as a login shell, as it was used in the earlier version of Garuda I tried, is particularly evident in the login process for a graphical environment, where a workaround is needed to pass environment variables between fish and the POSIX shells that the rest of the process expects. The login process in a graphical environment does not use the login command as in the process for virtual terminal console logins described above. Instead, the display manager and greeter take the place of the login command. In Plasma, SDDM performs the tasks performed by login. It sets the initial default PATH environment variable for all users logging in to a graphical environment started by SDDM to /usr/local/bin:/usr/bin:/bin, a value specified in its default configuration file, /usr/lib/sddm/sddm.conf.d/default.conf. It then sets other environment variables for the particular login shell specified for the user in /etc/passwd using /usr/share/sddm/scripts/Xsession, the script executed when starting the desktop session. The logic in the script that does this is shown below.
case $SHELL in
  */bash)
    [ -z "$BASH" ] && exec $SHELL $0 "$@"
    set +o posix
    [ -f /etc/profile ] && . /etc/profile
    if [ -f $HOME/.bash_profile ]; then
      . $HOME/.bash_profile
    elif [ -f $HOME/.bash_login ]; then
      . $HOME/.bash_login
    elif [ -f $HOME/.profile ]; then
      . $HOME/.profile
    fi
    ;;
  */zsh)
    [ -z "$ZSH_NAME" ] && exec $SHELL $0 "$@"
    [ -d /etc/zsh ] && zdir=/etc/zsh || zdir=/etc
    zhome=${ZDOTDIR:-$HOME}
    # zshenv is always sourced automatically.
    [ -f $zdir/zprofile ] && . $zdir/zprofile
    [ -f $zhome/.zprofile ] && . $zhome/.zprofile
    [ -f $zdir/zlogin ] && . $zdir/zlogin
    [ -f $zhome/.zlogin ] && . $zhome/.zlogin
    emulate -R sh
    ;;
  */csh|*/tcsh)
    # [t]cshrc is always sourced automatically.
    # Note that sourcing csh.login after .cshrc is non-standard.
    xsess_tmp=`mktemp /tmp/xsess-env-XXXXXX`
    $SHELL -c "if (-f /etc/csh.login) source /etc/csh.login; if (-f ~/.login) source ~/.login; /bin/sh -c 'export -p'>! $xsess_tmp"
    . $xsess_tmp
    rm -f $xsess_tmp
    ;;
  */fish)
    xsess_tmp=`mktemp /tmp/xsess-env-XXXXXX`
    $SHELL --login -c "/bin/sh -c 'export -p' > $xsess_tmp"
    . $xsess_tmp
    rm -f $xsess_tmp
    ;;
  *) # Plain sh, ksh, and anything we do not know.
    [ -f /etc/profile ] && . /etc/profile
    [ -f $HOME/.profile ] && . $HOME/.profile
    ;;
esac
The first conditional execution block is executed if the user's login shell is Bash: Bash's POSIX mode is turned off (set +o posix), then /etc/profile is sourced, activating the setting and exporting of the variables in that file, after which the user's own login shell variables are activated for the session by the sourcing of the first found of ~/.bash_profile, ~/.bash_login, and ~/.profile.
The penultimate conditional execution block is executed if the user's login shell is fish. In this block, a temporary file is created, and the command
$SHELL --login -c "/bin/sh -c 'export -p' > $xsess_tmp"
is executed, starting a fish login shell that -- after reading its configuration files -- has /bin/sh list the resulting environment variables in POSIX syntax and redirect them to the temporary file, which is then sourced by the Xsession script, ensuring that environment variables set in fish's configuration are also available to the rest of the graphical session.
The choice to keep fish as the default interactive non-login shell in the later version of Garuda may be a benefit for some users who value its convenience features when used on the command line. Convenience features include:
For an essential introduction to Fish use and features, see the Fish Tutorial.
Although, in my use case, I might execute bash in a terminal to temporarily switch to a Bash shell from fish and use only Bash for scripting, two of fish's convenience features have caused me to consider using fish as my default interactive shell in other distributions. These are the impressive autosuggestion feature, which automatically renders and continuously updates -- in grayed-out text -- the most likely completion of the user's entry as the user types a command, requiring only a press of the Right Arrow for the suggestion to be rendered as if the user had typed it; and the tab completion feature, with which a user can start entering a command and then press Tab to display a list of possible completions under the command entry line; pressing Tab cycles through the possible completions, and when the desired completion is displayed in the command, the Enter key can be pressed to execute the command, as normal.
Garuda adds to the friendliness of the fish shell itself by defining, in its per-user configuration file, ~/.config/fish/config.fish, many command aliases intended to make users' interactions with the terminal more convenient and the output more interesting, informative, or beautiful. Unfortunately, the changes to command behavior made by specifying options for certain commands, and in some cases the complete replacement of one command with another, may not be desirable to some users. For example, the ls command is completely replaced by exa, an alternative directory listing command.
## Useful aliases
# Replace ls with exa
alias ls='exa -al --color=always --group-directories-first --icons' # preferred listing
alias la='exa -a --color=always --group-directories-first --icons'  # all files and dirs
alias ll='exa -l --color=always --group-directories-first --icons'  # long format
alias lt='exa -aT --color=always --group-directories-first --icons' # tree listing
alias l.="exa -a | egrep '^\.'"
From my perspective, there doesn't seem to be a real practical benefit to this replacement -- but it's there for users who want it. I initially thought the -T option of exa, which produces a tree output, would be helpful, but its performance is not good in large directories with many nested subdirectories. It also lacks a way to specify the depth of recursion; the tree command, which does have an option to specify depth, is more useful than exa -T. The one benefit, in terms of making the output of a directory listing more interesting, is exa's --icons option, which is specified in the alias definition of ls, causing a glyph representing the file type to be displayed next to the file name in the listing, as shown in the following set of images.
Another important command replacement through aliasing is the replacement of cat with bat, which adds syntax highlighting and Git integration. In addition to these and numerous other aliases, other useful capabilities are added to fish through functions called as appropriate during interactions in the shell. One such function, /usr/share/fish/functions/fish_command_not_found.fish, produces useful output indicating the pacman package that contains a command that is not found, by using pkgfile on the back end.
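pkgfile can also be used directly; a brief sketch, assuming its file database has already been downloaded (with pkgfile --update, run as root) and using an arbitrary command name as the example:

# Ask which repository package provides a command that is not installed.
pkgfile cowsay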
Besides providing convenience through the choice of shell and additions to the default fish configuration, Garuda, in keeping with one of the goals of the distribution, adds elements to make the shell interface beautiful. This is achieved not only through the Konsole color scheme, which looks good and is consistent with the overall Plasma appearance (but is only readable at night), but also by using, in the earlier release, Paleofetch -- a rewrite of Neofetch in C to improve performance -- and then Neofetch itself in the later release. The command prompt is also made more interesting and informative by using Starship, a utility that provides an interesting, colorful, and informative shell prompt, similar to Powerline and Liquid Prompt, but without the Vim integration of Powerline.
Starship is activated in the per-user fish configuration file, ~/.config/fish/config.fish by the lines
## Starship prompt
if status --is-interactive
   source ("/usr/bin/starship" init fish --print-full-init | psub)
end
The prompt, with the default configuration provided by Garuda, displays information on two lines, the first indicating the typical user and host information, the present working directory, and the time the previous command took to execute. If the present working directory is located in a Git repository, it displays the name of the repository -- or, if the present working directory is at some depth below the repository root, a relative path to the root of the repository -- and the name of the branch, if any, as well as a symbolic representation of the state of the repository. The second line begins with a symbolic representation of the exit status of the previous command.
Starship is extremely configurable and all aspects of the default configuration provided by Garuda can be modified, and additional components not enabled in the configuration can be added to the prompt. (To see my configuration of Starship on my new Arch Btrfs/Snapper installation, visit An Arch Linux Installation on a Btrfs Filesystem with Snapper for System Snapshots and Rollbacks.) The configuration, specified in the file ~/.config/starship.toml, is much less complex than Powerline but more complex than Liquid Prompt, due to Starship offering more functionality.
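To give a flavor of the format, the following is a minimal, hypothetical ~/.config/starship.toml sketch -- not Garuda's shipped configuration, and the option names follow the Starship documentation of the time, so they may differ in current versions:

# ~/.config/starship.toml -- hypothetical minimal configuration
add_newline = true              # blank line between prompts

[directory]
truncation_length = 3           # show at most three trailing path components

[cmd_duration]
min_time = 500                  # only show duration for commands over 500 ms

[battery]
disabled = true                 # drop the battery module entirely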
Both Starship and the aliasing of the ls command to exa rely on another component, Nerd Fonts, sets of fonts suitable for terminals and text editors, patched to include glyphs from the Font Awesome, Devicons, and Octicons font sets, among others. The distribution includes the Fantasque Sans Mono Nerd Font, which is set as the default fixed width font in Plasma's System Settings. It would have been useful for the distribution to package more of these fonts and include them in the Chaotic-AUR repository for users who might want a different Nerd Font.
Garuda's shell, terminal, and related customizations may be appealing to some users, but they are not without their issues. One of these is the usability of the color scheme of the Konsole theme chosen for the default profile, Sweetified. At a minimum, terminals make eight standard foreground colors and eight background colors available -- as ANSI escape codes -- to applications running in a terminal to distinguish between certain types of text in the output. Most terminals also support an additional eight foreground and background colors as bright variants of the original eight, and Konsole supports a further eight foreground and background colors as faint variants. Terminal themes map these sets of colors, as ANSI escape codes, to the colors displayed in the terminal. (See Terminal Colors.) In Garuda's default Konsole theme, the mappings are such that there is no variation in the intensities of any of the colors. Also, the same color is assigned to multiple color identifiers, and in cases where different colors are assigned to different identifiers, the colors are very similar. The first image below shows the Sweetified color scheme on the right and the standard Breeze Konsole color scheme; notice the variation in the colors of the Breeze scheme, and the very little variation in the Sweetified scheme. The problem in practice with the default Garuda Konsole theme is evident when, for example, using the Git command
git status
With the default scheme, the coloration of the list of modified files is the same for those with changes not staged for commit as for those with changes staged for commit -- both are red -- but with the Breeze scheme, the former are displayed in red text while the latter are displayed in green text.
Besides the practical issue of the Konsole color scheme, I experienced several issues with the Garuda terminal related customizations. The first was due to a Starship configuration that referenced Starship modules for programming languages and development environments that were not installed on the system, resulting in the errors immediately following the Neofetch informational output, as shown in the first image in the following set. This was corrected in the Garuda installation made with the 210621 ISO release, but in the installation made with the 210406 ISO release, it was necessary to remove the references to the uninstalled languages and environments from the Starship configuration.
The second issue also involved the Starship configuration, in that an invalid keyword was used in the Battery module configuration. Namely, the disabled keyword -- which belongs in the main section for the Starship battery information module, [battery] -- was used in the [battery.display] sections of the configuration, causing errors. Removing the keyword from the [battery.display] sections removed the error.
The third issue involved the informational output produced by either Neofetch or Paleofetch when starting a terminal. At some point in my installation made with the earlier release, I was suddenly confronted with the error shown in the second image below, which indicated that the package necessary for the informational output was not installed. I overwrote the fish configuration in my home directory with /etc/skel/.config/fish/config.fish and also installed Neofetch, as suggested by the error, after which an informational output was again produced, but it included a different ASCII art representation of the distribution logo (see third image below). When I reinstalled with the 210621 release, this particular issue was not present, but the Starship configuration issue was.
Another minor issue related to the terminal customizations is the lack of a better selection of Nerd Fonts on the system by default. Many popular fonts used in terminals and text editors have patched Nerd Font versions, all of them available in the AUR, but Garuda doesn't make these fonts available by default, nor does it include, in its AUR derived Chaotic-AUR repository, any of the Nerd Fonts besides Fantasque Sans Mono.
One of the goals of Garuda Linux is to produce a high performance Linux distribution, a goal that has been accomplished, as evidenced by the Phoronix Test Suite benchmark comparison against Solus performed as part of this review (see Garuda Linux Benchmark Comparison Versus Solus). One of the enhancements that the distribution makes to increase performance is using the Zen kernel (linux-zen), one of the kernels supported by Arch, instead of the default Arch kernel.
According to Liquorix, packagers of the kernel for Debian, it provides "the best configuration ... for desktop, multimedia, and gaming workloads". Also according to Liquorix, the kernel provides enhanced performance by implementing the following features.
- Zen Interactive Tuning: Tunes the kernel for responsiveness at the cost of throughput and power usage.
- MuQSS Process Scheduler: Fair process scheduler for gaming, multimedia, and real-time loads.
- Preemptible tree-based hierarchical RCU: RCU implementation for real-time systems.
- Hard Kernel Preemption: Most aggressive kernel preemption before requiring real-time patches. Guarantees responsive system under high intensity mixed workload scenarios.
- Budget Fair Queue: Proper disk scheduler optimized for desktop usage, high throughput / low latency.
- TCP BBR2 Congestion Control: Fast congestion control, maximizes throughput, guaranteeing higher speeds than Cubic.
- Compressed Swap: Swap storage is compressed with LZ4 using zswap.
- Multigenerational LRU: Alternative LRU algorithm that performs better under high memory pressure and uptimes.
- Mainline LRU patched with le9: When using mainline LRU, cache is protected under high memory pressure at 256 MB and less.
- Minimal Debugging: Minimum number of debug options enabled to increase kernel throughput.
I would venture to guess that the results of the benchmark comparison against Solus were in Garuda's favor primarily due to the use of this kernel. (See the following set of images for kernel related results of the benchmark comparison. For the full set of results see Garuda Linux Benchmark Comparison Versus Solus.)
Unfortunately, as discussed below, use of this kernel may have been a major cause of the extremely short battery life with Garuda Linux -- approximately two hours versus five hours -- compared to my Arch installation.
Another major modification to a typical Arch system, and a difference compared to every other distribution I have used, is Garuda's use of zram (now also used in Fedora 35 by default) which creates a compressed block device in RAM for use as swap space. This is used, as discussed in the Arch Wiki page, Improving Performance, to increase the speed of swap transactions, which has the effect of improving performance if a system often swaps due to memory constraints. As a secondary benefit zram reduces read/write cycles, increasing the longevity of an SSD if swap is on an SSD.
The compressed block device is created and managed by the zram kernel module, which reads the value in /sys/block/zramX/comp_algorithm to set the compression algorithm used to compress the block device, and the value in /sys/block/zramX/disksize to set the size of the block device. Any number of zram block devices can be created, each with their own compression type and size setting, so the X in the above paths identifies a particular zram block, where it can be 0, 1, 2 ... N. /sys/block/zramX/ contains other files used internally by the module. On my Garuda 210621 installation, only one zram device was created equal in size to the amount of physical RAM on the system. The values in the two relevant paths are shown in the listing that follows.
╭─brook@g5 in ~ took 1ms
[🔴] × cat /sys/block/zram0/comp_algorithm
File: /sys/block/zram0/comp_algorithm
lzo lzo-rle lz4 lz4hc 842 [zstd]

╭─brook@g5 in ~ took 47ms
╰─λ cat /sys/block/zram0/disksize
File: /sys/block/zram0/disksize
24893194240

╭─brook@g5 in ~ took 45ms
╰─λ
Garuda installs the package zram-generator which provides the systemd unit generator of the same name that creates the systemd-zram-setup@zramX.service service, the status of which for the single zram device is shown in the next listing. The service reads its distribution or administrator created configuration, if one exists (none does in Garuda; see the man page zram-generator.conf(5)), makes the zram block device available at the appropriate systemd target using the systemd infrastructure, writes values to the paths used by the kernel, and loads the zram kernel module, creating the devices.
╭─brook@g5 in ~ took 15ms
╰─λ sudo systemctl status systemd-zram-setup@zram0.service
● systemd-zram-setup@zram0.service - Create swap on /dev/zram0
     Loaded: loaded (/usr/lib/systemd/system/systemd-zram-setup@.service; static)
    Drop-In: /run/systemd/generator/systemd-zram-setup@zram0.service.d
             └─bindsto-swap.conf
     Active: active (exited) since Wed 2021-08-25 16:23:19 EDT; 3 days ago
       Docs: man:zram-generator(8)
             man:zram-generator.conf(5)
    Process: 548 ExecStart=/usr/lib/systemd/system-generators/zram-generator --setup-device zram0 (code=exited, stat>
   Main PID: 548 (code=exited, status=0/SUCCESS)
        CPU: 21ms

Aug 25 16:23:19 g5-garuda systemd[1]: Starting Create swap on /dev/zram0...
Aug 25 16:23:19 g5-garuda zram-generator[558]: Setting up swapspace version 1, size = 23.2 GiB (24893190144 bytes)
Aug 25 16:23:19 g5-garuda zram-generator[558]: LABEL=zram0, UUID=3d436b7c-bac4-4af0-8a60-e5b13c700e8d
Aug 25 16:23:19 g5-garuda systemd[1]: Finished Create swap on /dev/zram0.
lines 1-15/15 (END)
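Although neither a distribution nor an administrator configuration exists in Garuda, for illustration an administrator override might look like the following (a sketch only; the key names and accepted values vary between zram-generator versions, so check the installed man page before using it):
# /etc/systemd/zram-generator.conf (hypothetical administrator override)
[zram0]
zram-fraction = 0.5          # size the device at half of physical RAM...
max-zram-size = 8192         # ...but cap it at 8 GiB
compression-algorithm = zstd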
Swap is activated by the associated systemd unit dev-zramX.swap. The status of this unit for Garuda with the single zram device, after it is activated as swap, is shown below.
╭─brook@g5 in ~
╰─λ sudo systemctl status dev-zram0.swap
[sudo] password for brook:
● dev-zram0.swap - Compressed Swap on /dev/zram0
     Loaded: loaded (/run/systemd/generator/dev-zram0.swap; generated)
     Active: active since Wed 2021-08-25 16:23:19 EDT; 3 days ago
       What: /dev/zram0
       Docs: man:zram-generator(8)
             man:zram-generator.conf(5)
      Tasks: 0 (limit: 28430)
     Memory: 316.0K
        CPU: 7ms
     CGroup: /system.slice/dev-zram0.swap

Aug 25 16:23:19 g5-garuda systemd[1]: Activating swap Compressed Swap on /dev/zram0...
Aug 25 16:23:19 g5-garuda systemd[1]: Activated swap Compressed Swap on /dev/zram0.

╭─brook@g5 in ~ took 3s
╰─λ
Although the zram system may be beneficial for certain low memory systems which may swap frequently, it is unnecessary in systems with a large amount of RAM for the workload. In my Dell G5 with 24 GB of RAM, I rarely have less than 12 GB of RAM available, even taking into account many gigabytes occupied as buffer/cache. This in itself doesn't really matter; the real issue for me is that a zram device cannot be used as a resume device for hibernating the system, and the Garuda installer doesn't allow specifying a traditional swap partition that could serve as the hibernation target while still letting the zram device be used as swap space during normal operation. In one of the supplements to this review, Garuda Linux Review [KDE Dragonized (D460nized),210621] Supplement: Fixes and Enhancements, I describe the process I used to add a swap partition and enable hibernation in Garuda.
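The full procedure is in the supplement; in outline it follows the standard Arch approach to hibernation (the partition name and UUID placeholder below are hypothetical):
# create and activate a traditional swap partition to serve as the resume device
sudo mkswap /dev/nvme0n1p5
sudo swapon /dev/nvme0n1p5
# add the partition to /etc/fstab, add the 'resume' hook to HOOKS in
# /etc/mkinitcpio.conf, and regenerate the initramfs
sudo mkinitcpio -P
# finally, add resume=UUID=<swap-partition-uuid> to GRUB_CMDLINE_LINUX_DEFAULT in
# /etc/default/grub and regenerate the GRUB configuration
sudo grub-mkconfig -o /boot/grub/grub.cfg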
All of the performance enhancements included in Garuda Linux, including the Zen Kernel and zram, are among those suggested in the Arch Wiki article Improving performance, which covers a broad range of options for improving all aspects of system performance.
One of the performance enhancements made by Garuda is to favor, by default, performance at all times in a set of configuration items that govern the trade-off between power use and performance for the CPU, the currently active graphics card, and a rotational disk, if present. This is done through the default installation of the performance-tweaks package, which places the configuration files shown in the following listing in /usr/lib/tmpfiles.d/.
╭─brook@g5 in ~ took 190ms
╰─λ sudo pacman -Ql performance-tweaks
[sudo] password for brook:
performance-tweaks /usr/
performance-tweaks /usr/lib/
performance-tweaks /usr/lib/tmpfiles.d/
performance-tweaks /usr/lib/tmpfiles.d/cpu-governor.conf
performance-tweaks /usr/lib/tmpfiles.d/energy_performance_preference.conf
performance-tweaks /usr/lib/tmpfiles.d/pcie_aspm_performance.conf
performance-tweaks /usr/lib/tmpfiles.d/power_dpm_force_performance_level.conf
performance-tweaks /usr/lib/tmpfiles.d/power_dpm_state.conf
performance-tweaks /usr/lib/udev/
performance-tweaks /usr/lib/udev/rules.d/
performance-tweaks /usr/lib/udev/rules.d/30-amdgpu-pm.rules
performance-tweaks /usr/lib/udev/rules.d/30-radeon-pm.rules
performance-tweaks /usr/lib/udev/rules.d/69-hdparm.rules

╭─brook@g5 in ~ took 3s
╰─λ
The systemd services systemd-tmpfiles-setup.service and systemd-tmpfiles-setup-dev.service, and the associated command systemd-tmpfiles, cause the actions specified in the configuration files, according to the format described in man tmpfiles.d, to be performed.
For example, the file /usr/lib/tmpfiles.d/cpu-governor.conf contains the single line
w /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor - - - - performance
where the w specifies the action "write to a file, replacing existing contents", the path following the w specifies the file to which to write, and the four "-" fields each represent an item not used in this case: the octal permissions of a created file, the owning user, the owning group, and an age after which systemd-tmpfiles-clean.service and its associated program systemd-tmpfiles would remove the file. The final field, performance, is the argument that is written to the file.
Examining the contents of the configuration files, as depicted in the following image, we see that the CPU frequency governor, PCIe ASPM (Active State Power Management), and video card energy settings will always be set to "high" or "performance" upon boot. These settings make the system's maximum performance available to the user at the expense of battery life. That these performance tweaks are set by default, without regard to whether Garuda is installed on a laptop or a desktop, is one of the biggest disappointments of the distribution. Newer versions of the Garuda Assistant, however, allow easily disabling the performance tweaks and enabling power saving optimizations (discussed later in this article).
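Because files in /etc/tmpfiles.d/ with the same name override those in /usr/lib/tmpfiles.d/, a laptop user could, for example, neutralize the CPU governor tweak without removing the package (a sketch; the governor must be one the CPU frequency driver actually supports):
# /etc/tmpfiles.d/cpu-governor.conf -- overrides the performance-tweaks file of the same name
w /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor - - - - powersave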
Garuda also makes use of several utilities and daemons available in the AUR to improve system responsiveness. A description of these follows.
According to this program's GitHub page
Ananicy (ANother Auto NICe daemon) — is a shell daemon created to manage processes' IO and CPU priorities, with community-driven set of rules for popular applications (anyone may add his own rule via github's pull request mechanism). It's mainly for desktop usage.
Process priority is related to management of CPU resource allocation to processes, known as process scheduling. As processes are started the kernel assigns a time slice of CPU use to each process in the order the processes are started. If a process runs past its allotted time slice, it is placed in a wait queue based on its priority.
Each process has, as one of its attributes, a nice value, which is a measure of its priority relative to other processes. Possible nice values for a process are between -20 and +19, where the lowest nice value has the highest priority. The default nice value for a process is 0. Generally, only privileged processes can assume a negative nice value or assign a negative nice value to other processes. Unprivileged processes can increase their own nice value, i.e., lower their priority, relative to other processes.
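In concrete terms, priorities can be adjusted manually with the standard tools (the job and PID below are made up):
# start a long-running job with a lowered priority (nice value 10)
nice -n 10 tar -cJf backup.tar.xz ~/Documents
# lower the priority of an already running process with PID 12345 even further
renice 15 -p 12345
# give a process the idle IO scheduling class (see man ionice)
ionice -c 3 -p 12345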
Users can modify the nice value of a process by invoking its program with the nice command, as in the example above. Ananicy automates the setting of non-default nice values according to user contributed rules stored in files with the extension .rules in /etc/ananicy.d/ and in its configuration file, /etc/ananicy.d/ananicy.conf. Generally, rules identify a process and specify the change in the nice value from the default. Because IO priority, in addition to CPU time priority, depends on the nice value, the rules can also specify IO priority as one of the IO scheduling priority classes mentioned in man ionice: idle, best-effort, or realtime. The rules mechanism allows the grouping of processes by type, where the members of a type can be defined in any of the .rules files, such that nice values can be assigned to a group of processes. For example, the processes named mkinitcpio, makepkg, and pacman are assigned to the group "BG_CPUIO" in the file /etc/ananicy.d/00-default/archlinux.rules, shown below.
# Some rules for Arch Linux specific tools
{ "name": "mkinitcpio", "type": "BG_CPUIO" }
{ "name": "makepkg", "type": "BG_CPUIO" }
{ "name": "pacman", "type": "BG_CPUIO" }
The following listing of /etc/ananicy.d/00-types.types shows the type definitions that set the nice values and IO priorities assigned to groups of processes.
# Type: Game
# Use more CPU time if possible
# Games do not always need more IO, but in most cases can be hungry for CPU
{ "type": "Game", "nice": -5, "ioclass": "best-effort" }

# Type: Player Audio/Video
# Try to add more CPU power to decrease latency/lags
# Try to add real time io for avoiding lags
{ "type": "Player-Audio", "nice": -3, "ioclass": "realtime" }
{ "type": "Player-Video", "nice": -3, "ioclass": "realtime" }

# Must have more CPU/IO time, but not so much as other apps
{ "type": "Image-View", "nice": -3 }
{ "type": "Doc-View", "nice": -3 }

# Type: Low Latency Realtime Apps
# In general case not so heavy, but must not lag
{ "type": "LowLatency_RT", "nice": -10, "ioclass": "realtime" }

# Type: BackGround CPU/IO Load
# Background CPU/IO it's needed, but it must be as silent as possible
{ "type": "BG_CPUIO", "nice": 19, "ioclass": "idle", "sched": "idle", "cgroup": "cpu80" }

# Type: Heavy CPU Load
# It must work fast enough but must not create so much noise
{ "type": "Heavy_CPU", "nice": 19, "ioclass": "best-effort", "ionice": 7, "cgroup": "cpu90" }

# Type: Chat
{ "type": "Chat", "nice": -1, "ioclass": "best-effort", "ionice": 7 }

# Type: Adj OOM Score
{ "type": "OOM_KILL", "oom_score_adj": 1000 }
{ "type": "OOM_NO_KILL", "oom_score_adj": -1000 }
In the file, the group "BG_CPUIO", thus the processes assigned to the group, is assigned a nice value of 19. The listing above also shows that the Garuda rules prioritize processes defined in the groups "Player-Audio", "Game", and some others by decreasing their nice values, and adjusting their IO priority. In the case of the Player-Audio group, the IO priority is assigned to the highest priority class of "realtime".
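Adding rules follows the same format; for example, a hypothetical user-created file /etc/ananicy.d/99-custom.rules could assign already defined types, or explicit values, to additional programs:
# /etc/ananicy.d/99-custom.rules (hypothetical user additions)
{ "name": "ffmpeg", "type": "Heavy_CPU" }
{ "name": "baloo_file", "nice": 19, "ioclass": "idle" }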
This daemon, memavaild, monitors memory usage in order to keep a certain amount of memory always available by swapping out processes when nearly all of the available memory has been used. According to the program's GitHub page:
the control groups specified in the config (user.slice and system.slice) are swapped out when MemAvailable is low by reducing memory.high (values change dynamically). memavaild tries to keep about 3% available memory.
Its configuration is located at /etc/memavaild.conf, with a default configuration at /usr/share/memavaild/memavaild.conf. Some of the most important parameters in the configuration specify the amount of available memory at which processes begin to be swapped out and the amount of available memory at which swapping stops. Another configuration item is the definition of the memory characteristics of a cgroup, which affect the swapping.
This is another daemon taken from the AUR to improve memory related performance and prevent a state where the system becomes unresponsive under low memory conditions. The project's GitHub page states
prelockd is a daemon that locks memory mapped executables and shared libraries in memory to improve system responsiveness under low-memory conditions.
The configuration for the daemon at /etc/prelockd.conf identifies the executables and shared objects that should be locked in memory.
The following listing shows the services running in the user session's session.slice cgroup:
╭─brook@g5 in ~ took 13s
╰─λ systemd-cgls /user.slice/user-1000.slice/user@1000.service/session.slice
Control group /user.slice/user-1000.slice/user@1000.service/session.slice:
├─pipewire-pulse.service
│ └─5983 /usr/bin/pipewire-pulse
├─pipewire-media-session.service
│ └─5982 /usr/bin/pipewire-media-session
├─at-spi-dbus-bus.service
│ ├─24852 /usr/lib/at-spi-bus-launcher
│ ├─24858 /usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2/accessibility.conf --nofork --print-address>
│ └─24860 /usr/lib/at-spi2-registryd --use-gnome-session
└─pipewire.service
  └─5981 /usr/bin/pipewire
The critical processes specified by name are all of those that involve the user interface, which for Plasma are: plasmashell, plasma-desktop, kwin_wayland, kwin_x11, kwin, kded4, knotify4, kded5, kdeinit5.
nohang is a package that provides a daemon to prevent an OOM (Out of Memory) condition, thereby preserving system responsiveness in low memory situations; it was created by the same developer as prelockd and memavaild. From the project's GitHub page,
OOM conditions may cause freezes, livelocks, drop caches and processes to be killed (via sending SIGKILL) instead of trying to terminate them correctly (via sending SIGTERM or takes other corrective action). Some applications may crash if it's impossible to allocate memory.

The daemon, provided as the systemd service nohang.service for servers or nohang-desktop.service for desktops, terminates programs gracefully to prevent an OOM condition, sending a SIGTERM signal to processes and sending SIGKILL only as a last resort. Three of the most interesting configuration items, specified in the daemon's configuration file (/etc/nohang/nohang-desktop.conf for desktops and /etc/nohang/nohang.conf for servers), are:
Garuda Welcome provides a central hub for accessing various Garuda developed system administration and configuration utilities, third party utilities, and links to various support resources, contact channels, and the online services provided by Garuda. This is a very nice looking tool, and if all the administration components worked as intended, it would be a very useful one. The individual components are not full-fledged GUI applications; when activated by the GUI controls they actually open a visible Alacritty terminal emulator window (as shown in the sixth image of the following set of screenshots) to execute a single CLI command, even if the particular configuration task should perform some follow-up actions. Compared to YaST, and even to a less comprehensive tool such as Mageia's Control Center, it lacks maturity and sophistication below the surface.
More importantly, some of its components, as discussed below, do not work correctly. The worst example of this is the Garuda Boot Options component, which resulted in a misconfigured GRUB when making the simplest modification of specifying a different default boot menu item. Despite its current flaws, the tool as a whole, together with the Garuda Assistant, has great potential for typical users who do not need the advanced enterprise capabilities of a tool like YaST.
Garuda Assistant is one of the components of Garuda Welcome, which like the other components can also be launched independently outside of Garuda Welcome as well as from within it. Its intent is to allow users to easily maintain and configure their systems without using a terminal directly, by running the appropriate CLI commands, selected through the GUI's elements, in a background terminal. It organizes its capabilities in a series of seven tabs, each one containing related controls, as shown in the following set of screenshots.
- systemd-analyze blame, which lists units in descending order of the time it took them to start, and
- systemd-analyze critical-chain, which lists the hierarchy of the critical chain of units, the time each unit in the chain became active, and how long it took to start.
This particular tool seems to be under very heavy active development. Since installing Garuda from the 210621 ISO, two new tabs have been added, and one of the tabs has been split into two separate tabs. Additional items have also been added to each tab. Other very recent additions to Garuda System Maintenance, based on the names in the tools' notifications and window titles, are a hotfix capability and an announcement capability, both shown in the set of images below. The hotfix tool displays a pop-up dialog informing the user that a hotfix is available and asking whether it should be applied (Image 3, below). The tool doesn't actually work; when the user clicks "No", the tool doesn't accept the response, seemingly proceeding to apply the hotfix as indicated by the notification pop-ups shown in Image 4, although it doesn't really apply the hotfix. When the user clicks "Yes" the same notifications appear, but again, nothing is applied.
More impressive is the announcement notification, shown in the first image below. It provides useful warnings and notices to users when an intervention is necessary. The notification includes a button to open a Garuda forum post elaborating on the subject of the announcement.
The Garuda Settings Manager component seems to be derived from the Manjaro Settings Manager.
The Garuda Gamer component is impressive in terms of the number of tools in its two tabs. It seems to have a comprehensive set of tools that will meet the needs of those who want to game on Linux, the most important target user of Garuda Linux.
The Garuda Network Assistant component of Garuda Welcome has tools related to networking organized in three tabs: "Status", "Linux drivers", and "Windows drivers". The "Linux drivers" tab (shown in the second image, below) is intended to show the network hardware on the system and the driver kernel modules associated with each hardware item, as well as to manage the driver kernel modules. While the tool accurately displayed the network hardware, it was not able to list the associated drivers, and since listing the modules is necessary to use the management tools, it was not able to perform any of the management tasks.
I found the "Status" page interesting because among its many tools is one to create a WiFi hotspot (shown in the third image, below). When I tried this tool, which is actually Linux WiFi Hotspot embedded into Garuda Network Assistant, it seemed to successfully create a hotspot on the 2.4 GHz WiFi radio while there was a WiFi internet connection on the 5 GHz WiFi radio. The Plasma network system tray applet also has a similar capability and very simply creates a hotspot, but it does not have the capability to create a hotspot on one band while another band is providing an internet connection to the hotspot host, as the Garuda tool seems to have. Unfortunately, it didn't work in this way, as I couldn't see the SSID from a different device, nor could I connect by manually entering the SSID I chose in the tool.
The Garuda Boot Options component was one I was happy to see because Garuda's GRUB configuration automatically sets the default selection in the GRUB boot menu, "Garuda", to load the Zen kernel, but I prefer to almost always load the default Arch kernel, which I installed at some point to save battery life. I specified my preferred selection in the tool by changing the "Boot to" dropdown menu selection from "Garuda on linux-zen" to "Garuda on linux" (see the second image in the following set of screenshots). The result was not as expected and not as it should have been; it did not make my specified choice the default, keeping the previous default, and worse, it added a duplicate set of GRUB menu items for the Garuda installation (see the fourth image in the following set of screenshots, and compare to the image of the GRUB boot menu shown in one of the screenshots in the Introduction section, above).
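Until the tool is fixed, the same change can be made manually by setting the default in /etc/default/grub and regenerating the configuration (a sketch; the submenu and entry reference passed to grub-set-default is illustrative, and a full entry title can be used instead):
# /etc/default/grub (excerpt)
GRUB_DEFAULT=saved          # boot whatever entry grub-set-default records
GRUB_SAVEDEFAULT=true

# then, in a terminal:
sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo grub-set-default '1>2'   # submenu index > entry index within the submenu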
One of the most interesting aspects of Garuda in terms of default applications intended for the end-user -- not system related software -- is that the set of software installed on the system is essentially customizable. Upon first boot, Garuda Welcome's Setup Assistant, after various prompts regarding whether to update and whether to upgrade to an ultimate version, prompts the user whether to install additional packages. This tool not only allows users to relatively quickly add software, but aids in the discovery of programs of which users may not be aware, displaying available choices by category. While very useful, I didn't like the workflow of the tool in that it required going through many steps. For each category of software it was necessary to first answer a prompt asking whether to install software from that category, then check boxes in a dialog box for the category. One dialog box with tabs for each category would have been better. (See Garuda Linux Review [KDE Dragonized (D460nized),210621] Supplement: Installation for a set of screenshots from the first boot Setup Assistant process.)
More interesting is the additional repository, named Chaotic-AUR, that Garuda provides to supplement the Arch repositories and from which Garuda's own utilities are installed. It also includes many packages from the AUR as pre-built binary packages, saving users the time that would normally be spent compiling or converting non-Arch-native packages to Arch's format when installing packages from the AUR.
Unfortunately, the number of packages installed from this repository when versions exist in the normal Arch repositories is concerning, as they may not be of the same quality. Also, many packages that affect how the core system behaves are installed from this repository, with the ultimate source of the packages being the AUR. This is particularly concerning because, in my experience, and in the experience of long-time Arch users on the internet, when a package affecting the core system is installed from the AUR the system may break, sometimes because the AUR packages are unmaintained or generally not of sufficient quality.
Another issue I have related to this repository and how it is used is that, besides serving the utilities developed by Garuda, it contains many meta packages that are used to install other packages as dependencies. The meta packages may be a convenience that allows installation of all packages related to a certain other package in order to ensure a particular capability is enabled in the installed system, but sometimes they cause the installation of packages not strictly necessary for the functionality desired by the user.
An example of these meta packages is networkmanager-support, which causes many Network Manager plugins to be installed as well as neard, a package for supporting NFC devices -- unnecessary on both of my laptops because they don't have NFC hardware. Another example is bluetooth-support, which as a meta package causes the installation of the packages necessary for Bluetooth functionality, but also pulls in a package from the AUR (as a pre-built package from Chaotic-AUR) -- a possible source of instability, as mentioned above -- bluetooth-autoconnect, which is not actually necessary for Bluetooth capability.
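What a given meta package drags in, and which repository a package comes from, can be checked with pacman and, if pacman-contrib is installed, paclist (the package name below is the one mentioned above):
# show the repository a package belongs to and the dependencies it pulls in
pacman -Si bluetooth-support | grep -E 'Repository|Depends On'
# list every installed package that came from the chaotic-aur repository
paclist chaotic-aur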
The unnecessary packages installed by the meta packages are representative of some of the compromises between installing an Arch system the Arch way, resulting in a system that the user knows well with only the components they want, contributing to a simple and light system, and the ease of installing an Arch based system where everything is pre-configured by the distribution.
GUI package management in the 210406 release was provided by Pamac, installed by default. It wasn't installed by default in the 210621 release, but when going through the Finalize Installation phase of installation of that release (see Garuda Linux Review [KDE Dragonized (D460nized),210621] Supplement: Installation), among the categories of additional software offered for installation was one for package managers and software stores. This particular selection dialog allows users to choose any number of GUI package managers for inclusion in the system.
CLI package management is of course through pacman. But the Garuda developers enable certain options not found in the default pacman configuration, among them the ability to perform parallel downloads of packages. They also add the packages which provide the necessary hooks for creating new Btrfs snapshots, which are then added to the GRUB menu, as mentioned above. CLI package management can also be performed with Paru, a pacman wrapper that also supports the AUR. This great convenience for installing packages from the AUR is a rewrite of Yay, the previously dominant AUR helper.
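Paru uses the same command syntax as pacman; for example, paru -S <package> installs a package from the configured repositories or, if it exists only in the AUR, builds it from there. The parallel download option, for its part, corresponds to a single line in /etc/pacman.conf (the value shown is illustrative, not necessarily what Garuda ships):
# /etc/pacman.conf (excerpt)
[options]
ParallelDownloads = 5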
Garuda also includes many aliases in the fish user configuration to make CLI package management more convenient for users who take the time to study the many configured aliases. The commands that alias package management commands are shown in the following listing.
╭─brook@g5 in ~ took 7s
╰─λ cat ~/.config/fish/config.fish | grep pamac
alias aup="pamac upgrade --aur"

╭─brook@g5 in ~ took 45ms
╰─λ cat ~/.config/fish/config.fish | grep pacman
alias fixpacman="sudo rm /var/lib/pacman/db.lck"
alias rmpkg="sudo pacman -Rdd"
alias upd='sudo reflector --latest 5 --age 2 --fastest 5 --protocol https --sort rate --save /etc/pacman.d/mirrorlist && cat /etc/pacman.d/mirrorlist && sudo pacman -Syu && fish_update_completions && sudo updatedb'
alias gitpkg='pacman -Q | grep -i "\-git" | wc -l' # List amount of -git packages
alias mirror="sudo reflector -f 30 -l 30 --number 10 --verbose --save /etc/pacman.d/mirrorlist"
alias mirrord="sudo reflector --latest 50 --number 20 --sort delay --save /etc/pacman.d/mirrorlist"
alias mirrors="sudo reflector --latest 50 --number 20 --sort score --save /etc/pacman.d/mirrorlist"
alias mirrora="sudo reflector --latest 50 --number 20 --sort age --save /etc/pacman.d/mirrorlist"
alias apt='man pacman'
alias apt-get='man pacman'
alias cleanup='sudo pacman -Rns (pacman -Qtdq)'

╭─brook@g5 in ~ took 44ms
╰─λ
Garuda provides some applications and tools to enhance users' privacy, as evidenced by the default browser provided by the distribution, as well as by some of its extensions and settings.
The distribution's default browser is Firedragon, forked from Librewolf (itself a fork of Firefox) by dr460nized, one of the developers of Garuda -- and ostensibly, based on the name of the distribution edition, the primary developer of Garuda's Plasma edition. Librewolf removes telemetry, adds privacy conscious search engines, and includes uBlock Origin by default, among other features.
Among the privacy enhancing and other related feature differences between Firedragon and Librewolf, as listed on its GitHub page, are:
Other differences from Librewolf, also as stated on its GitHub page are:
Whoogle is a self hosted Google search proxy that can be installed locally or on a remote server. Its privacy related features, which are available whether it is installed locally or on a remote server, as stated on the project's announcement on Reddit and on its GitHub page, include:
When installed on a remote server that processes the search requests as a proxy, the IP address of the computer where the search request originated is not tracked. If installed on the local computer, in which case only the privacy features listed above would be available, IP tracking would still occur -- unless the browser sends requests through a VPN, Tor, or a similar network configuration.
The Garuda distribution operates a Whoogle instance on its own servers. Garuda's default browser, Firedragon, is configured to use it as the default search provider.
Garuda also hosts an instance of searx alongside Whoogle, and it is one of the available search engine options in Firedragon. Like Whoogle, searx is a metasearch engine that relays search requests to search engines. Also like Whoogle, it respects its users' privacy by NOT:
But unlike Whoogle, it can relay requests to as many as seventy search engines, not just Google. Also like Whoogle, it is best installed on a remote server to get the most privacy benefits, but it can also be installed on a local computer. I installed it on my first installation of Garuda 210406 on my secondary laptop, following the Step by step installation instructions on the project's home page (except using the package from the Arch repositories instead of cloning the Git repository). It was an impressive tool, not only in its privacy aspects but also in the search results it produced. Unfortunately, the configuration is somewhat complicated, and I didn't redeploy it after having to reinstall Garuda 210406 or after installing Garuda 210621 on my primary laptop.
In addition to the privacy related tools mentioned above, Garuda provides several services, listed below, for user convenience, built from self hosted cloud applications.
Garuda Cloud is a NextCloud instance hosted by the distribution that provides a small amount of cloud storage to users. One of the developers explains its purpose in a Garuda Forum post:
It is a self-hosted instance of NextCloud which can be used for a lot of things actually, I'm using it for syncing contacts, calendar and dotfiles for example. You get 250mb of storage which should be sufficient for these things.. It's meant as a test for us to find out how many people actually want to use this kind of service to maybe expand it in the future if there is demand - but its also serving as an example for you to find out how open source alternatives to proprietary ones work for you. E.g, one possible setup is connecting davx5 on android with the NextCloud to serve as alternative to Google contacts/cal sync.
Bitwarden is a multi-platform password manager that provides many additional cloud powered features, such as information sharing among different Bitwarden clients. Bitwarden Inc. can provide hosting for the services for a fee, but allows organizations to self host. Garuda hosts an instance to provide the program's services to its users.
Garuda specific documentation is provided in the form of a wiki accessible from a link in Garuda Welcome and also from an obscure menu item on the main Garuda web page. Currently, the documentation is very minimal, only providing information on the most essential of topics, one of which is on performing a system rollback. Perhaps, as the distribution becomes more mature, the documentation will become more comprehensive, covering Garuda's own tools at a level that even includes background on its underlying operations.
Support for users on the forum is very good. Technical issues presented by users seem to be addressed quickly, with solutions provided. Less importantly, non-technical issues, such as the request for a light theme, are also addressed, but some users will not be satisfied with the answers.
The method used by Garuda to enable this seems to be compiled in -- as is the case with the Firefox distributed by openSUSE -- since the application's .desktop file is not modified to execute the browser with the environment variable GTK_USE_PORTAL set. But it still apparently relies on two packages, xdg-desktop-portal and xdg-desktop-portal-kde, which are preinstalled. These packages are also the basis of a method to enable the native file dialog in GTK applications, which was discussed in Enabling a Native File Dialog for Firefox on Plasma Desktop.
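For comparison, the usual manual method on distributions that do not compile this behavior in is to launch the browser with the portal variable set, either ad hoc or via the .desktop file (a sketch; the binary path is illustrative):
# run once from a terminal
env GTK_USE_PORTAL=1 firedragon
# or make it permanent by editing the Exec= line of the application's .desktop file:
# Exec=env GTK_USE_PORTAL=1 /usr/lib/firedragon/firedragon %u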