HeresMyAccount: Actually that's a really good point, but it also seems to me like if a few distributions took basically all of the best parts of all the different ones, but with an easy way to customize them into something seeming more like whichever one you want, in terms of how it looks and works, and what you can do and how you do it, etc., then wouldn't that be the best thing?
Yup, but in practice the larger the project, the more there's going to be politics and bullshit involved. And "too many cooks", if that phenomenon makes any sense to you (https://en.wikipedia.org/wiki/Design_by_committee). That means both a lack of vision and a lack of sense of direction, but also (at the same time) people with too much political power pushing things further in their direction while making excuses why others' ideas cannot be accommodated.

The end result is a steady decline towards mediocrity, which seems to be the destiny of every large project that has a significant userbase of non-technical users. Likewise, arguments are always going to go "most users this, most users that, so we can't/shouldn't/won't blablabla for you." An alternative formulation is the passive-aggressive "we're happy to include this feature if you do the work", where it's rarely mentioned that they constantly break your work and just generally make it damn hard to do.

It's much easier to realize an alternative vision of how things should be when the project has a strong leader and a small enough following. Alternatively, a very community-driven approach (without strong leaders) works as long as the project is small enough that it doesn't turn into total chaos. But Linux distros aren't born in a vacuum and as you've seen, most of them use tools and applications common to each other. As long as the developers of this shared software respect the diversity of the ecosystem, all is well... but there have been some rather sad developments. I'm not going into it right now though.
Post edited October 26, 2020 by clarry
HeresMyAccount: I don't know what putting an exclamation point before a word would do - isn't it like in programming where it means "not", so it would exclude search results that find it? But if that were the case then I don't see why you'd have used it, so I really don't know what it does.
clarry: DuckDuckGo has search shortcuts for thousands of sites. These all start with a bang. !debman (also !dman) is the bang for manpages.debian.org search. !we (also just !w) is for Wikipedia (English). !gog for game search on GOG. And so on. https://duckduckgo.com/bang
There's also:
!g for Google
!ddg for DuckDuckGo (can't think of a use for this, but it's there)
HeresMyAccount: Yeah, sadly I realize that you can't count 100% on one distribution having the same things as another one, even if one evolved from the other. I do like Linux a lot but that seems to be a significant disadvantage and potential inconvenience.
clarry: It's also a major advantage. If anything, I lament the fact that mainstream distros are becoming more and more alike while distros that offer something unique seem to become more and more niche. It's becoming an uphill fight to be different, especially when we're seeing more and more developers who think in a very polarizing "our way or the highway" manner. I have to say the community was much more appreciative of differences and choice 15 years ago.
I happen to like distros that differ from the mainstream distros at a fundamental level. For example, here are some of the more interesting ones:
* Tiny Core Linux: Runs entirely from the initramfs, and the packages are squashfs images that are mounted when needed. Does not use systemd. Notable for being small while still having a GUI.
* Alpine Linux: Uses musl instead of glibc. Does not use systemd (which I believe only supports glibc anyway); openrc is used instead. This results in a smaller system that's often used for container environments (think Docker, which is like a chroot but fancier), and is also well-suited to compiling statically linked executables.
* Gentoo Linux: A source-based distribution. This allows you to control compilation options, and as a result it is more customizable. You can choose between openrc and systemd here (openrc was developed for Gentoo, I believe). The biggest downside is that it takes longer to install or update software because it has to be compiled, with a few packages (like chromium) being particularly bad about this.
* buildroot: This tool allows a custom Linux system to be built from scratch. You can customize various low level aspects of the system, like C library, device management, and init system (systemd is an option, but actually isn't the preferred choice here). The resulting system can be quite small, suitable for devices with limited RAM and storage, but it will not include a compiler and is typically not upgradeable. (Interestingly enough, this might serve the usecase that HeresMyAccount is looking for, namely running a specific program, but I would consider this to be a more advanced task, so I wouldn't recommend it at this point.)

HeresMyAccount: Actually that's a really good point, but it also seems to me like if a few distributions took basically all of the best parts of all the different ones, but with an easy way to customize them into something seeming more like whichever one you want, in terms of how it looks and works, and what you can do and how you do it, etc., then wouldn't that be the best thing? I mean, I guess there might be circumstances when too many things conflict and it can't all be available at once, at least not in a convenient and compatible way, but it seems like with enough forethought, if enough people were to work together to make it happen, something like that could pretty much exist, couldn't it? Then you wouldn't have to worry about, "Does this one do what I want, or should I go with that one instead? But if I go with that one then I won't be able to easily do something else that I also want to do...", and so on.
Well, if your host distribution doesn't let you do what you want, you could use a VM or container (such as a chroot or Docker).

Also, Bedrock Linux exists, though I haven't tried it.
Post edited October 26, 2020 by dtgreene
Since the topic is "Best kind of Linux?"... I'd want to use a 64bit Linux on my Raspberry Pi4, and I bumped into a YouTube video claiming that the very best two Linuxes (for PCs) are Pop!_OS and Manjaro. (Pop!_OS doesn't really interest me as it is apparently yet another Ubuntu derivative, just like Linux Mint which I already use.)

I also recall it being mentioned that Manjaro is indeed one 64bit Linux option for the Raspberry Pi, so I decided to install it in VirtualBox (on my Windows PC) to see how it feels, and whether I'd want to replace the 32bit Raspbian with it. I was interested also because I think it is my first try with an Arch-based Linux, unless I have accidentally operated some such Linux at my previous work. (I first thought I had used Manjaro over a decade ago or so, but that was actually Mandrake/Mandriva, which is apparently from the Red Hat family and not related to Arch...)

Anyway, for basic usage, so far it doesn't really feel different from e.g. Linux Mint (both use XFCE). Usually the first hurdle for me with a totally new Linux distro (one not based on Debian/Ubuntu or Red Hat) is "so how do I run the system update on this?". Oh well, on Manjaro the package manager is "pacman", so I read the man pages to find out how to get everything updated (apparently the correct command is "sudo pacman -Syu").
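For reference, a rough breakdown of what that command does, plus a few other everyday pacman operations (assuming an Arch-based system like Manjaro; the package name used below is just an example):

```shell
# Full system update:
#   -S  sync packages from the configured repositories
#   -y  refresh the local package databases first
#   -u  upgrade every installed package that is out of date
sudo pacman -Syu

# Other common operations (package name is just an example):
pacman -Ss firefox        # search the repositories
sudo pacman -S firefox    # install a package
pacman -Qi firefox        # show info about an installed package
sudo pacman -Rs firefox   # remove a package and its unneeded dependencies
```

Coming from apt, the mapping is roughly `apt update && apt upgrade` → `pacman -Syu`, `apt install` → `pacman -S`, `apt remove` → `pacman -R`.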

Not sure what is supposed to make Manjaro so much better than e.g. Mint, but opinions are like a-holes etc. Just yesterday I happened to also run into a discussion where someone was wishing that Manjaro would adopt e.g. "Timeshift" (the automatic "backup" or snapshot utility) from Linux Mint, so there's that.

Anyway, if the Raspberry Pi version of Manjaro is similar, I guess it is fine. Then again, I also read that the 64bit Raspbian OS is apparently in alpha now, so I'm not sure whether I should wait for that instead, since it is the official OS for the Raspberry Pi. I'm just wondering whether it supports the Raspberry hardware better than Manjaro, or has more Raspberry software (including games, console emulators etc.) than Manjaro does.

The things I'd want the Linux distribution to have are:
1. openfortivpn (a VPN client that I need for my work)
2. Remmina (for Remote Desktop)
3. Skype
4. MS Teams

Linux Mint has all four.
32bit Raspbian has the first two but not 3 or 4, I presume because it is 32bit.
Manjaro x86 seems to have the first three, but unsure whether MS Teams works on Manjaro, especially the ARM (Raspberry Pi) Manjaro.

The nice thing also is that nowadays (with a firmware upgrade) it seems to be possible to boot the Raspberry Pi4 completely from a USB HDD or memory stick (without an SD card), so I guess I could easily try out or even run different Linux distributions side by side, just switching which HDD or memory stick is connected to the USB.
Post edited October 26, 2020 by timppu
One thing that interests me on Manjaro though is its rolling update model, so now I am trying to figure out what, if any, drawbacks there are to rolling updates compared to "standard releases". I think Debian uses rolling updates too?

I just... well, when I considered updating my Linux Mint 19.3 to 20.x with a release upgrade, all official instructions seem to warn "Do you really want to do it? Don't do it! Something might become broken!". I guess it's a similar case to why I'd rather not update e.g. Windows 7 to 10, but clean-install 10 from scratch. It just seems more idiot-proof that way.

Also in the past at work, I once did a do-release-upgrade from Ubuntu 14 to 16 (server), and ended up with a non-booting system at a grub rescue prompt. Fortunately I had a fresh snapshot of that server, so I went back and after some reading and trying, I figured out two possible fixes to the problem:

1. Before do-release-upgrade, replace grub2 with (older) grub. For some reason the older grub doesn't seem to exhibit the same problem, but then maybe it is not a good idea to downgrade grub?

or

2. After do-release-upgrade and BEFORE booting the upgraded system, run "sudo grub-install /dev/sda". Then reboot. So it seems during the release upgrade grub2 didn't get installed or upgraded on the disk. Googling for it, it seems lots of people have had this same issue when doing a release upgrade from e.g. Ubuntu 18 to 20. Maybe it can also be fixed afterwards from the GRUB_RESCUE prompt.
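For what it's worth, the second workaround as one sequence would look roughly like this (a sketch only; /dev/sda was the boot disk on that particular server and has to be adjusted, and update-grub is the usual Ubuntu helper for regenerating the grub config):

```shell
# Release upgrade on an Ubuntu server (run from the old release):
sudo do-release-upgrade

# BEFORE rebooting into the upgraded system, reinstall the boot loader
# to the boot disk (replace /dev/sda with your actual boot disk):
sudo grub-install /dev/sda
sudo update-grub    # regenerate /boot/grub/grub.cfg for good measure

sudo reboot
```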

So, all in all, I feel I would prefer a rolling update model where, instead of big release upgrades every few years that I'd want to clean-install just to be sure, the upgrades would come in smaller pieces. Now trying to educate myself what, if any, drawbacks there are to rolling updates...

https://averagelinuxuser.com/rolling-vs-fixed-linux-release/
Post edited October 26, 2020 by timppu
timppu: One thing that interests me on Manjaro though is its rolling update model, so now I am trying to figure out what, if any, drawbacks are there to rolling updates compared to "standard releases". I think Debian uses rolling updates too?
If Squeeze, Wheezy, Jessie, Stretch, and Buster ring a bell, you've heard of Debian releases. So no, it is not a rolling release distro. (Except maybe if you follow Sid)

So, all in all, I feel I would prefer a rolling update model where, instead of big release upgrades every few years that I'd want to clean-install just to be sure, the upgrades would come in smaller pieces. Now trying to educate myself what, if any, drawbacks there are to rolling updates...
Both rolling and release based systems can break. In theory, having actual releases means they can test the update procedure and make sure nothing breaks (it's a bit harder to do such focused testing on a constant ripple of updates). In practice, few mainstream distros have competent release engineering or thorough testing, so yeah updates are always scary.

FWIW I just updated my arch laptop when I needed a custom kernel to demonstrate to AstralWanderer why they're wrong about SD cards' write protection. It's been more than a year since the previous update (I think I was on the 5.2 kernel) and as expected, pretty much the entire system got updated, and as expected, there were some errors I had to look up and fix. But no major breakage. This has been my experience with arch; if you don't update it regularly, then you're more likely to run into little issues, but usually they're trivial to fix or work around.

Still I think it's less a function of the release model and more a function of how the project is run (how much churn? how much complexity? how much testing? competent developers or not? dogfooding much or not? most devs sitting on a stable branch waiting for somebody else to fix all the bugs in testing? any commercial conflicts of interest? etc.). As in, OpenBSD updates have been rock solid for me, every single time, and they stick to a 6-month release cycle. Ubuntu updates have been a major pain in the arse and have broken pretty bad more than a few times. Fedora's been so-so; I think my father used to have some issues after updates a few years back but it's been OK for me.
Post edited October 26, 2020 by clarry
In other Linux news: I've finally set up my first BTRFS archive (on an external USB HDD), yay!

Like I've mentioned before, I'd like to use either BTRFS or OpenZFS filesystem (instead of ext4 or NTFS or whathaveyou) for my personal file archives because these filesystems take e.g. data integrity seriously, trying to make sure files don't become corrupted, keeping checksums for all the data all the time and tools to check data integrity at any time. (They have lots of other advanced features as well but that is currently the main reason for me to want to use either one.)

Anyway, in Linux btrfs seems to be common already so I went ahead with it. The instructions I read before made it seem very complicated with RAID this and scrub that, but it was mainly because those instructions were apparently meant for more advanced RAID setups etc.

For me all that was really needed was (/dev/sdc is my empty external HDD without any existing partitions, and I will label it as "BTRFS_ARCHIVE"):

sudo apt install btrfs-progs (I think it was earlier called "btrfs-tools"?)
sudo mkfs.btrfs -m single -d single -L BTRFS_ARCHIVE /dev/sdc

And that's it. I used the "single" profile, meaning that btrfs can't fix file errors by itself (only detect them), but that is why I have the secondary backup on another USB HDD, from which I can copy back any files that might ever become corrupted. I'm keeping it simple for now.
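Once the filesystem is in use, the integrity check mentioned above is done with a scrub. A sketch (the mount point /mnt/archive is just an example name I've made up):

```shell
# Mount the labelled filesystem (mount point is an example):
sudo mkdir -p /mnt/archive
sudo mount LABEL=BTRFS_ARCHIVE /mnt/archive

# Read every block and verify it against its stored checksum:
sudo btrfs scrub start /mnt/archive

# Check progress and results; checksum errors are reported here:
sudo btrfs scrub status /mnt/archive

# Cumulative per-device error counters:
sudo btrfs device stats /mnt/archive
```

With the "single" profile a scrub can only report corruption, not repair it, which matches the detect-then-restore-from-backup plan described above.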

This is a good place for btrfs usage information:

https://btrfs.wiki.kernel.org/index.php/Main_Page
https://btrfs.wiki.kernel.org/index.php/FAQ
https://btrfs.wiki.kernel.org/index.php/UseCases

Next step: testing how well the Windows BTRFS driver (WinBtrfs) works:

https://github.com/maharmstone/btrfs

It is enough for me if those BTRFS archives are readable in Windows, but if I can also modify the archives safely from Windows, all the better.

I wonder when Microsoft will introduce a similar, more advanced filesystem? Even Apple apparently already has APFS ("Apple File System") with similar, btrfs/OpenZFS-kind, advanced features.
Knowing Microsoft though, I wouldn't be surprised if Windows 10 Home users were left without the new filesystem, just like e.g. BitLocker (encryption) is apparently not available to Home users. Peasants don't need such "advanced" features! Or if they do, pay up extra!

Linux rambling mode off, for now.

timppu: One thing that interests me on Manjaro though is its rolling update model, so now I am trying to figure out what, if any, drawbacks are there to rolling updates compared to "standard releases". I think Debian uses rolling updates too?
clarry: If Squeeze, Wheezy, Jessie, Stretch, and Buster ring a bell, you've heard of Debian releases. So no, it is not a rolling release distro. (Except maybe if you follow Sid)
Oh ok, then I apparently misunderstood something. I ran both vanilla Debian and Linux Mint Debian in VirtualBox for a while, just to see how they felt compared to Ubuntu-based Mint.
Post edited October 26, 2020 by timppu
clarry: FWIW I just updated my arch laptop when I needed a custom kernel to demonstrate to AstralWanderer why they're wrong about SD cards' write protection. It's been more than a year since the previous update (I think I was on the 5.2 kernel) and as expected, pretty much the entire system got updated, and as expected, there were some errors I had to look up and fix. But no major breakage. This has been my experience with arch; if you don't update it regularly, then you're more likely to run into little issues, but usually they're trivial to fix or work around.
Hmmm... Too bad then that they don't have some "checkpoints" where the system knows it is safe to update to that point first, before updating further. A bit like it seems to be with gitlab-ce, which I need to keep updated on one server: whenever the major version changes, it first updates only to the last minor version of the current (older) major version, and from there to the next major version... So you just have to update gitlab-ce multiple times in such cases.

Then again, I guess it wouldn't be a big step to start calling those "checkpoints" "releases"... :)
timppu: Like I've mentioned before, I'd like to use either BTRFS or OpenZFS filesystem (instead of ext4 or NTFS or whathaveyou) for my personal file archives because these filesystems take e.g. data integrity seriously, trying to make sure files don't become corrupted, keeping checksums for all the data all the time and tools to check data integrity at any time. (They have lots of other advanced features as well but that is currently the main reason for me to want to use either one.)
you could also use dm-integrity+ext4, or even more complex setups like
https://insanity.industries/post/preserving-data-integrity/
timppu: One thing that interests me on Manjaro though is its rolling update model, so now I am trying to figure out what, if any, drawbacks are there to rolling updates compared to "standard releases". I think Debian uses rolling updates too?
Drawback: A program that you use gets an update that changes the interface in a way that breaks your workflow; now you need to relearn how to use the software. Or that script you wrote relies on some behavior of systemd (for example), and systemd makes a change that causes your script to no longer work, so you now have to fix it. For distros that use standard releases, you only need to worry about this when upgrading to a new release. There are also distros that have long-term support releases, like CentOS. CentOS 6, released back in 2011, is still getting "maintenance updates" until the end of November, with CentOS 8 planned to keep getting them until 2029.

Debian stable doesn't use rolling updates; the only updates you see are security and other critical updates (not counting backports). For example, Debian 10 was released in July of 2019, and it went into a freeze beforehand, so you would expect to see software from 2018, albeit with security patches, in that distro.
dtgreene, thanks for the info.

timppu, what's all this stuff that I'm hearing about Raspberry Pi? What do people actually use that for, anyway?
HeresMyAccount: timppu, what's all this stuff that I'm hearing about Raspberry Pi? What do people actually use that for, anyway?
https://en.wikipedia.org/wiki/Raspberry_Pi (I have Pi4 with 4GB RAM)

It is quite popular among enthusiasts who like to create all kinds of internet-of-things devices and stuff with it, adding all kinds of hardware thingamajigs to it and then controlling them with the RPi. One that really interested me was making a cheap surveillance camera system, where the RPi controls the camera and saves photos or video either locally to a hard drive or, I guess, streams them to the internet.

However, I bought it as a low-power (= uses little electricity) and silent (= it needs no fans for cooling) general-use computer which I keep on 24/7. I can use my TV as its monitor over HDMI, and control it with a wireless mouse and keyboard, or even from my other computers using either TeamViewer or AnyDesk (remote desktop software; they work on the Raspberry Pi too). It is also my "multimedia PC", i.e. it can display HDTV videos just fine on the TV (that is mainly why I've connected it to my TV and not a generic computer monitor).

Oh, and it is also very cheap considering it is pretty much a full computer (e.g. around $55 in the US for the 4GB RAM base model; you need to add the price of an SD card, USB-C charger, keyboard + mouse and other accessories though). I can do pretty much anything with it that I'd normally do with a PC, except playing (heavy) PC games. The RPi has some games of its own (not commercial, freeware) and it can run many emulators too, to play e.g. old SNES or Amiga or whatever games. Either way, I didn't buy it for gaming.

It can't run PC (x86) software though as it is based on an ARM CPU.
Post edited October 26, 2020 by timppu
timppu: Like I've mentioned before, I'd like to use either BTRFS or OpenZFS filesystem (instead of ext4 or NTFS or whathaveyou) for my personal file archives because these filesystems take e.g. data integrity seriously, trying to make sure files don't become corrupted, keeping checksums for all the data all the time and tools to check data integrity at any time. (They have lots of other advanced features as well but that is currently the main reason for me to want to use either one.)
immi101: you could also use dm-integrity+ext4, or even more complex setups like
https://insanity.industries/post/preserving-data-integrity/
Hmm, that started going over my head quite soon. :) It seemed to be a more advanced setup than I had in mind, with several physical disks and RAID. I'll try to read it more carefully later.

For now I just mainly want two separate USB HDDs which most of the time have the same data (i.e. from time to time I "rsync --delete" any changed data from the master archive to the secondary archive), and I want a filesystem like btrfs or OpenZFS which can check that none of the data has become corrupted at some point (and with a DUP or RAID setup could even fix the problems automatically, but as I said, it is currently enough for me that I can detect such corruption; then I can simply replace the corrupted file(s) from the secondary backup if needed).

Before this I used plain NTFS for the two archive HDDs and used e.g. rhash and dvdsig to try to keep track that the files remain OK, but those checksumming tools don't really help if I move or change files, reorganizing the archive (because I normally don't re-create the checksums right away for each file). So if I've been moving, adding and deleting stuff in the main archive, how can I be sure nothing has become corrupted in the meantime, including files I haven't touched for a long time? There is no meaningful way to check that.

btrfs and OpenZFS fix that, as they recreate the checksums automatically whenever I move or change data. I guess there is still a theoretical possibility that something becomes corrupted anyway during file operations (due to bad RAM or a USB link behaving badly or whatever), but those are things I can try to cope with in different ways, like using rsync -c twice for file copying etc.
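The detect-and-restore workflow can be sketched with plain checksum tools (a toy, self-contained demo: temp directories stand in for the two USB HDDs, and `cp -a` stands in for the real `rsync -a --delete` mirroring so the script runs anywhere with coreutils):

```shell
#!/bin/sh
# Toy demo: detect silent corruption with stored checksums, restore from mirror.
set -eu

work=$(mktemp -d)                       # stands in for the two USB HDDs
mkdir -p "$work/master" "$work/backup"
echo "important archive data" > "$work/master/file.txt"

# Mirror master -> backup (stand-in for: rsync -a --delete master/ backup/)
cp -a "$work/master/." "$work/backup/"

# Record checksums of the master (what rhash-style tools do)
( cd "$work/master" && sha256sum file.txt ) > "$work/sums.txt"

# Simulate a silently flipped byte on the master copy
printf 'X' | dd of="$work/master/file.txt" bs=1 seek=3 count=1 conv=notrunc 2>/dev/null

# Verify against stored checksums; restore from the mirror on mismatch
if ( cd "$work/master" && sha256sum -c "$work/sums.txt" ) >/dev/null 2>&1; then
    result="clean"
else
    cp -a "$work/backup/file.txt" "$work/master/file.txt"
    result="corruption detected, restored from backup"
fi
echo "$result"

# Confirm the restored file matches the original checksum again
( cd "$work/master" && sha256sum -c "$work/sums.txt" ) >/dev/null 2>&1 && final="OK"
echo "after restore: $final"

rm -rf "$work"
```

The point of btrfs/OpenZFS is that this bookkeeping happens automatically and per-block on every read, so moved or edited files never fall out of sync with their checksums the way a manual sums file does.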

While not relevant for my HDD archives, I've been under the impression that btrfs would be quite good for SSDs as well, because as a copy-on-write filesystem it "circulates" where it writes changed or new data, using the whole SSD evenly. Maybe modern SSD controllers do wear leveling by themselves anyway (I've understood that at least professional server-level SSDs do), but I guess it doesn't hurt that btrfs does it too, not writing to the same spot every time you e.g. change one file repeatedly; it always creates a new "version" of the file, writing it somewhere else instead. Or at least that's how I've understood it...
Post edited October 26, 2020 by timppu
Well I guess that Raspberry Pi thing could be useful for those kinds of things, but all I want to do is normal computing, so I doubt I'd ever need one.
Well, I've installed Porteus onto a USB stick but I can't get it to boot at all, so I'll give Mint a try. Wish me luck, because I'll probably need it. First I have to figure out how to install it without having it mess up GRUB.