Posted September 16, 2021
Darvond: This is the wrong approach. You shouldn't be doing anything involving partitioning in a production system. (That is to say, within it.)
Now while there are a lot of strange and arbitrary rules regarding partitioning, there shouldn't be any restrictions on restructuring if you approach them from "orbit", so to speak.
So you'd boot into a live USB, and manage the partitioning (VERY CAREFULLY) from there.
Messing with partitions and mountpoints after the system is in place is something I agree you probably shouldn't do without careful consideration. Although, if say the SquashFS module is built into the kernel, you could compress directories, delete the originals, and have the images automatically mounted in their place as read-only filesystems. It may actually be a hair faster using gzip or lzo compression.
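As a rough sketch of that SquashFS trick (the path here is made up for illustration, and you'd want a backup before deleting anything):

```shell
#!/bin/sh
# Compress a directory into a squashfs image, then mount the image
# read-only over the original path. /var/docs is a hypothetical example.
mksquashfs /var/docs /var/docs.sqsh -comp lzo   # or -comp gzip
rm -rf /var/docs && mkdir /var/docs
mount -t squashfs -o loop,ro /var/docs.sqsh /var/docs
# To make it survive reboots, add a line like this to /etc/fstab:
# /var/docs.sqsh  /var/docs  squashfs  loop,ro  0 0
```

The mount step needs root, so this is one of the few restructuring jobs you can actually do from inside the running system rather than from a live USB.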
---
A few thoughts; I've been planning out how to do a Linux cluster-like group, where each computer adds its resources, but without recompiling the software to use MPI or something like that. (Nice idea, but I doubt I want to modify a bunch of sources and recompile.)
So one idea: most of the systems will have extra RAM, so try to pool it into a shared ramdrive for work. Maybe not the best idea, since networking will become the bottleneck, but then I don't need hard drives for anything but the master, which is good for a live-CD cluster setup.
So... create a /tmp/cluster-ram/block file on each machine, and dump a file the size it can safely spare (say, all but 512 MB of RAM, at least for slaves). So say I have, oh, 4 old laptops with 2 GB each; that would give an extra 6 GB of /tmp filespace (4 × 1.5 GB).
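Sizing that file could be automated on each slave; a rough sketch, assuming /tmp is tmpfs-backed (true on many but not all distros) and using the 512 MB reserve from above:

```shell
#!/bin/sh
# Donate all RAM except a 512 MB reserve as a file under /tmp.
# MemTotal in /proc/meminfo is reported in kB.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
spare_mb=$(( total_kb / 1024 - 512 ))   # e.g. 2 GB of RAM -> 1536 MB spare
mkdir -p /tmp/cluster-ram
dd if=/dev/zero of=/tmp/cluster-ram/block bs=1M count="$spare_mb"
```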
Second, mount each slave's /tmp/cluster-ram directory on the master via NFS, attach the block files to loop devices with losetup, append them together with the mdadm tools, format, and mount the result as a public NFS share that all the slaves mount in turn.
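On the master, the concatenation step might look something like this — a sketch only, assuming losetup and mdadm's linear mode, with made-up hostnames and paths (everything here needs root, and the array is as volatile as the RAM behind it):

```shell
#!/bin/sh
# Turn each slave's exported block file into a loop device, then append
# the loop devices into one big linear array with mdadm.
for h in slave1 slave2 slave3 slave4; do
    mkdir -p "/mnt/$h"
    mount -t nfs "$h:/tmp/cluster-ram" "/mnt/$h"
done
losetup /dev/loop1 /mnt/slave1/block
losetup /dev/loop2 /mnt/slave2/block
losetup /dev/loop3 /mnt/slave3/block
losetup /dev/loop4 /mnt/slave4/block
mdadm --build /dev/md0 --level=linear --raid-devices=4 \
      /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
mkfs.ext2 /dev/md0              # ext2: no journal overhead on volatile storage
mkdir -p /srv/cluster-ram
mount /dev/md0 /srv/cluster-ram
# then export /srv/cluster-ram in /etc/exports for the slaves to mount
```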
Now for this to really make sense, on the master you'd extract files to the shared ramdrive and then give commands to the slaves to work only on files within it.
That's the basic idea, using xargs primarily for splitting work across lots of little files. Plain rsh/ssh dispatch doesn't seem like it would work best here, since different machines may be faster or slower; better if they pick up jobs themselves whenever they're free.