Darvond: This is the wrong approach. You shouldn't be doing anything involving partitioning in a production system. (That is to say, within it.)

Now while there are a lot of strange and arbitrary rules regarding partitioning, there shouldn't be any restrictions on restructuring if you approach them from "orbit", so to speak.

So you'd boot into a live USB, and manage the partitioning (VERY CAREFULLY) from there.
Messing with partitions and mountpoints after a system is in place is something I agree you probably shouldn't do without careful consideration. Although if, say, the SquashFS module is built into the kernel, you could compress directories, delete the originals, and have the images automatically mounted in their place as read-only filesystems. It may actually be a hair faster using gzip or lzo compression.
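For instance, a rough sketch of that SquashFS trick (the directory and image paths here are just placeholders):

# pack a directory into a compressed, read-only SquashFS image (gzip compression)
mksquashfs /usr/share/doc /var/squash/doc.sqfs -comp gzip
# after checking the image, remove the original and loop-mount the image read-only in its place
mount -t squashfs -o loop,ro /var/squash/doc.sqfs /usr/share/doc
# or have it mounted automatically at boot via a line in /etc/fstab:
# /var/squash/doc.sqfs  /usr/share/doc  squashfs  loop,ro  0  0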

---

A few thoughts; I've been planning out how to do a Linux cluster-like group, where each computer adds its resources, but I don't have to recompile the software to use MPI or something like that. (Nice idea, but I doubt I want to modify a bunch of sources and recompile.)

So one idea: most of the systems will have extra RAM, so try and pool it to create a ramdrive for work. Maybe not the best idea, as networking will become the bottleneck, but it means I don't need hard drives for anything but the master, which is good for a liveCD cluster setup.

So... create a /tmp/cluster-ram/block file on each machine, sized to whatever RAM it can safely spare (say, all but 512MB, at least for slaves). So say I have, oh, 4 old laptops with 2GB each; that would give an extra ~6GB of /tmp filespace.
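On each slave that might look something like this (sizes and paths are just an example, and it assumes /tmp is a tmpfs mount so the file actually lives in RAM):

mkdir -p /tmp/cluster-ram
# carve out 1.5GB (all but ~512MB of a 2GB machine) as a RAM-backed file
dd if=/dev/zero of=/tmp/cluster-ram/block bs=1M count=1536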

Second, mount all the /tmp/cluster-ram directories on the master via NFS, attach the block files as loop devices with losetup, append them together with mdadm (or similar), format the result, and export it as a public NFS share that all the slaves then mount.
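A rough sketch of that assembly step on the master (assuming each slave's export is already mounted under /mnt/slaveN; device names and paths are placeholders):

# attach each slave's RAM-backed file as a loop device
losetup /dev/loop1 /mnt/slave1/cluster-ram/block
losetup /dev/loop2 /mnt/slave2/cluster-ram/block
losetup /dev/loop3 /mnt/slave3/cluster-ram/block
losetup /dev/loop4 /mnt/slave4/cluster-ram/block
# concatenate them into one big device (linear append, no striping)
mdadm --build /dev/md0 --level=linear --raid-devices=4 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
# format it, mount it, and export it to everyone over NFS
mkfs.ext4 /dev/md0
mkdir -p /srv/cluster && mount /dev/md0 /srv/cluster
echo '/srv/cluster *(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra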

Now for this to really make sense, on the master you'd extract files to the shared ramdrive and then give commands to the slaves to work only on file(s) within that shared ramdrive.

That's the basic idea, using xargs primarily for splitting work on lots of little files. It doesn't seem like pushing jobs out over rsh/ssh would work best in that instance, since different machines may be faster or slower; it would be better if they picked up jobs on their own whenever they're free.
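One way to get that pull-style behaviour without MPI would be a job directory on the shared ramdrive, where each slave claims files by renaming them (just a sketch; the paths and the do_work command are made up):

# runs on every slave; /srv/cluster is the shared ramdrive from above
cd /srv/cluster
mkdir -p queue claimed done
while true; do
    job=$(ls queue 2>/dev/null | head -n 1)
    [ -z "$job" ] && { sleep 2; continue; }
    # the rename is the claim: only one slave wins it
    mv "queue/$job" "claimed/$(hostname).$job" 2>/dev/null || continue
    do_work "claimed/$(hostname).$job" && mv "claimed/$(hostname).$job" done/
done

Faster machines just come back for the next file sooner, so the work balances itself out.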
_Auster_: I think it depends on the person.
Windows to me feels more simplified/streamlined for the usual processes, but if the user wants more control over a process or wants to go deeper into the technical side, at least in my experience, Linux tends to be easier to learn and use. Also, if something breaks on Windows, I find it much harder to track down a working solution, while the solutions for Linux tend to work but are far more technical (so there's a bigger learning curve).
And I can't comment on macOS, because the most recent version I've used so far ran on a 68K Macintosh.
dtgreene: Actually, I think it may be easier to tell someone how to do certain tasks on Linux versus Windows.
* Windows: You need to tell the user which buttons and menu options to select, where they are, and all that sort of stuff. Worst case, a registry edit might be needed, and the registry can be tricky to navigate.
* Linux: Just give the user some commands to type, or in some cases, an edit to make to a text file (something like the example below). Text files are much easier to deal with than the registry, and if someone knows what to type, they don't have to hunt for anything (whereas knowing which menu option to choose isn't enough, as you still need to find that option).
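For example, a hostname change is the kind of thing that fits in a forum post (just an illustration; the actual task could be anything):

# edit one text file...
sudo nano /etc/hostname
# ...or run a single command (on systemd-based distros)
sudo hostnamectl set-hostname new-name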
Auster has a point. Technical areas in Windows feel way more complicated than in Linux. At least that's my impression as a rookie here.
Having some background knowledge of terminals and logic programming, I find using Linux invigorating.
Post edited 5 hours ago by .Keys