kbnrylaec: It makes me think of RAM Doubler (early 1990s) and SuperStor (1990), and many other later products.
Serren: Or Stacker, which is what we used back in the 80s.

Microsoft was later successfully sued for infringing Stac's disk compression patents after it introduced DoubleSpace/DriveSpace in MS-DOS 6.
Stac Electronics' early products used hardware compression chips.
Stacker was released in 1990; I am not sure whether that was earlier or later than SuperStor.
kohlrak: EDIT: Another cool trick is a separate physical medium. If you're using a compressed image or partition anyway, you aren't too worried about speed, so use a few cheap USB drives you have lying around. Just be careful with disk-hungry games like Skyrim. Voilà, the game is now portable, as long as it isn't heavily registry-reliant.
timppu: When the internal (2.5" PATA) HDD on one of my old laptops got fried and I didn't have a replacement HDD (as all my other 2.5" HDDs are SATA, not PATA), I set it up to boot from USB and connected a USB HDD to it. I installed Linux Mint on it and ran it that way.

It was slow on heavy file operations, especially as it was only USB 2.0, but at least it worked fine, running completely from an external USB mass storage device.
Linux constantly loads files because of the whole UNIX ideal of making simple programs that do one thing well, so you have a bunch of programs loading other programs loading other programs...

But at least it worked.

kohlrak: Halo 1 had a unique way of making levels load faster than its contemporary competitors, and it was well known for this. Basically, the maps were stored on disk in the very format they would occupy in RAM. In other words, Halo could simply memory-map the files, or just read them into RAM and point the pointers where they belonged, because the arrays and everything were already in place. Hardly any CPU work was needed to decompress or rebuild anything.
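(A minimal sketch of the trick kohlrak describes, in C: the AssetHeader layout and the "level.dat" file name are hypothetical, not Halo's actual format, but they show how a file stored in its in-RAM layout can be mapped and used with essentially no parsing.)

    /* Hypothetical illustration of assets stored in their in-RAM layout.
     * The AssetHeader struct and "level.dat" are made up; this is not
     * Halo's actual format. Build: cc -o loadmap loadmap.c            */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    struct AssetHeader {
        uint32_t vertex_count;
        uint32_t vertex_offset;   /* byte offset of the vertex array in the file */
    };

    int main(void) {
        int fd = open("level.dat", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

        /* Map the file; no decompression or parsing pass is needed. */
        void *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); return 1; }

        /* "Point the pointers where they belong": offsets stored in the
         * file become pointers just by adding the mapping's base address. */
        const struct AssetHeader *hdr = base;
        const float *verts = (const float *)((const char *)base + hdr->vertex_offset);

        printf("%u vertices, first x = %f\n", hdr->vertex_count,
               hdr->vertex_count ? verts[0] : 0.0f);

        munmap(base, st.st_size);
        close(fd);
        return 0;
    }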
dtgreene: This sounds a lot like the way the dynamic linker works. Perhaps a programmer could get this same performance by converting their assets into C or Rust files (Rust lets you use the include_bytes! macro here), compiling them as shared libraries, and then using dlopen() (or the Windows equivalent) to load them at run-time. Rather than the programmer having to do the work, they can just let the dynamic linker do the hard work!
That might've actually been what they did for Halo 1. I remember another game (from which the name "kohlrak" came) where "missions" were actually just DLL files. This can be terribly inefficient, though. Frankly, I don't think it needs to be that complicated: if all you're doing is exporting data structures, you can simply load them yourself. Effectively, with this method you would have an "editor mode" as a developer: place objects or whatever, then save the data uncompressed, exactly as it appears in RAM. To load, you just read that data back verbatim, with no initialization or anything else necessary. The data must be consecutive, but that's not a hard problem to solve either, especially as you kind of already want that layout anyway to further optimize the code.
(One could even use an assembly language file for this purpose; if it contains no instructions (only data and assembler directives), it might even be portable!)
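(A minimal sketch of the dlopen() route dtgreene describes, assuming a hypothetical data-only shared object; the assets.so and level_data names are made up for illustration.)

    /* Hypothetical assets.c, generated by a tool and compiled with
     *   cc -shared -fPIC assets.c -o assets.so
     * containing only data, e.g.:
     *   const unsigned char level_data[] = { ... };
     *   const unsigned long level_data_len = sizeof(level_data);     */

    /* loader.c: let the dynamic linker map the data in for us.
     * Build: cc loader.c -o loader -ldl                              */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *lib = dlopen("./assets.so", RTLD_NOW);
        if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        const unsigned char *data = dlsym(lib, "level_data");
        const unsigned long *len  = dlsym(lib, "level_data_len");
        if (!data || !len) { fprintf(stderr, "missing symbol\n"); return 1; }

        printf("loaded %lu bytes of asset data\n", *len);
        /* ...hand the block to the game; dlclose(lib) when finished... */
        dlclose(lib);
        return 0;
    }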
You'd be surprised what you can do with it. On my to-do list, when I'm not playing video games all the time, is to finish my cross-platform (for running on, not targeting, obviously) assembler project, couple it with an interpreter-style VM, and possibly even port the assembler to that VM. I think VMs are under-utilized as a way of extending games. Imagine if these pixel games had open-source enemy mods where you could just pre-compile new enemy types into a self-contained file containing sprites, non-default behaviors, SFX, whatever.

Anyway, since half the games these days are made with an engine that already runs on a VM, even including code shouldn't really matter.
W3irdN3rd: Also, go to the store and buy a 128GB microSD card for $20.
dtgreene: Problems:
* MicroSD card access is slower than SSD access, and may (depending on access pattern) be slower than spinning hard disk access.
* MicroSD cards are easy to lose.
* Not every computer is able to read MicroSD cards. (For example, my desktop can't unless I find a card reader or use a bit of a hack with my Raspberry Pi Zero, which requires booting the Pi without a card, inserting the card, then running a program on the host computer; only *then* can I mount it.)
The advice was fairly specific to the system the topic starter described: a laptop with a 32GB or 64GB SSD. Those laptops often don't have room for a hard disk and the SSD may not be (easily) replaceable, but they usually do have a card reader. If you go with my method, I would also put some tape over the card reader; you'd leave the card in there permanently anyway, so you won't lose it. USB sticks and external hard disks are less practical for those systems.

If you have room for (another) HDD/SSD that's definitely the better option.
Back when I ran Linux on a 143MB hard drive, or ran Linux on a PDA, I used compression like that. In fact, the compression I used on my 143MB drive saved my data when I accidentally formatted the root partition while the system was running (at the time, mke2fs refused to format a mounted partition, but the check didn't catch the root partition): the compression used gzip, so I could identify and verify the gzip files among the raw data. The biggest issue with your suggestion is that you're suggesting compressing the games, rather than the OS. The games will likely have a very low compression ratio for the bulk of their data, so you're wasting effort. In any case, an external USB drive will provide much greater benefits with far less effort. In either case, actually using the data will result in decompression, so you better have a lot of RAM and/or swap.
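(Not darktjm's actual recovery procedure, just a small C sketch of how gzip members can be spotted in raw data: every gzip stream starts with the magic bytes 1f 8b, usually followed by 08 for deflate, and each hit can then be carved out and verified with gzip -t.)

    /* Scan a raw image/device for likely gzip stream starts (1f 8b 08). */
    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc != 2) { fprintf(stderr, "usage: %s <raw-image>\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        int prev2 = -1, prev1 = -1, c;
        long pos = 0;
        while ((c = fgetc(f)) != EOF) {
            if (prev2 == 0x1f && prev1 == 0x8b && c == 0x08)
                printf("possible gzip stream at offset %ld\n", pos - 2);
            prev2 = prev1;
            prev1 = c;
            pos++;
        }
        fclose(f);
        return 0;
    }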

I don't see any advantage at all to compressing portage, since you'd have to regenerate after every sync, but whatever. My /usr/portage is huge, but distfiles (33G) and packages (28G) are the biggest space hogs (702M+27M+129M without them, including my local stuff and overlays), and those two consist almost entirely of already-compressed files. When I insanely installed Gentoo on machines with small hard drives, I used NFS to deal with portage, instead; it's not like /usr/portage needs to be there during normal operation. Even searching portage can be done using eix without portage being present. Nothing short of rewriting emerge from scratch will get me emerge -auNDv world times lower than 15 minutes (not counting the actual compile times), anyway. Gentoo sucks.
darktjm: In either case, actually using the data will result in decompression, so you better have a lot of RAM and/or swap.
Eh? If the implementation is halfway sane, decompression has virtually no effect on RAM use.
I don't see any advantage at all to compressing portage, since you'd have to regenerate after every sync, but whatever.
Regenerate what?
clarry: Eh? If the implementation is halfway sane, decompression has virtually no effect on RAM use.
So in what magical universe can you keep disk buffers for the uncompressed data and disk buffers for the compressed data without actually using any more space than just the uncompressed data alone? Or maybe your idea of virtually no effect is to take a major performance hit as fewer buffers are kept around in case of reuse. I realize that the compressed data buffers can be reused, but that just increases the need for uncompressed buffers to avoid constant re-decompression. Nothing is free.

clarry: Regenerate what?
Portage has a lot of churn. In addition to the massively long sync, you then also have to recompress all the changed files. In particular, squashfs is a read-only filesystem, so you'd have to regenerate the filesystem after every update (which is why I used that word: dtgreene suggested using squashfs for portage). There are read-write filesystems that support compression, but in the end, one way or another, you still have to recompress all the files. Updating portage is a daily thing for some people. If I wait more than a week to update portage, I end up with over 100 files to update at once, which annoys me. Portage has a lot of churn, as I said.
dtgreene: Problems:
* MicroSD card access is slower than SSD access, and may (depending on access pattern) be slower than spinning hard disk access.
* MicroSD cards are easy to lose.
* Not every computer is able to read MicroSD cards. (For example, my desktop can't unless I find a card reader or use a bit of a hack with my Raspberry Pi Zero, which requires booting the Pi without a card, inserting the card, then running a program on the host computer; only *then* can I mount it.)
W3irdN3rd: The advice was fairly specific to the system the topic starter described: a laptop with a 32GB or 64GB SSD. Those laptops often don't have room for a hard disk and the SSD may not be (easily) replaceable, but they usually do have a card reader. If you go with my method, I would also put some tape over the card reader; you'd leave the card in there permanently anyway, so you won't lose it. USB sticks and external hard disks are less practical for those systems.

If you have room for (another) HDD/SSD that's definitely the better option.
Actually, when I looked online (before buying the computer IIRC), apparently this laptop model *does* have room for an M.2 SATA disk, so if I really wanted to, I *could* add a fast SSD to the system in question.

The question, of course, is whether this is worth doing; it would be a bit silly to spend more on an SSD than on the system itself! (Also, other systems at that price point might not have this slot.)

darktjm: The biggest issue with your suggestion is that you're suggesting compressing the games, rather than the OS. The games will likely have a very low compression ratio for the bulk of their data, so you're wasting effort.
That may often be the case, but it isn't always. In particular, the example I gave (Hollow Knight) is one where I actually got good compression.

One way to tell if it's worth it is to compare the size of the installer to that of the installed game. If the installed game is much bigger than the installer, chances are compression will give significant space savings. (I note that, in this case, the installer and the compressed filesystem are about the same size (1.3G according to ls -lh), so I suspect that the installer uses gzip compression, since that's what I used for the squashfs filesystem.)
Post edited December 26, 2018 by dtgreene
clarry: Regenerate what?
darktjm: Portage has a lot of churn. In addition to the massively long sync, you then also have to recompress all the changed files. In particular, squashfs is a read-only filesystem, so you'd have to regenerate the filesystem after every update (which is why I used that word: dtgreene suggested using squashfs for portage). There are read-write filesystems that support compression, but in the end, one way or another, you still have to recompress all the files. Updating portage is a daily thing for some people. If I wait more than a week to update portage, I end up with over 100 files to update at once, which annoys me. Portage has a lot of churn, as I said.
It turns out that there is a way to make the squashfs filesystem act as though it were writable: use a union or overlay filesystem. This way, you have the squashfs filesystem as a base, and any changed files are stored elsewhere (uncompressed, most likely) on a writable filesystem (possibly even tmpfs, if you have enough RAM and want to save on disk writes). Then, if you feel the need to recompress them, you can just run mksquashfs on the mounted filesystem and have a new compressed image built. Since the portage tree is not needed except when updating, you can then unmount the old image and mount the new one in its place (after you destroy the old overlay).
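(For illustration, roughly what that overlay setup looks like at the syscall level, i.e. what "mount -t overlay" does; the paths are hypothetical, it must run as root, and it assumes the read-only squashfs image is already mounted at the lowerdir path.)

    /* Sketch: writable overlay on top of a read-only (e.g. squashfs) tree. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void) {
        /* upperdir receives any files changed after the image was built;
         * workdir is empty scratch space overlayfs needs on the same fs. */
        const char *opts = "lowerdir=/mnt/portage-ro,"
                           "upperdir=/var/tmp/portage-upper,"
                           "workdir=/var/tmp/portage-work";

        if (mount("overlay", "/usr/portage", "overlay", 0, opts) != 0) {
            perror("mount overlay");
            return 1;
        }
        puts("writable view of the read-only tree mounted at /usr/portage");
        return 0;
    }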

One notable characteristic of the portage tree is that it consists of lots of small files; hence, the effect of the filesystem's block size is magnified, and it is possible to run out of inodes without actually running out of disk space on that filesystem. squashfs, I believe, avoids those issues (it even compresses metadata); if you are using a traditional filesystem, you might want to choose one that handles those cases well (note that for ext filesystems the inode count has to be set at mkfs time, e.g. with mkfs.ext4 -N; it cannot be increased later).

By the way, this whole squashfs + overlay approach is commonly used on live CDs; since the boot medium is read-only and Linux systems generally expect a writable root filesystem, the overlay is what makes the root filesystem writable. This even allows one to install new programs on the running system, or update the installed ones, despite not being able to write to the CD. (Another approach, more commonly seen in smaller live CDs, is to just load the entire OS into RAM.)

clarry: Eh? If the implementation is halfway sane, decompression has virtually no effect on RAM use.
darktjm: So in what magical universe can you keep disk buffers for the uncompressed data and disk buffers for the compressed data without actually using any more space than just the uncompressed data alone? Or maybe your idea of virtually no effect is to take a major performance hit as fewer buffers are kept around in case of reuse. I realize that the compressed data buffers can be reused, but that just increases the need for uncompressed buffers to avoid constant re-decompression. Nothing is free.
Once the uncompressed data is in RAM, the compressed data is no longer needed there; if the OS needs to free up memory, the compressed data can be evicted from RAM without any penalty.

RAM spent on cached data that is no longer needed is effectively free RAM.
Post edited December 26, 2018 by dtgreene
darktjm: So in what magical universe can you keep disk buffers for the uncompressed data and disk buffers for the compressed data without actually using any more space than just the uncompressed data alone?
You don't need to buffer both. Squashfs decompresses data into the page cache. After that, it's all the same. Obviously you *do* need to keep compressed disk blocks & uncompressed pages around during decompression, but the block sizes you would use for these fast compressed filesystems are a few hundred kilobytes at most, so your memory requirements go up by about that much. If you have multiple concurrent readers, you could need a few megabytes of extra RAM to avoid starvation. These things are used on embedded systems with rather tight memory too. Please prove me wrong.

For playing PC games made past 1995, it is literally nothing.

Or maybe your idea of virtually no effect is to take a major performance hit as fewer buffers are kept around in case of reuse. I realize that the compressed data buffers can be reused, but that just increases the need for uncompressed buffers to avoid constant re-decompression. Nothing is free.
"Fewer buffers", by however much a few megabytes of RAM amounts to. It's nothing.

The game will store its assets in RAM anyway (unless it's being fancy with mmap). So you can just measure the throughput of the block layer, filesystem, and decompression and then forget about it; RAM use is solely dictated by how the game handles its assets. Any extra RAM use is, as dtgreene said, in the page cache, which can be discarded under pressure.
Post edited December 26, 2018 by clarry
darktjm: I don't see any advantage at all to compressing portage, since you'd have to regenerate after every sync, but whatever. My /usr/portage is huge, but distfiles (33G) and packages (28G) are the biggest space hogs (702M+27M+129M without them, including my local stuff and overlays), and those two consist almost entirely of already-compressed files. When I insanely installed Gentoo on machines with small hard drives, I used NFS to deal with portage, instead; it's not like /usr/portage needs to be there during normal operation. Even searching portage can be done using eix without portage being present. Nothing short of rewriting emerge from scratch will get me emerge -auNDv world times lower than 15 minutes (not counting the actual compile times), anyway. Gentoo sucks.
In my experience, having portage on a locally mounted squashfs easily beats portage over NFS as far as performance goes on a low-end computer. The compression isn't even that important, but avoiding the performance hit of traversing/accessing the gazillion directories & files of the portage tree on a slow medium (network share/SD card) is often a noticeable improvement. Obviously you want to separate distfiles and packages from the actual portage tree when doing that. The default layout of /usr/portage is mostly a historical artefact anyway, one that is only kept that way for the sake of compatibility. It's usually the first thing I change :)

Seriously, 15 minutes for calculating a world update? :o I don't think even the Gentoo install on my Raspberry Pi takes that long.
Though I agree that, in general, the slowness of portage is definitely a weak point for Gentoo.

Recreating the squashfs image after an emerge --sync is a pretty fast operation (if you do it on a normal desktop/server PC), usually faster than the whole rsync process for me :).
I never felt the need to invest in a more complex solution like the overlay/union fs approach dtgreene mentioned above.
dtgreene: Anyway, here's the procedure:
1. Install the game. (This means that you will need to have the space to install it somewhere.)
2. Using the mksquashfs utility, create a squashfs filesystem from the directory the game is in.
...
ahhhh... love SquashFS. I did this back in 2000 or so, when it was v1 and v2. I did it to the /usr directory, and it worked quite well. This was back when I had... I don't know, a 32GB hard drive? On the other hand, since everything in the directory was read-only and treated like a CD-ROM, it was also more secure against viruses and the like.

One problem, though: if the game saves to the same local directory as the game, it will break. If the game writes a log it needs, that will break the game too. There are several cases where this won't work no matter what you try. In those cases you COULD mount a writable drive or ramdisk on the directories the game uses and get around that limitation; maybe make a script that mounts everything, drops you into a bash shell or starts the game, then cleans everything up afterwards. Wouldn't be hard at all.

There are other filesystem options too: a compressed ext2 modification, building a module the way Slax does, zlib-compressed ISOs. It depends on what you need. If you have enough RAM, extracting and running from a ramdisk might work too. (I used to do that a lot with Windforge.)
rtcvb32: One problem, though: if the game saves to the same local directory as the game, it will break. If the game writes a log it needs, that will break the game too. There are several cases where this won't work no matter what you try. In those cases you COULD mount a writable drive or ramdisk on the directories the game uses and get around that limitation; maybe make a script that mounts everything, drops you into a bash shell or starts the game, then cleans everything up afterwards. Wouldn't be hard at all.
Perhaps I should introduce you to another feature of Linux: mount namespaces.

On Linux, if, as root, you run the command "unshare -m", you will enter a new mount namespace. In this namespace, anything you mount will not be visible outside the namespace. Furthermore, when there are no more programs running in that namespace, the kernel will automatically clean it up for you.
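(A small C sketch of what "unshare -m" does under the hood, with a hypothetical writable save directory mounted over part of a read-only game tree; run as root. The extra mount vanishes once the last process in the namespace exits.)

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/mount.h>
    #include <unistd.h>

    int main(void) {
        /* Enter a new mount namespace (what "unshare -m" does). */
        if (unshare(CLONE_NEWNS) != 0) { perror("unshare"); return 1; }

        /* Make our copy of the mount tree private, so mounts made here
         * do not propagate back to the parent namespace.              */
        if (mount("none", "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
            perror("make mounts private"); return 1;
        }

        /* Hypothetical path: give a read-only game a writable save dir,
         * visible only inside this namespace.                         */
        if (mount("tmpfs", "/mnt/game/saves", "tmpfs", 0, "size=64m") != 0) {
            perror("mount tmpfs"); return 1;
        }

        /* Start a shell (or the game) inside the namespace. */
        execlp("sh", "sh", (char *)NULL);
        perror("execlp");
        return 1;
    }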

There are other namespaces as well. Network namespaces, for example, might be useful if you want to limit how certain programs access the Internet (say, to force them to go through a VPN instead of directly using the physical interface). Or you could use a PID namespace to prevent a program from knowing about other programs running on the system.