darktjm: Portage has a lot of churn. In addition to the massively long sync, you then also have to recompress all the changed files. In particular, squashfs is a read-only filesystem, so you'd have to regenerate the file system after every update (which is why I used that word: dtgreene suggested using squashfs for portage). There are read-write filesystems that support compression, but in the end, one way or another, you still have to recompress all the files. Updating portage is a daily thing for some people. If I wait more than a week to update portage, I end up with over 100 files to update at once, which annoys me. Portage has a lot of churn, as I said.
It turns out that there is a way to make the squashfs filesystem act as though it were writable: use a union or overlay filesystem. This way, you have the squashfs filesystem as a base, and any changed files are stored elsewhere (uncompressed, most likely) on a writable filesystem (possibly even tmpfs, if you have enough RAM and want to save on disk writes). Then, if you feel the need to recompress them, you can just run mksquashfs on the mounted filesystem and have a new compressed image built. Since the portage tree is not needed except when updating, you can then unmount the old image and mount the new one in its place (after you destroy the old overlay).
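Something along these lines should work; the paths here are just examples, and you need a kernel with overlayfs support:

    mount -t squashfs -o loop /var/portage.sqfs /usr/portage     # read-only base
    mkdir -p /var/overlay/upper /var/overlay/work                # writable layer (could live on tmpfs)
    mount -t overlay overlay \
        -o lowerdir=/usr/portage,upperdir=/var/overlay/upper,workdir=/var/overlay/work \
        /usr/portage
    # ... sync and update as usual; changed files land in the upper layer ...
    mksquashfs /usr/portage /var/portage-new.sqfs                # rebuild the compressed image
    umount /usr/portage                                          # drop the overlay...
    umount /usr/portage                                          # ...then the old squashfs image
    rm -rf /var/overlay/upper /var/overlay/work                  # destroy the old overlay
    mount -t squashfs -o loop /var/portage-new.sqfs /usr/portage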
One notable characteristic of the portage tree is that it consists of lots of small files; hence, the effects of the filesystem's block size become magnified, and it is possible to run out of inodes without actually running out of disk space on that filesystem. squashfs, I believe, avoids those issues (it even compresses metadata); if you are using a traditional filesystem, you might want to choose one that handles those cases well (note that on ext2/3/4 the inode count is fixed when the filesystem is created, so tune2fs can't add inodes later; if necessary, set a higher count at mkfs time).
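For example, assuming an ext4 filesystem (the device name is a placeholder):

    df -i /usr/portage        # check inode usage; compare with df -h for block usage
    # at creation time, reserve one inode per 4 KiB of space instead of the
    # default 16 KiB, to accommodate lots of small files:
    mkfs.ext4 -i 4096 /dev/sdXN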
By the way, this whole squashfs + overlay approach is commonly used on live CDs; since the boot medium is read-only and Linux systems generally expect a writable root filesystem, using the overlay presents a writable root. This even allows one to install new programs on the running system, or update installed ones, despite not being able to write to the CD. (Another approach, more commonly seen in smaller live CDs, is to just load the entire OS into RAM.)
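The live-CD variant looks roughly like this (sketched from memory; the paths are invented, and in practice an initramfs does this before switching root):

    mount -t squashfs /cdrom/image.squashfs /mnt/ro              # OS image from the boot medium
    mount -t tmpfs tmpfs /mnt/rw                                 # all writes go to RAM
    mkdir -p /mnt/rw/upper /mnt/rw/work
    mount -t overlay overlay \
        -o lowerdir=/mnt/ro,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work \
        /mnt/root                                                # becomes the new root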
clarry: Eh? If the implementation is halfway sane, decompression has virtually no effect on RAM use.
darktjm: So in what magical universe can you keep disk buffers for the uncompressed data and disk buffers for the compressed data without actually using any more space than just the uncompressed data alone? Or maybe your idea of virtually no effect is to take a major performance hit as fewer buffers are kept around in case of reuse. I realize that the compressed data buffers can be reused, but that just increases the need for uncompressed buffers to avoid constant re-decompression. Nothing is free.
clarry: Once the uncompressed data is in RAM, the compressed data is no longer needed there; if the OS needs to free up memory, the compressed data can simply be evicted from RAM without any penalty.
RAM spent on cached data that is no longer needed is effectively free RAM.
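You can see this with a quick test on any squashfs mount (the file name is just an example):

    time cat /mnt/sq/some/large/file > /dev/null   # first read: decompressed from disk
    time cat /mnt/sq/some/large/file > /dev/null   # repeat: served straight from the page cache
    echo 3 > /proc/sys/vm/drop_caches              # drop the caches (as root)...
    time cat /mnt/sq/some/large/file > /dev/null   # ...and the decompression cost returns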