TrueDosGamer: Yes, I was looking for a Linux flavor that uses native FAT32 or NTFS from the start for installation. From what you wrote, it sounds like legally there is no version out there that can do that. I'm curious why someone like Red Hat or another distro hasn't made a deal with Microsoft to license NTFS natively into their OS. exFAT would be an alternative to FAT32 because of its higher file size and partition size limits. As for encryption, I'm not really requiring it; a lot of the time, if the hard drive is corrupted and also encrypted, it is going to be much tougher to restore the data. NTFS is still good because it can handle larger file sizes and you can disable the permissions so any computer can access the files.

Most likely Windows will never grab a large market share in mobile. I think another issue is their DRM, which makes Android more attractive. However, even Android is starting to clamp down and become more like Apple, removing the microSD card slot and sealing off the battery compartment in the latest Galaxy Note 5. Eventually they will be mirror images of each other and it will come down to a choice of interface. Samsung made phones with bigger screens when Apple didn't; now Apple makes phones with big screens and Samsung seals off the back so you can't insert an SD card or swap the battery, just like Apple.
Lin545: The libbluray library (it would be something like "bluray.dll" if compiled for Windows) is exactly for handling menus and other metadata.

Decryption is either made illegal by the patents it circumvents, or it is legal but a black box: it ships as part of the driver, the OS, etc. (technically a "blob") and it resides in the kernel, which is bad. On Android I hear it's implemented in userspace (in contrast to the privileged kernel space), but only the part that feeds the data to the hardware, with the hardware doing the black-box magic. I am not a big fan of any DRM, because it locks you to the media or to the company, so I never followed Blu-ray or HDCP development, sorry.

FAT32 is not a problem and works fine. NTFS requires licenses, but there is a company that specifically works on NTFS for non-Windows systems, I think it's Tuxera. Apart from losing the fairly critical features mentioned in the previous paragraph, NTFS brings no advantages and quite a few disadvantages. Ext can journal both data and metadata, whereas NTFS journals only metadata. NTFS fragments badly and its defragmentation is subpar: the Windows defrag API leaves a lot of holes, while a perfect defragmentation takes a long time, makes little sense, and is not available anymore. Linux currently leans towards F2FS on mobile/flash, Btrfs and ZFS for data, and ext or XFS for general use.
The distros I know of that run off FAT32/NTFS were actually running a decompressed image of a native filesystem stored on one of those filesystems.
Having NTFS also creates a nice conflict with Windows, because the latter loves to hibernate on shutdown and leave the filesystem in an inconsistent state.
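For completeness, the full data journaling mentioned above is just a mount option on ext; a minimal sketch, assuming the partition is /dev/sdb1 and the mount point already exists (adjust to your layout):

  # one-off mount with full data journaling (the default is data=ordered)
  sudo mount -o data=journal /dev/sdb1 /mnt/data
  # to make it permanent, the matching /etc/fstab line would look like:
  #   /dev/sdb1  /mnt/data  ext4  data=journal  0  2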

It's actually not hard to re-solder a Li-ion battery. Just work quickly and don't overheat it. =)
The metadata you are referring to is an issue with Mac OS applications. My preference was FAT32, since it superseded FAT16 while staying backward compatible. Of course nobody imagined at the time that we would have file sizes of 4GB or larger, or that we would hit that limitation so soon. Had MS gone forward with a FAT64 from Windows 2000 Pro onward, NTFS would only be used where security mattered, and FAT64 would have been the ongoing file system for casual computer users. They might have added better security than NTFS and lifted the file size limit to some enormous amount we may not hit in our lifetime.

I only used NTFS because of the > 4GB file sizes, from dealing with HD DVR recordings that are constantly over 4GB, or roughly over 30 minutes of video. If a recording hits 4GB it stops recording without telling you, so while the application looks like it is still recording, it isn't. It also does not create a new file and pick up where it left off, and even if it did, it might not cut the video at an appropriate point like a commercial break, but in the middle of the actual show. For a while I stuck with FAT32 as long as I could, stopping recordings at around 30 minutes and starting a new one manually. Now I just do a straight one-hour recording and stop it, on NTFS. The other thing that made me jump to NTFS was that Vista and later versions of Windows require it for the installation partition, an effort by MS to force people to switch file systems. I still use FAT16 to make a quick 2GB partition for my multi-OS boot.

For a while, file size workarounds were built into certain software to chunk files into 1GB segments. However, if you're playing a video it is easier to deal with one file than to find some program to stitch all the segments into one smooth, uninterrupted playback. Some Blu-ray movies do intentionally break the video into smaller segments rather than one huge 25-50GB file; they do it to make piracy harder, since it's a headache to figure out how to play the segments in order versus one large video file.
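The chunking itself is trivial; on the Linux side the stock split tool does it (the file names here are made up for the example):

  split -b 1G bigrecording.ts part_     # writes part_aa, part_ab, ... each at most 1 GB
  cat part_* > bigrecording_joined.ts   # glue the pieces back into one file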

Fragmentation has always been an issue, going back to FAT, FAT12, and FAT16. FAT32 had its own issues, but it brought cluster sizes down to 4KB, which meant less wasted space if you had tons of small files between 1 byte and 4KB. I never imagined 4GB files would become commonplace, or that we would ever be dealing with files that large.

Today there is exFAT from MS, which can be downloaded and patched into XP easily. I haven't made the jump to exFAT, so I can't speak to its performance or fragmentation compared to NTFS.

https://www.microsoft.com/en-us/download/details.aspx?id=19364

I haven't tested standby mode on NTFS partitions, since I usually used FAT32.

As for hibernation, you can disable it. I don't use hibernation because it conflicts with my multi-OS boot.

For example, if you are in Windows 7 and you hibernate, then when you turn the computer back on you are not greeted with the multi-OS boot menu; you are forced to resume where the hibernation left off in Windows 7.

What's worse, if you have 32GB of RAM like me, it takes roughly 32GB of disk space to hibernate. No thank you. I like my hard drive space.

I'm an efficient user, and with the extra memory I have I probably don't need a swap file either, but I create a very tiny one because certain applications still need to see that it is present in order to work properly.

As for resoldering a new battery into an iPhone?

No thanks. I'd rather pop the back lid off and swap in a fully charged battery; it takes me only a few seconds. I'm not going to have a soldering iron handy in those situations where I need quick power on the go. And because I can take the back off, I actually outfitted mine with a triple-capacity battery, though I usually use a dual-capacity one on the go because of the extra weight.
Post edited January 13, 2016 by TrueDosGamer
TrueDosGamer: This is true and obvious, but I was looking for a Linux that had native FAT32 and NTFS built in, so you could install onto a FAT32 or NTFS partition from the start instead of a Linux filesystem.
Lin545: There were FAT32 and NTFS patent problems, and more recently exFAT patent problems. There is really no advantage in them. NTFS supports quicker writes on an empty filesystem, but fragments very badly.
Buy a dedicated drive and use a native filesystem. If you are not going to, please quit wasting time.

TrueDosGamer: Also a reason why Linux is left out of the loop. If the manufacturers won't let you in on it, that makes third-party support much harder, and thus Linux a poorer choice for support in this case. The manufacturer providing Linux support itself is the only saving grace. You can tweak the OS all you want, but if you can't use the hardware you really want, it defeats the purpose. I'd use Linux over Windows if Linux had full access to the hardware and performed better than Windows. Unfortunately I doubt we will ever see the day that happens.
Lin545: Please don't post generalized nonsense.

TrueDosGamer: Windows 10 is probably another nail in the coffin for would-be Linux adopters, now that most newer games are going to use DX12. The Windows 8 debacle might have been the last chance to steal some desktop users who didn't upgrade to Windows 7. In my opinion most Windows versions have gotten worse since Vista. Windows 7 only gained USB 3.0 support, which could have been put into Vista quite easily, and not until Windows 10 did they update to DX12. Other than that, it's a more bloated OS with a less efficient user interface. I'll only install W10 if I need to try out DX12, but for my regular desktop needs, no thanks.
Lin545: DX12 is feature-wise the same as OpenGL 4.5.
The Mantle API is available for Linux.
SDL and winelib are also available. Linux has a very efficient 3D render pipeline, gets a modern display server soon, and has an efficient sound server, an efficient network stack, and efficient, broad filesystem support. It combines these with the advantages of openness and flexibility. One can attach any interface to it, and can even run software on almost bare hardware with just 1-2 applications and a display server in memory.
I still don't use USB3, because I have eSATA. It takes several years for new tech to stabilize, and I am not a big fan of data corruption or segmentation faults. Windows has the advantage of out-of-kernel drivers, but those too must be reviewed and tested, which for Linux happens in kernel development (all-in-one).
Actually, it is not a waste of time: being able to install onto the same file system would help those who want to install Linux without wiping out a partition on their hard drive. Some people only have a single partition, or two if they are lucky. That prevents them from installing Linux without first backing up their partitions so they can delete and recreate them. With a Linux distro that had native FAT32/NTFS support built in, I could share a partition I created with the installation rather than reformat a partition solely for Linux. If I were to try sharing Linux with Windows, I could use a standalone hard drive for it, unhook the original hard drive, and do my tests that way, avoiding the need to share a bootloader with Linux. Even though Linux can supply its own bootloader that lets you choose the OS, I still prefer the Windows boot loader as found up to Windows 7. Once Windows 8 was introduced, the boot loader changed and added more delay, since it introduced a GUI.

As for the nonsense: I looked at your link. It appears the manufacturer created both the Linux driver and the Windows driver. What I said was that if the manufacturer did not release a Linux driver, and didn't release any information for people to create one, I would find it hard or nearly impossible to create one from scratch that performs better than the Windows driver. Perhaps if Geohot were working on the driver there might be a chance, but in most scenarios, without the necessary background on the hardware, I don't think some random person could write a Linux driver from scratch that outperforms the Windows driver created by the manufacturer.

What I would also like to see is a GTX 750 Linux driver that outperforms the Windows driver. It is the most efficient low-wattage graphics card, and definitely the one that would make me turn my head and try out Linux for games if that happened. For the older graphics cards listed at that link, even the third-party Linux driver performance was way slower than the driver Nvidia themselves released for Linux.

Comparing Linux to Windows 8 is probably not the best choice of Windows version for a benchmark.

I would have liked to see XP and Windows 7 driver performance compared to Linux in 2013.

Now, if Linux wants a chance to overthrow the Windows market share, here's a game plan.

Get the top 10 game makers to release ONLY a Linux version of their games, with no other OS or console versions.

If the fans or players are hardcore enough, you'll see them go out of their way to install and use Linux just to play the game.

Other than that, I can't see any real motivation for most people already using Windows to jump ship. It's already hard enough for Microsoft to convince users to upgrade their OS.

There are just too many applications written for Windows that people need, which might not exist on Linux, or whose counterparts might not be as good or have as many features.

OpenOffice.org was a nice alternative to Microsoft Office, but for me Linux can't do everything I need it to do. For a person who has never had a computer, though, Linux Mint or something similarly user-friendly might be a suitable alternative, if they haven't already gotten used to Windows.
Post edited January 13, 2016 by TrueDosGamer
TrueDosGamer: (snip)
a) This complicates things a lot. It is also what an average person would end up doing: learning to dual-boot and getting exposed to all sorts of this info, from which he or she easily concludes that Linux must be hard. I was a die-hard Windows user; I know what I am talking about, because I walked this path.
b) Windows was never an OS that behaved. It always rewrote the boot sector like it owned the computer, as if the PC were something embedded.
A $50 hard drive spares you the risk. Linux uses GRUB, and GRUB has supported chain-loading for ages: it loads the startup portion of another boot manager and starts it, like a chain.
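For reference, a chain-loading entry is only a few lines. A minimal sketch for a BIOS/MBR setup, assuming Windows sits on the first partition of the first disk; the usual place for hand-written entries is /etc/grub.d/40_custom, and afterwards you regenerate the menu with "sudo grub-mkconfig -o /boot/grub/grub.cfg" (or "sudo update-grub" on Debian/Ubuntu):

  #!/bin/sh
  exec tail -n +3 $0
  menuentry "Windows 7" {
      insmod part_msdos
      insmod ntfs
      set root='(hd0,msdos1)'
      chainloader +1
  }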

Yes, this is what a manufacturer should do, and how it turns out depends on the device's complexity. Webcams are better off driven by a third party (one webcam driver that supports them all, forever), whereas GPUs are better driven by the manufacturer. Open-sourcing can expand platform support, increase trust and allow much better integration into the ecosystem. Even with Nvidia, the reverse-engineered nouveau driver supports the 8800GT and older cards better than the manufacturer's driver. Specifically for GPUs, any GPU more than 5 years old is much better served by an open driver. The only possible problem is lack of the hardware itself, or hidden hardware caveats that are impossible to work around if the original documentation isn't available even to the manufacturer anymore.

By the way, since Fermi(?) Nvidia GPUs only load SIGNED firmware, and firmware is required to drive the GPU. That means Nvidia fully controls who can write a driver, and what performance and features they get. Just flipping a few bits in the firmware easily disables features or slows it down.

"GTX 750 Linux driver that outperforms the Windows driver" Its same driver, Linux version is more cut. Rest are graphics system nuances, background tasks and OS cheats.

It's a pretty good choice: it was a recent production version of Windows. In the XP era only Nvidia was feasible, with ATI being completely ignorant and Intel virtually non-existent in the x86 GPU segment.

"Now if Linux wants a chance to overthrow the Windows market share here's a game plan." says almost anyone new to Linux including me. This isn't gonna happen. MS did this with WinMo, when Android and Apple flooded mobile market. Never works, once market is taken, its taken - and releasing anything "exclusive" is against philosophy of Linux - its supposed to remove restrictions and limits. One uses Linux and similar OSes because one does not find anything comparable on more popular/other OSes, because they are such by design/designers.

Linux users are not in the business of motivating anyone; they are not a marketing agency, and they earn nothing except possible complaints. They merely offer a possibly better path.

Linux supports 80% of Windows applications, which is much more than current Windows itself supports, plus Linux applications on top.

I use LibreOffice and I prefer it over MSO. I really liked MSO XP; the rest went downhill. But it's just an office suite, and there are many. If you absolutely want to run MSO, you will always find a way, unless the vendor dictates exactly how to do it, which strongly opposes the Linux (and GNU) mentality (so-called lock-out). A lot of people are okay with lock-out, as they were with DRM; it's their choice, and they pay for it.
TrueDosGamer: (snip)
Nope, metadata is the filesystem's service data: nodes, filenames, attributes, etc. NTFS can only journal metadata, which means that if it crashes, any files that were open will be zeroed or filled with garbage. Ext since ext3 includes an option to perform full journaling, and on modern ZFS/Btrfs the whole filesystem is effectively a journal; plus they detect and correct bit rot automatically. That comes with some disadvantages, though, like fragmentation coming back (ext automatically manages node placement). FATxx is outdated and offers absolutely no advantages, though it is a "nice to have".
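On Btrfs that bit-rot check can also be kicked off by hand; a minimal sketch, assuming the filesystem is mounted at /mnt/data:

  sudo btrfs scrub start /mnt/data    # read everything, verify checksums, repair from a redundant copy if one exists
  sudo btrfs scrub status /mnt/data   # progress plus counts of corrected/uncorrectable errors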

Besides, a regular user can fully ignore the local FS, unless he is managing other users (disk quotas, filesystem rights, etc.), in which case he would probably pick something and settle down with it.

NTFS uses 4KB AFAIK, which matches the current HDD sector size. Ext does the same.
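If you want to confirm what block size a given volume actually uses, a couple of read-only checks (the mount point and device name are just examples):

  stat -f /mnt/data                                 # shows "Block size: 4096" for the mounted filesystem
  sudo tune2fs -l /dev/sdb1 | grep 'Block size'     # same figure straight from an ext2/3/4 superblock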

I know the caveats of FAT16/FAT32; I haven't used them since around 2005, unless some device reads only them, which again comes down to Windows having the most market share and hence this being the preferred filesystem. Modern embedded devices use the MTP protocol, which lets the device ignore the filesystem-support requirement entirely. Still, FAT/NTFS fragment horribly and continue to do so regardless of the device, because they prefer throughput to "behaving nicely" and leave the rest to third-party paid software.

exFAT is exPAT: it's patent-encumbered, for a reason.

I think the Blu-ray breakdown is more due to the layered disc architecture and menus.

Li-ion will explode if overheated; if you solder correctly (and don't "grill" it), it will never overheat. Other than that, Li-ion devices include some additional dedicated circuitry to dump the charge if they detect overcharging or aging batteries. It's usually connected to the batteries themselves and can be resoldered easily. Phone battery packs are usually connected by wires, and soldering a wire is pretty easy.

This wasn't meant as a replacement for a spare battery pack. It was meant for the "built-in, no user-serviceable parts inside" battery situation. Service shops can also do it. Of course a user-replaceable battery is always preferred.
Post edited January 14, 2016 by Lin545
Lin545: (snip)
rs
Post edited January 16, 2016 by TrueDosGamer
TrueDosGamer: (snip)
Lin545: (snip)
FAT is great for backward compatibility. No file system will remain permanent forever, because we cannot foresee what new limits will be reached in the future, and new limits will have to be set for a new generation. I have not tried mixing FAT/NTFS and ext3 partitions, so I'm not entirely sure whether Windows can write to Linux partitions.

But once a file system has been mass adopted, it's very difficult to get users to switch to a new one, even if it is superior. A new file system has to be tested thoroughly, because data is precious. The other issue is compatibility: can it be read by most operating systems? A lot of older non-PCs did use FAT or a variation of it, so while a new file system is nice, it may not be suitable for older machines that don't understand it.

The only reason I switched from FAT32 was the 4GB file size limitation. Otherwise I didn't see any need to switch to NTFS, since I don't require security; most of my files are just huge HD DVR recordings. The last thing I want is NTFS locking privileges to one user when, say, the OS gets corrupted or you forget the password to get back in: all those files are then effectively locked away. Thankfully NTFS lets you grant full access to everyone, so I can still read the files on another machine.

As for fragmentation, I think this will be a non-issue going forward as SSDs become more common than mechanical hard drives, and the delay caused by fragmentation may not be noticeable anymore. But if you're worried about it, I suggest you create multiple partitions: each partition that is untouched stays defragmented, and if you do need to defragment, a smaller partition is easier to deal with than a large one. It also helps to move files off a fragmented partition to another one and just quick-format the partition, which probably fixes it; then you can move the files back and they should not be fragmented. If that doesn't work, you can do a full format on the partition.

Hard drive fragmentation was a huge issue on MFM and RLL hard drives, and maybe SCSI and IDE. But SATA drives are so much faster and quieter that I just don't notice any significant issues from file fragmentation. You probably won't notice any real slowdown until the entire partition fills up, which for the common user takes far longer than it does for me: I think I plow through 2TB in just 2 days if I'm doing non-stop quad HD DVRing, while most people probably take an entire year to use 2TB.

I usually write to RAM drives for my file extracting and testing now, so I honestly can't tell you whether fragmented memory slows things down much or is simply too fast for any noticeable effect. But it should give you a good indication that SSDs probably don't suffer much from this issue. Most of the time it's the GPU catching up when I'm running programs.
Post edited January 14, 2016 by TrueDosGamer
TrueDosGamer: Between the Titan, the Intel iGPU, ATI, and Nvidia, if I had a choice between these manufacturers creating drivers for the same graphics card, I would instantly choose Nvidia. The reason is that for any of their graphics cards you will find Mac OS X, Linux, and other non-Windows drivers. However, it wasn't until the GTX 750 and the Maxwell architecture that they finally squashed the wattage issue. My previous graphics card was an ATI, not because of the manufacturer but because it was a single-slot passive card. Unfortunately it was the last one to remain single-slot, as graphics cards soon migrated to dual-slot, power-hungry designs with jet-engine fans. When Maxwell was introduced it halved the wattage of the previous generation while doubling the performance, which made it the logical choice. I'm still keeping an eye on the 1000 series to see whether it is worth buying once they release another low-wattage, high-performance card. I'm hoping they can drop the wattage below 50 watts and make totally passive single-slot cards again instead of these dual-slot behemoths. I know most people are more interested in SLI and having two or three dual-slot cards in their system, whereas I'm focusing on the other end of the spectrum. I can wait to play the latest games; I already have a huge backlog of older-generation games I haven't had time to touch.

When you say the Linux version is cut, I assume you mean reduced features compared to the Windows driver?
It depends on your goals. Nvidia is the graphics mafia for a reason. I used to have Nvidia in the past; now it's Intel or AMD, depending on how heavy the GPU demand is. Hardware support ages with Nvidia in the worst possible way. AMD and Intel are supported pretty well, unless it's very old hardware (pre-HD3000 for AMD or pre-Sandy Bridge for Intel).

Yes, it's reduced. Linux currently does not use DirectX, although there are efforts to attach it anyway, because it is just a middleware library.

Yes, I actually find Word 95 already acceptable. I am not a big fan of Ribbon, Metro or GNOME 3.
LibreOffice can export directly to PDF with all the fine-tuning options, like compression level, embedding of the ODT source and adherence to standards. It also produces good HTML.

No issues here since I use Linux: I can boot from any USB stick of any size, and it can access all filesystems, provided they were shut down cleanly (no hibernation, "fast boot", etc.). Images can be "ghosted" (which is "dd" on Linux), but must then be shrunk or grown on a size mismatch. I think there is some utility that combines the good parts of both worlds: it produces an image at the filesystem level rather than the partition level, so it's pretty portable and expandable. Yes, I used Hiren's BootCD, BartPE etc. in the old days. They are still usable for low-level hardware diagnosis, but most of that works using FreeDOS anyway.
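The dd "ghosting" mentioned above is one line each way; a minimal sketch, assuming the source drive is /dev/sdb (double-check the device name with lsblk first, dd will happily overwrite the wrong disk):

  sudo dd if=/dev/sdb of=drive-backup.img bs=4M status=progress conv=fsync
  # and to restore the image later (destination must be at least as large):
  sudo dd if=drive-backup.img of=/dev/sdb bs=4M status=progress conv=fsync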

These are obviously CLI; a quick end-to-end sketch follows the list. If you need a GUI, use GParted.
partitioning - parted
formatting - mkfs, which then calls the correct binary (mkfs.ext4, mkfs.fat, mkfs.ntfs, etc.)
labelling - depends on the FS: e2label, mkswap -L, btrfs filesystem label, dosfslabel, ntfslabel; also blkid for ID-based labels
checking - fsck, which dispatches to the right binary the same way mkfs does
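Here is that sketch, done on a scratch image file and a loop device so no real disk is touched (the /dev/loop0 name is just whatever losetup happened to print; use the name it gives you):

  truncate -s 1G scratch.img                                # empty 1 GiB image file
  parted --script scratch.img mklabel msdos mkpart primary ext4 1MiB 100%
  sudo losetup --find --show --partscan scratch.img         # prints e.g. /dev/loop0
  sudo mkfs.ext4 /dev/loop0p1                               # format the partition
  sudo e2label /dev/loop0p1 testdata                        # label it
  sudo fsck.ext4 -f /dev/loop0p1                            # check it
  sudo losetup -d /dev/loop0                                # detach when done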

Games are limited on mobile only due to hardware limitations and display-area specifics. Check out the OpenMoko project: it used the full typical Linux stack, the kernel and userspace were the same, and you could run all the typical programs. The finger and gesture actions are handled by a display-server driver (today everything has moved into the Linux kernel). On the MS side it's similar with the Windows 10 Surface, which is internally a regular x86 software stack.

I use Linux because it does not get in the way and gives a lot of choice. That is not possible with Apple or MS, by design: corporate policy dictates exactly how to run it, while with Linux it's people. If something does not run, it's either not currently possible or nobody has bothered.

Yes, you can run 80% of all-time Windows applications on Linux. If something is incompatible with Windows N, Linux can often run it. If Linux can't run it, then it's either very new or very, very complex.

Linux applications are projects (re)compiled against the GNU C library or any other Linux-compatible API. This has nothing to do with license or cost.

"I doubt this has been done on a massive scale." Why? What is the profit? If corporations do this, Linux will become a (managed) product and cease all its advantages.
Linux is not elitist priced and personality-culted like Mac, not generic and monopolist like Windows. Its much less a product, far more - ecosystem and design mentality. PlayStation 4 for example, is pretty much Linux. Its internally BSD plus Sony stack recompiled for this (and DRMed), but BSD includes ability to run Linux binaries. But,.. this is pretty much "product". Not much different from MS, or Google, or Apple.

"The only positive thing about Linux is people do have the hardware so that takes one hurdle out. " I don't understand.... Zero the hard disk, install Linux, add applications, enjoy.



TrueDosGamer: (snip)
FAT/NTFS are widely used but patent-encumbered; ext/XFS/Btrfs are a bit less widely used, but open. All are backwards compatible. Yes, they are future-proof. Windows can write to ext for sure; it's all a matter of installing a filesystem driver.

There is no problem in switching. Every FS has advantages and disadvantages, and most are fully transparent to users and data. On Linux you can use about any FS for data, and any FS that implements Unix-style rights for the root partition (your drive "C"). Compatibility is not needed, because the FS is exposed via either the generic system API or other (network) interfaces (SMB/NFS/SSHFS, MTP, etc.).

When you switched from FAT to NTFS, did applications behave differently? Only the ones that interact directly with the FS matter. This is the same.

NTFS is much better than FAT. While it fragments about as badly, NTFS offers many more features and at least has some kind of journaling. I switched to NTFS with Win2k and used it all the way up to Vista; hardly at all since then.

The reliability cycle is well known and pretty much the same. Ext, XFS, ZFS and F2FS are very reliable, and their on-disk layouts are frozen. Btrfs is newer, and should be reliable.

SSDs and RAM are not linear I/O devices, hence not susceptible to most fragmentation (RAM fragmentation can cause trouble in some conditions, see VRAM fragmentation and large textures, but that is usually solved by memory-management logic); I think they also remap blocks internally for wear management and such. Linear-reading devices struggle badly with fragmentation no matter the interface. Even with read/write-cycle optimization, FAT/NTFS typically fragment over 50% over time, and by their nature keep fragmenting afterwards.

This is because "perfectly defragmenting" outside of DefragAPI takes a lot of time - like 6-8 hours with Norton Speed Disk 6, and makes no sense in online mode. Then just after a few days of heavy use, the FS is full of holes again - even if some kind of zone placement is used (ie optimization). So, DefragAPI goes for "balanced" approach, which means periodical or background defragmenting. Ext simply manages the block placement on the fly - it does not litter. This makes it less fast, but usually not needing any further optimizaton. This is just FS difference.

What is really important on big drives is bit rot, and only Btrfs or ZFS can really manage that. So for a server I would prefer these.
TrueDosGamer: (snip)
Lin545: (snip)
My browser crashed, so I lost the first draft of my response before I could save it, but to summarize:

The MS patents on NTFS will always be there, which may seem like a negative, but as long as you are using an OS that supports NTFS it isn't an issue. I really resisted switching to NTFS for as long as possible, maybe even up to 2011, and only for Vista installation; I didn't fully adopt NTFS until around 2013-2014, when I began massive HD DVRing and got sick of manually stopping recordings at the half-hour mark, or sometimes forgetting to. Only Vista and later require an NTFS partition for installation, but they still let you use FAT32 partitions for data. Most of the internet has fully adopted large files over 4GB today, so NTFS is now a necessity.

If Linux did have built-in native NTFS and FAT32 support, that would change things a bit for cross-platform adoption, making the transition to Linux easier. Let me ask you: if you connect a bunch of FAT32 and NTFS hard drive partitions to, say, Linux Mint, does it read and write those partitions natively, or do you need some add-on to patch the kernel because of the MS patents? If you can only read FAT32 and NTFS but can only write to a Linux file system, that would actually deter people from fully switching to Linux if they predominantly use Windows and Mac OS. It's great to be able to read FAT32 and NTFS files, but writing is also essential.

Defragmenting hard drives is a waste of time in my opinion. By the SATA era the cost of storage had dropped dramatically from the days of paying $500 for 5MB of hard drive space. IDE hard drives were probably the last line of drives that really benefited from defragmenting every now and then. Today you can buy 2TB for $70, and in 10 years you will probably get 20TB for $50. Most people can now afford to buy a new hard drive once they run out of space, so defragmenting is a waste of time with little benefit, and it also puts your data at risk should the power go out unexpectedly. Even if a file system has better defragmentation built in, that is not enough to justify switching to a less-supported file system. Most of the time, fragmentation will only be an issue for someone stuck on one hard drive with one partition, constantly deleting and creating files, which may eventually hurt the drive's performance until they fully defragment.

One reason I stopped defragmenting hard drives is how long it takes, and even with today's higher drive bandwidth I still wouldn't do it. Access speeds are at the point where even a badly fragmented hard drive will still play HD video smoothly without stuttering; for the most part I barely detect any stuttering on Blu-ray movies, which use the most bandwidth. Going back to MFM and RLL hard drives, and maybe SCSI, defragmenting would noticeably speed up loading programs. I remember those days, and I did use PC Tools; maybe because hard drive capacities were low enough (sub-100MB), people were willing to wait up to an hour or even a day to fully defragment their entire drive. Even up to the Windows 98SE days I did it a few times.

And you are correct that after you fully defragment, you will see holes again even within the same day, so in the end it never ends. Perhaps the OS could be better designed, but even then I can't see how it could keep things 100% defragmented in real time. If I'm constantly writing data from four different DVR windows to the same hard drive, I just don't see how you can avoid fragmentation: you would have to anticipate the full size of each recording ahead of time, and you can't, since I might stop one recording sooner than another or keep it going. Even if the OS laid out four different regions of the hard drive to start writing to, it can't anticipate when each file will stop, so there will always be fragmentation afterwards. And if I delete some files, that opens up more gaps, and the freed-up space gets filled first.

Your comment about Windows writing to Linux partitions requires the user to install a driver or add-on. Not everyone has administrator rights on the computer they are using, especially on computers they don't own, say at a library or as a guest on someone else's machine.

If I were to bring along 400TB of data with me, I would not want it all on Linux ext3 or ext4 partitions, because wherever I'm going I want instant access to the data. FAT32 probably has the broadest support; even Mac OS has no issues reading and writing those partitions. NTFS is still the preferable choice if you're dealing with files larger than 4GB, which probably didn't become an issue until around 2008, when HD DVD and Blu-ray became the standard for HD movies and 25GB/50GB discs were the norm. Even the Sony PS3 accepts only FAT32 and no NTFS; they probably did that to keep the PS3 from playing pirated Blu-ray rips.

Also, 20 years from now, if you had stored your backups on NTFS, I bet you would still be able to access all your archived data on any modern machine. Whether ext3 or ext4 will be the preferred file system for backup media is more questionable. USB will most likely still be around; following the trend, say we are at USB 7.0 in 2036, it should still be backward compatible down to USB 1.0.

So while newer or alternative file systems may be superior, that doesn't necessarily make them the best choice. Do you remember the Sony Betamax vs VHS format battle? VHS eventually won even though its image quality was inferior, because you could record up to 6 hours of poorer-quality footage versus Betamax's 60 to 90 minutes, and it cost less.

I believe the file system of preference today is still the dated NTFS (not necessarily the best choice), because of mass adoption; Windows dominating the market share played a huge part in that. Even today, most people sharing data between Mac OS and Windows will still prefer FAT32 and NTFS partitions over ext3 and ext4, and that will probably remain the case. If Windows 10 had introduced a new file system superseding NTFS from the beginning, I'm sure that would be the new standard in 20 years, but exFAT was the only attempt, and it didn't quite take off. NTFS still has a lot of life in it despite its age; only when we hit NTFS's file size limits will it be considered obsolete. At the moment the maximum file size of 16 EiB minus 1 KiB is probably not going to be hit for a long time, and even as we approached such a limit, programs would just truncate files as a workaround, similar to the 1GB chunks used on FAT partitions.
Post edited January 16, 2016 by TrueDosGamer
Lin545: (snip)
Yeah "bit rot" probably can be a problem but not necessarily on just large hard drives. I would say MFM and RLL had the worst case. You write it and then one day you access it you get the dreaded CRC error or Not ready reading drive X Abort, Retry, Fail? IDE drives also had this problem over time. But so far in the SATA hard drives, which took me until 2012 before I finally made the switch over from IDE I've not seen any case of this happening. However one of my friend's laptop hard drive that I later used in a desktop did suffer some form of "bit rot" where the file could not be copied without a CRC error. Fortunately it was a video file so I was able to edit out around the missing / damaged bits and create two video files around the damage. Now it could be due to constant banging over time of the laptop or age/usage and since I didn't fully have it in my possession I have no idea of the history of the drive.

As for my own SATA hard drives, which are mainly 2.5", I've had no case of "bit rot" or CRC errors yet. Only within the last year did I acquire a 4TB external hard drive enclosure, which has a 3.5" SATA drive inside. I know that above 4TB there was a shift to SMR media, which I'm avoiding because the technology is still too new to be considered reliable for my own usage, and I've read a lot of negative reviews about data corruption and slower access times.

However, just recently we are seeing larger PMR hard drives.

http://arstechnica.com/gadgets/2016/01/seagate-unveils-its-own-10tb-helium-filled-hard-drive/

My only concern is what happens to the data if the drive is somehow punctured and the helium leaks out: is the data lost, or still salvageable by the manufacturer by resealing the drive with helium or moving the platters to a new chassis? That might be costly. So I'm still looking for the largest air-filled PMR hard drives as long as I can buy them. By the time helium PMR or SMR irons out the failures into something reliable, I will probably be ready to make the switch. But who knows whether SSDs will drop in price enough, or below mechanical hard drives, so that it no longer makes sense to buy them.

However, I wouldn't rely on just a file system to protect my data from "bit rot". Btrfs or ZFS might be a temporary solution to put off the inevitable; between the two, I would choose ZFS if I were on Linux.

If you're really serious about avoiding loss of valuable data, I recommend a "category" separation backup strategy. In my case I have several hundred TB of HD DVR video, but I'm not worried if those files go bad, compared to, say, a text file that took hours of typing. I could probably fit all the truly important data I can't afford to lose on something like a 500GB hard drive. I would then back that data up to two other identical hard drives, stored at a different location from the primary drive.

Then I'd make three Blu-ray backup sets of all that data and also store them at different locations. Optical media has a better chance of surviving EMP damage (from the sun or magnets), impact damage (a hard drive falling from a height), or bit rot on mechanical hard drives. I'm not sure whether SSDs suffer "bit rot" as badly as mechanical drives; I would guess SSDs might be more reliable, since the mechanical wear-and-tear component is removed. You could also back the data up on tape as a final archive, since tapes are considered more reliable than mechanical hard drives these days. However, having used Colorado tape drives back in the day, even there fatigue seems to eventually set in when retrieving files, and I have also seen much older consumer backup tapes fail with the tape itself severing, maybe due to heat or wear and tear during retrieval.

From past history, I've had MFM and RLL drives fail completely. I've also seen IDE drives fail totally, though not as often. As for SATA hard drives, I've yet to see a complete failure of one I bought and owned over its entire life cycle. Same for SCSI: I haven't seen one I owned fail yet. I used one for a BBS and it never once died on me or had a data error, and it was over 1GB in capacity, which was a lot of space back then.

Maybe a safe way to refresh the bits every year is to copy 30GB at a time to a large RAM drive and then recopy that back into a new folder on the hard drive. If you have some sort of comparison software you can run a data integrity check before deleting the original copy.
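
As a minimal sketch of that refresh-and-verify idea in Python (the paths are hypothetical, and the staging folder stands in for the RAM drive):

import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    # Hash in 1 MiB chunks so huge files never have to fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            digest.update(block)
    return digest.hexdigest()

def refresh_copy(src: Path, staging: Path, dest: Path) -> None:
    # Copy to the staging area (the RAM drive), write a fresh copy back to the
    # hard drive, and only call it good if all three hashes agree.
    shutil.copy2(src, staging)
    shutil.copy2(staging, dest)
    if not (sha256_of(src) == sha256_of(staging) == sha256_of(dest)):
        raise IOError(f"hash mismatch for {src} - keep the original!")

# Hypothetical paths - adjust to your own layout:
# refresh_copy(Path("D:/archive/video.mkv"),
#              Path("R:/staging/video.mkv"),
#              Path("D:/refreshed/video.mkv"))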

If you want to take it a step further, you could use something like WinZip or WinRAR to back the files up into one large archive, or into chunks, rather than thousands of tiny files, which makes the integrity slightly easier to verify. The drawback is that if the WinZip or WinRAR file itself encounters bit rot, you're out a huge amount of data. If bit rot hits one of 100,000 loose files, at least the loss is isolated to one file out of 100,000, which isn't too bad.
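
A middle ground, sketched roughly below, is one archive per folder plus a checksum pass, so damage stays isolated to a single archive instead of the whole backup. This uses Python's built-in zipfile module, and the paths are made up:

import zipfile
from pathlib import Path

def archive_per_folder(root: Path, out_dir: Path) -> None:
    # One .zip per immediate subfolder, so bit rot in a single archive only
    # costs that folder instead of the whole backup.
    out_dir.mkdir(parents=True, exist_ok=True)
    for folder in (p for p in root.iterdir() if p.is_dir()):
        with zipfile.ZipFile(out_dir / f"{folder.name}.zip", "w",
                             zipfile.ZIP_DEFLATED) as zf:
            for file in folder.rglob("*"):
                if file.is_file():
                    zf.write(file, file.relative_to(root))

def verify_archives(out_dir: Path) -> None:
    # testzip() re-reads every member and checks its stored CRC; it returns
    # the name of the first damaged member, or None if everything is intact.
    for archive in sorted(out_dir.glob("*.zip")):
        with zipfile.ZipFile(archive) as zf:
            bad = zf.testzip()
            print(archive.name, "OK" if bad is None else f"corrupt member: {bad}")

# Hypothetical paths:
# archive_per_folder(Path("D:/important"), Path("E:/backups"))
# verify_archives(Path("E:/backups"))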

Even the cloud is vulnerable to bit rot, is harder to check for it, and is probably a bit more dangerous to rely on than backing up your own data privately. Who knows whether a cloud storage company will one day shut down unexpectedly. I'd rather not deal with the headache of recovering lost data when I can have my own backups easily accessible.
Post edited January 16, 2016 by TrueDosGamer
avatar
TrueDosGamer: (snip)
"Browser crash" >> Textarea Cache (or equivalent) + clipboard manager =)

"Read/write"It supports create, check, read and write for FAT16, FAT32 and NTFS. But specifically for NTFS, if *this* filesystem is suddenly in "unclean" state, write access is disabled. "Unclean" state is specifically caused by Windows shutting down in hibernation. Working externally on such partition is big threat, hence this is a security measure - not shortcoming.

On Windows I defragmented the drives and stuff became much quicker. On Linux, for example, BTRFS may fragment because of how it's built internally, so there is also a defragmenting utility. Of course, for random-access devices defragmentation makes no sense except in some rare cases, as mentioned before, but linear-access devices benefit from it.

My first ever hard drive was from Connor, 180 megabytes. It still works! (Obviously it's full of bad sectors and has typical senility symptoms.)

"Bring along 400TiB of data" Oh, believe me, you should. Some filesystems are simply better and are just as reliable. BTRFS is reaching this state soon, so you'll probably see move from Ext. The same as it was with FAT16>FAT32> NTFS switch. Not being an admin is an excuse ;) filesystem driver is just a piece of code and filesystem - piece of data.

"Sony forces FAT32 filesystem" The cheaper the bridge, the bigger the salary of engineer. ;)

"In future say 20 years" Ext1 was implemented in April 1992. Today is 2016. Ext4 driver can read it.

"Do you remember the Sony Betamax vs VCR video format battle?" Yes I do, but here is one different thing. If Linux filesystems would be proprietary, then yes - invest in wrong one, company cuts support, the end. But they are open. There is only way they can stop to be supported - they are not used anymore; and that may only happen, if something technologically better appears.

ExFAT would succeed if it played by the rules - meaning, if it were technologically competitive and free from patents (well, sorry about that). But isn't that how MS has made its money the whole time - copy-paste, give it away and milk the licenses? Everything works fine, until one day the customer realizes he has been put into a restricted cage and can't escape without pain.

Both ZFS and BTRFS automatically check CRCs and automatically correct these errors. If you have spare hardware around, you can easily install "Openmediavault" on your network and enjoy the features. From a stability viewpoint, it's very stable.
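
Very roughly, the mechanism is like this toy sketch - nothing like the real ZFS/BTRFS code, just the idea of a checksum verified on every read plus a redundant copy:

import zlib

def store(block: bytes) -> tuple[bytes, int]:
    # Keep a CRC32 next to every data block, the way a checksumming
    # filesystem keeps a checksum in the block's metadata.
    return block, zlib.crc32(block)

def read(block: bytes, crc: int, mirror: bytes | None = None) -> bytes:
    # On read, recompute the CRC; on mismatch, fall back to a redundant copy
    # if one exists, otherwise complain instead of silently returning bad data.
    if zlib.crc32(block) == crc:
        return block
    if mirror is not None and zlib.crc32(mirror) == crc:
        return mirror  # "self-healing": the intact copy from the second device
    raise IOError("checksum mismatch and no intact copy available")

good, crc = store(b"important bytes")
flipped = bytes([good[0] ^ 0x01]) + good[1:]   # simulate one flipped bit
print(read(flipped, crc, mirror=good))          # b'important bytes'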
avatar
TrueDosGamer: (snip)
avatar
Lin545: "Browser crash" >> Textarea Cache (or equivalent) + clipboard manager =)

"Read/write"It supports create, check, read and write for FAT16, FAT32 and NTFS. But specifically for NTFS, if *this* filesystem is suddenly in "unclean" state, write access is disabled. "Unclean" state is specifically caused by Windows shutting down in hibernation. Working externally on such partition is big threat, hence this is a security measure - not shortcoming.

On Windows I defragmented the drives,.. stuff became much quicker. On Linux, for example, the BTRFS may fragment because of how its built internally, so there is also a defragmenting utility. Of course, for matrix access devices, defragmentation makes no sense except in some rare cases - as mentioned before, but linear-access devices benefit from it.

My first ever harddrive was from Connor Electronics, 180 megabytes. It still works! (obviously its full of bads and has typical senility symptoms).

"Bring along 400TiB of data" Oh, believe me, you should. Some filesystems are simply better and are just as reliable. BTRFS is reaching this state soon, so you'll probably see move from Ext. The same as it was with FAT16>FAT32> NTFS switch. Not being an admin is an excuse ;) filesystem driver is just a piece of code and filesystem - piece of data.

"Sony forces FAT32 filesystem" The cheaper the bridge, the bigger the salary of engineer. ;)

"In future say 20 years" Ext1 was implemented in April 1992. Today is 2016. Ext4 driver can read it.

"Do you remember the Sony Betamax vs VCR video format battle?" Yes I do, but here is one different thing. If Linux filesystems would be proprietary, then yes - invest in wrong one, company cuts support, the end. But they are open. There is only way they can stop to be supported - they are not used anymore; and that may only happen, if something technologically better appears.

ExFAT would succeed, if it would play by the rules. Means, it would be technologically competitive and free from patents (well, sorry about that). But isn't its how MS used to make the money the whole time - copy-paste, giveaway and milk the licenses? Everything works fine, until someday customer realizes he has been put into restricted cage and can't escape without pain.

Both ZFS and BTRFS automatically check for CRC and automatically correct these errors. If you have a spare hardware around, you can easily install "Openmediavault" in your network and enjoy the features. From stability viewpoint, its very stable.
I just use Notepad and paste it in manually. It's quick, and if the browser crashes unexpectedly I can't expect an addon to save whatever I was typing when I launch it again. Sometimes these browser addons break after a new browser version comes out, or they require JavaScript to be enabled at the time. I tend to toggle JavaScript on and off when the computer gets sluggish; when you've got 100 tabs open at once, bad code on one webpage can skyrocket the CPU usage, and toggling JavaScript off and then on again stops that nonsense.

I couldn't find a link to this Linux program "Read/write" you talked about.

However, it looks like Microsoft is at it again and has found a new successor for NTFS. exFAT and FAT64 never really caught on, but ReFS looks like it's going to do what you are doing on Linux, so if data rot is your beef with NTFS then ReFS might be the Windows-equivalent solution. You probably won't be using it on Linux due to patent concerns, but if Microsoft has its way it will be the true successor of NTFS in the distant future. For myself, I'm just going to stick with NTFS until I require an OS that won't use it anymore, and I'll consider ReFS then; for my personal needs it's doubtful I'll be using anything beyond NTFS in my lifetime the way things look so far. I don't think I'm going to hit NTFS's maximum file size limit of 16 EB any time in the next 50 years, but who knows - we might have 3D holographic video files in 20K resolution, and perhaps then ReFS will be mandatory.

https://en.wikipedia.org/wiki/ReFS

If you defragmented your hard drive, whatever speed gains you got were probably minimal. If I had to defrag my hard drive I'd focus on the OS files only, so that when it boots it isn't hunting for data all over the place. That's why I use small partitions instead of one large partition - it speeds up the process. As for the other files you constantly save and delete, it would be a waste of time to have a background process constantly moving data chunks around, and if the power went out unexpectedly a file might get corrupted. I prefer the OS to just write the file and leave it alone; the extra wear and tear on mechanical drives will probably reduce the MTBF. Just get some 2TB SSDs when they drop in price over the next few years and you'll never have to deal with defragging anything. The increase in speed from defragging an SSD would probably be insignificant anyway.

I still have a few Connor hard drives, but usually the small-form-factor IDE type found in laptops.

You'll have to get permission from the IT department to install that ext4 filesystem driver for Windows. Try explaining to them why you want this installed on every machine. Otherwise, if you show up with 400TiB of data at a machine that doesn't have the ext4 driver installed, that won't be a happy sight. :)

If they included it with Windows 10 that would be a different scenario. Even Zip support was not integrated into Windows until XP; I had to manually install WinZip on Windows 2000.

Like I said, FAT32 did not become obsolete, just impractical for me once HD video files kept breaking the 4GB file size barrier. Most standard files we create aren't going to be that large, and even software installers can be chunked into sub-4GB parts if necessary. HD videos can be chunked as well, but it's easier to keep a video as one single file than to split it into many parts. Other than that, if FAT32 had become FAT64 or exFAT and launched with XP, things would be different now: people would be using FAT64 or exFAT instead of NTFS. It did not happen, so FAT and NTFS were the only file systems Windows supported natively. exFAT can be added to XP by patching; it's a shame they didn't try to include it in XP SP1. They were probably hoping to license exFAT to companies for money, but if they truly wanted consumers to use it they should have just included it with Windows XP so it could gain popularity over time. On the bright side, Vista SP2 includes exFAT, and so do Windows 7 and later.
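
For the chunking part, a rough Python sketch of splitting a big file into FAT32-safe pieces and joining them back (the paths, part naming and safety margin are my own assumptions):

import shutil
from pathlib import Path

CHUNK = 4 * 1024**3 - 1024**2   # stay safely under the 4GB FAT32 file-size limit

def split_file(src: Path, out_dir: Path, block: int = 64 * 1024**2) -> None:
    # Write src as src.part000, src.part001, ... each just under 4GB,
    # streaming in 64MB blocks so nothing huge sits in memory.
    out_dir.mkdir(parents=True, exist_ok=True)
    part = 0
    with src.open("rb") as f:
        while True:
            out_path = out_dir / f"{src.name}.part{part:03d}"
            written = 0
            with out_path.open("wb") as out:
                while written < CHUNK:
                    data = f.read(min(block, CHUNK - written))
                    if not data:
                        break
                    out.write(data)
                    written += len(data)
            if written == 0:          # nothing left to read - drop the empty part
                out_path.unlink()
                break
            part += 1

def join_file(parts_dir: Path, name: str, dest: Path) -> None:
    # Concatenate the parts back, in order, into a single file.
    with dest.open("wb") as out:
        for part in sorted(parts_dir.glob(f"{name}.part*")):
            with part.open("rb") as f:
                shutil.copyfileobj(f, out)

# Hypothetical usage:
# split_file(Path("movie.mkv"), Path("F:/fat32_stick"))
# join_file(Path("F:/fat32_stick"), "movie.mkv", Path("movie_rejoined.mkv"))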

For now most consumers are probably on NTFS and might consider switching to ReFS as Windows 10 gains popularity and succeeds Windows 7.

ZFS and BTRFS are still popular on Linux systems because they're open source with no patent headaches. So at least you can avoid Windows as long as possible. :)
Post edited January 22, 2016 by TrueDosGamer
avatar
TrueDosGamer: (snip)
What do you mean, at least I can avoid Windows as long as possible? It's like saying "You can avoid the iPhone as long as possible" to an Android user. Of course I can. Forever, pretty much.

"For now most consumers are probably on NTFS and might consider switching to ReFS"
Consumers don't get to decide what to use - it's take it or switch OS. The lead architect decides which features the system has. If the system is open and somewhat stable, then others may create extensions. MS just does what it considers right and then cancels support for older versions - whenever sales profits outweigh the reputation damage. Don't tell me MS worries about reputation - it defines reputation. If you use it, comply or switch OS.

ZFS and BTRFS are production-quality filesystems for data centers. ReFS is a proprietary clone whose only advantage is better integration into the Windows ecosystem, an advantage hardly anyone will use except maybe on drive "C:". So I take it for granted that newer Windows will just silently force ReFS for new installations and make any alternative hard to use. Like they always do (Windows XP forcing NTFS if the partition is more than 4(?)GiB, "because NTFS is more efficient").

Sleepers and compliers are most of what the Windows userbase has been made of for ages, and that has also been their policy - because this behavior is the most profitable to them, and because of those users, MS still exists. But take Google or Apple - that's pretty close to the same policy. If consumers are not brainwashed into being compliers, then they are taken into the corporate cult.

ZFS does have licensing issues, because Sun - the company which acted incorrectly toward Linux and was destroyed by it - instructed its engineers to create a license that would be explicitly incompatible with the GPL. That means ZFS code may not be integrated into the Linux kernel, though it works easily as a kernel module or userspace driver.

"Like I said FAT32 did not become obsolete" Whats the advantage of FAT32 to Ext2 or even NTFS? Its like using 1.44 floppies claiming they are not outdated.

" People would be using FAT64 or exFAT instead of NTFS" NTFS supports everything that FAT64 or exFAT support. FAT is still alive only because of how primitive it is, embedded storage and devices is the only reason and goal of its existence. FAT and NTFS exist only with Windows world or anything that might connect to it - much more advanced technology has been available since ages. Install the driver and use. Its like using Firefox instead of IE.
But this conflicts with MS vision - having full control over platform to make money. They even developed ugly TFAT.

"If you defragmented your hard drive whatever small speed gains you had is probably minimal. " Defragmentation brings major performance improvements. Like I said - HDD like floppies, like VHS, like Vinyl - are sequential (linear) access storage. Keeping file reduces the read/write head positioning delays. Typical linear read speed is 40MiB/s. Typical random read speed is 0.5MiB/s.
Advanced defragmentation software uses some sort of grouping - and sorts files based on modification date (modification frequency) and whether its a system or user file and file size. User files with biggest size and system files go all way to the end of hard-drive, where system files known to be modified often and small and frequently modified user files - go near start. There is empty place in the middle exactly for new and modified data. This keeps amount of defragmentation minimal. Still, the price for fragmentation is less, when its done periodically - hence automatic defragmentation in the background is the way to go.
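
As a toy sketch of that grouping idea (not any real defragmenter's code; the file names and the 30-day "hot" cutoff are just assumptions for illustration):

from dataclasses import dataclass

@dataclass
class FileInfo:
    name: str
    size_mib: int
    days_since_modified: int   # lower = modified more often

def plan_layout(files: list[FileInfo], disk_mib: int) -> None:
    # Toy placement policy: small, frequently modified files packed from the
    # start of the disk; big, rarely modified files packed from the end;
    # the free gap left in the middle for new and growing data.
    hot = sorted((f for f in files if f.days_since_modified < 30),
                 key=lambda f: f.size_mib)
    cold = sorted((f for f in files if f.days_since_modified >= 30),
                  key=lambda f: f.size_mib, reverse=True)

    offset = 0
    for f in hot:
        print(f"{offset:>8} MiB  {f.name}")
        offset += f.size_mib

    end, placed = disk_mib, []
    for f in cold:
        end -= f.size_mib
        placed.append((end, f.name))
    print(f"{offset:>8} MiB  -- free gap of {end - offset} MiB --")
    for off, name in sorted(placed):
        print(f"{off:>8} MiB  {name}")

plan_layout([FileInfo("registry.dat", 64, 1),
             FileInfo("notes.txt", 1, 2),
             FileInfo("movie.mkv", 8000, 400),
             FileInfo("install.iso", 4000, 900)],
            disk_mib=20000)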

So there is no need for extra partitions just to fragment less.
Also, the defragmentation process is more efficient when there is some free space. If I am not mistaken, there should be at least 20% free space or the process will take considerably more time to finish.

Now, FAT and NTFS could completely avoid fragmentation through smart block management - like Ext does - but they chose not to, in exchange for speed advantages. Take it or switch OS.

Fragmentation is completely irrelevant for random-access storage like SSDs or memory. There is only one situation where it can have a big impact - one huge data block that may not be split, with no such contiguous block available. That does not apply to filesystems, however.

"You'll have to get permission from the IT department to install that ext4 filesystem driver for Windows. Try explaining to them why you want this installed on every machine. "
Because you value your data.

"If they included it with Windows 10 that would be a different scenario. Even Winzip was not integrated in Windows 2000 until XP. I had to manually install Winzip for Windows 2000." Why would you even use Winzip? First, 7z with its LZMA is much more advanced and available completely free. Second, Windows already has "Ziped folders", which is essentially Zip archiver by file manager.

'I couldn't find a link to this Linux program "Read/write" you talked about.' It's not a program, it's an ability. Linux supports read/write of FAT16, FAT32 and NTFS.
Post edited January 25, 2016 by Lin545
avatar
TrueDosGamer: (snip)
avatar
Lin545:
What do you mean, at least I can avoid Windows as long as possible? It's like saying "You can avoid the iPhone as long as possible" to an Android user. Of course I can. Forever, pretty much.

I meant because those Linux file systems are patent free. In addition, if all the software you need exists on Linux, with nothing you rely on that is Windows-only, then that would also be true. In my case I can't say 100% of the software I need or use on Windows is found on Linux.



Like they always do (Windows XP forcing NTFS if the partition is more than 4(?)GiB, "because NTFS is more efficient").

Hmmm, I always create my own partitions in DOS, so I never encountered XP forcing me to use NTFS when my partitions were larger than 4GiB. I do know XP won't format a partition over 32GB as FAT32, so you would need to do that in DOS or another partition manager. NTFS is "NOT" required for an XP installation. However, Vista and higher do require NTFS for the OS installation.



"Like I said FAT32 did not become obsolete" Whats the advantage of FAT32 to Ext2 or even NTFS? Its like using 1.44 floppies claiming they are not outdated.

FAT32's advantage is backward compatibility for OS installation, and readability by practically any OS out there. Even a PlayStation 3 can read FAT32, but it can't read NTFS due to MS patents. Why FAT32 is readable everywhere even though it's also an MS-patented file system escapes me.

FAT32 is a file system, not a capacity issue. You can create partitions up to 2.2TB with FAT32 under MBR, and a file system can be used on older or newer technology, mechanical or SSD. The only major limitation is the 4GB file size limit, and of course most people get around it with software that chunks files into 1GB or up-to-4GB parts. Personally I use FAT32 to store Ghost images; I can get back to a working OS easily by restoring a Ghost image of my boot partition, which can be as small as 32MB. If I somehow formatted C: as a Linux partition, I don't know for certain that Ghost could read it. Second, if you wanted to install XP or some other Windows OS onto a Linux partition, I don't think it would allow it, because it can't understand or interpret that partition. Like you said, you must install a driver after the OS installation stage before it can read and write Linux partitions. This is another Catch-22 of file systems.
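
For reference, that roughly 2.2TB ceiling falls out of MBR's 32-bit sector addressing with traditional 512-byte sectors - a quick back-of-the-envelope check in Python:

# Where the ~2.2TB MBR ceiling comes from: the partition table stores 32-bit
# sector counts, and sectors are traditionally 512 bytes.
max_bytes = 2**32 * 512
print(max_bytes, "bytes =", round(max_bytes / 1000**4, 2), "TB")   # ~2.2 TB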



" People would be using FAT64 or exFAT instead of NTFS" NTFS supports everything that FAT64 or exFAT support. FAT is still alive only because of how primitive it is, embedded storage and devices is the only reason and goal of its existence. FAT and NTFS exist only with Windows world or anything that might connect to it - much more advanced technology has been available since ages. Install the driver and use. Its like using Firefox instead of IE.
But this conflicts with MS vision - having full control over platform to make money. They even developed ugly TFAT.

This has always been the problem with adoption: even if something superior comes along, not everyone will jump on board. I envision the same issue with NTFS-to-ReFS adoption - most people will be hesitant to switch over. Look how long it took me to adopt NTFS fully, and mine was out of necessity for large file sizes. About 5 years ago and earlier, most hard drive manufacturers preformatted their external drives as FAT32, but now they use NTFS on drives larger than 500GB. Would they ever preformat them as ext4 or another Linux file system? The answer is no. People want to be able to use their drive right away, not repartition and reformat it, which can take up to 24 hours for a full reformat of 2TB - I've done it myself, reformatting an NTFS drive as FAT32. These days NTFS is preferable out of the box. Will there be a day they ship ext4 or ReFS? Hard to say; 10 years from now who knows what file system will be the new standard. It could take 10 years for ReFS to be adopted even if people decided to switch. My thought is most people will stay with NTFS despite its age, many for compatibility or convenience. However, I do agree that for data storage or servers ZFS and BTRFS would be a better choice, because those need to ensure uptime and stability.



"You'll have to get permission from the IT department to install that ext4 filesystem driver for Windows. Try explaining to them why you want this installed on every machine. "
Because you value your data.

You might value your data, but the IT department would most likely frown upon converting all the computers to support a new file system when one is already in place, however well intentioned the move. The only way I could see it adopted is if they started with the new file system from the beginning, avoiding the need to convert anything.



"If they included it with Windows 10 that would be a different scenario. Even Winzip was not integrated in Windows 2000 until XP. I had to manually install Winzip for Windows 2000." Why would you even use Winzip? First, 7z with its LZMA is much more advanced and available completely free. Second, Windows already has "Ziped folders", which is essentially Zip archiver by file manager.

The Zip format is the standard; back in the BBS days it was PKZIP. Even if newer, more advanced compression formats are available, they are no longer as important. ARJ was a better compression algorithm back in the day than PKZIP, but PKZIP was more popular; ARJ made good inroads because space was at a premium and every byte saved was worthwhile. Today, when you can buy 2TB for $70, the choice between compressing with WinZip, RAR, or 7z is not as important. However, if you compress with classic Zip you can instantly open the file in Windows XP and later without needing a separate decompressor. Personally, I just use WinRAR these days because most of what I decompress comes in that format. For my own use I might choose an older version of WinZip, or WinRAR if the space savings are significant. Sometimes faster decompression outweighs an extra 1% of compression. It's been a while since I compared all the compression formats, so who knows whether in the future yet another superior compression algorithm will wipe the floor and take the crown.
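
If you ever want to redo that comparison quickly, here is a rough sketch using Python's built-in zlib (Deflate, i.e. classic Zip) and lzma (the algorithm behind 7z) on a sample file of your choice - the file name is hypothetical:

import lzma
import time
import zlib
from pathlib import Path

def measure(name: str, compress, decompress, data: bytes) -> None:
    t0 = time.perf_counter()
    packed = compress(data)
    t1 = time.perf_counter()
    assert decompress(packed) == data        # round-trip sanity check
    t2 = time.perf_counter()
    print(f"{name:5s} {len(packed) / len(data):6.1%} of original size, "
          f"compress {t1 - t0:5.2f}s, decompress {t2 - t1:5.2f}s")

data = Path("sample.bin").read_bytes()       # hypothetical sample file
measure("zip",  lambda d: zlib.compress(d, 9), zlib.decompress, data)   # Deflate, as in classic Zip
measure("lzma", lambda d: lzma.compress(d),    lzma.decompress, data)   # LZMA, as used by 7z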



'I couldn't find a link to this Linux program "Read/write" you talked about.' It's not a program, it's an ability. Linux supports read/write of FAT16, FAT32 and NTFS.

I thought NTFS was MS-patented, which was your complaint about using that file system.

If you could install Linux onto NTFS partitions, that would be a great advantage for Linux. From what I understood, you can only install a 3rd-party driver to read and write NTFS in Linux, but not install Linux onto an NTFS partition?
Post edited February 02, 2016 by TrueDosGamer