Then you won't be interested in MyDefrag. It'll take a lot less time to turn those reds blue, and it does it quietly in the background so you can get on with other things.
I used to have a Windows computer that needed to be defragmented every now and then. That was in the beforetime.
hedwards: Not really; a heavily fragmented HDD can definitely cause a computer to get extremely sluggish if one hasn't had the foresight to set the swap file to a constant size.

Also, it forces the computer to spend a lot of time hunting for the scattered parts of files rather than reading each file in one go. Sure, you don't really need to do it if fragmentation is under about 5%, but beyond that it does have a noticeable effect on performance.

The other thing is making sure that there is at least 20% of the disk empty.
Antimateria: Of course, if it gets as bad as in that gif. But saying you're supposed to defrag after installing a game if things seem sluggish is a bit of an overstatement.
I use Piriform's defragger because it lets you choose to defrag just part of the disk. That 20% empty space is quite hard to keep: I have two hard drives, a 160 GB (primary) and an 80 GB, and new games take up so much space.
I have read that there is some logic to keeping the page file at a fixed size, with the same min and max, at about 1.5x RAM (or something), rather than the default that keeps resizing itself. Any truth to that?
So, if my hard drives are slow and I had 8 GB of RAM (the maximum with this motherboard), would there still be any use for the page file?
As for turning off services, I don't remember seeing much difference when I was tinkering with that.
So I sometimes wonder whether I should just leave everything at its defaults in Win 7.
That was a mouthful.
The thing about the pagefile is, it should never be used, but it has to be there anyway. It is to Windows memory management what that old blanket is to Linus.

When Windows allocates memory, it reserves pagefile space, just in case it ever has to write that memory out to the pagefile. The only time it actually writes something to the pagefile is if it runs so short on memory that it has to page out "dirty" read-write pages. If this happens, Windows performance will already have become so catastrophically sluggish that you will have given thought to Task Managering applications or hitting the reset button.

Windows allows you to remove the pagefile altogether, but it doesn't like it. It will give you dire warnings, and if it ever gets in a situation where it would have to use the pagefile, it will panic (like Linus without his blanket).

So, because no I/O to the pagefile should ever occur anyway, it's best to let Windows manage the pagefile itself, unless you have special needs. For example, if you're running Windows off an SSD, putting the pagefile on a secondary mechanical drive is a win, because you're not wasting expensive flash memory on the biggest and most useless file in the system.
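If you're curious whether the pagefile is actually being touched on your machine, here's a rough way to look: a sketch in Python, assuming the third-party psutil package is installed (pip install psutil). Note that the swapped-in/out counters are reported on Linux and the BSDs but left at zero on Windows, so on Windows you only get the totals.

# Sketch: see how much of RAM and the pagefile/swap is in use right now.
# Requires the third-party psutil package (pip install psutil).
import psutil

ram = psutil.virtual_memory()
swap = psutil.swap_memory()

print("RAM:  %.1f%% used of %.1f GiB" % (ram.percent, ram.total / 2**30))
print("Swap: %.1f%% used of %.1f GiB" % (swap.percent, swap.total / 2**30))

# Cumulative bytes swapped in/out since boot; psutil fills these in on
# Linux/BSD but reports 0 on Windows.
print("swapped in: %d bytes, swapped out: %d bytes" % (swap.sin, swap.sout))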
Post edited September 14, 2011 by cjrgreen
Easiest way to defrag a drive: Robocopy the contents to a new drive, delete the old drive's contents, then copy it back. 0% fragmentation.

'Course it takes a long time, but I found this out when I replaced my 50% fragmented 750 GB drive with a 1 TB one.
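If anyone wants to script the first leg of that, something along these lines would do it. This is only a sketch with made-up paths (D:\data and E:\data are placeholders), driving the robocopy tool built into Vista/7 from Python's subprocess module.

# Sketch: copy everything off the fragmented drive before wiping it.
# D:\data and E:\data are placeholder paths -- adjust for your setup.
import subprocess
import sys

result = subprocess.run(
    ["robocopy", r"D:\data", r"E:\data", "/E", "/COPY:DAT", "/DCOPY:T", "/R:1", "/W:1"],
    check=False,
)

# Robocopy exit codes 0-7 indicate varying degrees of success;
# 8 and above indicate real failures.
if result.returncode >= 8:
    sys.exit("robocopy reported errors; don't delete the originals yet")

# Deleting the old contents and copying back is the same call with the
# source and destination swapped.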
cjrgreen: The thing about the pagefile is, it should never be used, but it has to be there anyway. [...]
If that's the case, then MS needs to get better people working on their architecture.

There's always some debate about how much swap should be required and how aggressively the OS should push pages to swap. But if Windows is set up to basically ignore swap until it runs out of RAM, that's just an example of incompetence, and it's probably part of the reason they've had so much trouble with interactivity over the years.

FreeBSD tends to be more aggressive about pushing things to swap than Linux is, but both will push things to swap before they run out of RAM, and even before they look likely to run out of RAM imminently. The reason they do it is that a program requesting memory does not mean it needs that memory imminently, and it's a bad idea, performance-wise, to fill up RAM without leaving room for buffers and possible future needs.

Admittedly, it is tricky to figure out what should be in RAM and what should be in swap, but it's a necessary evil, and chances are good that if a page hasn't been hit in 10 or 20 minutes, the added latency of pulling it back from swap won't be noticed. But by keeping it in RAM anyway, every program that runs has to compete for RAM with the one that owns that page, even if that program isn't actually doing anything at that point.
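On Linux, at least, that aggressiveness is an explicit knob (vm.swappiness), and the kernel exposes its swap counters as well. A quick Linux-only sketch, just reading /proc, no extra packages:

# Sketch (Linux only): read the kernel's swap-aggressiveness setting and
# the cumulative swap-in/swap-out page counters.
with open("/proc/sys/vm/swappiness") as f:
    # 0..100: higher values make the kernel swap idle pages out sooner.
    print("vm.swappiness =", f.read().strip())

with open("/proc/vmstat") as f:
    for line in f:
        if line.startswith(("pswpin", "pswpout")):
            print(line.strip())  # pages swapped in/out since boot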
Foxhack: Easiest way to defrag a drive: Robocopy the contents to a new drive, delete the old drive's contents, then copy it back. 0% fragmentation. [...]
Not sure this is safe. If there's a glitch, say a power problem, a hang, or a corrupted file, the HDD could die soon. And never keep moving huge amounts of content between disks; it reduces a drive's lifetime considerably. Keep moving 500-700 GB to and from a 1 TB disk constantly and the disk will die faster than its normal lifetime.

Only defrag stuff like games which you use frequently; if you have Steam, you can check the fragmentation of each game and defrag them individually. Constant defragging is a hugely time-consuming process and will kill the health of your disks, so only do it once in a while.

I have managed to keep all my disks alive, right from the 125 MB HDD in my 386 to the current 1.5 TB, whereas others who constantly perform so-called maintenance on their disks, by constantly scanning for errors, defragging, moving huge amounts of content and so on, have lost their disks sooner than they should have gone.

Keep installation files and other media you rarely use on portable drives, and just the stuff you use daily on your main HDD.
cjrgreen: The thing about the pagefile is, it should never be used, but it has to be there anyway. [...]

hedwards: If that's the case, then MS needs to get better people working on their architecture. [...]
I was talking to our IT guys about swap today. They are of the opinion, and I agree, that any workstation or minor server that is ever hitting pagefile or swap is badly undersized.

Windows design is not much different from that of other mainstream OS, including FreeBSD. Every modern OS manages its memory in a very similar way: SWAP IS A LAST RESORT.

Read-only pages never get swapped or pagefiled. When memory runs short they're simply dropped: the physical page is zeroed and handed to the process that wants it, and the original contents get reloaded later from the same file they came from. In normal operation, that is enough to fulfill all demand for memory.

It's only when the OS has to steal dirtied read-write pages that you ever write or read swap or pagefile. The number of times that should happen in normal operation is zero, unless you're running a damned big server that can't hold all its read-write data in RAM.
Post edited September 15, 2011 by cjrgreen
cjrgreen: I was talking to our IT guys about swap today. They are of the opinion, and I agree, that any workstation or minor server that is ever hitting pagefile or swap is badly undersized. [...]
I can't agree with that. It's not the workload alone that dictates that; it's the architecture of the OS. The only way you can prevent it from happening is by removing the swap completely.

As for your comment about FreeBSD, that's simply not true. For as long as I can recall, they've proactively moved completely idle pages to swap, even when there's sufficient RAM. In fact there's an entire FAQ entry that deals with that particular question. I may have missed them changing that, but even if that's the case, they were still doing it in the last decade. Unless, of course, you don't consider Win XP to be a modern OS.

It's desirable behavior and one way of reducing the performance problems that come from short-term spikes in RAM utilization. You avoid the hit of having to decide, right when the spike occurs, which pages can be moved to swap and then actually moving them; the work of freeing up resources happens while the system is partially idle rather than at the moment you most need them.

Ultimately, the performance benefit of RAM comes from having the information you need in RAM rather than on disk. To that end, keeping idle pages in memory just to avoid hitting the disk makes little sense: you're giving up RAM that could be available for whatever pops up, in exchange for avoiding the minor cost of moving a page back from swap if it is ever needed again.
Also, swapfiles were introduced at a time when the physical limits on manufacturing RAM, as well as its cost, were very real, so it made sense to use spare hard drive space as a backup. In the modern era, with vastly improved manufacturing, only video games, computer animation, and heavy scientific work tend to use all of the RAM; swapfiles are an artifact of an older era, yet people continue to design both hardware and software with them in mind.

On topic, I generally run Auslogics DiskDefrag once any time I install a new program. I generally don't have to do it any other time. Defragging once a week is useful if you create and delete LOTS of files on a regular basis, but more often than that is a waste of time.
Foxhack: Easiest way to defrag a drive: Robocopy the contents to a new drive, delete the old drive's contents, then copy it back. 0% fragmentation.
Yeah, I've done the same a few times. After all, it's the only real option if you want to defragment a drive that's already close to full, because a defragmentation program would need you to free up some space on it first.

liquidsnakehpks: Not sure this is safe. If there's a glitch, say a power problem, a hang, or a corrupted file, the HDD could die soon. And never keep moving huge amounts of content between disks; it reduces a drive's lifetime considerably. [...]
Defragging is not necessarily safe either if there is a power problem. (Anyway, that's one thing I also like about laptops: they don't care if there's a short power failure.)

Also, copying files should be pretty safe even then. Yes, the target could be corrupted if something goes wrong, but the original should still be fine; after all, you are just copying the files from it, not moving them.

Furthermore, I am not convinced that copying the whole contents of an HDD is any more hazardous to its operating life than defragging, which has to read and write constantly all over the disk anyway. It is not like it is done daily or weekly, either.
Post edited September 15, 2011 by timppu
Well, I've worked with FreeBSD daily for more than 10 years, and I know it doesn't simply kick pages out to swap to balance anything, certainly not the way you claim. Pages go to swap when it has no choice, and at no other time. If swap is nonzero at any point during a run, you ran out of memory.

Writing a page to pagefile or swap is the costliest thing a memory manager can do. It certainly does not do anything of the kind proactively, because it could always get more memory without incurring a write anytime it wants it by flushing a read-only page.

To be specific, "dirty" pages live on a chain that is the last consumed; they are never taken from that chain and written to swap unless all prior chains have been fully consumed. You have to work really hard to get FreeBSD to give up those pages.

Anyway, there are two general kinds of disk defragmenters: ones that use the NTFS defragmenting API, and ones that do something else. A defragmenter that uses the NTFS API (MyDefrag is one) is solid against a power failure, because the state of the filesystem is preserved by the journal, and at no time can a sudden failure cause a loss of data. It is also solid against dirty reads, because at no time is any file in a state inconsistent with the original file. As a penalty for doing it this way, NTFS defragmenters are slow.

A really good defragmenter can beat Robocopy, too, because the good ones have heuristics for optimizing file placement. It just takes longer, usually a lot longer.
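If you just want to see how fragmented a volume is without installing anything, Windows' own command-line defragmenter (which goes through that same NTFS defrag API) can run in analysis-only mode. A sketch that shells out to it from Python; it needs an elevated prompt, and /A (analyze) and /V (verbose) are the Windows 7 switches:

# Sketch: ask Windows 7's built-in defrag.exe for an analysis-only report.
# /A = analyze without defragmenting, /V = verbose output. Run elevated.
import subprocess

subprocess.run(["defrag", "C:", "/A", "/V"], check=False)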
Antimateria: Well, it certainly would help; I wouldn't always have to be uninstalling something.
Those SSD drives sure are expensive and small.
hedwards: Oh, never mind then, I didn't realize you were talking about SSD drives. Those don't benefit much from defragging for reading, although they do for writing.

SSDs are kind of odd when it comes to defragging, it's not really something that one ought to do very often with them.
Defragging SSDs decreases their lifespan considerably: each cell can be written only a limited number of times before it's dead. The controller chip makes sure the cells are written roughly equally often (which makes the physical data layout inherently fragmented), but when defragging you write to a huge number of them as a buttload of data is moved around.

This concern is lessened with single-level cell SSDs, as they store only a single bit per cell, but those are also a hell of a lot more expensive than the more common multi-level cell SSDs that store multiple bits per cell (and multi-level cells also tolerate far fewer write-erase cycles, so the same defrag traffic eats up a larger share of their lifespan).


... In short: Don't defrag SSDs.
Post edited September 15, 2011 by Miaghstir
hedwards: Oh, never mind then, I didn't realize you were talking about SSD drives. [...]

Miaghstir: Defragging SSDs decreases their lifespan considerably: each cell can be written only a limited number of times before it's dead. [...]
True, but there are other more important reasons. SSDs are not organized the same as mechanical disks, and writes to SSDs don't work the same way as mechanical disks.

SSDs have an erase-write cycle. If a page (nominally 4K) to be written is not known to be erased, a whole block of pages (512K) has to be erased first. Thus SSDs have to have ready access to contiguous erased space. If erased space is fragmented, SSD write performance degrades, sometimes catastrophically.
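To put rough numbers on that, using the page and block sizes above: updating a single 4K page in a block that isn't already erased can, in the worst case, force the whole 512K block to be rewritten. A trivial back-of-the-envelope sketch:

# Back-of-the-envelope: worst-case write amplification when updating one
# 4 KiB page forces a 512 KiB block erase-and-rewrite.
PAGE_SIZE = 4 * 1024          # nominal SSD page size (from the post above)
BLOCK_SIZE = 512 * 1024       # nominal SSD erase-block size

pages_per_block = BLOCK_SIZE // PAGE_SIZE
print("pages per erase block:", pages_per_block)                 # 128
print("worst-case write amplification: %dx" % pages_per_block)   # 128x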

Typical filesystems and defragmenters don't know about erasing. They're designed to assume that when a block has been marked free, it can be written to at any time. So look at what a defragmenter does: it moves files from fragmented space (and marks the fragments free) to contiguous space. But to an SSD, all it has done is decrease writeable space (by not erasing the freed fragments, and by consuming previously contiguous erased space).

SSDs and SSD-aware OS and filesystems already solve this problem in a manner that eliminates the need for defragmenters altogether. They have the TRIM command and built-in garbage collection. In effect, they are already defragmenting in real time.

So I'll strengthen your conclusion. Don't defragment SSDs, because doing so is at cross purposes with the SSD's own effort to maintain efficient operation. Especially, don't defragment an SSD that takes a lot of random writes, like a Windows system volume.

OS and filesystems that know about and use TRIM include Windows 7 and Server 2008R2, Linux (at least 2.6.33, with Ext4 or Btrfs), FreeBSD (8.2, with UFS), Solaris, DragonFly, and Mac OS X (Snow Leopard, unofficially; Lion, officially).
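As an aside, on Windows 7 you can check whether TRIM is actually enabled from an elevated prompt with the built-in fsutil tool; "DisableDeleteNotify = 0" means TRIM commands are being sent. A small sketch wrapping it from Python:

# Sketch: query Windows 7's TRIM setting via the built-in fsutil tool.
# "DisableDeleteNotify = 0" in the output means TRIM is enabled.
import subprocess

out = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=False,
)
print(out.stdout.strip() or out.stderr.strip())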