Talk:Defragmentation

To Defrag or not to Defrag?

Dear Contributors!

Since posting a PDF here is not practical, I recommend searching for "A Fast File System for UNIX".

The current file name is most probably "ffs.ps"; the older file name "05fastfs.ps" from ten years ago no longer exists.

GhostView will display it, and you can convert it to PDF with that tool as well if you prefer Adobe Acrobat Reader etc.

I'll refer to this paper a lot in discussing disk defragmentation, since it is not clearly covered on this page.

Back in old DOS/Win16 days, there may have been an advantage for contiguous files.

As on the PDP-11, binaries were slurped into RAM in one chunk; OS/2's and Windows' DLLs already posed a problem because a single binary was no longer all that got loaded.

See on page "3" why defragging the "old" 7thED file system was a non-issue under *NIX then, it involved a dump, rebuild, and restore.

The paper also mentions an idea published in 1976 that suggested regularly reorganising the disk to restore locality, which could be viewed as defragging.

The VAX introduced a new concept of virtual memory, demand paging. Prior to this, only swapping of segments, mostly of 64KB size, was common.

Since then, binaries are read only as far as needed to set up the process; the "call" to main(argc, argv) (note that an OS *returns* to a process to facilitate multitasking) involves a page fault.

With some luck that page is in the buffer cache, but the first call to a function will surely result in another page fault, where the chance of finding it in the buffer cache is greatly diminished and the disk block will surely have rotated away.

Page "7" of the FFS paper mentions a rotationally optimal layout, in DOS days, there were tuning programs to change the interleave factor which became obsolete when CPUs got fast enough and DMA disk access became common, the paper calls this I/O channel, and interleave factor "1" became standard.

Also, booting becomes less sequential if you extend this term beyond the loading and starting of the kernel to hardware detection, and especially to loading and starting background processes and initialising the GUI and its processes.

Linux is still sequential up to GUI start, which is parallelised everywhere, but some of the BSDs try to go parallel after hardware detection, albeit with some provision for interdependencies.

OS/2, Windows, and Mac OS X switch early to a parallelised GUI mode; Mac OS 9 and earlier never showed a text screen, and I don't know whether Mac OS X ever shows one.

Then quite a bazillion processes contend for the disk arm. You may separate some *NIX subtrees onto different SCSI disks to limit this, albeit not by much; IDE disks have only recently become capable of detaching after a command to allow parallelism.

Partitions on the same disk may aggravate the problem because they force a long seek when the elevator algorithm has to switch partitions.
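
To make the elevator effect concrete, here is a minimal, purely illustrative Python sketch of SCAN ("elevator") scheduling; the block numbers and the placement of the two partitions are invented, not taken from any real disk:

```python
# Illustrative SCAN ("elevator") scheduler: it services pending block requests
# in one sweep direction, then reverses. Mixing requests from two partitions
# at opposite ends of the same disk forces long seeks across the gap between them.

def elevator_order(requests, head, direction=1):
    """Return the order in which a SCAN scheduler visits the requested blocks."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction == 1 else down + up

def total_seek_distance(order, head):
    """Sum of head movements when servicing `order` starting from `head`."""
    distance = 0
    for block in order:
        distance += abs(block - head)
        head = block
    return distance

# Partition A lives in blocks 0..9999, partition B in blocks 90000..99999.
pending = [120, 450, 300, 800, 90200, 95000]
visit = elevator_order(pending, head=500)
print(visit, "total seek distance:", total_seek_distance(visit, 500))
```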

DLLs especially, due to their shared nature (shared libraries under *NIX are not that numerous and pervasive), are never in the vicinity of the binary calling them.

Thus defragmenting becomes practically irrelevant, at least for executables.

Buffer underrun protection is now common in any CD/DVD toaster due to their high speed, but the source of buffer underruns is more a process madly accessing the disk and/or the GUI than a fragmented disk, which is usually still faster than any high-speed CD/DVD.

So defragmenting becomes irrelevant for normal files as well.

Traditional defraggers run in batch mode, which may be tolerable on a workstation after business hours, but intolerable on an Internet server which is accessed 24/7.

Also, batch defraggers which don't need to unmount the disk and thus can run in the background have the problem that their analysis is likely to be obsolete by the time it finishes, so the defrag is suboptimal.

This is especially true for mail and/or news servers where bazillions of mostly small files are created and deleted in quick succession.

There would be the option of an incremental defragger which moves any file closed after writing to the first contiguous free space beyond a boundary and fills the resulting gap with files from below this boundary.

Over time, file shuffling decreases as static files tend to land at the beginning of the disk and the dynamic ones behind them.

A batch defrag with ascending sort over modification date may shorten this process significantly.

However, this scheme also gets overwhelmed on mail and/or news servers.
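
A minimal sketch of that incremental move-on-close idea, using a toy block bitmap and first-fit placement; the file name and sizes are invented, and a real implementation would of course work on on-disk structures rather than Python lists:

```python
# Toy model of the incremental scheme: when a file is closed after writing,
# relocate it into the first free gap that can hold it contiguously.

def first_fit_gap(bitmap, size):
    """Start of the first run of at least `size` free (False) blocks, or None."""
    run_start, run_len = None, 0
    for i, used in enumerate(bitmap):
        if used:
            run_start, run_len = None, 0
        else:
            if run_start is None:
                run_start = i
            run_len += 1
            if run_len >= size:
                return run_start
    return None

def relocate_on_close(bitmap, files, name):
    """Move `name` into one contiguous run, freeing its old, scattered blocks."""
    old_blocks = files[name]
    for b in old_blocks:                      # release the old allocation first
        bitmap[b] = False
    start = first_fit_gap(bitmap, len(old_blocks))
    if start is None:                         # no gap big enough: keep the old layout
        for b in old_blocks:
            bitmap[b] = True
        return False
    files[name] = list(range(start, start + len(old_blocks)))
    for b in files[name]:
        bitmap[b] = True
    return True

# Usage: a 12-block disk with file "log" scattered over blocks 1, 5 and 9.
bitmap = [False] * 12
files = {"log": [1, 5, 9]}
for b in files["log"]:
    bitmap[b] = True
relocate_on_close(bitmap, files, "log")
print(files["log"])    # [0, 1, 2] - now contiguous at the start of the disk
```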

As mentioned on page "3" of the FFS paper, defragging was too costly back then, thus they decided to implement a controlled fragmentation scheme described mostly on page "8" with cylinder groups and heuristics to place files there, large files being deliberately split up.

OS/2's HPFS is definitely modelled after BFFS (the Berkeley FFS); Microsoft tries to hide that this also holds for NTFS.

I verified this on both NTFS 4 and 5.1 by loading a bazillion files, including large ones, onto the NTFS drive and firing up a defragger with a fine-grained block display.

A checkerboard pattern will show up, revealing BFFS-like strategies.

Defragging this spoils the scheme and only calls for regular defrag runs.

Thus even under NTFS defragging becomes a non-issue; this may be different for FAT.

Note NTFS is still difficult to read with a dead Windows, and practically impossible to repair.

Bad idea for production systems.

The successor to NTFS has yet to be published, so no information about it is available; the only sure thing is that your precious data will again be practically lost with a dead Windows.

So it is reasonable to keep your precious data on FAT, or better on a Samba server.

They will be accessible to Windows malware anyway; that is the design fault of this OS.

Even Vista will not help; the "security" measures are reported to be such a nuisance that users will switch them off.

And malware will find its way in even with full "security" enabled.

However, XP runs the built-in defragger during idle time and places the files recorded in %windir%\Prefetch\ in the middle of free space and leaves enough gaps for new files.

Boot time is marginally affected by this.

To get rid of this, you must disable the Windows equivalent of the cron daemon which may be undesirable.

You can disable the use of %windir%\Prefetch\ with X-Setup, then these files aren't moved, but the defragmentation will still take place.

Thus it is a better idea to leave these settings as they are; the file shuffling settles down comparably fast.

Thus defragging becomes an old DOS/Win16 legacy which is still demanded by the users.

This demand is artificially kept up by the defrag software providers, which want to secure their income; even new companies jump on the bandwagon.

Back in DOS times, Heise's c't magazine closed their conclusion with the acid comment that defragging is mostly for untidy people who like to watch their disks being tidied up, but only their disks, not their room or house.

Debian Sarge comes with an ext2fs defragger, which is unusable with ext3fs and requires unmounting the disk, and is thus practically useless.

The maintainer's mail address was dead, so no discussion was possible.

However, ext2fs already follows the ideas of BFFS, so defrag should be a non-issue there, too.

ReiserFS has somewhat fallen out of focus, since the fate of Hans Reiser is quite uncertain with that lawsuit for murdering his wife.

Also, tests by Heise's iX magazine revealed that balancing its trees will create an intolerable load on mail and/or news servers.

Rumours were that a defragger was thought of.

Note also that some disk vendors break the CHS scheme internally; Heise's c't magazine once found an IBM drive going over one surface from rim to spindle and then the next surface from spindle to rim, creating an HCS scheme.

Also, platter counts are now down to a few or even one to cope with low-height profiles, even beyond laptops: disks are now 3.5" and 2.5" with heights below a third of the full-height form factor (that of the old 5.25" disks; CD/DVD drives are half height), and ten-platter drives such as Maxtor once built are unlikely to reappear.

Sector zoning also breaks the CHS scheme internally, but BFFS's cylinder groups are still beneficial in all these cases, as they spread disk access time and speed evenly anyway.

Conclusion: Defraggers are obsolete now, only an issue for some software providers, and probably for harddisk vendors.

Kind regards

Norbert Grün (gnor.gpl@googlemail.com) Gnor.gpl 12:05, 1 December 2007 (UTC)Reply


  Uh, what a mixture of fact and myth. I totally, absolutely disagree with the conclusion (except possibly in the case of SSDs). Guys, can this be removed from the talk page? It is old, out of sequence, and it takes so much space (all those paragraph breaks). Read my reply in the #Defrag and performance improvements section (it's the fifth post in that section) for the real story (in my experience with Windows and many different computers, anyway).
  At least he was cordial, and wrote well.
76.6.164.233 (talk) 22:52, 8 April 2013 (UTC)Reply

OS Centric

Fragmentation is a general challenge in the field of file system design. Some filesystems are more prone to fragmentation than others, and some feature integrated background defragmentation. It would be useful to expand this article to cover the subject of defragmentation in all of its forms. Gmaxwell 23:22, 27 Dec 2004 (UTC)

Free space question

"A defragmentation program must move files around within the free space available in order to undo fragmentation. This is a memory intensive operation and cannot be performed on a file system with no free space."

I'd like to ask: why must there be free space on the volume being defragmented? What if there is none? Can't the defragmenter move files around using free memory or free space on other volumes? Of course, if the system crashes during the defragmentation process, the file system is easier to recover when files are moved only on the volume being defragmented. But why must it be done that way?
211.167.159.70 (talk) 20:54, 08:49, 5 September 2005‎ (UTC)Reply

Defragmenting all the files is a risky process. The normal process is AFAIK to pick where the next block of the file should go, copy what is already there to free space on the drive, verify the copy, then change the file table to reflect the change. Then it copies the block of the file to the now free spot, verifies it, then changes the file table. Doing it this way ensures that no matter what stage it crashes at, the file is still accessible and at worst you have an extra copy of the data that needs to be removed. If you have no free hard drive space and store it to memory, you would have to _move_ the information to memory, meaning you can't verify it (memory corruption does happen from time to time), and if the system crashes you lose the data. As for copying to another partition/drive, it's possible, but then you are relying on two hard drives working, possibly different partition types, and you can't keep the file always accessible during the defrag or in the event of a failure because you can't have parts of the file spanning two file systems. 65.93.15.119
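
A small illustrative sketch of that copy, verify, then relink ordering, using an in-memory toy model of a disk and file table (the structures and names are invented for illustration only):

```python
# The data is copied to free space and checked before the file table is pointed
# at it, so the file stays reachable whatever step a crash interrupts.

def safe_move_block(disk, file_table, name, index, dest):
    """Move block `index` of file `name` to free block `dest`."""
    src = file_table[name][index]
    disk[dest] = disk[src]              # 1. copy the data into free space
    if disk[dest] != disk[src]:         # 2. verify the copy before committing
        raise IOError("verify failed; file table left untouched")
    file_table[name][index] = dest      # 3. only now point the file table at the copy
    disk[src] = None                    # the old block is just reclaimable space

disk = {0: b"AAA", 1: b"BBB", 2: None, 3: None}
file_table = {"a.txt": [0, 1]}
safe_move_block(disk, file_table, "a.txt", index=1, dest=3)
print(file_table, disk)   # a.txt now maps to blocks [0, 3]
```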

OS X and auto-defragmentation

OS X and its built-in defragmentation deserve some sort of mention or expansion in this article. I still think it would be worthwhile to go into detail on how it works. —Rob (talk) 15:13, 4 April 2006 (UTC)Reply


FFS ?

OS X is Unix-like. Mike92591 01:44, 31 August 2006 (UTC)Reply

Defragmentation software for Windows

Perhaps it's worth mentioning O&O Defrag, which greatly improves upon the standard defragmentation software included. You can read more about it here: http://www.oo-software.com/en/products/oodefrag/info/

Please note: "O&O Defrag V8 Professional Edition is compatible with Windows XP, Windows 2000 Professional, and Windows NT 4.0 Workstation. The Professional Edition cannot be used on Windows 2003/2000/NT servers.

O&O Defrag V8 Professional Edition and O&O Defrag V8 Server Edition cannot be installed on computers running Windows 95/98/ME."

I will not be editing the article, do as you please. boaub


I tried adding a stub article on O&O Defrag, but it got deleted, citing "notability". I wasn't about to explain that O&O software is a European company and is therefore not that well known in the USA, where the deleter seemed to come from. Donn Edwards 15:32, 7 June 2007 (UTC)Reply

Please see the notability guideline on how to establish notability. -- intgr #%@! 18:50, 7 June 2007 (UTC)Reply

Windows XP Defrag vs. Other Tools

Is there a big advantage if you use a special defragmentation tool and not that internal thing in Windows XP? 172.174.23.201 22:56, 30 November 2006 (UTC)Reply

The internal defragmenter can have problems on severely fragmented partitions which contain mostly large files. During defragmentation it requires that a block of contiguous free space is available as large as the file it is trying to defragment. However, sometimes the free space on a drive can be so fragmented that it is impossible for the defragmenter to defragment even a single file. This only occurs on partitions that have many large files that have been growing (in parallel) for long periods of time (like a download partition) which are then subsequently removed to be stored elsewhere.--John Hendrikx 09:44, 28 May 2007 (UTC)Reply
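
The limitation described above boils down to a simple test; here is an illustrative Python sketch (the free-extent numbers are made up):

```python
# A defragmenter that insists on one contiguous destination is stuck when no
# single free extent is as large as the file, even if plenty of space is free in total.

def can_defragment_in_one_piece(file_blocks, free_extents):
    """True if some single free extent can hold the whole file contiguously."""
    return any(length >= file_blocks for _start, length in free_extents)

free_extents = [(100, 2000), (5000, 750), (9000, 1200)]   # badly fragmented free space
print(can_defragment_in_one_piece(1500, free_extents))    # True: the 2000-block extent fits it
print(can_defragment_in_one_piece(3000, free_extents))    # False: stuck, though 3950 blocks are free
```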

New article titled "file system fragmentation"

I was somewhat dissatisfied with this article in the current state, so I decided to approach the problem from another angle in the new article "file system fragmentation". More details about my motivation at Talk:File system fragmentation#Reasons for duplicating "defragmentation" article. Please don't suggest a merge just yet. I would like to hear anyone's thoughts, comments, and criticisms though. -- intgr 03:58, 14 December 2006 (UTC)Reply

Common myth and unreliable sources

I am fairly confident that the sections claiming Unix file systems (and/or ext2+) don't fragment when 20% of space is kept free are nothing more than a myth and wishful thinking. The sources cited by this article do not appear to be written by people particularly competent in the field of file systems, and thus do not qualify as reliable sources per WP:RS. I have read quite a few papers for the article I mentioned above and I can promise to eat my shorts the day someone cites a real file system designer claiming this, as it will be a huge breakthrough in file system research. :)

Does anyone disagree, or am I free to remove the offending claims? -- intgr 12:37, 14 December 2006 (UTC)Reply

Done, removed -- intgr 02:14, 24 December 2006 (UTC)Reply
I can confirm it is a myth (disclaimer: I wrote Smart Filesystem). When using ext in a certain way it can fragment just as badly as most other filesystems. The usage pattern that is very hard to handle for most filesystems is that of many files growing slowly in parallel over the course of weeks. These files will be severely fragmented as they tend to weave patterns like ABCBAABCABBCCA when stored on disk due to their slowly growing nature. The files can end up to be several dozen megabytes in size so even if a filesystem will try to pre-allocate space for slowly growing files the fragmentation can get very bad. From there it only gets worse, because when such a file is removed, it will leave many gaps in free space, which will compound the fragmentation when it needs to be reused. Keeping a certain amount of space always free can help to reduce fragmentation but will not prevent this usage pattern from eventually degenerating.--John Hendrikx 09:55, 28 May 2007 (UTC)Reply
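
A tiny, purely illustrative simulation of that usage pattern, showing how a naive append-at-the-end allocator weaves slowly growing files together (the probabilities and file names are invented):

```python
# Several files appending blocks in parallel end up interleaved on disk,
# i.e. every one of them is fragmented.

import random

def grow_in_parallel(file_names, rounds, seed=0):
    rng = random.Random(seed)
    layout = []                        # layout[i] = owner of block i
    for _ in range(rounds):
        for name in file_names:
            if rng.random() < 0.5:     # each file appends a block now and then
                layout.append(name)
    return "".join(layout)

print(grow_in_parallel("ABC", rounds=8))   # prints an interleaved layout like the weave described above
```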

Fragmentation is very rarely an issue on *nix filesystems. It's not that they don't fragment, but rather that a combination of allocation algorithms and reordering of requests to optimise head movements effectively negates the issue. This is effective enough that the standard procedure to defragment a volume amongst Solaris admins is to back up and restore. Why this is still an issue with Windows/NTFS, I have no clue; it obviously shouldn't be. A good explanation here: http://www.salmar.com/pipermail/wftl-lug/2002-March/000603.html Lnott 14:03, 7 February 2007 (UTC)Reply

Note a few things:
  • The mailing list post compares ext2 to the FAT implementation of MS-DOS.
  • All modern operating systems use quite similar approaches to readahead, page cache, elevator algorithms, etc.
  • Neither is fragmentation a big issue on non-Unix file systems. (Do you have a reliable source supporting your claim of NTFS fragmenting more than Unix file systems?)
  • How well a file system performs depends primarily on access patterns. Under certain loads, fragmentation can become a big issue with any file system, hence why defragmenters are necessary.
I cannot offer a valid counterargument about NTFS allocation algorithms, as very little is known about its implementation. The article file system fragmentation documents proactive fragmentation reduction techniques (though cylinder groups are still on the TODO list). But in short, it's all about access patterns, not fragmentation.
-- intgr 19:20, 18 February 2007 (UTC)Reply
NTFS does not have block groups. And it has an MFT that can fragment, unlike static inode tables which cannot. So no, they don't use quite similar approaches to prevent fragmentation. Do unix file systems fragment over time? e2fsck prints fragmentation statistics, which I've never seen go bad on long-lived not too full file systems, over many years. So, while I'm certainly no reliable source, I know it was never a problem for me. It is true that bad access patterns can fragment any file system, but that does not mean that the access patterns prevalent in practice will have that effect, not even over a much longer period of time. Of course, no serious file system architect will claim that 20%+ empty file systems cannot fragment, because it's not true, strictly speaking. But then, none of those guys have ever bothered to write an extN or FFS defragmenter, either, which should tell you something. —Preceding unsigned comment added by 24.7.28.168 (talk) 06:56, 22 May 2010 (UTC)Reply

Myths

The article is not bad as it stands -- but it avoids mentioning the most culturally important aspects of defragging. *** As the article states, fragmentation is properly entirely a filesystem speed/performance issue. As the article does not mention, the performance impact of using a fragmented system may actually be minor. There is very little credible objective real-world information available about this. The article does not mention that many Windows users believe that it is very important to defrag very frequently. The article does not mention that defragmentation is risky, since it involves moving all the files around. *** The article suggests that newer larger hard drives have more of a problem with fragmentation. The opposite may be true: Fragmentation may be less of a problem when volumes have lots of free space, and new hard drives are so large that many people are only using a very small percentage of the space. *** Most Windows users imagine that defragging is necessary to keep their systems from crashing. Vendors and magazine article writers encourage this delusion. But no properly functioning OS will crash because files are fragmented -- computers are designed to function this way. If they couldn't, they would not allow files to be fragmented to begin with!--69.87.193.53 18:50, 18 February 2007 (UTC)Reply

"The article does not mention that defragmentation is risky, since it involves moving all the files around."
Because it's not. At least NTFS logs all block moves to the journal, so even if your computer loses power during defragmentation, it can restore a consistent file system state after booting.
It depends on the defragmenter used and what filesystem you are defragmenting. For example, ReOrg 3.11 for Amiga systems used to scan the entire filesystem, calculate the optimal block layout in memory (for every block, including meta data) and then would start to make passes over the disk (using an algorithm that moved like an elevator over the disk) caching as much as possible in memory on each pass, and writing out the data to the new locations as the "elevator" passed over their new location on the disk. It was a very satisfying process to see in action, and it was very fast due to the large caches used, but also very risky since a crash during defragmentation would leave the filesystem in a completely garbled state not to mention losing everything that was cached in memory at the time.--John Hendrikx 10:07, 28 May 2007 (UTC)Reply
Moreover, restoring the file system to a consistent state does not guarantee that your data survives, at all. The state can be consistent, yet degenerate. —Preceding unsigned comment added by 24.7.28.168 (talk) 07:31, 22 May 2010 (UTC)Reply
"The article suggests that newer larger hard drives have more of a problem with fragmentation. The opposite may be true"
Fair point, though that doesn't apply to enterprise use (e.g., large file server clusters). -- intgr 19:29, 18 February 2007 (UTC)Reply
"If your disks use NTFS then you're even safe when the computer crashes in the middle of defragging. Nevertheless, it's still a good idea to backup before defragmenting, just like with other defragmenters, because the heavy use of the harddisk may trigger a hardware fault."[1]
It is just plain stupid to take the giant risk of moving around all of the files on your disk unless you have a full independent backup. And unless you have a damn good reason. And since most users will never understand what is involved, it seems irresponsible to encourage them to get involved with defragging on a regular basis.--69.87.194.65 01:25, 28 February 2007 (UTC)Reply
Yes, use of the hard disk may trigger a hardware fault, whether you're defragmenting or using the disk for other purposes, so you should have a backup anyway. Or even if you are not using your disk and it's collecting dust on the shelf, you'd still better have a back up since your house can burn down.
The majority of premature hard disk failures are caused by manufacturing errors and mechanical impacts. Manufacturing errors mean that the disk will fail sooner or later anyway. The most prevalent kind of hard disk failures, plain simple media failures, are not dependent on the use of the hard disk at all. Defragmentation does not incur any "giant risk", merely a slightly higher chance of spotting an error sooner rather than later. Also note that decent defragmentation software will minimize the amount of files that would actually need to be relocated, and will not do anything if there is nothing to defragment. (while indeed some inadequate commercial defragmentation software will relocate all files on the disk, which is obviously redundant and unnecessary). -- intgr 10:05, 28 February 2007 (UTC)Reply
Disk failures are not the only possible hardware failures, and not all failures are permanent. It is not true that every failure happening during defragmentation would also necessarily have happened otherwise. Neither is the conclusion that defragmentation does not put your data at risk true. As a simple example, you could have a power outage while defragmenting. It is really quite simple: Your data is at risk each time you move it around. The risk may be small for every single occurrence, but you still may not want to move it around unnecessarily. (And no, metadata journaling does not protect you from data loss in general.) —Preceding unsigned comment added by 24.7.28.168 (talk) 06:53, 27 May 2010 (UTC)Reply

I say toss the entire section; there may have been no use for defrag back when hard drives were only 50 MB, but I just bought some 750 GB hard drives before the holiday last year and even the manuals tell me to defrag my drives, as it increases the life of the disk. Concerns over moving around files are unwarranted. Simply put, in a Windows environment, every time you load a file, there is a slight change to it. That's even greater with Microsoft-specific files, such as Office documents, files loaded in Media Player, etc., where Microsoft's software makes a tiny note in the file each time it's loaded. Beyond that, Windows by its very nature constantly moves files around the disk. The MSDN forum has an entire spread dedicated to discussing this fact. The statements themselves are wholly POV, as what one person does or does not notice depends on what they use the drive for, the size of the drive, their perceptive abilities, and their habits. If not tossing the statements, then move them to another section, and list varying degrees of perception regarding performance gains as the single verified con to defragmenting, against the world of good it does. —The preceding unsigned comment was added by Lostinlodos (talk • contribs).

Defrag and performance improvements

I reverted this recently-added statement from the article since it did not tie into the text, although it does point out that the performance results/improvements are not as black and white as the article makes it sound.

Although it may produce substantial filesystem speed improvements in some cases, for the typical Windows user the overall performance improvement may be minor or unnoticeable (this information IS NOT CORRECT: http://findarticles.com/p/articles/mi_m0FOX/is_13_4/ai_55349694).

Although I do realize that even though benchmarks may point out an X% increase in performance, the user might not notice it, since the performance was never a problem to begin with. But anyway, if anyone can find the time, some of this should find its way to the article. And note that the current POV in the particular section is unsourced as well. -- intgr 22:14, 6 March 2007 (UTC)Reply

There is no doubt that in some circumstances defragging may result in giant improvements in some measured performance figures. Which has almost no bearing on which real-world users, in which real-world circumstances, will actually experience noticeable improvements from defragging, and how often they should do it. Companies that sell defragging programs are quite biased sources, and companies that are paid to advertise such software are also suspect. (In the world of technology, differences of a few percent are often considered important. In the world of humans, differences of less than 10% often are not noticed, and it may take a difference of about a factor of two to get our attention. An order of magnitude -- a factor of ten -- now there is a real difference!)-69.87.200.164 21:23, 5 April 2007 (UTC)Reply

Defragmentation on NTFS volumes is only an issue when the OS must make many small writes over a period of time, e.g. a busy Exchange or database server. The Wikipedia article should not give the impression to the average user that defragmentation will usually result in performance gains. The edge cases where defragging can be useful might be worth a brief mention. Many people falsely believe that every so often they need to defrag their drive, when actually that work is done automatically by the OS, and even then the effect will not usually be noticeable. The "placebo effect" of defragging your computer "manually" by watching a multi-colored representation of the hard drive layout as it slowly rebuilds into a human-recognizable pattern probably accounts for why people persist in believing the defragging myth. BTW, the article cited above purporting to prove the need for defragmentation was from 1999, which is too old. 72.24.227.120 19:52, 10 April 2007 (UTC)Reply

Defragmentation alone will do very little. Proper partitioning is just as important, and I agree that defragging one huge C: drive will not help much if you have too little memory (constant swapping) or if you are using software that uses a lot of temporary disk space, like Photoshop, with no dedicated partitions defined for them. As for the noticeability, I just came from a computer that has not been defragged for two years and has 80% of its disk full. It loads XP in about 2 minutes, whereas this one loads it in almost no time. The same goes for saving data on the disk: it takes ages to save a Photoshop file on the heavily fragmented one, whereas this one saves it in a couple of seconds. It seems to me that the critics have only been playing with almost-empty file systems and simple utilities and NOT the real world of A3-sized Photoshop graphics, for instance. But as said, you need proper partitioning there as well.
217.41.51.136 10:38, 16 July 2007‎ (UTC)Reply

  The major problem with so many defragmenters is the competition for speed (e.g. who can finish defragmenting in the shortest time) and the desire to leave as many files untouched as possible. Microsoft already found out that removing, say, 5 fragments from a 100 MB file or a 10 GB movie is not going to speed that file's access time by much. The problem is that the same can be true for even smaller files, like 5 – 50 MB, depending on what that file is. Yet so many defragmenters do a quick “Let's find any file that is fragmented, and defragment it, moving it somewhere else on the disk,” without any regard for the size of the file versus the number of fragments it contains, or its rank in the loading order (if it is part of some program's loading process). The real speed boost comes from getting the files frequently used by the OS and programs to the beginning of the disk, and in (or as close to) the order that they will be accessed. There are two things for someone writing defragmentation software to remember: The beginning of a HDD is faster than the end, consistently (across many different makes and models), by almost 2×. Also, most HDDs can do a sequential read 10× faster than they can do random reads. This is where the real speed benefits are; but many defragmenters continue to be programmed to behave in ways that do not take advantage of these facts.
  Because of my extreme dissatisfaction with many existing defragmenters, I wrote myself an experimental disk defragmenter that takes advantage of as many of these things as I could manage. It is very slow, frequently taking 1 – 10 hours to defragment (depending on the size of the disk and the number of files on it; a clean install of XP on a modest desktop HDD would take it under an hour to defragment, however). I have seen boot times come down from 2 minutes to 30 seconds for Windows XP. Programs that took 5 seconds to load open instantly. HDDs in computers defragmented with it are much more quiet; they are frequently silent while booting, instead of growling and grinding all the way to the desktop; additionally, on some computers, the HDD light spends a lot of time “off” instead of glowing continuously while the computer is booting. However, at this point my defragmenter is experimental, and I have not released it. My point is: if you have not defragmented for a long time, and then you do defragment, if there is minimal improvement, it is not because defragmentation does not work; it is because of a badly written defragmenter. It may reduce the “fragment count,” but that's it.
  How can you tell if a defragmenter is good or not? Listen to your computer while it is booting Windows. Watch the HDD LED on the front, too (if your computer has one). If the HDD needs defragmenting, it will make lots of continuous grinding and growling sounds with the HDD LED lit continuously while Windows is booting. If it is properly defragmented, the HDD will be silent except for an occasional click or churn as Windows is booting, and the HDD LED will frequently blink “off” during the boot process. If, after defragmentation, the HDD keeps “carrying on” as before, ditch the defragmenter. It is no good. Note that after defragmentation, you will need to reboot Windows several times over a few days/hours to get the maximum effect of defragmentation (before which, the HDD will still churn more than it needs to, even after proper defragmentation; this is due to the Windows Prefetcher). Also, note that some virus scanners greatly reduce the efficiency of HDD file reads, and may be guilty of causing the HDD to churn more than it would otherwise.
  217.41.51.136, what you mentioned earlier about partitioning is called “short-stroking” the HDD. The reason it can help so much is because of what I wrote above: So many defragmenters pay no attention to the frequency of use, the importance, or the sequence of use, of the file that they are defragmenting. Short-stroking the HDD reduces the area that the files can possibly be spread to on the HDD, thus decreasing seek time (the HDD head doesn't have to travel as far) and increasing read speed (the system files are on the first partition, making them closer to the beginning of the disk). However, proper defragmentation can improve speed much more than short-stroking alone ever could.
  Do note that this dissertation of mine only applies to regular HDDs; the new SSDs are an entirely different ballgame.
76.6.164.233 (talk) 22:46, 8 April 2013 (UTC)Reply
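
As a rough illustration of the layout strategy described above (contiguous placement in load order, starting at the fast beginning of the disk), here is a toy Python sketch; the file names, sizes and the idea of taking the order from boot tracing are assumptions for the example, not a description of any particular product:

```python
# Toy layout planner: assign each file a contiguous run of blocks, in observed
# load order, starting at block 0. A real tool would get the order from
# boot/application tracing (e.g. the Windows prefetcher) and sizes from the file system.

def plan_layout(load_order, sizes_in_blocks, start_block=0):
    """Return {file: (start, length)} with files packed back to back in load order."""
    plan, cursor = {}, start_block
    for name in load_order:
        length = sizes_in_blocks[name]
        plan[name] = (cursor, length)
        cursor += length
    return plan

load_order = ["ntldr", "kernel", "hal.dll", "app.exe", "app.dll"]
sizes = {"ntldr": 60, "kernel": 500, "hal.dll": 30, "app.exe": 200, "app.dll": 80}
for name, (start, length) in plan_layout(load_order, sizes).items():
    print(f"{name:8s} -> blocks {start}..{start + length - 1}")
```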

There are not many online reviews that cross-compare and benchmark the various Windows defragmentation utilities that are available. As of this writing, noteworthy articles include:

SWStiletto (talk) 06:02, 21 July 2022 (UTC)Reply

Free space defragmentation

What I didn't see in the article is that there are really two goals in defragmentation: to defragment files and to defragment the remaining free space. The latter is by far the more important one to prevent new fragmentation from occurring. If free space is severely fragmented to begin with, the filesystem will have no choice but to fragment newly added files. Free space fragmentation is the biggest cause of file fragmentation for almost any filesystem. It is the reason why experts as a general rule say you should always keep a certain amount of free space available to prevent fragmentation. Letting disks fill up and then removing files and letting them fill up again will result in bad fragmentation, as the filesystem will have no choice but to fill the remaining gaps to store data instead of making a more educated guess from other free areas. Note that even this rule will not remove the need for defragmentation; it just delays it.--John Hendrikx 10:22, 28 May 2007 (UTC)Reply
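
A short illustrative sketch of how free-space fragmentation can be measured, using an invented block bitmap; many small free extents mean the file system has no choice but to split new files:

```python
# List the free extents of a toy block bitmap and compare the largest one
# with the total free space.

def free_extents(bitmap):
    """Yield (start, length) for each run of free (False) blocks."""
    start = None
    for i, used in enumerate(bitmap + [True]):      # sentinel closes a trailing run
        if not used and start is None:
            start = i
        elif used and start is not None:
            yield (start, i - start)
            start = None

bitmap = [True, False, False, True, False, True, True, False, False, False]
extents = list(free_extents(bitmap))
print(extents)                                      # [(1, 2), (4, 1), (7, 3)]
print("largest free extent:", max(length for _s, length in extents),
      "out of", sum(length for _s, length in extents), "free blocks")
```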

Please do add this to the article - and cite some sources. Tempshill 20:44, 21 August 2007 (UTC)Reply


Windows Defrag Utilities

The list of utilities grew to the following, which is fairly comprehensive:

Commercial (Windows):

Freeware (Windows):

  • Auslogics Disk Defrag: A free defragmentation program for NTFS.[18]
  • Contig: A command-line based defragmentation utility.[19]
  • DefragMentor Lite CL: a command line utility.[20]
  • IOBit SmartDefrag [21] (Beta software)
  • JkDefrag: A free (GPLed) disk defragment and optimize utility for Windows 2000/XP/2003/Vista.[22]
  • Microsoft's Windows Disk Defragmenter (already included in most versions of Windows)
  • PageDefrag: Runs at startup and attempts to defragment system files that cannot be defragmented while they are in use.[23]
  • Power Defragmenter GUI [24]
  • Rapid File Defragmentor Lite: command line utility.[25]
  • SpeeDefrag [26]
  • SpeedItUp FREE [27]

So which ones to mention, and which ones to ignore? The most commonly reviewed products are:

But articles on the second two are regularly deleted on Wikipedia, because of "notability", even though the computer press regularly reviews them. I agree that the others are "also rans" but surely an encyclopedia is supposed to be thorough, rather than vague. If the question of links is a problem, then delete the links, not the list of products.

The most notable freeware products are JkDefrag, Contig and PageDefrag, but why should the other products be ignored simply because they can be ignored?

The deleting in this section is particularly draconian and heavy handed, IMHO. Donn Edwards 20:57, 9 June 2007 (UTC)Reply

"But articles on the second two are regularly deleted on Wikipedia, because of "notability""
The speedy deletion criteria state "article about a person, group, company, or web content that does not assert the importance of the subject." — not that the subject is nonnotable. Please read the notability guideline on how to establish notability. The notability criterion is often used as an "excuse" for deleting articles that have other problems as well, such as WP:NPOV or WP:V, since the notability criterion is much easier to assess objectively than other qualities of an article.
As for the removal of the list, my edit comment said it all: Wikipedia's purpose is not to be a directory of products, links, etc. If there's no further information about the given products on Wikipedia then the list is not much use (per WP:NOT). This is not the kind of "thoroughness" Wikipedia is after — I'd much rather see people working on the substance of the article (which is not very good at all) rather than lists. Such indiscriminate lists are also often a target for spammers and advertising, for products that in fact are not notable — I intensely dislike them.
I do not understand what you mean with "draconian and heavy handed" though; I thought the list was bad and boldly removed it; it was not a "punishment" for anything. If there's a good reason to restore the list then reverting the edit is trivial. -- intgr #%@! 23:39, 9 June 2007 (UTC)Reply

What's so "notable" about Diskeeper? The Scientology uproar? The problem with the rapid deletion of other articles is even if there is a stub in place, the article gets deleted, so no-one ever gets a chance to contribute or discuss. It just gets nuked. Some of the entries I made didn't even last 1 week. That's draconian. Donn Edwards 17:11, 11 June 2007 (UTC)Reply

Then write a stub that asserts the notability of its subject, and avoid getting speedily deleted for non-notability. It also helps if you cite a couple of sources in the article. Tempshill 20:44, 21 August 2007 (UTC)Reply

defrag meta-data

FAT file systems suffer badly from fragmentation of the directory system. Defrag systems that defragmented the directory system made a dramatic difference. Cut-down defrag systems that did not defrag the directory system made very little difference except to people intensively using very large files. NTFS has a different and more complex meta-data system. NTFS directories are less susceptible to fragmentation. File fragmentation on NTFS, as on a FAT or Unix file system, normally makes very little difference, and defrag systems that only defrag the files normally make very little difference. You need a third-party defrag utility to defrag the meta-data on an NTFS partition, and if the meta-data has become badly fragmented, defragmenting it makes a dramatic difference. If you optimise a file to one location on disk, but the meta-data is spread out, you've gained nothing, and a defrag that only defrags the files does nothing for you.

PerfectDisk claims to be the only defrag utility that successfully defrags metadata RitaSkeeter 17:00, 19 September 2007 (UTC)Reply

Risks for defragmentation?

Aren't there risks associated with the abusing of defragmentation on a hard drive? WinterSpw 17:24, 20 July 2007 (UTC)Reply

I suppose that excessive churning increases the wear on the parts, but can't cite a source. Tempshill 20:44, 21 August 2007 (UTC)Reply
Defragmentation software vendors claim that less seeking with a defragmented drive more than compensates for the wear done during the defragmentation process; I don't think any studies have been done to test this. However, Google's study on hard disk reliability indicates that there is very little correlation between hard disk load and failure rate. -- intgr #%@! 22:33, 22 August 2007 (UTC)Reply
Steve Gibson, author of Spinrite, claimed on the Security Now podcast that "excessive" defragmentation would only reduce the lifetime of the drive to the extent that the drive was being used during the defragmentation process. HTH! RitaSkeeter 17:03, 19 September 2007 (UTC)Reply

Apple II

The Apple II had a couple of utilities that would defrag its 5.25" floppy disks. I was glad at the attempts in this article to try and approach this as a general issue for all operating systems and disk types. I think the article might benefit from brief mentions of non-hard disk defragmenting: the lack of a benefit for Flash drives would be useful to mention. Tempshill 20:44, 21 August 2007 (UTC)Reply

Agreed; conceptually, defragmentation (and fragmentation) relates to file systems, not storage media (e.g. floppies or disks). This article should be more careful with its use of the term "disk". -- intgr #%@! 16:26, 28 August 2007 (UTC)Reply

History of the Defragmenter

Does anyone know about the history of the defragger? I tried looking it up, but don't care enough to look further. However, a mentor of mine told me that the defrag program was designed for a contest that Intel threw a decade or three ago (not sure at all)...

Intel: We have a super good processor! No one can make it max out! No one! In fact, we will give prize money to whoever CAN max it out, because we just KNOW no one can! Some guy: Hah! I made a program that defragments your hard disk! It's useless, but it's so energy consuming that it maxes out your processor! Intel: Dangit, there goes our money! Some guy: Bwahahahah!

Well, I'll assume people saw a big use in it, but obviously it was too slow at the time.

Anyway, I'd research it myself, but I can't find anything and don't care to delve further. I'd put it up, but I don't remember the whole deal, don't have sources, and don't know if this story is even right!

DEMONIIIK (talk) 03:58, 5 December 2007 (UTC)Reply

The earliest defragmentor I know of was made for the DEC PDP-11 (RSX or RSTS/E OS) by Software Techniques in the early 1980s. They then created the first disk defragmentor for the DEC VAX running VMS in 1985. talk 12:56, 3 February 2008 (UTC)

I was an owner of Software Techniques and wrote the first disk defragmentor known as DSU (disk structuring utility) and was a component of our product DISKIT for the RSTS/E operating system. This was released in 1981 and to my knowledge was the first defragmentor ever commercially released. We decided to develop a defragmentor after I gave a presentation at the DECUS in San Diego circa 1981 explaining the fragmentation tendency of the on disk structure for RSTS/E and the performance benefits that could be achieved in optimizing the placement of file directories and frequently accessed files. After the presentation members of the audience approached me with their business cards wanting to purchase the "product." That's when we realized the product had commercial potential. — Preceding unsigned comment added by Sdavis9248 (talkcontribs) 01:00, 16 November 2012 (UTC)Reply

Misleading explanation, lack of discussion of alternate strategies

I just skimmed through this article, it seems to have several fairly serious problems. The foremost is that the explanation of fragmentation describes how the old, deprecated MS-DOS file system (FAT, FAT16, FAT32) works. This filesystem was so badly prone to fragmentation that even Microsoft abandoned it ... what ... over a decade ago, now? ... by introducing NTFS, and slowly incorporating it into its Windows product line.

I'd like to see a discussion of fragmentation avoidance strategies. I think many of these have now been developed; I was hoping to see some discussion of their merits and relative performance. A discussion of things like wear-levelling for USB sticks (flash RAM) would also be appropriate. (Think of wear-levelling as fragmentation avoidance in time rather than in space; the goal is to use sectors evenly, so that one part of the flash doesn't wear out prematurely.) In general, flash drives don't have the seek-time problem (or less of one), so the fragmentation issue is not the problem it is for hard drives. Rather, it's the wear that becomes more of a consideration. 67.100.217.180 (talk) 19:19, 5 January 2008 (UTC)Reply

NTFS disadvantage: smaller default-clustersize

NTFS has smaller default cluster sizes than FAT, which increases the probability of fragmentation. --Qaywsxedc (talk) 06:03, 29 February 2008 (UTC)Reply

  Yes, NTFS has a smaller cluster size by default, but that does not increase the probability of fragmentation, because of all the other algorithms added to reduce fragmentation. One of these benefits is the MFT, which can store very small files (like desktop links and small INI files) without creating an extra "extent" on the disk for the file; also, due to the MFT, NTFS doesn't have a continually expanding, binary linked, ad-hoc file table like FAT does. Additionally, NTFS has the ability to pre-allocate space without the accompanying write time (using the sparse file mechanism), which means that a program can indicate the size of a new, large file, and Windows will find and reserve a contiguous (if possible) space for the file, instead of blindly writing a new file to the first (and likely too small) space that it can find.
  However, NTFS does have B-tree structures (sometimes called metadata in defragmenter advertisements) that hold folder contents listings; these readily get fragmented and spread all over the disk, causing file searches and listing the contents of folders that contain a lot of items (like 10,000 or more) to be very slow with lots of HDD thrashing. Unfortunately, many defragmenters either do not defragment these files, or they treat these files improperly. For optimum performance, these files need to be defragmented, sorted by filepath, and moved end-to-end near the MFT (a good defragmenter would know how to do this; but unfortunately, I can not recommend names because I have not tested other defragmenters in detail, although I have written one privately that is very thorough and does this).
76.6.164.233 (talk) 21:24, 8 April 2013 (UTC)Reply
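
As a hedged illustration of the pre-allocation point above: on POSIX systems an application can reserve the final size up front with posix_fallocate (on NTFS the usual route is the Win32 SetEndOfFile family instead); the path and size below are invented for the example:

```python
# POSIX-only sketch: reserve the final size before writing any data, so the
# allocator can look for one large run instead of growing the file piecemeal.

import os

def preallocate(path, size_bytes):
    """Reserve `size_bytes` of real disk space for `path` up front."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    try:
        os.posix_fallocate(fd, 0, size_bytes)   # not available on Windows
    finally:
        os.close(fd)

# preallocate("big_download.tmp", 700 * 1024 * 1024)   # reserve ~700 MB before downloading
```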

RAID defragmentation

Are there any studies or benchmarks on the performance of a badly fragmented RAID5 array versus a defragmented RAID5 array? Or any other RAID levels for that matter? Since the data is physically fragmented amongst multiple drives, does logical fragmentation of files matter? My intuition says it does matter, but has less impact on performance than fragmentation on a single drive. Could fragmentation lead to a file being unevenly distributed amongst the drives (i.e., more than 50% of a file on a single drive in a three-drive RAID5 array)? I'm not an expert, so sorry if that seems ludicrous. I'd like a reliable study which shows whether it makes a difference or not, and if so, to what extent. dejaphoenix (talk) 01:36, 14 July 2008 (UTC)Reply

Animation improvements

The animation showing coloured blocks being shuffled is helpful, but without a pause and a short blurb below the graphic describing what's going on, the viewer isn't really aware of the separate steps taking place.

Suggested approach:

"We start with an empty system and fill it up.."

<opening animation>

"Then some items get removed and new ones fill in the empty spaces

<animation>

"The system is now fragmented, portions of some items are scattered. Now we start the defrag process..."

<closing animation>

--Hooperbloob (talk) 05:08, 4 June 2010 (UTC)Reply

Page is too focused on file fragmentation.

The page is too focused on only file fragmentation. It misses memory fragmentation for instance.

Much of the information can be combined with this page. — Preceding unsigned comment added by Semanticrecord (talkcontribs) 01:49, 15 April 2012 (UTC)Reply

Defragmenting and hard drive stress

Can defragmenting too much (or too often) be detrimental to the hard drive by wearing it out and making it run too hot? One defrag app, DiskTune actually stops defragmenting if the temperature of the hard drive gets too high. If defragmenting too much/too often is harmful to the health of the hard drive, the article should mention it. MetalFusion81 (talk) 18:13, 14 August 2014 (UTC)Reply

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Defragmentation. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 06:12, 10 December 2016 (UTC)Reply

Neutral point of view regarding tools and stuff.

This line: "...Windows Vista, Windows 7 and Windows 8, the tool has been greatly improved and was given a new interface with no visual diskmap and is no longer part of Computer Management."

I feel that it's actually making a judgment on the efficiency and the quality of the mentioned OSes. The statement might very well be true, but it sounds more like a soundbite and less like something descriptive.

I'm pretty sure there are similar statements regarding other OSes in the article if the above line is in it. I do however feel that this is something that I don't want to go through with myself, as I probably would need to brush up on the relevant Wikipedia guideline docs. Sadly I've got no time for that at the moment, which is why this message will do for now. I'd rather not do something rash and have someone clean up my mess.

But to quickly relate to what I know better. Articles regarding bands or album releases have both informational sections and a 'reception' section. I think that this particular section in the above mentioned line: "Windows Vista, Windows 7 and Windows 8, the tool has been greatly improved" sounds like a quote from a review or an advertisement. Not something that's written for educational/informational purposes.


So a couple of quick proposals/suggestions (for the purpose of community feedback):

  1. At a bare minimum keep the line as it is and just cut out "greatly", I'd say that would make it more neutral.
  2. Somewhat better: Rewrite the whole part, tersely explain how it was improved and the qualitative implications of it, and why it made the file system format more efficient.
  3. What I feel would be a best case scenario: Basically do what's said at '2.' but more in-depth. After that, explain the overly technical parts so someone like your grandma could get the gist of it, even if some of it is short but necessary technobabble. I think it's feasible to do that in ~1000 characters if you'd provide proper sources.

At the end of the day, computing is for everyone in our modern world. Let's help people understand that world. I don't think we should assume the PR departments do a good job of doing that. (If anything above sounds nonchalant, I just want to mention that I'm tired, not angry, and I'm not telling people to do stuff. I'm asking for input. If you feel like it sounds like me being rude, I apologize in advance.)

Signing off, Kxxvii (talk) 20:42, 30 November 2018 (UTC)Reply

Translations

defragmentation

  • Greek: μονομερισμός (not μονομέρεια, because -ισμός is the process towards something· -εια is usually a description of a state/condition/situation)

defragment, defrag

  • Greek: μονομερίζω

defragmenter

  • μονομεριστής

Windows erroneously translates defragmentation as ανασυγκρότηση, which means reorganization. Reorganization isn't a bad description, but it doesn't include the sense of mereological unification of fragments / restoration of the unity of files / unimerous restoration (uni- + meros 'part' + -ous). Defragmentation = unimerization (UK: unimerisation).

Ok. In Windows and other operating systems the program sorts out the files; but we shouldn't destroy the English language because of some companies.

Section missing for some reason?

Previously in the article, there was a section about optimisation (optimization). Besides defragmenting, a defrag utility could place the defragmented program files and dependencies next to each other, in their loading order. For example, if program A loads A.exe, A1.dll, A2.dll, A1.dat and so on, Windows records their loading sequence and stores this data in layout.ini, I believe? The defrag tool can use the data from layout.ini to place the program files and dependencies next to each other in their loading order (after defragmenting those files).
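
As a hedged sketch of how a tool might consume that loading-order data: Layout.ini under %windir%\Prefetch\ is, as far as I know, a Unicode text file listing file paths in the preferred placement order, so reading it could look roughly like this (the encoding and section-marker handling are assumptions, not a documented format):

```python
# Assumed, line-oriented reading of the prefetcher's layout hint file;
# treat this as a sketch rather than a spec.

import os

def read_layout_hints(path=None):
    """Return the file paths listed in Layout.ini, in their stored order."""
    path = path or os.path.join(os.environ.get("WINDIR", r"C:\Windows"),
                                "Prefetch", "Layout.ini")
    with open(path, encoding="utf-16") as f:            # assumed Unicode text
        return [line.strip() for line in f
                if line.strip() and not line.strip().startswith("[")]  # skip any [section] headers

# hints = read_layout_hints()
# print(hints[:10])   # first few paths the optimiser wants placed contiguously, in order
```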

Also mentioned previously: the outer tracks of the hard drive are faster. The most used files can be defragmented, optimised (as above) and also placed onto the outer tracks to improve performance.

Defrag tools can do more than just defragment files. They can optimise performance, as explained above. Why not include this in the article again, along with citations?

Yes I'm aware that Windows XP and above use prefetcher and later versions of Windows (Vista onwards) also use superfetch to make programs start faster. It may be worth mentioning them in the article? MetalFusion81 (talk) 18:12, 26 October 2019 (UTC)Reply