Files on a computer are stored on disk drives, which contain round platters that spin constantly while the computer is turned on.
When you access a file, components of the drive assembly called heads hover above each platter and read the data as it passes beneath. The data on the disk is organised in units called clusters. At first, each file is stored in consecutive clusters, which allows the head to read the entire file in one pass.
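As a rough illustration of how clusters work, the sketch below works out how many clusters a file occupies, assuming a made-up 4 KB cluster size (real cluster sizes depend on the file system and on how the drive was formatted):

```python
import math

CLUSTER_SIZE = 4096  # bytes per cluster (assumed; real sizes vary by file system)

def clusters_needed(file_size_bytes):
    """A file always occupies whole clusters, so round up."""
    return math.ceil(file_size_bytes / CLUSTER_SIZE)

# A 10 KB file needs 3 clusters; the unused space in the last cluster is wasted ("slack").
size = 10 * 1024
n = clusters_needed(size)
print(n, "clusters,", n * CLUSTER_SIZE - size, "bytes of slack")
```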
*(Diagram: full and empty clusters on a section of the hard drive.)*
The diagram above shows a number of full clusters and empty clusters on a section of the hard drive. The empty clusters have probably been created by the deletion or editing of a file.
Each time a new file is added to the system, the file system starts at the beginning of the drive and looks for the first unused cluster. Files normally use more than one cluster, so when the first cluster is filled the file system moves on to the next available one, and so on.
The new file has now been added to the hard disk and is represented by the red squares. As you can see, the new file is not placed consecutively on the disk, because the file system uses free space as it finds it.
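This first-come, first-served use of free clusters can be sketched in a few lines of code. The cluster map, file labels and first-fit search below are purely illustrative and not how any particular file system actually works:

```python
# '.' marks a free cluster; letters mark clusters used by existing files.
disk = list("AAABB..CC.DD..EE")

def write_file(disk, label, clusters_needed):
    """First-fit allocation: scan from the start of the drive and use each
    free cluster as it is found, even if that scatters the new file."""
    placed = []
    for i, cluster in enumerate(disk):
        if cluster == '.':
            disk[i] = label
            placed.append(i)
            if len(placed) == clusters_needed:
                return placed
    raise IOError("disk full")

print("before:", "".join(disk))
print("new file written to clusters:", write_file(disk, 'N', 4))
print("after: ", "".join(disk))
```

Running this shows the new file 'N' landing in clusters 5, 6, 9 and 12, scattered around the existing data rather than in one run.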
Fragmentation means that files end up broken into separate pieces on the hard drive. The fuller the hard drive, the more likely it is that a new file will be fragmented into more pieces. Eventually pieces of the file end up all over the disk in a random order. The head now has to make several passes to read each file, and of course this takes longer. When a file is fragmented into two parts on the same track, the read/write head has to move into position above the required track. The first part of the file is scanned, then there is a pause while the drive waits for the second part of the file to rotate under the head. The head is then reactivated and the remainder of the file is scanned.
The time needed to read a fragmented file is longer than the time needed to scan an unfragmented file. The exact time is the time to rotate the entire file under the head, plus the time needed to rotate the gap under the head. A gap may only add a few milliseconds to the time needed to access a file, but a number of gaps will significantly slow down the time it takes to read the file. On top of that you have to add the extra operating system overhead required to process the extra I/Os. If the two fragments are on two different tracks, we also have to add the time for the head to move from one track to another. Track-to-track motion is much more time consuming than rotational delay because the head has to physically move. As the head moves to the next track, it can sometimes miss the beginning of the second fragment, so the delay increases further by another rotation of the disk.
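To put some rough numbers on these delays, the sketch below adds them up using assumed figures for rotational speed, track-to-track seek time, the average cost of a same-track gap (taken here as half a rotation) and the number of fragments; the real values depend entirely on the drive and on how badly the file is fragmented:

```python
# Assumed drive characteristics -- illustrative only, not measured values.
RPM = 7200
ROTATION_MS = 60_000 / RPM   # one full rotation: roughly 8.3 ms
TRACK_SEEK_MS = 1.0          # assumed track-to-track seek time

def extra_delay_ms(gaps_same_track, track_changes, missed_starts):
    """Delay added on top of reading the data itself:
    - a gap on the same track costs, on average, about half a rotation
    - each track change costs one track-to-track seek
    - missing the start of the next fragment costs a full extra rotation
    """
    return (gaps_same_track * ROTATION_MS / 2
            + track_changes * TRACK_SEEK_MS
            + missed_starts * ROTATION_MS)

print(f"unfragmented file: {extra_delay_ms(0, 0, 0):.1f} ms of extra delay")
print(f"file in 12 fragments: {extra_delay_ms(11, 5, 2):.1f} ms of extra delay")
```

Even with these modest assumptions, a file in a dozen fragments picks up tens of milliseconds of avoidable delay on every read.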
In the two examples above I have discussed a file fragmented into just two pieces, but in reality this could be hundreds. The head will have to move across a number of tracks to access a file. This is no problem for the operating system, as it knows exactly where to find each fragment, but for the user it can be frustrating as they wait for their file to be collated. As the files on the disk become more fragmented, this wait becomes longer and longer. Users wait for their applications to load, then wait for them to complete while excess fragments of files are chased down around the disk. They also wait for new files to be created while the operating system searches for enough free space on the disk, and because the free space is also fragmented, this process is slower too.
So how do we defragment our hard drives to optimise their performance?
Fortunately, a tool to help combat this problem is built into certain operating systems such as Windows 95.
The picture above shows my hard disk and how fragmented it is. When the defragmentation tool is activated, it first reads the drive to find out what state it is in, locates all the fragments of every file and finds out where the free space is. Once this information has been collected, the drive is checked for errors. The defragmentation now begins.
When the hard disk starts its defragmentation, files are moved from being in multiple pieces on various parts of the drive into one contiguous block each. Every piece of data may have to be moved, and each move carries some risk: a small problem or error on the hard drive can become a big one. For this reason it is best to run SCANDISK or a similar application to check for and repair any damage before defragmentation begins. Once the defragmentation is done it should look like this.
All the files will be together, one cluster after another, and all the free space will be at the end of the drive.
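Carrying on with the earlier cluster-map sketch, the end state of a defragmenter can be shown with a trivial compaction pass: each file's clusters are pulled together and all the free space is pushed to the end. Real defragmenters move data far more carefully than this; the code is only meant to illustrate the before and after layout:

```python
def defragment(disk):
    """Rebuild the cluster map so that each file's clusters sit together,
    in the order the files first appear, with all free space at the end."""
    order = []
    for cluster in disk:
        if cluster != '.' and cluster not in order:
            order.append(cluster)
    packed = []
    for label in order:
        packed.extend([label] * disk.count(label))
    packed.extend('.' * (len(disk) - len(packed)))
    return packed

fragmented = list("AAABBNNCCNDDN.EE")
print("before:", "".join(fragmented))
print("after: ", "".join(defragment(fragmented)))
```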
When running the disk defragmenter, choose a time when you will not need the system for a while, as the process can take a long time if it has not been run regularly. The more regularly it is run, the less time it will take, because there will not be as much work for it to do.
It is a good idea to run the defragmenter regularly anyway, in order to keep your disk healthy and your system running at peak performance. I have read that Windows NT systems that had run for more than two months without defragmentation have, after defragmentation, at least doubled their performance.
The first step to increasing your system's performance is to ensure your hard disk is defragmented on a regular basis, before you even consider upgrading the CPU or memory.