Most people understandably don’t have a clue about how a hard drive stores information, and therefore don’t have a clue about what it takes to repair it when something goes wrong.
Think of a hard drive like a book. There is a table of contents which points at the chapters. Let’s say that chapter 2 begins on page 30; chapter 3 begins on page 45 and chapter 4 begins on page 70.
Now, if the Table of Contents gets messed up and says that chapter 3 begins on page 72 (instead of page 45), you’re going to start reading in the wrong place.
(Obviously here, I’m equating the actual chapters to files on your hard drive, and the Table of Contents to the drive directory, known as “the catalog.”)
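To make the analogy concrete, here’s a minimal Python sketch (all names hypothetical, not the real on-disk format): the catalog is just a map from file to starting location, and a single bad entry sends every read to the wrong place.

```python
# Hypothetical "table of contents": each chapter (file) maps to the
# page (block) where it begins.
catalog = {"chapter2": 30, "chapter3": 45, "chapter4": 70}

def start_page(name, catalog):
    """Return the starting page the catalog claims for a chapter."""
    return catalog[name]

# A corrupted entry points the reader to the wrong place entirely:
corrupted = dict(catalog, chapter3=72)
assert start_page("chapter3", catalog) == 45     # correct start
assert start_page("chapter3", corrupted) == 72   # wrong place
```

Note the chapters themselves are untouched; only the pointer is wrong. That is why a directory-rebuild tool can scan the “book” and write a fresh table of contents.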
So far that’s not too bad, and products such as DiskWarrior can easily go through the actual “book,” find where the chapters begin, and replace the defective table of contents with a new one.
This is by far the most common corruption on a hard drive, and the easiest to fix.
However, in reality, the situation is considerably more complex, because instead of each page in a given chapter following one after the other, they are scattered all over the book. In the worst case scenario, no two adjacent pages are actually adjacent to each other.
So, in order to keep track of that mess, there is a hidden list of where things are, “attached” to the chapter listing in the Table of Contents. In order to “read” a chapter’s worth of pages in sequence, the computer has to refer back to this hidden list (called a B-tree) to find each page. These B-tree records point at “extents,” which are contiguous blocks of space on the drive.
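Here’s a sketch of that idea (illustrative only, not HFS+’s actual record layout): each extent is a contiguous run of blocks, and reading the file “in order” really means walking the extent list, because the blocks themselves are scattered.

```python
# A file's extents: each (start_block, length) pair is one contiguous
# run. The file reads sequentially only by following this list.
def blocks_for_file(extents):
    """Expand (start_block, length) extents into the ordered block list."""
    blocks = []
    for start, length in extents:
        blocks.extend(range(start, start + length))
    return blocks

# Three extents: the file looks like one continuous stream to the
# reader, but its blocks live all over the drive.
extents = [(500, 3), (120, 2), (900, 1)]
print(blocks_for_file(extents))  # [500, 501, 502, 120, 121, 900]
```

Corrupt one entry in that list and the “chapter” silently reads the wrong blocks, which is why repairing this is harder than rebuilding a simple table of contents.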
Already, finding and fixing a corrupted directory becomes far more difficult than our first simple example.
But still doable, based on certain assumptions…
But we’re not done yet… because this is no ordinary book! The fact is that the reader can write new pages and add them, or delete old pages. Now the process of keeping the contents up to date is even more complex… and made harder because that list of extents may run out of space. If that’s the case, there is a structure created, called the Extents Overflow, which is also examined to find where the rest of the pieces of the file might be.
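As a sketch of the overflow idea (illustrative, not the on-disk format): HFS+ keeps only the first eight extents in a file’s catalog record, and any further extents for a badly fragmented file live in the separate Extents Overflow file, which must also be consulted.

```python
# Only the first few extents fit in the catalog record itself;
# a badly fragmented file spills the rest into an overflow structure.
INLINE_SLOTS = 8  # HFS+ keeps 8 extents in the catalog record

def all_extents(record, overflow):
    """Combine a file's inline extents with any overflow extents."""
    extents = list(record["extents"])
    if len(extents) == INLINE_SLOTS:  # record is full: check overflow
        extents += overflow.get(record["file_id"], [])
    return extents

record = {"file_id": 42, "extents": [(i * 100, 1) for i in range(8)]}
overflow = {42: [(5000, 4), (7000, 2)]}
assert len(all_extents(record, overflow)) == 10
```

So a single fragmented file can require chasing pointers through two separate structures before you’ve even found its data.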
You’ve got lists pointing at lists of lists which point at pages in chapters or documents…. it’s “non-trivial” as they say.
NOW… suppose you delete one of those pages. Heck, delete several of them (which you think of as a series of contiguous pages, but which are in fact scattered all over the drive) and you have “holes” in the lists of lists… all of which have to be updated to indicate that they are now empty space on the drive, and can be used for something else.
Whew! NOW… you go add a new file or report or install a new program, and some of it goes in those holes. Well… it goes where the lists -say- that the holes are.
However, if something goes wrong (the power goes out; some installer doesn’t work right) and one of those lists ends up reporting that there is free space when in fact that space is actually used by a program or file… and a new program overwrites it (because the list incorrectly said it was empty)… well, NOW you’ve got a real mess.
You’ve got a totally corrupted hard drive, and no amount of Disk Warrior, Drive Genius, TechTool Pro et al is likely to succeed in fixing it. That’s because there’s data where the lists say there is supposed to be data, and data where the lists say the space can be reused. (Deleting things does not do anything at all to the data: it just changes the lists themselves.)
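The failure mode above can be sketched in a few lines (hypothetical names; real filesystems use an allocation bitmap, not a dict): deleting a file only flips entries in the free-space list, and if the list wrongly marks an in-use block as free, the next write silently destroys live data.

```python
# Blocks on "disk" and a free-space map. Deleting a file never touches
# the data blocks; it only flips entries in this map.
disk = {10: b"precious data", 11: b""}
free_map = {10: False, 11: True}   # False = in use, True = free

# Corruption: the map now claims block 10 is free, though data is there.
free_map[10] = True

def write_new_file(data, disk, free_map):
    """Write into the first block the map says is free."""
    for block in sorted(free_map):
        if free_map[block]:
            disk[block] = data
            free_map[block] = False
            return block

block = write_new_file(b"new program", disk, free_map)
assert block == 10                    # the map pointed at block 10...
assert disk[10] == b"new program"     # ...so the old data is destroyed
```

Once that happens, no amount of list repair can bring the overwritten data back; the lists can be rebuilt, but the blocks themselves are gone.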
Some repair utilities have routines that can handle -some- of this kind of corruption, but only if it happened THIS way and not THAT way, while others can handle those lost chains only if they happened THAT way, but not THIS.
And if you run too long with a disk that has corrupted lists of what is free space and what is not, then as time goes by, the corruption becomes worse and worse. Eventually you’ll get to a place where absolutely no software will ever be able to repair your drive… and you’re toast.
So: that’s why anecdotal reports that Drive Genius fixed something Disk Warrior couldn’t don’t mean diddly squat. Had the corruption been the other way around, DW would have fixed it and DG might have failed.
The fact is that the products that are out there now: Disk Warrior, Drive Genius, TechTool Pro, DiskTools Pro… all of them are good at what they do; most of them do fundamentally the same things; and all of them do those things slightly differently.
So anyone saying, as a general, overall statement “Disk Repair Product A works, and Product B doesn’t, so never buy Product B” simply doesn’t understand what is going on.
-Most- of the time, for simple directory corruption, any and all of those products will repair the catalog. Because simple corruption is the most common thing, and because Disk Warrior has chosen to concentrate on that, it has a justified excellent reputation. But there certainly are things it will not fix that the other packages will.
I hope this helps explain why you see so many “A works, B sucks… NO NO: B WORKS, it’s A that sucks!” comments out there.
>Thanks for another of your great contributions to this list.
>It brings up a few questions:
>If a drive is cloned, via SD or CCC, does that make files more contiguous?
Very good question. If you begin with a blank destination drive, then the answer is “yes.” This is a -much- faster way of “defragmenting” a drive than using defrag utilities.
(CCC offers a “block copy” which is an -exact- physical-layout copy of the source. Using it, therefore, dutifully copies all the fragmentation of the original.)
>It used to be recommended to drag copy a main drive to a blank drive, erase the main drive, then drag copy everything back rather than using one of the utilities to clean this up. Would either method help reduce this problem?
Drag and drop copies of a boot drive will not result in a bootable drive under OSX. File links and structure are broken, and the system will not operate.
>Is a reasonably frequent restore from backups a usable method to lower this kind of corruption? Obviously, this cannot correct a power drop failure to write correctly.
Not really. You’d not want to clone back something you did, say, a week ago, because you’d be losing all the files that were newly created during that week.
Fragmentation is perfectly natural, and not to be feared. The HFS+ system does a grand job of keeping it under control. Defragmentation is normally not needed on a Mac (or at least -very- rarely.) The only common use for it is with data drives devoted to huge files, such as video files which are being edited. A severely fragmented video file will not stream through the editor as smoothly as a defragmented one.
>One could infer from the article that using several (DW, WG, TTP) might not be a bad idea. Do they play well with each other?
Hmmm… as a “pro” with clients, I have one of each. Whether or not that’s a great idea in general is slightly up in the air. If money is not an issue, then having each wouldn’t hurt… with a caveat that comes from the second part of your question: it’s been my experience that “ganging them up” one after the other with the thought that one might fix what another missed, isn’t the best of all possible ideas.
A few years ago, before these had all matured, it was definitely a -bad- idea, since it frequently ended up making things worse. Now that they have matured, I’d be less hesitant about it… but only “less.” They each have various different routines to handle situations, and it’s almost impossible to say that they will or will not “mesh.”
My own approach is to use one, and then see if the issue is fixed. If it is, I’m done. If not, then I’ll try it again with the same software. If the issue still remains, only then will I try another. YMMV.
Someone asked if my explanation means that one should “regularly defragment your drive”…
Absolutely NOT! First, Apple’s HFS defrags many frequently used files automatically. Second, the fact that the file structure is complex doesn’t mean that defragging makes it that much less complex. Third: defragging is dangerous to your data, and should never be done at all without a current, full, bootable, backup. Fourth, if you’re going to create a clone/backup, you can defrag faster and safer by just wiping your drive and cloning back. Fifth, drives and the HFS are fast enough that you won’t likely notice any difference, unless – Sixth, you’re doing video or audio editing and you have a drive dedicated to it. THEN, you’ll find A/V will stream faster into your editor if the files are contiguous.
So, what do you use?
I own all of them. Some people say that Disk Warrior does what no others do – and that’s not correct. In fact, the analyze/rewrite catalog was first done by TechTool Pro years ago. Now they all do it. That said, Disk Warrior has concentrated almost exclusively on that aspect (although they’ve expanded their tool in later versions) and pretty much has directory corruption repair under control. Because of that, and because it’s generally thought of as a “one-trick-pony,” most folks (including me) go to it first.
That said, Drive Genius and TTP do the same thing… and they do more as well. I like Drive Genius for a quick check of the drive and for block cloning. It has other features which are used less frequently, and seems to perform well. TechTool Pro and DiskTools Pro also have their uses.
Each of these programs overlap the others to some extent. Do you need all of them? Probably not, unless you’re doing repairs professionally. The general consensus that Disk Warrior is the go-to tool is correct for most cases, since most problems are of the kind it repairs with great success.
But honestly, the others will do the same thing, and offer more features. The thing is, as I’ve explained above, there IS NO “one best” and I wouldn’t hesitate to recommend any of these.