|
Fair point. But I've been listening to his podcast for over a decade, and I have come to the conclusion that those who called him a quack were just poorly informed.
I forget what his exact concern was (something about XP's default network configuration?), but in the end he was proven right, and Microsoft eventually had to seriously lock it down with SP2, which turned on the Windows Firewall by default for the first time.
|
|
|
|
|
I remember him alright.
He isn't a quack, but has a tendency to fight windmills.
|
|
|
|
|
Jörgen Andersson wrote: fight windmills.
I had never heard of that one. That's a cute variation on "tempest in a teapot". I like it. Probably because it applies exactly. LMAO.
"Fighting windmills" is probably how I thought of him at the time I sided against him on some of his old claims. One thing I'll say for him, is that he's got honest beliefs. He believes in what he claims, and doesn't try to BS anyone. Which doesn't mean he can't ever be wrong.
|
|
|
|
|
Does a "defrag" use less space? Are there fewer "pointers" to follow? How much can you "save" in extreme cases? Is space a concern on a "maxed out" SSD?
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Gerry Schmitz wrote: Is space a concern on a "maxed out" SSD?
Is space not a concern on any maxed out drive, no matter what the underlying technology might be?
|
|
|
|
|
Defragging an SSD makes no sense, but trimming does.
SSD TRIM is an ATA command that lets an operating system inform an SSD which data blocks it can erase because they are no longer in use. The use of TRIM can improve the performance of writing data to SSDs and contribute to longer SSD life. It is an expensive operation, which is why it isn't performed every time a block is released.
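If you're curious whether your own machine is sending TRIM, here's a minimal sketch (Python on Windows; run from an elevated prompt, Windows 8 or later for the retrim step, and C: is just an example drive) using the built-in fsutil and defrag tools:

import subprocess

# Check whether Windows will send TRIM ("DeleteNotify") to SSDs.
# "DisableDeleteNotify = 0" in the output means TRIM is enabled.
status = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
)
print(status.stdout)

# Ask Windows to retrim the volume: TRIM is re-sent for all free space.
# This is exactly the "expensive, done occasionally" operation described above.
subprocess.run(["defrag", "C:", "/L"], check=True)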
See this explanation by Kingston Technology, a RAM and SSD drive manufacturer: The Importance of Garbage Collection and TRIM Processes for SSD Performance
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
|
I respectfully disagree. Windows recognizes that your drive is an SSD and doesn't defragment it. Instead, Windows performs other optimizations on SSDs that keep them working well. From my reading and understanding over the years, defragmenting does nothing at all for an SSD except needlessly burn read/write cycles. If you know of information supporting your viewpoint, I'd love to read about it.
|
|
|
|
|
Keefer S wrote: If you know of information supporting your viewpoint, I'd love to read about it.
You read the link?
|
|
|
|
|
Interesting read, and it explains why Raxco's PerfectDisk uses a "consolidate free space" algorithm by default for SSDs.
|
|
|
|
|
ok...
Note of course that the post is 9 years old, so maybe something has changed since then.
Additionally, it does not provide any references. The closest is the following:
"I dug deeper and talked to developers on the Windows storage team"
The first image is a screenshot. On my personal computer I can see that the service is not on, which suggests that even if Microsoft thinks this should be happening, it is not (at least on my machine).
The article says this.
"First, yes, your SSD will get intelligently defragmented once a month."
And it also says the following
"Windows 7, along with 8 and 8.1 come with appropriate and intelligent defaults and you don't need to change them for optimal disk performance."
I did not change the defaults, and as noted the service is not on. I am running Windows 10, so perhaps this is no longer as relevant.
Also, at least back then, in 2014, SSDs had a reliability problem. Maybe that has changed since.
|
|
|
|
|
Well, it was still around three years ago on Windows 10, since they released a bugfix for it.
Microsoft fixes Windows 10 bug causing excessive SSD defragging[^]
If you think about it, it makes sense to defrag SSDs too, just not very often.
If you get a lot of file fragments spread all over the disk, it causes excessive writes, since the files have to be spread across more blocks.
The default setting since Windows 8.1 is every 28 days.
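You can see that schedule on your own machine. A quick sketch (Python on Windows; the scheduled-task path below is the one recent Windows versions use, but it may differ on yours, and the analyze pass is read-only):

import subprocess

# Show the scheduled "Optimize Drives" task behind the ~28-day default.
subprocess.run(
    ["schtasks", "/Query", "/V", "/TN",
     r"\Microsoft\Windows\Defrag\ScheduledDefrag"],
    check=True,
)

# Analysis-only pass: reports fragmentation without writing anything.
subprocess.run(["defrag", "C:", "/A"], check=True)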
|
|
|
|
|
Yup. The problem isn't so much degraded performance speed-wise.
I think the problem is that fragmentation reduces the MTBF (mean time between failures). As fragmentation makes it increasingly impossible to write data in contiguous blocks, everything stored takes both more writes and more reads. I think this "death spiral" in fact killed more than a few early-adopter SSDs.
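Back-of-envelope, with numbers made up purely for illustration (4 KB pages, 512 KB erase blocks), the difference looks like this:

# Illustration of why fragmented free space costs extra writes on an SSD.
# All numbers are invented for the example: 4 KB pages, 512 KB erase blocks.
PAGE = 4 * 1024
BLOCK = 512 * 1024
file_size = 2 * 1024 * 1024          # a 2 MB file

pages = file_size // PAGE            # 512 pages to write either way

# Contiguous free space: the file fills whole erase blocks.
contiguous_blocks = file_size // BLOCK          # 4 blocks touched

# Fragmented free space: each free extent holds only 4 pages, scattered
# across different erase blocks, so far more blocks get dirtied and the
# drive's garbage collection later has to copy surviving pages around.
pages_per_extent = 4
fragmented_blocks = pages // pages_per_extent   # 128 blocks touched

print(pages, "pages to write")
print("contiguous: ~", contiguous_blocks, "erase blocks involved")
print("fragmented: ~", fragmented_blocks, "erase blocks involved")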
|
|
|
|
|
Even magnetic disks do low-level reallocation of blocks when a bad block is detected. The disk addresses appear contiguous, but one or more blocks may be physically located somewhere else. On a magnetic disk this of course affects average access time, though probably less than you would think.
SSDs always do a physical layer 'reallocation' (which is really an allocation, without the re), below the disk address level, to even out wear, so that the same physical blocks are not used again and again, but new writes are distributed among all free blocks. I would be very surprised if this allocation mechanism wouldn't handle bad pages as well.
What would make sense on an SSD is if the disk driver kept track of blocks in read-only files, written once and later only read. If the disk is 90% full, and other files come and go, even with wear leveling the remaining 10% of the blocks may have been written 50,000 times. It would make sense to move read-only files into this area, to provide a 'virgin' area for the next million block writes. (I never heard of any SSD doing this, but maybe some of them do.)
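To make the wear-leveling idea concrete, here is a toy sketch of the kind of mapping involved. Everything here is invented for illustration; real flash translation layers live in the drive firmware and are far more involved:

import heapq

class ToyFTL:
    """Toy flash translation layer: each logical block is stored in
    whichever physical block currently has the fewest erase cycles."""

    def __init__(self, physical_blocks: int):
        # Min-heap of (erase_count, physical_block) for all free blocks.
        self.free = [(0, pb) for pb in range(physical_blocks)]
        heapq.heapify(self.free)
        self.mapping = {}   # logical block -> (physical block, erase count)

    def write(self, logical: int) -> int:
        # Release the previously mapped physical block back to the pool.
        if logical in self.mapping:
            pb, erases = self.mapping[logical]
            heapq.heappush(self.free, (erases, pb))
        # Always take the least-worn free block for the new data.
        erases, pb = heapq.heappop(self.free)
        self.mapping[logical] = (pb, erases + 1)
        return pb

ftl = ToyFTL(physical_blocks=8)
for _ in range(5):
    # Rewriting the same logical block lands on a different physical block.
    print("logical 0 stored in physical block", ftl.write(0))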
|
|
|
|
|
"below the disk address level, to even out wear, so that the same physical blocks are not used again and again, but new writes are distributed among all free blocks. I would be very surprised if this allocation mechanism wouldn't handle bad pages as well. "
Yeah... that bit is what I thought TRIM was doing. Does defrag just do that now if it's pointed at an SSD? But I think Windows does that now anyway, without you specifically messing with it.
|
|
|
|
|
I've only read bits and pieces of Scott's article, looking for specific keywords, but (I think) what he fails to mention is that Windows has adapted its defrag approach, so it now knows how to tell an SSD apart from a spinner.
I believe there were justified concerns at the time. When SSDs first came out (if I remember my timeline correctly), XP's defragger just treated every drive like a spinner (the only thing it knew about) and blindly ran defrag code that only made sense for traditional drives. It was only later that MS introduced the TRIM command to Windows.
|
|
|
|
|
The TRIM command was introduced with Win 7, IIRC.
|
|
|
|
|
You're probably right.
And if it's that "recent", I suspect it was never backported to XP, and that XP's defrag would still try to do bad things to an SSD.
|
|
|
|
|
To my knowledge, this happens automatically anyway, in the form of TRIM, whereby unused blocks are cleaned up in some way.
In the Samsung FAQs, they also specifically recommend against using any kind of defragmentation, as that causes additional writes, which in turn shortens the lifespan of the SSD.
|
|
|
|
|
Good question. Other comments favor NO, or just TRIM (which is something else, and was an issue a long time ago, when you couldn't yet set things up to run automatically).
Arguments on the hardware side are pretty convincing, but then still:
- hardware: why is reading/writing the same bytes in different-sized requests so much slower for small ones?
- what about the OS having to issue many more disk requests, each switching out of user mode? Can't that slow things down?
- and finally, what about measuring? (a rough sketch follows at the end of this post)
My impression is that it does make a difference. So, max once a month, when I believe it is useful, I do a full defrag.
There is (in my case) an argument against it in differential backups (disk images): a defrag costs many more backup bytes than no defrag, so much so that after a few backups a new complete backup becomes an option.
So I will lower my defrag frequency even more, to say once in 3 months.
Never say never!
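On the measuring point, here is a crude sketch of the kind of test I mean (Python; the file path and sizes are placeholders, and OS caching will skew results, so use a file larger than RAM or clear the cache between runs):

import os, random, time

PATH = "bigfile.bin"        # placeholder: a large file on the drive under test
CHUNK = 4096
SPAN = 256 * 1024 * 1024    # read up to 256 MB worth of 4 KB chunks
size = min(os.path.getsize(PATH), SPAN)
offsets = list(range(0, size - CHUNK, CHUNK))

def timed_read(offs):
    # Read the given 4 KB chunks and return elapsed seconds.
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        for off in offs:
            f.seek(off)
            f.read(CHUNK)
    return time.perf_counter() - start

print("sequential:", timed_read(offsets), "s")
random.shuffle(offsets)     # same bytes, scattered order ~ a fragmented layout
print("scattered: ", timed_read(offsets), "s")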
|
|
|
|
|
The part that matters isn't so much performance, the way defrag helps with a spinning platter.
I think Windows is supposed to approach defrag differently based on the drive's firmware and whether it is an SSD (it just happens automatically).
The reason it matters to an SSD is that when an SSD gets heavily fragmented, it can mean much faster wear and tear on the drive.
This is because instead of being able to stick the data in one place, in a full page, it has to put it in multiple different places... so instead of 1 read/write for that file, you're now doing 5. Not exactly, but pretty much.
I think early on this, and the fact that SSDs needed to be treated differently in this regard, was not recognized, and it caused several brands of drives to die well before their expected MTBF.
|
|
|
|
|
jochance wrote: The reason it matters to an SSD is because when an SSD gets heavily fragmented, what it can mean is much faster wear and tear on the drive.
Quite a few links (posted here, and easily googled) describe what happens with an SSD.
With the older hard disk drives there was a physical spinning platter. Thus 'wear and tear' as the arm switched back and forth over the various tracks.
With an SSD there is no arm to move. Addressing is direct.
|
|
|
|
|
jschell wrote: Thus 'wear and tear' as the arm switched back and forth over the various tracks.
I never ever heard of a disk with a worn-out arm. (Nor of a loudspeaker with a worn-out voice coil; the mechanisms are similar, and the speaker has probably made orders of magnitude more back-and-forth moves.) There is no physical contact between the arm/head and the platter, and no physical wear from long use.
jschell wrote: With an SSD there is no arm to move. Addressing is direct.
Down to some level, yes. While a magnet can be flipped one way or the other a more or less unlimited number of times, a cell in an SSD wears out with repeated writes. A good rule of thumb used to be 100,000 writes; some people said that was overly optimistic. Technology may have improved, but all SSDs still use some kind of wear leveling: there is a mapping to physical blocks, so that writing is spread evenly over the free blocks. The external address you use when writing a block does not map directly to one physical location on the SSD.
|
|
|
|
|
I think they misunderstand. The wear was more on the bearing of the spinning platter. I think most of the arm casualties were from bearings that gave out and half a pound of spinning metal being set loose in the enclosure.
|
|
|
|
|
Yes I am aware.
It's not an arm moving that causes the wear. It's the natural process that every bit of flash is susceptible to, and lots of fragmentation can cause more of it.
If you google hard enough I'm sure you'll find I'm not fibbing to you.
|
|
|
|