|
The TRIM command was introduced with Windows 7, IIRC.
|
|
|
|
|
You're probably right.
And if it's that "recent", I suspect it was never backported to XP, so XP's defrag would still try to do bad things to an SSD.
|
|
|
|
|
To my knowledge, this happens automatically anyway, in the form of TRIM, whereby unused blocks are cleaned up in the background.
In the Samsung FAQs, they also specifically recommend against using any kind of defragmentation as that will cause additional writes, which in turn shortens the lifespan of the SSD. 
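If you want to check whether TRIM is actually active, that can be done (as far as I know) from an elevated command prompt:
fsutil behavior query DisableDeleteNotify
A result of DisableDeleteNotify = 0 means TRIM is enabled; 1 means it is disabled.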
|
|
|
|
|
Good question. Other comments favor NO, or just TRIM (which is something else, and was only an issue long ago, when you couldn't yet set things up to run automatically).
Arguments on the hardware side are pretty convincing, but then still:
- hardware: why is reading/writing the same bytes in differently sized chunks so much slower for the small chunks?
- what about the OS having to issue many more disk requests, each switching from user mode? Can't that slow things down?
- and finally, what about measuring? (See the analyze-only command below.)
My impression is that it does make a difference. So, at most once a month, when I believe it is useful, I do a full defrag.
There is (in my case) an argument against: differential backups (disk images). A defrag costs so many more backup bytes than no defrag that, after a few backups, a new complete backup becomes an option.
So, I will lower my defrag frequency even more, to say once every 3 months.
Never say never!
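For the measuring point above: an analyze-only run should show the fragmentation percentage without writing anything (if I recall the switch correctly):
defrag C: /A
It produces the same kind of report as a full run but stops after the analysis, so you can decide whether a defrag is worth the extra backup bytes.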
|
|
|
|
|
The part that matters isn't so much performance, the way defrag can help with a spinning platter.
I think Windows is supposed to approach defrag differently based on the drive's firmware and its being an SSD (it just happens automatically).
The reason it matters to an SSD is that when an SSD gets heavily fragmented, it can mean much faster wear and tear on the drive.
This is because instead of being able to stick the data in one place, in a full page, it has to put it in multiple different places... so instead of 1 read/write for that file, you're now doing 5. Not exactly, but pretty much.
I think early on, this, and the fact that SSDs needed to be treated differently in this regard, was not recognized, and it caused several brands of drives to die well before their expected MTBF.
|
|
|
|
|
jochance wrote: The reason it matters to an SSD is because when an SSD gets heavily fragmented, what it can mean is much faster wear and tear on the drive.
Quite a few links (posted here, and found when googling) describe what happens with an SSD.
With the older hard disk drives there was a physical spinning platter. Thus 'wear and tear' as the arm switched back and forth over the various tracks.
With an SSD there is no arm to move. Addressing is direct.
|
|
|
|
|
jschell wrote: Thus 'wear and tear' as the arm switched back and forth over the various tracks.
I never ever heard of a disk with a worn-out arm. (Nor of a loudspeaker with a worn-out voice coil; the mechanisms are similar, and the speaker has probably made magnitudes more back-and-forth moves.) There is no physical contact between the arm/head and the platter, and no physical wear from long use.
jschell wrote: With an SSD there is no arm to move. Addressing is direct.
Down to some level. While a magnet can be flipped one way or the other a more or less unlimited number of times, a cell in an SSD is worn out by repeated writes. A good rule of thumb used to be 100,000 writes; some people said that was overly optimistic. Technology may have improved, but still all SSDs use some kind of wear leveling: there is a mapping to physical blocks, so that writing is spread evenly over the free blocks. The external address you use when writing a block does not map directly to one physical location on the SSD.
|
|
|
|
|
I think they misunderstand. The wear was more on the bearing of the spinning platter. I think most of the arm casualties were from bearings that gave way and half a pound of spinning metal being set free in the enclosure.
|
|
|
|
|
Yes, I am aware.
It's not an arm moving that causes the wear. It's the natural process that every bit of flash is susceptible to, and lots of fragmentation can cause more of it.
If you google hard enough I'm sure you'll find I'm not fibbing to you.
|
|
|
|
|
Other posts provided links but I still have not seen anything authoritative.
The following, still not authoritative, seems to agree with what other, even less authoritative, sources say. And at least the post date is more recent.
https://www.pcmag.com/how-to/how-to-defrag-your-hard-drive-in-windows-10
That link, and others, state that an SSD should not be 'defragged' but rather 'trimmed', and that that is what the process does.
As noted in my other post, my Windows 10 computer does NOT have the stated process enabled. I didn't turn it off. And I think I remember installing Windows 10 directly (I remember because I was annoyed that it didn't come installed out of the box).
But I can't find anything that suggests whether the default is turned on or off.
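For anyone who wants to check their own machine: the optimization schedule appears to live in Task Scheduler (assuming the task path below, which is what it is called on my machines):
schtasks /Query /TN "\Microsoft\Windows\Defrag\ScheduledDefrag"
That reports whether the weekly optimization task is Ready or Disabled.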
|
|
|
|
|
No, it's utter nonsense. For at least two reasons.
For one, assuming you are running a recent Windows, NTFS isn't prone to fragmentation anymore in the way FAT used to be in the days of old.
And second, you are just wasting write cycles on that SSD (which are still limited below the lifetime of a "spinning rust" drive) for little gain, if any at all.
|
|
|
|
|
Doesn't Windows itself refuse to defrag an SSD? Try running Windows defrag on an SSD and it will simply do some 'trimming', and won't show any fragmentation status. That should be a clear answer. The maker of the OS should know best.
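If I remember the switch correctly, you can also trigger that retrim manually (Windows 8 and later) from an elevated command prompt:
defrag C: /L
It reports how much space was retrimmed rather than a fragmentation percentage.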
|
|
|
|
|
Try this: Run defrag c: from an elevated command prompt.
Be patient, it takes quite a while, and you get:
Pre-Optimization Report:
Volume Information:
Volume size = 930.65 GB
Free space = 868.64 GB
Total fragmented space = 20%
Largest free space size = 863.72 GB
Note: File fragments larger than 64MB are not included in the fragmentation statistics.
The operation completed successfully.
Post Defragmentation Report:
Volume Information:
Volume size = 930.65 GB
Free space = 868.64 GB
Total fragmented space = 0%
Largest free space size = 863.75 GB
Note: File fragments larger than 64MB are not included in the fragmentation statistics.
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
...downhill!
VS consuming huge amounts of memory isn't new (even MS decided to ignore it totally)...
But now I have something new... and I've confirmed it several times...
I have a solution with around 80 projects in it, only a few loaded at any given time... If I reload a project to change something, it will not compile until VS is closed and re-opened...
Until then it will report that compilation failed, without any actual error, but also without the option to run...
"If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization." ― Gerald Weinberg
|
|
|
|
|
After the last update of VS2022 my colleague reported that debugging with step over and step into didn't work anymore. It was not clear to me if he was talking about C++ or C# debugging; he also uses other debugging tools that might interfere with VS debugging.
|
|
|
|
|
RickZeeland wrote: debugging with step over and step into didn't work anymore. It was not clear to me if he was talking about C++ or C# debugging
Interesting you'd mention that. I installed the latest update last week, and on Thursday/Friday, on multiple occasions, single-stepping (F10) seemed to continue execution or couldn't recover or something like that. I attributed it to me fat-fingering it, but it happened enough times that, now that I see your post, I'm wondering if there's something to it.
In my case that would be C#.
|
|
|
|
|
Wow. Not testing much, are you, Microsoft?
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
One has to remember that multi-project solutions don't (always) compile if you haven't checked the proper project(s) in the "Build | Configuration Manager" unless you specifically ask to "Build / Rebuild" that project. (Been there)
On the other hand, when VS is "sleeping", it "seems" to release (more) excess memory. I think they're doing a lot of tinkering.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
I have a very precise dependency tree, so compiling the main project will compile everything that is outdated - I also mostly do build-solution...
But the main issue is that there is no error behind the failure, and re-opening VS solves the problem - which indicates that VS no longer knows how to reload an unloaded project correctly... (it is fixed by re-opening VS and the solution)...
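One thing I sometimes do when VS claims the build failed without showing an error is to build from a Developer Command Prompt to see whether MSBuild reports a real error (the solution name here is just a placeholder):
msbuild MySolution.sln /t:Build /v:minimal
If that builds cleanly, the problem is in VS's project reload rather than in the code.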
"If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization." ― Gerald Weinberg
|
|
|
|
|
Kornfeld Eliyahu Peter wrote: VS consuming huge amount of memory isn't new
Versus which IDE that uses very little?
Kornfeld Eliyahu Peter wrote: I have a solution with around 80 projects in it
To me that would be an organization problem. I would break it into different solutions, and if that were not possible, then it would suggest a different sort of problem.
|
|
|
|
|
Wordle 897 3/6*
🟩🟨⬛⬛⬛
🟩🟨🟩⬛🟩
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 897 3/6
🟩🟩⬜⬜⬜
🟩🟩⬜🟩🟩
🟩🟩🟩🟩🟩
All green 💚.
|
|
|
|
|
Wordle 897 3/6
⬛⬛🟩⬛⬛
⬛🟨🟩⬛🟨
🟩🟩🟩🟩🟩
|
|
|
|
|
⬜⬜🟩⬜⬜
🟨⬜⬜⬜⬜
🟩🟩🟩🟩🟩
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
|
|
|
|
|
Wordle 897 4/6
⬜⬜⬜⬜⬜
⬜⬜⬜⬜⬜
🟨🟨⬜🟩🟨
🟩🟩🟩🟩🟩
|
|
|
|