Transfer speed on disks was and is almost exclusively a matter of file size, so it should be easy to estimate a much better time than the dumb “total bytes / current speed” approach, which fluctuates constantly because file sizes are not all identical.
That’s so wrong. It always fluctuates because the speed itself always fluctuates. It’s only easy if you know the speed won’t fluctuate, e.g. because you aren’t using the computer for anything else at the same time.
Since file size is not taken into account, it fluctuates wildly even if you aren’t doing anything other than transferring those files.
File size is taken into account, but it’s just one factor among many.
Since when is that so? And where? Windows 11 essentially “pauses” when lots of small files follow bigger ones that copy at >1000 MiB/s, since the small files only reach perhaps 100 MiB/s.
Since forever. I can’t say for Windows since I haven’t used it in ages, but almost all sensible algorithms take it into consideration. There are also many other factors, such as which filesystem (ext4…) you use, and you can’t account for them all. Usually you simply add a small “overhead” constant per file, so a batch of small files incurs that overhead many times while a big file only incurs it once.
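To make the “overhead constant per file” idea concrete, here is a minimal sketch in Python. The throughput and per-file overhead figures are made-up illustration values, not measurements of any real OS or filesystem; the naive estimator is included only for contrast with the “total bytes / current speed” approach criticized above.

    # Minimal sketch of per-file-overhead time estimation.
    # 500 MiB/s throughput and 5 ms per-file overhead are illustrative guesses.

    def naive_eta(remaining_bytes, current_speed):
        """The 'total bytes / current speed' estimate the thread complains about."""
        return remaining_bytes / current_speed

    def overhead_eta(file_sizes, throughput=500 * 2**20, per_file_overhead=0.005):
        """Estimate seconds to copy `file_sizes` (in bytes), charging a fixed
        overhead per file on top of raw throughput. Small files are dominated
        by the overhead term, large files by the bytes/throughput term."""
        return sum(size / throughput + per_file_overhead for size in file_sizes)

    # Example: 10 000 small files take far longer than their byte count suggests.
    small = [64 * 2**10] * 10_000   # 10 000 files of 64 KiB each
    big   = [10_000 * 64 * 2**10]   # one file with the same total byte count
    print(overhead_eta(small), overhead_eta(big))  # ~51 s vs ~1.3 s

The point of the sketch is just that the two workloads have identical byte counts, so a pure bytes/speed estimate treats them the same, while the per-file constant separates them by roughly a factor of 40.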
On disk you have read/write misses and seeks, and due to constant RPM plus the platter geometry the read/write speed literally varies with the physical distance of the written data from the center of the platter (more bits pass under the head per revolution on the outer tracks).
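To put rough numbers on that: at constant RPM the head covers more track length per second near the outer edge, so with roughly constant linear bit density the sequential throughput scales with the track radius. A back-of-the-envelope sketch, with made-up radii and bit density for a 3.5" platter rather than any real drive’s specs:

    import math

    RPM = 7200                 # constant angular velocity
    LINEAR_DENSITY = 60_000    # bits per mm of track (illustrative guess)
    INNER_RADIUS_MM = 20       # rough usable radii of a 3.5" platter
    OUTER_RADIUS_MM = 46

    def sequential_throughput_mib_s(radius_mm):
        """Bits passing under the head per second at a given track radius:
        circumference * revolutions per second * bits per mm, in MiB/s."""
        bits_per_second = 2 * math.pi * radius_mm * (RPM / 60) * LINEAR_DENSITY
        return bits_per_second / 8 / 2**20

    print(sequential_throughput_mib_s(OUTER_RADIUS_MM))  # ~248 MiB/s
    print(sequential_throughput_mib_s(INNER_RADIUS_MM))  # ~108 MiB/s

With these assumed numbers the outer-to-inner ratio is simply the radius ratio, 46/20 ≈ 2.3, which is why sequential transfers slow down noticeably as a drive fills toward the inner tracks.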
The issue is still the same with SSDs.