
Flash storage transfer rates slower on parts never written to?

Heracleides
(@heracleides)
New Member

After purchasing a SanDisk USB 3.0 flash drive, I tried to benchmark its sequential read speed in GNOME Disks (which is sadly somewhat limited compared to HDDScan for Windows: it lacks full sequential scans, doesn't remember scan parameters, only works on mounted partitions rather than whole disks, can't start at an offset LBA, can't set the block size, and can't be paused to temporarily free up full transfer rates without the benchmark having to be restarted afterwards).

I noticed that parts already written to read at about 180 MB/s, while parts never written to only read at 50 MB/s.

Does this have something to do with the manufacturing process?

Topic starter Posted : 31/05/2021 3:51 am
Passmark
(@passmark)
Active Member

Maybe the data already (recently) written is now in the cache, so the reads are faster.

Reading other data that is not in the cache is slow.

Or depending on the drive & setup you might be seeing fast reads on highly compressible data (e.g. all zeros) and slow reads on data that can't be compressed (e.g. random data).
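The two hypotheses can be separated with dd. A read repeated on data already in the page cache comes from RAM, not the device; `iflag=direct` bypasses the cache entirely. A minimal sketch, using a scratch file for illustration (the device name `/dev/sdX` in the comments is a placeholder):

```shell
# After writing, the file's contents sit in the page cache, so this read
# comes from RAM, not the underlying device:
f=$(mktemp -p .)
dd if=/dev/urandom of="$f" bs=1M count=32 status=none
dd if="$f" of=/dev/null bs=1M 2>&1 | tail -n 1   # cached: RAM speed

# To measure the device itself, either drop the page cache first (root):
#   sync && echo 3 > /proc/sys/vm/drop_caches
# or bypass it with direct I/O on the block device (name hypothetical):
#   dd if=/dev/sdX of=/dev/null bs=1M count=200 iflag=direct
rm -f "$f"
```

If the fast/slow split survives a cache drop or direct I/O, caching is ruled out and the difference lies in the drive itself.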

Posted : 01/06/2021 2:43 am
Heracleides
(@heracleides)
New Member
Posted by: @passmark

Maybe the data already (recently) written is now in the cache.

No cache. Otherwise, it would have been much faster than 180 MB/s on the already written parts.

It must be because of the flash drive itself. It also happens after re-plugging.

Posted by: @passmark

you might be seeing fast reads on highly compressible data (e.g. all zeros) and slow reads on data that can't be compressed

The opposite is the case here. The unwritten parts (all zeroes) are read slower.

Topic starter Posted : 07/06/2021 11:17 am
jaclaz
(@jaclaz)
Community Legend
Posted by: @heracleides

After purchasing a SanDisk USB 3.0 flash drive, I tried to benchmark its sequential read speed in GNOME Disks (which is sadly somewhat limited compared to HDDScan for Windows: it lacks full sequential scans, doesn't remember scan parameters, only works on mounted partitions rather than whole disks, can't start at an offset LBA, can't set the block size, and can't be paused to temporarily free up full transfer rates without the benchmark having to be restarted afterwards).

I noticed that parts already written to read at about 180 MB/s, while parts never written to only read at 50 MB/s.

Does this have something to do with the manufacturing process?

Off-topic (but not much) is the first question that comes to mind:

IF that tool (GNOME Disks) is so bad and limited, WHY (the heck) do you use it (as opposed to another tool with all the features you are lamenting as missing)?

Given that it misses all these features, maybe it simply "sucks overall", and if that is the case, then the benchmark results it provides may simply be unreliable.

As a side note, when you say "parts never written to" you actually mean "parts that I never knowingly wrote to", which is not exactly the same thing: usually, during factory initialization, the whole flash is written to (possibly 00-filled) as part of configuring the controller, the volume(s)/LUN(s) settings, overprovisioning and bad-sector re-mapping. And remember that with (relatively) modern flash, the wear-leveling algorithms (internal to the controller) may make it so that you do not really know where you are writing to (or where exactly you are reading from).

jaclaz

Posted : 08/06/2021 12:59 pm
Heracleides
(@heracleides)
New Member
Posted by: @jaclaz

IF that tool (GNOME Disks) is so bad and limited, WHY (the heck) do you use it (as opposed to another tool with all the features you are lamenting as missing)?

GNOME Disks does suffice for some uses, but I have already looked for alternatives and found none. KDiskMark cannot produce line graphs, only numbers, which makes it effectively dd with some added eye candy for measuring transfer rates.

If you know of any fully sequential benchmarking tool for Linux like HDDScan for Windows, I would appreciate it if you let me know.
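In the absence of such a tool, a crude per-region scan can be approximated with dd's `skip=`, reading fixed-size chunks at increasing offsets and noting each chunk's rate. A sketch on a scratch file (against a real drive, a hypothetical `/dev/sdX` with `iflag=direct` would take its place):

```shell
# Crude sequential scan: read 10 MiB chunks at increasing offsets and
# print each chunk's transfer rate. A scratch file stands in here; for
# the stick, use if=/dev/sdX with iflag=direct (device name hypothetical).
target=$(mktemp -p .)
dd if=/dev/zero of="$target" bs=1M count=40 status=none
for i in 0 1 2 3; do
  # skip= advances the read offset in bs-sized units (10 MiB per step)
  dd if="$target" of=/dev/null bs=1M count=10 skip=$((i * 10)) 2>&1 | tail -n 1
done
rm -f "$target"
```

Plotting the per-chunk rates against the offset gives roughly the line graph GNOME Disks produces, but with full control over offset and block size.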

Posted by: @jaclaz

maybe it simply "sucks overall", and if that is the case, then the benchmark results it provides may simply be unreliable.

The results are fine. GNOME disks works as intended within its functionality, but said functionality is limited.

Posted by: @jaclaz

As a side note, when you say "parts never written to" you actually mean "parts that I never knowingly wrote to", which is not exactly the same thing: usually, during factory initialization, the whole flash is written to (possibly 00-filled) as part of configuring the controller, the volume(s)/LUN(s) settings, overprovisioning and bad-sector re-mapping. And remember that with (relatively) modern flash, the wear-leveling algorithms (internal to the controller) may make it so that you do not really know where you are writing to (or where exactly you are reading from).

Yes, I meant the logical blocks never written to. I used that SanDisk flash drive for a portable Linux installation with ext4, which has a spread-out writing pattern. The parts with LBAs not allocated to any file read at 50 MB/s, and the parts written to read at 180 MB/s. The stick is a few months old and was pre-formatted, as usual, with FAT32.

But a MicroSD card from the mid-2010s with somewhat damaged content actually has faster transfer rates on the unwritten LBAs than on the written ones. Around 15 GB of its 16 GB of user-accessible space were filled up; the last 1 GB was never written to at all. The transfer rate was around 2 MB/s on the damaged parts and higher on the never-written last 1 GB. I can't remember the exact rates, so I would have to test it again.

That MicroSD card is from Transcend, a brand I have mostly positive experiences with (next to SanDisk and Intenso), but with this strange exception. 32 GB and 64 GB Transcend MicroSD cards with a similar span of non-usage (~5 years) have, against all odds, retained 100% data integrity, which suggests considerable internal differences, even though one would expect the higher storage density (and thus smaller transistors) of the 32 GB and 64 GB cards to retain data for a shorter time. My best guess is more redundancy/error-correction code.


Topic starter Posted : 08/06/2021 7:11 pm
jaclaz
(@jaclaz)
Community Legend

But IF the tool you used accesses logical blocks (as, in your words, it only works on mounted partitions), then it is entirely possible that there is a logic (pardon the pun) in the program that somehow favours (or pre-caches, or *whatever*) allocated clusters, and that this only happens on some devices (but not others) because of the different controller (or, again, *whatever*).

jaclaz

Posted : 09/06/2021 9:40 am
Heracleides
(@heracleides)
New Member

@jaclaz I tested the direct I/O transfer rate:

#  dd if=/dev/sdf ibs=1048576 iflag=direct of=/dev/null count=200
200+0 records in
409600+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1,56158 s, 134 MB/s
# dd if=/dev/sdf ibs=2M iflag=direct of=/dev/null count=200
200+0 records in
819200+0 records out
419430400 bytes (419 MB, 400 MiB) copied, 2,98854 s, 140 MB/s
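A side note on the flags: with only `ibs=` set, dd still writes with the default 512-byte output block size, which is why "records out" is so much larger than "records in" in the runs above. `bs=` sets both sides at once. A quick illustration on a scratch file:

```shell
# ibs= sets only the input block size; output still uses 512-byte blocks,
# inflating the "records out" count. bs= sets both sides at once.
f=$(mktemp -p .)
dd if=/dev/zero of="$f" bs=1M count=4 status=none
dd if="$f" of=/dev/null ibs=1M count=2 2>&1 | head -n 2   # 2+0 in, 4096+0 out
dd if="$f" of=/dev/null bs=1M  count=2 2>&1 | head -n 2   # 2+0 in, 2+0 out
rm -f "$f"
```

This only affects the bookkeeping here (the measured read rate is the same), but `bs=` is the less surprising form.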
Topic starter Posted : 15/06/2021 3:56 am
mokosiy
(@mokosiy)
Junior Member

@heracleides Read more, at least 5 GB, with dd. It may show a different picture.

One of the simple approaches USB manufacturers use is caching just the first 3-5 percent of the NAND memory space.

Posted : 15/06/2021 8:14 am
Heracleides
(@heracleides)
New Member

@raydenvm

# sudo dd if=/dev/sdf ibs=2M iflag=direct of=/dev/null count=3000
3000+0 records in
12288000+0 records out
6291456000 bytes (6,3 GB, 5,9 GiB) copied, 45,1807 s, 139 MB/s
Topic starter Posted : 15/06/2021 10:50 am