4-channel 2-way read/write bandwidth

From OpenSSDWiki

Just Got Here
Threads 3
Posts 4
I played a bit with memory module and slot combinations and found that if you insert the two modules into slots 0,1 or 2,3 or 4,5 or 6,7, you get banks spread across all channels. Specifically, with slots 0 and 1 you get banks A0-D0 and A4-D4, assuming you've moved the jumper from AMIGOS to BAREFOOT. The resulting BANK_BMP configuration is not defined in include/bank.h, so I added it and ran some tests. Using the tutorial FTL with the 2-channel (i.e. default) configuration, I measured ~90MB/s sequential read/write speed; with 4 channels, read bandwidth rose to 140MB/s, while the increase in write speed was not noticeable. (All tests were run on Linux using dd with direct IO, read sizes of 256KB-512KB, and an input size of 1GB.)

I would like to hear any comments you have about these results, i.e. why there is an increase in read bandwidth but not in write, and why the read increase is 55% rather than, say, 100%.

Also, I'm curious about the read/write bandwidth others have gotten using the tutorial FTL. Specifically, has anyone reached the advertised 230MB/s bandwidth with any of the FTLs?

Clicked A Few Times
Threads 4
Posts 9
It might be that the read/write operations in your test are not fully sequential.
I did similar tests before. When I traced the <lba, sector_cnt> pairs of IO commands issued by the Linux 'dd' command, they were not fully sequential. I'm not sure, but this issue may be related to the Linux IO scheduler.

I recommend using the IOmeter benchmark tool on Windows for your experiments.
Sang-Phil Lim (M.S. Candidate Student)

VLDB Lab. (http://vldb.skku.ac.kr/)
Department of Embedded Software
Sungkyunkwan Univ., Korea.

Just Got Here
Threads 3
Posts 4
Quote:Lsfeel0204 Jan 6th 5:04 pm
I did similar tests before. When I traced the <lba, sector_cnt> pairs of IO commands issued by the Linux 'dd' command, they were not fully sequential. I'm not sure, but this issue may be related to the Linux IO scheduler.


The default IO scheduler is CFQ, which behaves like NOOP for a single process. I verified that the read/write requests are sequential by modifying the dummy FTL and printing the arguments passed to the ftl_read/ftl_write functions.

Quote:
I recommend to use a 'IOmeter benchmark tool' on Windows OS in your experiment.


I mistakenly ran the sequential read test first with Iometer, which gave a 210MB/s result, but that doesn't perform any NAND operations, since the tutorial FTL returns 0xFF for bytes that were never written. Running a write test followed by a read test gave me worse results than dd on Linux.

I look forward to hearing how others achieved high bandwidth in their tests.


Forum >> Jasmine OpenSSD Platform >> Jasmine Hacks


