gpcheckperf results from hammering a couple of our R510s. The servers are set up with twelve 3.5" 600GB 15K SAS 6Gb/s disks split into four virtual disks. The first six disks form one group: 50GB is split off for an OS partition and the rest goes into a data partition. The second set of six disks is set up the same way, with 50GB going to a swap partition and the rest going to another large data partition. The controller is configured with No Read Ahead, Force Write Back, and a stripe element size of 128KB. The data partitions are formatted with XFS, and the hosts run RHEL 5.6.
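For anyone wanting to reproduce the filesystem side of this, here is a minimal sketch of formatting and mounting the two data partitions. The device names (/dev/sdb1, /dev/sdc1) and the noatime mount option are assumptions for illustration; I didn't record the exact mkfs.xfs or mount flags used on these hosts.

# Assumed device names for the two data virtual disks; adjust to your layout.
mkfs.xfs -f /dev/sdb1    # data partition on the first six-disk RAID set
mkfs.xfs -f /dev/sdc1    # data partition on the second six-disk RAID set
mkdir -p /data/vol1 /data/vol2
# noatime is a common choice for Greenplum data volumes, but it is an assumption here.
mount -t xfs -o noatime /dev/sdb1 /data/vol1
mount -t xfs -o noatime /dev/sdc1 /data/vol2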
[gpadmin@mdw ~]$ /usr/local/greenplum-db/bin/gpcheckperf -h sdw13 -h sdw15 -d /data/vol1 -d /data/vol2 -r dsN -D -v
====================
==  RESULT
====================

disk write avg time (sec): 85.04
disk write tot bytes: 202537697280
disk write tot bandwidth (MB/s): 2275.84
disk write min bandwidth (MB/s): 1087.34 [sdw15]
disk write max bandwidth (MB/s): 1188.50 [sdw13]
-- per host bandwidth --
   disk write bandwidth (MB/s): 1087.34 [sdw15]
   disk write bandwidth (MB/s): 1188.50 [sdw13]

disk read avg time (sec): 64.67
disk read tot bytes: 202537697280
disk read tot bandwidth (MB/s): 2987.98
disk read min bandwidth (MB/s): 1461.30 [sdw15]
disk read max bandwidth (MB/s): 1526.68 [sdw13]
-- per host bandwidth --
   disk read bandwidth (MB/s): 1461.30 [sdw15]
   disk read bandwidth (MB/s): 1526.68 [sdw13]

stream tot bandwidth (MB/s): 8853.81
stream min bandwidth (MB/s): 4250.22 [sdw13]
stream max bandwidth (MB/s): 4603.59 [sdw15]
-- per host bandwidth --
   stream bandwidth (MB/s): 4603.59 [sdw15]
   stream bandwidth (MB/s): 4250.22 [sdw13]

Netperf bisection bandwidth test
sdw13 -> sdw15 = 1131.840000
sdw15 -> sdw13 = 1131.820000

Summary:
sum = 2263.66 MB/sec
min = 1131.82 MB/sec
max = 1131.84 MB/sec
avg = 1131.83 MB/sec
median = 1131.84 MB/sec
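One note on reading the output: the "tot bandwidth" figures look like the sum of the two per-host numbers rather than a single-host rate, e.g. for writes 1087.34 + 1188.50 = 2275.84 MB/s. So each R510 on its own is doing a bit over 1GB/s of sequential writes across its two data volumes.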
3 replies on “Benchmarks for R510 Greenplum Nodes”
Thanks for posting these details! Can I ask what type of RAID configuration this is? We have a pretty similar configuration: six 15K RPM 600GB SAS drives in a single RAID10 volume on Dell R510s with 64GB of RAM.
The system is under a lot of load right now, so I can’t get a clean gpcheckperf run, but the highest numbers we’ve seen in the past have been around 500MB/s read/write. We’re using a PERC H700.
I’m kind of surprised that you can get to >1GB/s write performance with six drives. In RAID10 I would expect something like 180MB/s per drive times three drives’ worth of throughput, or roughly 540MB/s. Unless you’re using RAID0 and trusting GP mirroring to save you when a disk fails? (Maybe a smart idea…)
Maybe I need to try reconfiguring one of our servers.
Those results were on R510s with twelve of the same disks you mention, set up as two six-disk RAID5 sets, and the benchmark hits both RAID sets. So if you’re getting ~500MB/s with half the number of disks, you’re close to what I was seeing. We went with RAID5 because space was a major concern; the four extra disks’ worth of capacity outweighed the performance we’d gain from RAID10.
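To put rough numbers on that (ignoring the 50GB OS and swap carve-outs and filesystem overhead): two six-disk RAID5 sets give about 10 x 600GB = 6TB of usable space per server, while two six-disk RAID10 sets would give about 6 x 600GB = 3.6TB, so RAID5 buys roughly four disks’ worth (about 2.4TB) of extra capacity.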
Ah, thanks! I didn’t notice that you had two data directories in gpcheckperf and that the total was a 2-machine total. Makes a lot of sense now.