Benchmarks for R510 Greenplum Nodes

gpcheckperf results from hammering against a couple of our R510s. The servers are set up with twelve 3.5" 600GB 15k SAS 6Gb/s disks split into four virtual disks. The first six disks form one group, with 50GB split off for an OS partition and the rest dropped into a data partition. The second set of six disks is set up in a similar fashion, with 50GB going to a swap partition and the rest going to another big data partition. The controller is configured with No Read Ahead, Force Write Back, and a Stripe Element Size of 128KB. Partitions are formatted with XFS, running on RHEL 5.6.
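
Roughly how the data partitions get laid down per host (a sketch only; the device names and mount points are assumptions, and the virtual disks themselves are carved out in the PERC controller config):

# Sketch of the partition layout described above. /dev/sdb and /dev/sdc
# are assumed names for two of the RAID virtual disks.
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 0% 50GB              # OS partition
parted -s /dev/sdb mkpart primary 50GB 100%            # first data partition
parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc mkpart primary linux-swap 0% 50GB   # swap
parted -s /dev/sdc mkpart primary 50GB 100%            # second data partition
mkfs.xfs -f /dev/sdb2
mkfs.xfs -f /dev/sdc2
mkswap /dev/sdc1
mount -o noatime /dev/sdb2 /data/vol1
mount -o noatime /dev/sdc2 /data/vol2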

[gpadmin@mdw ~]$ /usr/local/greenplum-db/bin/gpcheckperf -h sdw13 -h sdw15  -d /data/vol1 -d /data/vol2 -r dsN -D -v
 ====================
 ==  RESULT
 ====================

  disk write avg time (sec): 85.04
  disk write tot bytes: 202537697280
  disk write tot bandwidth (MB/s): 2275.84
  disk write min bandwidth (MB/s): 1087.34 [sdw15]
  disk write max bandwidth (MB/s): 1188.50 [sdw13]
  -- per host bandwidth --
     disk write bandwidth (MB/s): 1087.34 [sdw15]
     disk write bandwidth (MB/s): 1188.50 [sdw13]

  disk read avg time (sec): 64.67
  disk read tot bytes: 202537697280
  disk read tot bandwidth (MB/s): 2987.98
  disk read min bandwidth (MB/s): 1461.30 [sdw15]
  disk read max bandwidth (MB/s): 1526.68 [sdw13]
  -- per host bandwidth --
     disk read bandwidth (MB/s): 1461.30 [sdw15]
     disk read bandwidth (MB/s): 1526.68 [sdw13]

  stream tot bandwidth (MB/s): 8853.81
  stream min bandwidth (MB/s): 4250.22 [sdw13]
  stream max bandwidth (MB/s): 4603.59 [sdw15]
  -- per host bandwidth --
     stream bandwidth (MB/s): 4603.59 [sdw15]
     stream bandwidth (MB/s): 4250.22 [sdw13]

 Netperf bisection bandwidth test
 sdw13 -> sdw15 = 1131.840000
 sdw15 -> sdw13 = 1131.820000

 Summary:
 sum = 2263.66 MB/sec
 min = 1131.82 MB/sec
 max = 1131.84 MB/sec
 avg = 1131.83 MB/sec
 median = 1131.84 MB/sec

Testing out UAC

I haven’t been a big fan of Greenplum’s performance monitoring tools. It’s been a couple of years though, so it’s time to try them again and see what we’ve got. I just got UAC up and running today and it’s looking fairly nice.

Finding trouble spots

I’ve been fighting some database performance issues recently and started using the following query to look for in-use tables that show 0 rows. These are likely to be unanalyzed tables being used in queries. We have queries to look through the entire database for potential unanalyzed tables, but it takes far fewer resources to just look at what’s currently in flight and address what we’re currently hammering on. There are a lot of other tables joined in that I don’t need for the visible data set, but I’ve got them there in case I need to start ungrouping things and pulling in more specific data.

SELECT n.nspname AS "schema_name",
       c.relname AS "object_name",
       c.reltuples,
       c.relpages
FROM pg_locks l,
     pg_class c,
     pg_database d,
     pg_stat_activity s,
     pg_namespace n
WHERE l.relation = c.oid
AND l.database = d.oid
AND l.pid = s.procpid
AND c.relnamespace = n.oid
AND c.relkind = 'r'
AND n.nspname !~ '^pg_'::text
AND c.reltuples = 0
GROUP BY 1, 2, 3, 4
ORDER BY 1, 2;
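
From there it’s a short hop to something actionable; a variation along these lines (just a sketch) emits the ANALYZE statements to run:

SELECT DISTINCT 'ANALYZE ' || n.nspname || '.' || c.relname || ';'
FROM pg_locks l,
     pg_class c,
     pg_stat_activity s,
     pg_namespace n
WHERE l.relation = c.oid
AND l.pid = s.procpid
AND c.relnamespace = n.oid
AND c.relkind = 'r'
AND n.nspname !~ '^pg_'::text
AND c.reltuples = 0
ORDER BY 1;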

Data Science Summit 2012

I’m attending the 2012 Data Science Summit and I am happy to report it has been well worth my time. It isn’t a nuts-and-bolts conference about what technologies to use or how to use them, which processes you should follow, or which machine learning algorithms to apply in a given situation. What it does have is presentations and panels on topics around working with data that apply directly to much of the work I do.

Dashboarding Greenplum

When Greenplum first landed in our shop they wanted us to use gpperfmon. It quickly became obvious that it wasn’t stable at the time and that it created way too much overhead. So a couple of years ago I came up with my own dashboarding tools that rely on the database as little as possible and live outside of the cluster. My thinking being that if the cluster is down, it’s pretty hard to troubleshoot what’s wrong with it when the stats are kept in the cluster itself. The tool I came up with blends some Greenplum query checks with sar data and uses MegaCLI to pull disk health. Here’s a quick glimpse so you can get an idea of what I’ve got going.
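
The collection side is nothing fancy; it boils down to something like this (a sketch only — the host file, output paths, and MegaCLI binary name vary by install):

# Pull stats from each segment host into flat files for the dashboard to read.
# hosts.seg and /var/dashboards/gp are hypothetical paths.
for host in $(cat hosts.seg); do
    # CPU utilization samples collected by sar for the current day
    ssh $host "sar -u" > /var/dashboards/gp/${host}.cpu
    # Physical disk health straight from the RAID controller
    ssh $host "MegaCli64 -PDList -aALL | grep 'Firmware state'" > /var/dashboards/gp/${host}.disks
done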

FATAL website has all kinds of issues

Went to drop an update on the website and it is in all kinds of pain. Looks like there is some work to be done.

Falling behind

The gpadmin site is sorely lacking in updates recently. That’s not to say I don’t have things to post about. Just haven’t had the time to make a reasonable post about them. Look for some updates soon.

Figuring out what’s running on nodes

I use this to figure out what’s happening on the nodes and whether I might have hanging queries on one of them. Each postgres worker carries a con#### token in its process title, so counting processes per connection shows at a glance when one node is doing more (or less) work for a query than its peers.

gpssh -f hosts.seg
=> ps -ef | grep postgres | grep con | awk '{print $12}' | sort | uniq -c; echo "===="
[node02]       8 con5397
[node02]      12 con5769
[node02]       9 con5782
[node02]       4 con5989
[node02] ====
[node04]       8 con5397
[node04]      12 con5769
[node04]       8 con5782
[node04]       4 con5989
[node04] ====
[node03]       8 con5397
[node03]      12 con5769
[node03]       8 con5782
[node03]       4 con5989
[node03] ====
[node01]       8 con5397
[node01]      12 con5769
[node01]       8 con5782
[node01]       4 con5989
[node01] ====
=>
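
The con#### number should line up with sess_id in pg_stat_activity on the master, so once a connection looks suspect it can be traced back to the actual query. Something along these lines (session id made up):

-- Find the query behind connection con5782
SELECT procpid, sess_id, usename, query_start, current_query
FROM pg_stat_activity
WHERE sess_id = 5782;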

GP 4.0.5.4 out

GP 4.0.5.4 just rolled out and it looks like it addresses some bugs we’ve hit. We had upgraded from 4.0.5.1 to 4.0.5.3 and had to revert because we started seeing segments popping offline. The README notes for the new release say it addresses this.

GPDB-4054-README

Greenplum HD Announced

Reading the news on the Greenplum HD announcement. I find it especially interesting because one of the main reasons I had an initial flurry of posts here and then trailed off was that I got heavily involved in our Hadoop installation and restructuring it. We’re currently using Cloudera’s Hadoop packages, and the way they handle distributing their software is about as good as you can get. I’m interested to see how Greenplum’s version of the software works. I’d heard talk of a couple of MapReduce implementations at startups that were seeing impressive performance improvements. In a large enterprise there is definitely a place for both Hadoop and an MPP database, and the trick is getting them to share data easily, which is why I was very impressed to see the 4.1 version of Greenplum gain the ability to read from HDFS.
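
If I’m reading the 4.1 docs right, that HDFS access is just an external table over the gphdfs protocol, something like this (the host, path, and columns here are made up):

-- Hypothetical external table reading delimited files out of HDFS
CREATE EXTERNAL TABLE ext_weblogs (
    ts      timestamp,
    user_id bigint,
    url     text
)
LOCATION ('gphdfs://namenode:8020/data/weblogs/')
FORMAT 'TEXT' (DELIMITER '|');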

The big question is how well Greenplum is going to be able to support the release going forward. Greenplum is based on an older version of Postgres, and I get a monthly question from someone about some feature that is in a later version of Postgres but doesn’t seem to be in Greenplum. Is their Hadoop implementation going to get the same treatment? Will Greenplum be able to keep up with the frequent changes to the Hadoop codebase and keep their internal product up to date? Will it even really matter?

One extremely interesting thing we should see over the next year is a push on how to integrate EMC SAN architecture into both Greenplum and Hadoop. The old pre-EMC Greenplum sounded much like what I’ve heard in my Cloudera interactions: “Begrudgingly we see a use case for SAN storage, but might I suggest instead you cut off your left arm and beat yourself to death with it first.” I realize we’re working with really big data here, so looking at SAN storage seems insane at first. But once you get into managing site-to-site interactions, non-disruptive backups, and keeping consistent IO throughput across dozens if not hundreds of servers, spanning different generations of hardware, you can see the play.

I’m taking a really long look at Flume right now, and that’s a key feature Cloudera’s implementation will have over what I’ve seen from Greenplum. The fault-tolerant NameNode and JobTracker look interesting, but I don’t see these as very high risks in the current Hadoop system, and as I understand it they are already in the process of being addressed in core Hadoop. The performance promises are “meh”; I don’t take any bullet point that says X-times speedup seriously. It could be true, but you really need a good whitepaper to back up a speed-improvement boast.

So for me the jury is still out. It looks cool and I can’t wait to actually use it, but I said the same thing about Chorus a year ago and we still haven’t been approached with a production-ready version of it.