DantePasquale 04-09-2010 02:26 PM

Veritas Cluster Filesystem on RedHat - du command time anomaly
 
Hi,

I have a 2-node RH 5u4 64-bit cluster. I have installed and configured the latest Veritas CFS (Cluster File System), which also uses Veritas Cluster Server. The file system is VxFS. Storage is on EMC Symmetrix arrays with Veritas mirroring between the arrays.

We have noticed that running 'du -hs' on the shared directory/filesystem takes about 3 minutes on one node and 30-45 minutes on the other node.

I've been running strace on 'du'. 'du' runs an 'lstat' on each file (66,000+ files). On the slower node, the average time per 'lstat' call is about .001 seconds longer, which accounts for the 30-45 minutes. Also, the standard deviation is much larger, which tells me the lstat times are all over the place!
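In case anyone wants to compare numbers, this is roughly how I pulled the per-call stats (the mount point is a placeholder):

Code:

    # -T makes strace append the time spent in each syscall as <seconds>;
    # restrict the trace to lstat, then let awk compute mean and stddev.
    strace -T -e trace=lstat du -hs /shared/docs 2>&1 \
      | awk -F'[<>]' '/lstat/ { n++; s += $2; ss += $2*$2 }
          END { m = s/n; printf "calls=%d mean=%.6fs stddev=%.6fs\n", n, m, sqrt(ss/n - m*m) }'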

Another interesting thing is that iozone profiling shows that the I/O rates from both nodes are darn near identical, with no anomalies at various buffer & file sizes! And iostat looks really good, as does 'vxdmpadm iostat show'!!!
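For reference, the profiling was along these lines (sizes and the file path are placeholders):

Code:

    # Automatic iozone sweep of record/file sizes on the shared filesystem,
    # capped at a 2 GB test file; the path is a placeholder.
    iozone -a -g 2g -f /shared/docs/iozone.tmp
    # Extended per-device stats every 5 seconds during the run.
    iostat -x 5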

Any ideas on this one? Is lstat an issue on large clustered directories?

Thanks, Danté

MensaWater 04-10-2010 07:33 AM

Do both nodes have the same memory and CPU? Are they the same in other respects? Are the fibre HBAs the same?

You might want to check kernel parameters (man sysctl) to verify both nodes have the same settings.
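For example, something along these lines (hostnames are placeholders):

Code:

    # Dump every kernel parameter on each node, sorted, then compare.
    ssh node1 'sysctl -a | sort' > node1.sysctl
    ssh node2 'sysctl -a | sort' > node2.sysctl
    diff node1.sysctl node2.sysctl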

Are you trying the du on the second node while there is heavy activity on the first node?

Does shutting down the first node have any effect on this? It may be some sort of file locking.

DantePasquale 04-10-2010 08:39 AM

Great ideas to check; I should have pointed out that both nodes are identical hardware, the RPMs installed post-build and any post-build configuration are the same, and both were built from the same kickstart image.

Currently, WCM is running, but there's little to no activity on this server, as the app guys are waiting on me to (formally) hand it over to them.

I'll recheck the kernel params ... they have been adjusted per Oracle's recommendations; Oracle now owns WCM, and we also have an Oracle client that connects to a back-end database which stores information about the files in the clustered file system. I'll check on Monday!

DantePasquale 04-12-2010 01:09 PM

Here's what I've been able to find so far:

Packages on both servers are identical, as is the hardware. Veritas DMP is set up with the min-q-length ruleset on all DMP devices, and that appears to be working as expected.
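For the record, this is roughly how I checked the DMP side (the enclosure name is site-specific):

Code:

    # Show the active I/O policy on the enclosure behind the CFS volumes;
    # 'EMC0' is a placeholder for the real enclosure name.
    vxdmpadm getattr enclosure EMC0 iopolicy
    # Watch per-path I/O while the du runs, sampling every 5 seconds.
    vxdmpadm iostat show interval=5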

Kernel settings are identical on both nodes, too.

An strace of the du command shows a large delay after getdents is called, but only on the *slow* node:

Code:

    0.000089 lstat("94436.xml", {st_mode=S_IFREG|0664, st_size=9001, ...}) = 0
    0.000083 lstat("94437.xml", {st_mode=S_IFREG|0664, st_size=9001, ...}) = 0
    0.000167 lstat("94438.xml", {st_mode=S_IFREG|0664, st_size=9001, ...}) = 0
    0.000131 getdents(4, /* 1024 entries */, 32768) = 32768
    0.004166 lstat("94439.xml", {st_mode=S_IFREG|0664, st_size=9001, ...}) = 0
    0.000091 lstat("94440.xml", {st_mode=S_IFREG|0664, st_size=9001, ...}) = 0

And that is very consistent, but it happens on both nodes even if I swap the CFS primary server!

I have a feeling that some metadata update occurs after getdents is called, and that it requires some communication/handshake which shows up under lstat, since lstat is probably waiting on something to happen within the volume/filesystem.
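If anyone wants to see it for themselves, tracing getdents and lstat together with wall-clock timestamps should show whether the stall sits inside getdents itself or in the first lstat after each new batch of entries (the mount point is a placeholder):

Code:

    # -tt adds wall-clock timestamps with microseconds, -T the in-syscall time;
    # grep shows each getdents plus the first syscall issued right after it.
    strace -tt -T -e trace=getdents,lstat du -hs /shared/docs 2>&1 \
      | grep -A1 getdents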

Sound familiar to anyone?

