Poor Progress Performance on NetApp Filer

si7757

New Member
I have deployed a Progress DB on a NetApp Filer. The server running Progress is a Solaris box, connected via two GigE links, with NFS as the filesystem. Technically everything works, but performance, particularly on random IO (reads or writes), is very poor. Sequential IO appears fine, running 60-80 MB/s sustained; random IO is at 1-3 MB/s and tends to drop over time.
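For reference, here is a minimal sketch of the kind of test that shows the gap -- sequential vs. random reads against a file on the NFS mount. The path, block size, and read count are placeholder assumptions; make the test file much bigger than the client-side cache so you measure the filer, not local memory.

[code]
import os
import random
import time

PATH = "/nfs/testfile"   # hypothetical test file on the NFS mount
BLOCK = 8 * 1024         # 8 KB, a typical Progress database block size
COUNT = 10000            # reads per test

def mb_per_sec(fd, offsets):
    # Read BLOCK bytes at each offset and return throughput in MB/s.
    start = time.time()
    for off in offsets:
        os.pread(fd, BLOCK, off)
    return (len(offsets) * BLOCK) / (time.time() - start) / 1e6

fd = os.open(PATH, os.O_RDONLY)
blocks = os.fstat(fd).st_size // BLOCK

sequential = [i * BLOCK for i in range(COUNT)]
random_off = [random.randrange(blocks) * BLOCK for _ in range(COUNT)]

print("sequential: %6.1f MB/s" % mb_per_sec(fd, sequential))
print("random:     %6.1f MB/s" % mb_per_sec(fd, random_off))
os.close(fd)
[/code]

Sequential reads benefit from NFS readahead and the filer's cache, which is why they can look fine while random reads crawl.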

Has anyone had similar experiences running Progress against a NetApp box?
 
FROM PTW2005 Adam Backman:
RAID 5 is a really, really bad configuration for databases. Only in some cases is it merely a really bad configuration.

Look at solution KB21152.

[snippet]
At a minimum, all write operations to RAID 5 arrays require writing the data to one disk and writing an equal amount of "parity" or error-correction information to a second disk. In many cases, a single write operation will actually require 4 disk i/o operations -- two reads to get the previous data and parity information, and two writes to update the new data and parity information.
[/snippet]
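To put rough numbers on that write penalty, here's a back-of-the-envelope sketch; the spindle count and per-disk IOPS are illustrative assumptions, not measurements:

[code]
# Spindle count and per-disk IOPS are illustrative assumptions.
DISKS = 6
DISK_IOPS = 150          # random IOPS per spindle, a typical figure

raw = DISKS * DISK_IOPS

# RAID 5: each small random write costs 4 disk I/Os
# (read old data, read old parity, write new data, write new parity).
raid5_writes = raw / 4

# RAID 10: each write costs 2 disk I/Os (one per mirror side).
raid10_writes = raw / 2

print("raw array:           %d IOPS" % raw)
print("RAID 5 write limit:  %d IOPS" % raid5_writes)
print("RAID 10 write limit: %d IOPS" % raid10_writes)
[/code]

The point: the same spindles deliver half the random write capacity under RAID 5 as under RAID 10, before any cache effects.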

If you have a SAN solution with a large cache, then this effect can be overcome. (I've seen SAN solutions which perform well even though the SAN has a RAID 5 configuration.)

Casper.
 
You can sometimes get RAID5 to perform almost as well as RAID10 (mirroring + striping) if you spend a whole lot of money on a large cache and fancy controllers. (Of course that isn't really RAID5 anymore -- it's RAID5 plus a bunch of stuff.) Other than lining sales weasel pockets, I'm not quite sure what the point of that is. I guess some bean counter somewhere probably feels that it is better to spend more money than to "waste" disk space. (Bean counters don't seem to understand that disks aren't about space -- they're about IO operations per second.) Personally, I'd spend the money on enough disk drives to do a proper configuration using RAID10. I'd spend the surplus on beer ;)

A "large cache" is generally something on the order of 16GB or so... IOW if you can fit a big chunk of your db in the cache then RAID5 might not be completely horrible. Most of the time. It will still be dreadful if you do something IO intensive that saturates the cache -- things like dumping and loading, rebuilding indexes, restoring and rolling forward...

RAID5 is also dreadful if you lose a disk and the array is putting your data back together for you while you scrounge around for a replacement.

If none of that matters then by all means go ahead and implement RAID5. If it's too slow hire me -- I'll be happy to come out and help you make it faster by getting rid of the RAID5. ;)
 