Question OE 11.7.3 on RHEL 7.4

RealHeavyDude

Well-Known Member
#1
I've just had a discussion with our storage admins concerning the file system types used for databases with RHEL.

Up to this point I was under the impression that we would get the Ext4 file system for the database, but now, checking the system I just got, I see that the file systems are xfs.
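For anyone wanting to check the same thing, a quick way to verify which file system type a mount is using (the path below is just a stand-in for the actual database directory):

```shell
# Print the filesystem type backing a given path; "/" here is a stand-in
# for the database directory
df -T /

# stat can report the type name directly as well
stat -f -c %T /
```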

In the product availability guide I spotted the following:

XFS is the default file system for RHEL 7.0 64-bit/CentOS 7.0 64-bit, and OpenEdge certifications have been carried out using XFS.

Therefore I'll take it that xfs is okay for RHEL.
What file system would you use? Would you prefer Ext4 over xfs?

Thanks in Advance and Best Regards, RealHeavyDude.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
#2
On internal RHEL/CentOS systems we have a mix. Earlier 6.x systems use ext4 for the most part, and newer 7.x use xfs.

I can't say that I have a preference and I haven't done any comparison benchmark testing between them. Unfortunately as we have transitioned from bare-metal and direct-attached disks to a mix of virtualization technologies and low-end "SAN" storage, I/O performance is at best about a third of what it once was so I don't see much point in such testing on my systems.

That said, I haven't encountered any xfs-specific issues yet and like you, I feel it's safer to be on the well-traveled path with PSC.

If you or someone in your organization have the opportunity and the resources to do comparison testing in isolation, it would be interesting to see whether there are measurable differences in read and write performance with OpenEdge-based benchmarks, e.g. readprobe or ATM. Do you think you will have that opportunity?
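Short of a full readprobe/ATM run, even a crude synchronous-write smoke test can flag a badly configured storage layer before you invest in proper benchmarking. A minimal sketch with dd (path and size are illustrative, and this is no substitute for an OpenEdge-level benchmark):

```shell
# Write 8 MB in 8 KB blocks and flush to disk; dd reports the throughput
# on stderr when it finishes
dd if=/dev/zero of=/tmp/fstest.bin bs=8k count=1024 conv=fsync

# Clean up the scratch file
rm -f /tmp/fstest.bin
```

On a healthy local or SAN-backed filesystem this completes in well under a second; if it crawls, the problem is below the filesystem choice entirely.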
 

RealHeavyDude

Well-Known Member
#3
Hi Rob,

unfortunately I won't get a chance to do comparison testing of Ext4 vs. xfs in the near future. Like you, we didn't get bare metal - instead we got a RHEL VM with file systems residing on a SAN (it should be a high-end EMC one). But from my point of view - compared with Solaris SPARC virtual zones and ZFS file systems residing on a SAN - I don't think we should see worse performance with RHEL. Nevertheless, I will do some comparisons. Luckily I got 64 instead of 32 GB for the test system and 128 instead of 64 GB for the prod system. That should allow me to at least double -B ...
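For reference, doubling -B works out like this as back-of-envelope arithmetic (purely illustrative numbers: an 8 KB database block size and half of the 64 GB host given to the buffer pool; your block size and workload will dictate the real value):

```shell
# -B is specified in database blocks, so convert RAM to KB and divide
ram_gb=64
blocksize_kb=8
buffers=$(( ram_gb * 1024 * 1024 / 2 / blocksize_kb ))

# 4194304 buffers of 8 KB = 32 GB of buffer pool
echo "proserve mydb -B $buffers"
```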

Considering that at my employer we're running some 3'000 servers, that is the best I could get.

Thanks and Best Regards, RealHeavyDude.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
#4
Unless the virtualization guys do something really bad, I think you'll have *much* better performance with this new setup than you had with Solaris/ZFS. And with more cache you'll also do less physical I/O, making the difference even bigger. I'm sure you're looking forward to it!
 