As a general statement, RAID5 sucks if the performance of an application matters -- regardless of whether the application is a Progress application or some other application.
You often don't notice that performance sucks (hence the generally low level of recognition that RAID5 is devil spawn) because read operations dominate most applications and RAID5 read performance is mostly ok (not great, just ok). Furthermore, vendors realize that RAID5 sucks, so they don't really deliver RAID5 -- they deliver RAID5 plus a big RAM cache, which mostly masks most of the problem most of the time. But note the word "most" (used thrice).
For the RAM cache to be effective it has to lie to the OS about when writes are completed ("write back" vs "write thru"). If it is lying ("write back"), and it has reordered the write operations (a common optimization), and a bad thing, such as a power outage, happens, then the database can be corrupted. This used to be fairly common. Vendors have become smarter, but you usually have to pay a lot more for a trustworthy implementation of RAID5 (think EMC or IBM). If you have a low cost RAID5 "solution" you probably have a lot more risk than you realize. (All the more reason to be aggressively leveraging after-imaging and getting those ai files archived onto another machine pronto...)
It is also important to recognize that even "large" caches can be easily saturated by certain operations. It doesn't take very long for a database restore, an ai roll forward, a dump and load, an index rebuild, a dbanalys, a dbtool run, or a large table scan to saturate the cache (do the math, it isn't hard). At which point you're limited by the naked RAID5 -- and RAID5 write performance is essentially that of a single disk. While many of these operations are relatively rare, they also take place under extreme time pressure -- restoring and rolling forward, for instance, isn't usually something that you're doing just for kicks. The boss is probably pacing the aisle by your cube.
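The "do the math" part really is easy. A minimal sketch, using purely hypothetical numbers (a 16 GB controller cache and a restore streaming writes at 500 MB/s -- substitute your own figures):

```python
# Hypothetical numbers -- plug in your own controller cache size
# and the sustained write rate of your restore / roll forward / load.
cache_gb = 16            # controller write cache (GB)
write_mb_per_sec = 500   # sustained write rate of the operation (MB/s)

seconds_to_fill = (cache_gb * 1024) / write_mb_per_sec
print(f"Cache saturates in roughly {seconds_to_fill:.0f} seconds")
```

With these made-up numbers the cache is full in about half a minute -- and every write after that runs at naked RAID5 speed for the rest of the operation, which might be hours.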
Read performance can also suck with RAID5. When a disk fails and the data is being reconstructed from parity, you'll see what I mean.
Lastly -- even if you aren't having any particular performance problems, your performance is significantly less than it could be if you had instead implemented RAID10 (striping, aka RAID0, and mirroring, aka RAID1, combined).
Database performance is not dependent on disk space (which is what RAID5 optimizes). Nor does it generally depend on disk throughput as measured in bytes per second (that's important for sequential IO like streaming video on demand). Instead, an RDBMS depends on disk operations -- IO ops per second. The only way to get more IO ops per second is to get more disks -- and since disks get bigger (space wise) faster than they get faster, that generally means you'll have lots of empty space. Bean counters have a hard time with that concept. Never allow the empty space to be called "wasted" -- set it aside for particular purposes that do not contend with the database for the IO ops. Good uses include things like a recovery area -- so if you have to restore you do not have to overwrite your database (sometimes backups turn out to be no good; you'll want your broken production database if that happens to you) -- and an upgrade area (it is very handy to be able to fall back on the untouched database if an upgrade has to be rolled back), and so forth.
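A back-of-envelope sketch of why RAID10 wins on write IO ops. The standard write penalties apply: a small random write costs RAID5 four physical IOs (read data, read parity, write data, write parity) but RAID10 only two (one per mirror). The per-disk figure below is an assumption (around 180 IOPS is a common ballpark for a 15k RPM spindle):

```python
# Rough random-write IOPS for an array, given standard RAID write penalties:
# RAID5 = 4 physical IOs per logical write, RAID10 = 2.
# per_disk_iops is an assumed ballpark for one 15k RPM drive.

def random_write_iops(disks, write_penalty, per_disk_iops=180):
    return disks * per_disk_iops / write_penalty

for disks in (4, 8, 16):
    r5 = random_write_iops(disks, write_penalty=4)
    r10 = random_write_iops(disks, write_penalty=2)
    print(f"{disks:2d} disks: RAID5 ~{r5:.0f} write IOPS, RAID10 ~{r10:.0f}")
```

Same spindle count, double the random-write capacity -- and the gap is what you are paying for with all that "wasted" space.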