Database size and time for maintenance

aldionh

Hello everybody.

I'm wondering if you could share with me the largest Progress database you have handled and how long it took you to perform a dump and load on it.

We have 60 GB databases and it takes between 36 and 48 hours to complete the maintenance process.

Thanks in advance for your reply.
 
It depends greatly on the hardware resources available, the version of Progress used and the techniques that you apply.

But I've been able to d&l databases of that size in around 6 hours...
 
Our database is around 20GB and it's set up as UTF-8. The dump takes about 3 hours, loading takes about 4~5 hours.
The filesystem is on a NAS box that gives me a throughput of up to 60 MB per second. The machine itself is a VMware ESX environment with Linux ES; the hardware underneath is some Dell box with 4 x 3.66 GHz processors.

I do a weekly offline backup and then truncate the BI file. (The AI and BI files also reside on that NAS box.) Once a year I perform the D/L procedure. (I know it's recommended to do it more often, but there are 'reasons'.)
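For what it's worth, that weekly routine boils down to something like the following (the database name 'mydb' and the paths are just placeholders, and the exact options depend on your Progress version, so take it as a rough sketch):

   # shut down the broker, then take the offline backup
   proshut /db/mydb -by
   probkup /db/mydb /backup/mydb.bck
   # truncate the before-image file while the database is down
   proutil /db/mydb -C truncate bi
   # start the broker again
   proserve /db/mydb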

Still, 36~48 hours is a long time. Could it be that your disks are in a RAID 5 array? RAID 5 can be a performance killer for write-heavy database work.

Perhaps you can inform us about your hardware config, or how much throughput you achieve on your disks?
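If you don't have numbers handy, a quick-and-dirty sequential test with dd already gives a rough idea (the path and sizes are just examples, and this is the Linux variant; on other platforms the native tools differ):

   # write roughly 2 GB sequentially and force it to disk
   dd if=/dev/zero of=/db/ddtest bs=1M count=2048 conv=fsync
   # read it back
   dd if=/db/ddtest of=/dev/null bs=1M
   rm /db/ddtest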

Note that I'm not a windoze guy, but with Linux I may be able to give you some tips & tricks.


Regards,
Willem
 
IMO dump time largely depends on the 'state' of the database you're dumping. Once you have a well-configured Type II storage area database and some decent hardware, then in my experience dump and load times don't vary that much.
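To illustrate what I mean by Type II storage areas: the data area lines in the structure (.st) file carry a blocks-per-cluster value (8, 64 or 512) after the semicolon. The lines below are only an example layout, not a recommendation for your data; the area names, numbers and records-per-block values have to be chosen for your own tables:

   b /db/prod/bi
   d "Schema Area":6,64 /db/prod
   d "Data":7,64;512 /db/prod/data
   d "Indexes":8,8;8 /db/prod/index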

I recently dumped some very scattered 20GB+ databases. Dump time varied between 4 and 6 hours. The load took about 15 minutes and the index rebuild some 30 minutes. I can now easily dump these databases within the hour.
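Those numbers come from the standard binary dump/load cycle, which per table looks roughly like this ('customer', 'newdb' and the directories are placeholders):

   # binary dump of one table into a dump directory
   proutil /db/mydb -C dump customer /dump
   # binary load of the resulting .bd file into the freshly created database
   proutil /db/newdb -C load /dump/customer.bd
   # once all tables are loaded, rebuild the indexes
   proutil /db/newdb -C idxbuild all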

One more thing: don't believe the recommendation to do a dump/load on some kind of fixed schedule. It all depends on the nature of the database and how the data within it changes. In theory you don't need to reload a well-configured database (for performance reasons) unless the way the database is used changes (e.g. fields are added which make records bigger, or users decide one day that it is nice to write essays in some fields, etc. :-)).
So in short: only do a dump/load if proper measurement tells you to. Keep monitoring the database and the data will tell you what to do, and when. :)
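For that measurement a dbanalys run is usually enough; something like the line below (database name and output path are placeholders) reports record counts, fragmentation and scatter factor per table, and watching how those figures drift over time tells you when a dump/load is actually worth the downtime:

   proutil /db/mydb -C dbanalys > /tmp/mydb.dbanalys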

Casper.
 
Thank you all for your response.

We are running Progress 10 on AIX. It could be that we are using RAID 5 arrays for the disks, or maybe some other kind of setup.

Do you know where I could find references about the proper techniques and recommendations for setting up the disks?

Regards.
 