QAD too slow

Dear All,

Our QAD processes have become very slow:
1. MRP (menu 23.2): 16 hrs.
2. BOM Roll-Up (13.12.13): 5 hrs.
3. Routing Roll-Up (14.13.13): 5 hrs.

What is wrong with our QAD database?

Cringer (Moderator, Staff member)
How long should they take? What has changed in the meantime? Is the server running ok? Changes in the network?
We are not clairvoyant here, although it would definitely help. I would look to run some diagnostics and monitoring on the server as a starting point. Maybe investigate installing ProTop (the free version will do just great).
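Before installing anything, a quick first pass is to check basic OS-level health on the database server. A sketch, assuming a Linux/Unix host; the database path below is illustrative:

```shell
# Basic OS-level health checks on the database server (Linux assumed).

uptime            # load average: is the box overloaded?
vmstat 5 5        # memory pressure and CPU wait (high 'wa' suggests I/O bound)
iostat -x 5 5     # per-disk utilization and service times (sysstat package)
df -h             # any filesystem unexpectedly full?

# Progress's own monitor (ships with OpenEdge) for database-level stats.
# The path /db/prod/qaddb is made up for the example:
promon /db/prod/qaddb     # then e.g. option 5 (Activity) for buffer hits etc.
```

If the OS-level numbers look healthy, the next step is database-level monitoring, which is where ProTop shines.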


"Now" it is too slow? Was it faster before? If it was faster before "now" then the main question should be "what has changed"?

If it has always been too slow then an investigation of your configuration and potential tuning options will probably be necessary. As Cringer mentions, ProTop is an excellent tool to help you get started with that: Progress OpenEdge Monitoring with ProTop - White Star Software

In either case it would also be useful to know what the *expected* and *acceptable* run times are for the functions that are "too slow".

Rob Fitzpatrick (Sponsor)
"What is wrong with our QAD database?"
I get questions like this from time to time, where someone asks a question that implies they have already jumped to a conclusion.

If the application is slow, and in particular slower than it used to be, how do you know that the problem is the database being slow? Do you have evidence that someone changed the database configuration for the worse?

Have you investigated other possible contributing factors, e.g.:
  • application code change
  • application configuration change
  • application propath change
  • user behaviour change
  • user count change
  • network change
  • server OS change
  • server workload change
  • virtualization host change
  • storage subsystem change
  • etc. etc.
Also: have you talked to QAD?

Thank you for your answers.
We run these menus as batch jobs and see the following in the log files:
1. MRP (23.2): 16 hrs. now; 4-6 hrs. last month.
2. BOM Roll-Up (13.12.13): 5 hrs. now; 1-2 hrs. last month.
3. Routing Roll-Up (14.13.13): 5 hrs. now; 1-2 hrs. last month.

We have also found a corrupt record in tr_hist and would like to rebuild its indexes this weekend.

What is your suggestion on this issue?


What is your evidence for having found a "corrupt record"?

Depending on what you have actually found you may, or may not, have a good reason to rebuild one or more indexes.

It is unlikely that a single "corrupt record" is responsible for the sort of performance difference that you describe and rebuilding one or more indexes is not very likely to result in any significant performance improvement.
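If you do decide a rebuild is warranted, for reference an index rebuild in Progress is done offline with proutil. A sketch; the database path is illustrative, and the database must be shut down first:

```shell
# Offline index rebuild sketch - database name/path is made up for the example.
# The database must NOT be running.

proutil /db/prod/qaddb -C idxbuild    # interactive: rebuild all or selected indexes

# Afterwards, dbanalys reports on table and index storage, which helps
# confirm the state of things:
proutil /db/prod/qaddb -C dbanalys > dbanalys.out
```

Plan for downtime: idxbuild on a large table like tr_hist can take a long time, which is another reason to be sure it is actually needed.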

It is far more likely that there has been a change in your environment. Some of the more common changes that lead to this sort of dramatic performance loss are:

- migration to new hardware
- conversion to a virtualized host
- implementation of a SAN
- increased activity by other applications on shared infrastructure
- accidental changes to Progress startup parameters or configuration options
Hi Tom,

We have now fixed the corrupt record in tr_hist, but performance has not improved.

As a test, we copied the data to a test server into a single folder structure, with all database extents on one disk, and ran MRP again.
MRP 23.2 finished there in 2 hours 30 minutes, so performance on the test server is much better.

But on the production database, where the extents are spread across many disks (db1, db2, db3, ...), the MRP process still takes 16 hours.
Does spreading the database across many disks affect QAD database performance?

What is your idea?


If the disks are all equally performant from a hardware point of view, and you do not cripple them with RAID5 etc., then spreading data across multiple disks will generally be faster.

If consolidating to a single disk helped things then I imagine the previous disks must have been saturated or very slow devices. You might see something like that if the original disks are on a RAID5 SAN that is in degraded mode (a disk has failed and data is being supplied via parity calculations) or if you consolidated onto an internal SSD. In a case like that, sure, you could see a big improvement.
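For context, the on-disk layout being discussed is defined in the Progress database structure file (.st). A simplified, purely illustrative example spreading extents across several mount points; the area names, sizes, and paths are made up:

```
# Illustrative .st structure file - all names, sizes, and mount points
# are invented for the example.
b /db1/qaddb.b1
d "Schema Area":6,32;1 /db1/qaddb.d1
d "Data":7,64;8 /db2/qaddb_7.d1 f 1024000
d "Data":7,64;8 /db2/qaddb_7.d2
d "Index":8,32;8 /db3/qaddb_8.d1 f 512000
d "Index":8,32;8 /db3/qaddb_8.d2
```

Whether that spread helps or hurts depends entirely on what is behind those mount points: several independent fast spindles help; several paths into one saturated or degraded array do not.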

Test servers are often faster than production servers. In many cases this is because IT has not crippled them by inflicting a SAN on them or putting them in a brain-damaged VM.