How to reduce the growth of the BI file during CIM load

mjdarm

Member
Hi All,

We have a requirement to pass a CIM load to QAD's simulation cost rollup. We will need to do this for around 50 sites and around 24 cost sets per site, i.e. 50 * 24 CIMs.

The QAD version is eB2, character version, on Unix.

While doing this, the process takes a hell of a lot of time. Moreover, the BI file grows enormously; sometimes it crosses 2 GB and the database crashes.

Is there any parameter we should look into to reduce the growth of the BI file and improve performance?

The DB is in multi-user mode. The BI file I mention above is actually the .b1 file.

The parameters for the main database in the startup script are below:
-L 400000 -c 350 -B 1000

The parameters used when connecting to the DB are below:
-Bt 350 -D 100 -mmax 3000 -nb 200 -s 63

Please note that this is being done in a development environment and there are hardly 10 users. The server is IBM AIX with a minimal configuration: 1 GB RAM and 1 processor.

Any ideas will help us. It is a little urgent, please.

Mugundan
 

RealHeavyDude

Well-Known Member
I don't know QAD at all, so I don't know anything about the Progress/OpenEdge version you are using ...

The only way to decrease BI file growth is to make your transactions smaller. There is hardly anything you can do on the server side by changing startup parameters or the BI file configuration. The root cause of unwanted BI file growth is long-running transactions and the transaction load itself, and that is determined by the application logic.

What you can do is have a look at performance and stability. I must admit this is not the most familiar terrain for me, but:

  • Make sure your database block size is 8K (assuming your Unix filesystem has an 8K block size).
  • If you are using the Enterprise license, make sure you have the BIW (Before-Image Writer), the AIW (After-Image Writer) if AI is used *, and enough APWs (Asynchronous Page Writers) running.
  • If you are using the Enterprise license, you should enable large files to prevent the database from crashing when it hits the 2 GB file limit.
  • Have a look at the BI block size and cluster size. You can change these settings by truncating the BI. To determine useful values, one would need to know the actual settings in the first place.
    • Increasing the BI cluster size in conjunction with the background writers will influence your checkpoint interval.
  • The -B parameter seems way too low to me. 1000 buffers, even with a database block size of 8K, is just 8 MB of memory ... A rough command-line sketch of these steps follows below.
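To make the list above concrete, here is a rough sketch of the corresponding commands, assuming an Enterprise license, an 8K database block size, and a database named mfgdev -- the name, paths, and values are placeholders for illustration, not recommendations for your box:

  # Offline steps -- shut the database down first (proshut mfgdev).

  # Allow extents to grow past 2 GB (Enterprise license):
  proutil mfgdev -C enablelargefiles

  # Truncate the BI and set a 16 KB BI block size and a 16 MB (16384 KB)
  # BI cluster size; sensible values depend on your current settings
  # and transaction load:
  proutil mfgdev -C truncate bi -biblocksize 16 -bi 16384

  # Restart multi-user with a bigger buffer pool; with only 1 GB of RAM
  # you cannot go huge, but -B 10000 at 8K blocks is already 80 MB:
  proserve mfgdev -L 400000 -c 350 -B 10000

  # Start the background writers against the running broker:
  probiw mfgdev
  proapw mfgdev
  proapw mfgdev

The BIW takes the BI writes away from the self-service clients, which usually helps a batch job like a CIM run noticeably.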
But just a warning: performance tuning is not a magic button. It takes experience and time to tune a system, and one would need a lot more information to give serious advice. Others may correct me here or add to it ...


HTH, RealHeavyDude.

*) Anybody who is responsible for a Progress database with valuable data in it should agree with me that using AI is not an option but an absolute must. And AI is not tied to any specific license; it also works when you access a database in single-user mode.
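For reference, enabling AI on an offline database goes roughly like this -- a sketch only; the database name mfgdev, the .st file, and the extent paths and sizes are placeholders:

  # addai.st describes the new AI extents to add, e.g.:
  #   a /db/mfgdev.a1 f 102400
  #   a /db/mfgdev.a2 f 102400
  prostrct add mfgdev addai.st

  # A full backup is required before after-imaging can be enabled:
  probkup mfgdev /backup/mfgdev.bck

  # Begin after-imaging:
  rfutil mfgdev -C aimage begin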
 

TomBascom

Curmudgeon
Good advice.

Two additional notes:

1) There is no 2 GB limit on BI growth in version 9 and greater. If the DB is crashing when the BI file hits 2 GB, it is because you have not enabled "large files" (or the OS has not enabled them) or because you are using -bithold.
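As a sketch of the defensive options (the database name and threshold are placeholders): enable large files with proutil, or, if you must cap the BI, stall instead of crashing:

  # Remove the 2 GB per-file limit (Enterprise license, db offline):
  proutil mfgdev -C enablelargefiles

  # Or cap BI growth but stall rather than shut down when the
  # threshold (in MB) is reached, so an admin can intervene:
  proserve mfgdev -bithold 1800 -bistall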

2) BI growth is also sensitive to transactions in other sessions. QAD is especially infamous for having transactions that span user interaction -- IOW, a user starts a transaction and then goes to lunch (or goes home) without committing or rolling back the TRX, leaving it active. When this happens Progress cannot reuse space in the BI file, which causes it to grow very rapidly. In production environments QAD customers typically monitor for "long running transactions" and may kill those users.
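One way to watch for those from the command line (menu labels are from the classic promon interface and vary slightly by version; mfgdev and the user number are placeholders):

  # Inspect active transactions and their start times:
  promon mfgdev
  #   -> R&D -> 1. Status Displays -> Active Transactions

  # Disconnect a user (here user number 42) whose transaction has
  # been open for hours:
  proshut mfgdev -C disconnect 42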
 

tamhas

ProgressTalk.com Sponsor
One might note that -L 400000 is a strong indicator of a transaction scoping problem as well.
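A quick way to check that suspicion (mfgdev is again a placeholder) is to watch the lock table while the CIM load runs; classic promon's main menu has entries for the record locking table and locking statistics:

  promon mfgdev
  #   -> 4. Record Locking Table          (locks held, by which user)
  #   -> 2. Locking and Waiting Statistics
  # A single transaction that needs anywhere near 400,000 locks is
  # scoped far too wide.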
 