Rule of thumb for BI Size?

JTWilson

New Member
I'm hoping somebody out there has a rule of thumb for BI size. I understand the size of this file has little to do with the actual size of the DB and everything to do with the volume of updates and new data writes. Does anybody have a way to estimate the size of a BI file? I'm sure there must be some sort of formula based on the Record Creates / Updates and other DB activity available from the PROMON R&D menus.

Also, there seems to be a subculture of running 'proutil -C truncate bi'. Are there any performance gains in doing this, apart from reclaiming space on the file system? From reading the documentation on BI files, they normally grow to the required size over time. BI size increases can also happen due to performance issues, when the BI is not able to free up old clusters and so new ones are created. Is there any way to tell the difference between an optimal BI size and one that's a mammoth because of a lack of free clusters, i.e. as a result of poor configuration?

In my case I have a DB that shows up under PROMON as 11GB with a BI file of 6192MB (6.2GB). I'm thinking that I have a performance problem here :) Since downtime is a massive issue with this DB, it's not possible to just play with parameters and see what happens. So any parameters I add will need to be based on feedback from PROMON.

Cheers,

JW

 

JTWilson

New Member
I’ve been doing some searching around this site and have found the following:

A reference to Progress KB 20566 has been quite helpful, at least in figuring out a good size for my BI clusters. Its recommended checkpoint interval is 120 seconds, while the DB currently averages 18 seconds with the default cluster size of 512KB. Using the formula in that article: (120 [desired checkpoint interval] / 18 [current average]) * 512KB [current size] = 3413.33KB, which rounded up to a multiple of 16 gives 3424KB. So next time I do a BI truncate I'm going to set this value.
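For reference, that KB 20566 calculation can be sketched in shell arithmetic (the numbers here are my own figures from above; the round-up-to-a-multiple-of-16 step matches what proutil accepts for the -bi cluster size):

```shell
# Scale current cluster size by (target checkpoint interval / observed interval),
# then round up to a multiple of 16 KB.
current_kb=512   # current BI cluster size
observed=18      # current average seconds between checkpoints
target=120       # desired seconds between checkpoints

raw=$(( current_kb * target / observed ))   # integer division: 3413
rounded=$(( (raw + 15) / 16 * 16 ))         # round up to multiple of 16: 3424
echo "$rounded"
```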

The only worry is I currently have 12,682,496 clusters. So if I'm having a problem freeing up my BI clusters, making the cluster size roughly 6.7 times bigger (120/18 ≈ 6.7) could also make my BI bigger by the same factor, and I'd end up with a BI of around 41GB, assuming I end up with the same number of clusters as before.

Anybody have any ideas on this?
 
OK, let's see if we can help you out a bit.

Rule 1. The only thing about BI size that matters is that it shouldn't "just keep growing".

There is no rule 2 (except on older versions of Progress, where 2GB was the hard limit for BI size).

Tuning the BI.
You should be aiming for checkpoints about every 1-2 minutes at peak load. With a checkpoint every 18 seconds (call it 20 for round numbers) at 512KB, I would suggest a cluster size of 4096KB (I think it's powers of 2, not multiples of 16). Combine this with a sensible BI blocksize (no less than 4K) and you should see the checkpoints start to smooth out.

To address the question of number of clusters.
The BI cluster size is set as part of the truncate BI command, which will shrink your BI file to 4 (count them: 1, 2, 3, 4!) clusters. You should (after a few tries) be able to gauge how much BI you need by checking the size at the end of the day. If it is more than a few hundred clusters then you have some code issues that should be addressed (long-running transactions). Create fixed-length extents for your BI file to allow for the normal amount of usage + 10% (rule of thumb), plus a variable-length extent.
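The truncate-and-resize step above looks roughly like this at the command line. This is an illustrative sketch, not from the thread: "sports" is a hypothetical database name, the database must be shut down first, and the extent description file (add_bi.st) is something you would write to match your own usage figures:

```shell
# Database must be offline. Truncate the BI, setting a 4096 KB cluster
# size and a 16 KB BI block size in one pass:
proutil sports -C truncate bi -bi 4096 -biblocksize 16

# Then add fixed-length BI extents (sized for normal daily usage +10%)
# plus a trailing variable-length extent, described in add_bi.st:
prostrct add sports add_bi.st
```

Both the cluster size and the block size only take effect via a truncate, which is why they are set together here.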

Now that the checkpoints are happening a little less frequently, you need to make sure that you are not getting large numbers of buffer "flushes" at the checkpoint. These make performance lumpy, with noticeable pauses. I'm not sure if your version of Progress (Enterprise or otherwise) supports APWs (Asynchronous Page Writers) and the BIW (BI Writer). Check out PROBIW and PROAPW in the manuals, but definitely try to start the BIW and at least one APW for each database.
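Starting the writers is a one-liner each, run against a broker that is already up. Again a sketch with a hypothetical database name, and assuming your license includes these utilities:

```shell
# Start the before-image writer and one asynchronous page writer
# against the running "sports" database:
probiw sports
proapw sports
```

You can then watch the "Buffers Flushed" column in the PROMON R&D checkpoint screen to confirm the flushes at checkpoint drop toward zero.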

Now, sit back and watch it fly. Revisit these settings frequently!
 

ron

Member
I agree fully with Toby's comments.

Just to add some figures ... we have a 150GB database with 160 online users and run lots of batch jobs during the day as well as at night.

We use 16K clusters (with a 16K BI blocksize). The BI was last truncated about two weeks ago, and at the moment it has only grown to 5 clusters.

Ron.
 