Performance degradation

We have recently experienced a marked degradation in the performance of our system following changes to certain database startup parameters. This is especially noticeable when running large batch processes that create a lot of records (up to 4 times slower).

The original parameter file is as follows
Code:
-B 50000 # Buffers
-H xxxxxx # Host name
-N TCP # Network type
-S xxxxxxx # Service name
-n 300 # Maximum number of users
-Mn 30 # Maximum servers per database
-Mi 5 # Minimum Clients per Server
-Ma 10 # Maximum Clients per Server
-bibufs 16 # Before image buffers
-aibufs 25 # After image buffers
-spin 2000 # Spin lock retries
-L 10000 # Lock table entries
-aistall # Stall rather than shut down when all AI extents are full
(AIW, BIW, PROWDOG, 3 APWs)

The amendments/additions made are as follows
Code:
-B 20000 # Buffers
-bibufs 50 # Before image buffers
-aibufs 100 # After image buffers
-tablerangesize 200 # Track access statistics for table numbers up to 200
-indexrangesize 500 # Track access statistics for index numbers up to 500
-groupdelay 250 # Group commit delay in milliseconds before a BI flush
(AIW, BIW, PROWDOG, 2 APWs)
The databases are on a domain of a Sun E10K running Solaris 8 (kernel patch 108528-19). They have two-phase commit (2PC) and after-imaging (AI) enabled. Can anyone identify any parameter change, or combination of changes, that would cause this excessive slowdown in writing to disk?
 
Hi Norman

So you have cut -B to 2/5 of its original size and implemented -tablerangesize and -indexrangesize.

Ignoring the change in -B (which I presume was made for a good reason and has not changed the buffer hit rate at all!), there was a suggestion that on older versions of Progress, -tablerangesize and -indexrangesize could consume significant amounts of CPU on a busy system.

Each read now has to perform additional writes to the VSTs that monitor table and index usage. If you follow a B-tree index of depth 3 straight down to a record, that could mean up to three writes to the index-read statistics and one to the table-read statistics.

I think I would check the overhead of -indexrangesize by disabling it at the next available opportunity.

I presume that -spin, -n, etc. are unchanged; in that case I would lift -spin to 10000.
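As a minimal sketch of what I mean (assuming the rest of the amended .pf stays exactly as posted above), those two changes would look something like this:
Code:
-B 20000            # Buffers
-bibufs 50          # Before-image buffers
-aibufs 100         # After-image buffers
-tablerangesize 200 # Keep table statistics for now
# -indexrangesize 500  removed temporarily to measure its overhead
-groupdelay 250     # Group commit delay in milliseconds
-spin 10000         # Raised from 2000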

Any promon stats available? How about SAR output? General feeling for the bottleneck?
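If nothing has been collected yet, something along these lines run during one of the slow batch jobs would give a first picture (a sketch only; the database path is a placeholder and the exact promon menus vary by Progress version):
Code:
promon /path/to/db   # R&D -> Activity screens: buffer hits, BI/AI activity, latch/resource waits
sar -u 5 12          # CPU utilisation (usr/sys/wio/idle)
sar -d 5 12          # Per-disk activity - watch the BI and AI volumes
sar -g 5 12          # Page-out and scan rate (memory pressure)
vmstat 5             # Run queue, paging and CPU at a glance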

If this is a domain of an E10K, have there been significant changes to the profile of the other apps on the E10K that could cause contention for shared resources (system bus, I/O controllers, etc.)?
 

drunkahol

Machine performance

Norman,

I would investigate the overall performance of the system.

From memory, there are loads of databases/applications running on these boxes?

If that is correct, check the paging activity of the OS. I've had problems recently with an 8 GB RAM machine after setting a 2 GB -B buffer pool. Despite 6 GB of RAM being available for other uses, the machine's regular system check complained about excessive page-in activity.

Odd to reduce -B by such an amount, though. What did you base the reduction on? Did you have solid data showing that the buffer pool was only ever a quarter used, or something similar?
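As a rough sketch of how I'd check both points (paging and actual buffer usage) - the database path is a placeholder and the promon menu names are from memory, so they may differ on your version:
Code:
vmstat 5              # Watch the sr (scan rate) and pi/po columns for paging pressure
promon /path/to/db
#  R&D -> Status Displays -> Buffer Cache     (buffers in use vs. total -B)
#  R&D -> Activity Displays -> Buffer Cache   (logical reads vs. OS reads, i.e. hit rate)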

Other than that - hope everyone is doing fine up there in the cold North.

Cheers

Duncan
 
Reviewing this again (having spoken to some wise people), I am no longer convinced that tablerange etc. will have any significant effect.

Schroedinger's cat suggests that if you are actively querying the VSTs these parameters provide, then there may be a performance hit...!

I see that a bunch of parameters got set that would indicate that 2PC was being turned on... Is this the case? If so, that would account for the performance hit.

I have also seen it suggested in a KBase that when 2PC is enabled it MAY be worth reducing the BI block and cluster sizes. I would certainly investigate this.
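For reference, the BI block and cluster sizes are changed offline with a BI truncate. A sketch only, with placeholder path and example sizes; after-imaging (and 2PC, if enabled) generally has to be ended and re-enabled around this, so check the KBase/documentation for your version and take a backup first:
Code:
# Shut the database down cleanly first.
rfutil /path/to/db -C aimage end                            # Disable after-imaging
proutil /path/to/db -C truncate bi -biblocksize 8 -bi 512   # e.g. 8 KB BI blocks, 512 KB clusters
# Re-enable after-imaging (and 2PC via "proutil ... -C 2phase begin" if you disabled it),
# then restart the broker and compare the batch timings again.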

Toby
 