ron
We've just moved from:
8.3E, 2K db blocksize ... no AI ... no VST
to:
9.1D, 8K db blocksize ... WITH AI ... WITH VST (incl table stats)
... and performance has taken a very sharp hit (batch jobs take 4 times longer). We're poking into everything to see if we can improve things - but I thought there might be someone "out there" who would like to comment on our situation.
We are running on a Sun V480 with Solaris 8. We have 16GB memory and the DB is on a RAID 1+0 (s/w) array spread over eight discs (+8 for the mirror). There are six DBs - but one dominates, which is 110 GB.
BI uses a 16K blocksize - and is on a solo disc (mirrored).
AI is also 16K blocksize - also on a solo disc (mirrored).
We've set up variable AI extents - because fixed ones (in testing) caused a space problem when they hit their high-water mark (HWM).
Below is a typical iostat display when batch work is running. md14 is a single disc for AI; md17 is another single disc for BI -- and md23 is the DB (8 discs).
Clearly the AI %b is a worry, at 49%. During batch work it hovers between about 45% and 60%.
I have three particular questions:
(1) Is it normal for I/O activity on AI to be so much higher than on BI? I would have expected them to be closer.
(2) I know there is a performance penalty for using variable AI extents vs. fixed extents. Can anyone quantify this? Is it a minor or a MAJOR concern?
(3) All references indicate there is no performance penalty for having VSTs. Is this true?
Thanks to anyone who can shed light on this matter ...
Ron.
                  extended device statistics               tty        cpu
device   r/s   w/s   kr/s   kw/s  wait actv svc_t %w %b   tin tout  us sy wt id
md0      0.0   1.3    0.0    8.8   0.0  0.0  24.2  0  2     0   61  19  5 15 61
md14     0.0  71.4    0.0  570.6   0.0  0.5   7.0  0 49
md17     0.0  25.1    0.0  200.1   0.0  0.1   5.8  0 14
md23    22.5  51.5  180.5  415.9   0.1  1.0  14.0  4 23
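As a rough sanity check, the average write size per device can be backed out of those figures (kw/s divided by w/s). This is just arithmetic on the numbers already shown above, not new measurements:

```python
# Average KB per write, from the iostat display above (kw/s / w/s).
devices = {
    "md14 (AI)": (71.4, 570.6),
    "md17 (BI)": (25.1, 200.1),
    "md23 (DB)": (51.5, 415.9),
}
for name, (w_per_s, kw_per_s) in devices.items():
    print(f"{name}: {kw_per_s / w_per_s:.1f} KB per write")
# md14 (AI): 8.0 KB per write
# md17 (BI): 8.0 KB per write
# md23 (DB): 8.1 KB per write
```

So both the 16K AI and BI extents are averaging about 8 KB per write, and AI is doing nearly three times as many writes per second as BI (71.4 vs. 25.1) - which is what drives that 49% busy figure.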
Here are the startup parameters for the major db:
-B 22500 # Blocks in Database buffers
-L 200000 # number-locks
-n 150 # Users
-spin 5000
-rr
-basetable 1
-tablerangesize 307
-baseindex 1
-indexrangesize 1093
-bibufs 30
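One back-of-envelope number from those parameters (an observation, not a recommendation): -B 22500 at an 8K database blocksize is a fairly small buffer pool for a 16 GB machine:

```python
# Buffer pool size implied by -B 22500 at an 8K db blocksize.
blocks = 22500          # -B startup parameter
blocksize = 8 * 1024    # 8K database blocksize, in bytes
pool_bytes = blocks * blocksize
print(f"{pool_bytes / 2**20:.0f} MB")  # prints "176 MB"
```

That works out to only ~176 MB of database buffers on a box with 16 GB of memory.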