ron
Member
Hi,
I'm working with a system that is widely dispersed - over 200 small servers spread across a large geographical area. Most are UnixWare / Progress 9.1D, but a migration to Linux / Progress 10.2B is underway. (All licences at remote sites are Workgroup; AI is in use everywhere.)
During the past year we've introduced an automated (well, semi-automated, anyway) process where the remote DBs get dumped and reloaded roughly every nine months. Before we did this, users were very unhappy about performance - now they are much happier.
But there is something rather odd. At many sites the users start complaining about performance again only four or five months after a D/L - and if I do another D/L they are suddenly "happy" again. Yet when I look at a tabanalys report, the scatter factors max out at something like 1.2, which is "puny".
I'm getting some pressure to do the D/L cycle more often than roughly every nine months, but I am very reluctant. I am 99.99% sure that the problem is somewhere in the application - but I don't have access to the code.
What I'd like to do is produce scatter factors for some of the large tables by "section" (say, each 10% of the table) rather than over the entire table, which is all the tabanalys report provides. I think it is possible that some high-hit tables get very fragmented "at the end". (There's a rough sketch of what I mean after question b.)
a. Any suggestions as to the kind(s) of problems I might look for?
b. Can anyone tell me the algorithm to calculate scatter factors from a stream of recids?
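To make question (b) a bit more concrete, here is the sort of per-section report I'm imagining - a rough Python sketch only, not the real Progress scatter-factor calculation (which is exactly the bit I don't know). It just uses the average gap between consecutive RECIDs, read in primary-index order, as a crude stand-in for "scatter" within each 10% slice of the table. The function name and the demo data are my own invention, purely for illustration.

    # Rough sketch: per-section "scatter" proxy from a stream of RECIDs.
    # NOT the real Progress scatter-factor formula - just the average gap
    # between consecutive RECIDs (in primary-index order) per slice.

    def section_scatter(recids, sections=10):
        """recids: list of RECIDs in primary-index (logical) order."""
        n = len(recids)
        if n < 2:
            return []
        size = max(1, n // sections)          # records per 10% slice
        report = []
        for s in range(0, n, size):
            chunk = recids[s:s + size]
            gaps = [abs(b - a) for a, b in zip(chunk, chunk[1:])]
            avg_gap = sum(gaps) / len(gaps) if gaps else 0.0
            report.append((s, s + len(chunk) - 1, avg_gap))
        return report

    if __name__ == "__main__":
        # Stand-in data only; in practice the RECIDs would be dumped
        # from the table in primary-index order.
        import random
        demo = sorted(random.sample(range(1, 200000), 50000))
        for first, last, gap in section_scatter(demo):
            print("records %6d-%6d: avg RECID gap %8.1f" % (first, last, gap))

If someone can tell me the actual formula tabanalys uses, I'd swap it in place of the average-gap stand-in above.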
Thanks,
Ron.