Forum Post: Re: Performance Degradation After Dump&load

George Potemkin

Piotr,

> Moreover, record length is not homogeneous over time. As the system lived, we added more and more fields, so old records are smaller than new ones. To achieve 1024 blocks in the RM chain I would have to set a very high toss limit, but then I would get a much bigger database with a lot of empty space (mainly when loading old, small records).

You also wrote:

> and then cca 4.6 mln records were created with production simulation

Does that mean you can create records that imitate production?

> Maybe if I calculate the ideal toss limit for each time period and then increase it during the load, but I am not sure if it's worth such effort.

You can run chanalys. It will give you detailed information about all 2,716,479 blocks on the RM chain. I have a program that loads the "LIST OF RM CHAIN BLOCKS" from chanalys and sorts the blocks by "free space", so we can see how many blocks will stay on the RM chain if the toss limit is increased to any given value of "free space". Based on the chanalys output I have received from our customers, the rule of thumb is: toss limit = mean record size + 20%.

Another approach is a virtual "simulation": the AreaDefrag.p program creates statistics of record sizes. You don't need to scan the whole area; 1,000 or 10,000 records would be enough. We know the rules that Progress uses for the RM chain, so we can virtually create "records" with random sizes (simply generate a random "size" value according to the observed distribution), add them to virtual "blocks", and count the blocks that would be added to the RM chain based on the Progress rules. I planned to write such a program before the conference, but unfortunately time is limited. ;-(

> At first sight at your code I agree - it should defragment all records. But apparently it doesn't. Maybe our assumption that the database always keeps the record in one piece when it is created in one code block is not always true?

I did not see such results during my tests.
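To illustrate the chanalys-based analysis described above: George's actual program isn't shown, so here is a minimal sketch in Python (function names and sample data are hypothetical). Given the per-block "free space" values parsed from the "LIST OF RM CHAIN BLOCKS", it counts how many blocks would remain on the RM chain for any candidate toss limit, and computes the mean-plus-20% rule of thumb.

```python
# Sketch (hypothetical helper, not George's program): a block stays on
# the RM chain only while its free space exceeds the toss limit.
from bisect import bisect_right

def blocks_remaining(free_space, toss_limit):
    """Count blocks whose free space is still above toss_limit."""
    spaces = sorted(free_space)
    # blocks with free space <= toss_limit drop off the RM chain
    return len(spaces) - bisect_right(spaces, toss_limit)

def rule_of_thumb(rec_sizes):
    """Toss limit = mean record size + 20% (the rule of thumb above)."""
    return round(sum(rec_sizes) / len(rec_sizes) * 1.2)

# toy data: free space (bytes) reported for five RM-chain blocks
sample = [300, 150, 900, 450, 600]
print(blocks_remaining(sample, 300))  # blocks keeping > 300 bytes free
print(rule_of_thumb([100, 100, 100]))
```

Running `blocks_remaining` over a range of candidate toss limits reproduces the "sorted by free space" view: each threshold shows how far the RM chain would shrink.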
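The virtual "simulation" described above could be sketched as follows. This is a deliberately simplified model, not the real Progress RM-chain logic (which also involves the create limit and block headers): record sizes are drawn from a sampled distribution, each record goes into the current block while it fits, and a block stops accepting records once its free space falls to the toss limit. All names here are hypothetical.

```python
# Simplified model of RM-chain block filling (assumption: real Progress
# rules are more involved; this only captures the toss-limit cutoff).
import random

def simulate_blocks(sampled_sizes, n_records, block_space, toss_limit, seed=1):
    """Pack n_records random-sized records into virtual blocks.

    sampled_sizes: record sizes observed by e.g. AreaDefrag.p statistics.
    Returns the number of blocks consumed.
    """
    rng = random.Random(seed)
    blocks = 0
    free = -1  # no open block yet
    for _ in range(n_records):
        size = rng.choice(sampled_sizes)  # draw from the observed distribution
        if size > free:        # record does not fit: start a new block
            blocks += 1
            free = block_space
        free -= size
        if free <= toss_limit: # block is tossed off the RM chain:
            free = -1          # it no longer accepts new records
    return blocks
```

Comparing runs with different `toss_limit` values shows the trade-off Piotr mentions: a higher toss limit shortens the RM chain but consumes more blocks (empty space) for the same records.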
Were you the only one connected to the database during the tests? What does the db log say? At the end of an AreaDefrag.p run you can dump RecCopy.OldRecid and RecCopy.NewRecid, or you can use a persistent RecCopy table (instead of a temp-table), so you can check the last records that were defragmented by AreaDefrag.p. Another way: enable after-imaging, and an aimage scan will show who changed the data in your database, and when.

Best regards,
George
