Forum Post: Re: Performance Degradation After Dump&load

Posted by Piotr Ryszkiewicz
Hello,

We continued the discussion with George in private regarding his AreaDefrag.p. At George's request we now return to the public forum. Here is George's comment on the results I sent him, together with my answers:

***********************************************************************

> Piotr,
> I added the timestamps to the names of the dbanalys files:
> 1. dbanal.2015-09-19T20.58.02.baseline - default values of create limit and toss limit (just load)
> 2. dbanal.2015-10-02T16.23.09.crlim32 - create limit 32, toss limit 300 (just load)
> 3. dbanal.2015-10-09T11.34.16.tosslim - create limit 100, toss limit 400 (just load)
> 4. dbanal.2015-10-12T13.25.46.tosslim.poload - toss limit 400, create limit 300; same as 3, but after creating some millions of records with the create limit changed to 300
> Can you specify the toss/create limits for the first 3 dbanalys files?

>> If I understand the results correctly, toss and create limits during load were set correctly (2651 fragmented records out of 551 mln records looks ok),

> I disagree.
> First of all, the load can't prove whether the create limit is correct or not.

I was not precise enough here. I meant mainly the toss limit.

> Secondly, at a minimum, a correct toss limit during data load should result in a short RM chain (shorter than 2 data clusters). But you got:
>
> RM CHAIN ANALYSIS
> ---------------------------
> Number of    Object    Object
> Blocks       Type
> ------------------------------------------------------------------
> 2716479      Table     PUB.old_trans:1
>
> 29622159 block(s) found in the area.
> 9% of all blocks are on the RM chain.
>
> You can test the toss limit during the load, but you don't need to load 200 GB of records.
> The area cluster size is 512 blocks, so 2 clusters = 1024 blocks. Load 100,000 blocks.
> You have 20 records per block, hence it's enough to load only 2 million records. That would be less than 1 GB.
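As a quick sanity check on George's sizing argument (a sketch in Python; the 8 KB database block size is my assumption, the other figures come from the thread):

```python
# Sketch of the test-load sizing from the thread: you don't need to load
# all 200 GB to test the toss limit -- filling ~100,000 blocks is enough.
# BLOCK_SIZE_BYTES is an assumed 8 KB block size; the rest is from the thread.

BLOCK_SIZE_BYTES = 8192       # assumption: 8 KB database blocks
CLUSTER_SIZE_BLOCKS = 512     # area cluster size quoted in the thread
RECORDS_PER_BLOCK = 20        # observed record density from the thread

def records_for_blocks(n_blocks: int) -> int:
    """Records needed to fill n_blocks at the observed density."""
    return n_blocks * RECORDS_PER_BLOCK

def load_size_gb(n_blocks: int) -> float:
    """Approximate on-disk size of n_blocks."""
    return n_blocks * BLOCK_SIZE_BYTES / 1024 ** 3

two_clusters = 2 * CLUSTER_SIZE_BLOCKS   # healthy RM chain threshold
test_blocks = 100_000                    # suggested test load

print(two_clusters)                          # 1024 blocks
print(records_for_blocks(test_blocks))       # 2000000 records
print(round(load_size_gb(test_blocks), 2))   # 0.76 GB, under 1 GB
```

With an 8 KB block size, 100,000 blocks at 20 records per block is 2 million records in roughly 0.76 GB, which matches the "less than 1 GB" estimate.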
I think that keeping the RM chain within 2 clusters (1024 blocks) is easy when you have maybe 100,000 blocks, but not with almost 30 million blocks. Moreover, record length is not homogeneous over time: as the system lived we added more and more fields, so old records are smaller than new ones. To get down to 1024 blocks in the RM chain I would have to set a very high toss limit, but then I would get a much bigger database with a lot of empty space (mainly when loading the old, small records). Maybe I could calculate an ideal toss limit for each time period and then increase it during the load, but I am not sure it's worth such effort.

> You ran AreaDefrag 5 times:
> 1. post_tr.area_7.2015-10-12T13.28.42 - blocks from 513 to 28147199, 315320 frags found, defrag: not enabled, elapsed time: 78429.372
> 2. post_tr.area_7.2015-10-13T12.40.53 - blocks from 27867649 to 28147199, 313002 frags found, defrag: enabled, elapsed time: 2668.86
> 3. post_tr.area_7.2015-10-13T13.36.33 - blocks from 27867649 to 28169727, 1722 frags found, defrag: not enabled, elapsed time: 867.796
> 4. post_tr.area_7.2015-10-13T13.58.15 - blocks from 27867649 to 28169727, 1747 frags found, defrag: enabled, elapsed time: 888.936
> 5. post_tr.area_7.2015-10-13T14.20.15 - blocks from 27867649 to 28169727, 8 frags found, defrag: not enabled, elapsed time: 890.01
> Between 12.40.53 and 13.36.33 you seemed to create new records.
> I guess a few new records were also created between 13.58.15 and 14.20.15.

No. I am absolutely sure I did not create any records.

>> 4th run again with Allow-Defrag - it defragmented almost all records.

> The program should defragment 100% of the records. Otherwise it will report the reasons why the fragmented records were skipped. But the program's log says nothing.

At first sight at your code I agree: it should defragment all records. But apparently it doesn't. Maybe our assumption that the database always keeps a record in one piece when it is created in one code block is not always true?
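For reference, the "shorter than 2 clusters" rule of thumb can be checked against the dbanalys figures quoted earlier (a small Python sketch; all numbers come from the thread):

```python
# Sketch: compare the RM chain from the dbanalys output quoted above
# against the "shorter than 2 data clusters" rule of thumb.

rm_chain_blocks = 2_716_479   # RM CHAIN ANALYSIS, PUB.old_trans
total_blocks = 29_622_159     # block(s) found in the area
cluster_size_blocks = 512     # area cluster size from the thread

rm_pct = 100 * rm_chain_blocks / total_blocks
healthy_limit = 2 * cluster_size_blocks   # 1024 blocks

print(round(rm_pct))                       # 9, matching dbanalys' "9%"
print(rm_chain_blocks > healthy_limit)     # True: far above the threshold
```

The RM chain here is about 2,650 times longer than the 1024-block threshold, which is why George considers the load-time toss limit unproven.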
I will try to analyze your code tomorrow; maybe something will come to my mind.

> Would you mind continuing the discussion on the Community forum?
> I can be wrong in my statements, and I would be glad if other people would correct me.
> You can copy the current discussion to Community as well.

Sure. Here it is.

Best regards,
Piotr
