Forum Post: Re: Performance Degradation After Dump & Load

  • Thread starter: Richard Banville
Status: Not open for further replies.
Here's how the auto defrag works: when a record is created, it is inserted using the minimum number of fragments for its size. As the record is updated and requires more space, additional fragments can be created if there is not enough free space in the blocks holding its current fragments; setting a larger create limit can help avoid this. The auto defrag happens at runtime, as each record is updated (or deleted and the delete rolled back). All the pieces (fragments) of the record are gathered up into one record buffer. The record buffer is then re-inserted starting at its current record location, since the ROWID cannot change. If the record fits in the block containing its first fragment, the record has been completely defragged. If not, a block with enough space to hold the remaining part of the record is searched for. Once the operation is complete, the maximum number of fragments the record will have is the minimum number of fragments for its size, plus possibly one.

However, I do not believe this to be your problem. A dbanalys report run after the load, and again after the subsequent production record insert operations, can confirm this.
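To make the fragment arithmetic concrete, here is a minimal sketch of the behavior described above. This is not Progress source code: the block capacity, function names, and the single-pass search are simplified assumptions purely for illustration.

```python
import math

BLOCK_CAPACITY = 100  # hypothetical usable space per block (assumption)

def min_fragments(record_size):
    """Minimum number of fragments a record of this size needs."""
    return math.ceil(record_size / BLOCK_CAPACITY)

def defrag(record_size, free_in_first_block, free_in_other_blocks):
    """Model one auto-defrag pass when a record is updated.

    The whole record is gathered into one buffer and re-inserted
    starting at its current location (the ROWID cannot change).
    Returns the resulting fragment count.
    """
    if record_size <= free_in_first_block:
        return 1  # fits with its first fragment: completely defragged
    remainder = record_size - free_in_first_block
    fragments = 1
    # Search for a single block with enough space for the remainder,
    # which gives the min_fragments(size) + 1 worst case.
    for free in free_in_other_blocks:
        if remainder <= free:
            return fragments + 1
    # Otherwise spread the remainder over several blocks.
    for free in sorted(free_in_other_blocks, reverse=True):
        remainder -= free
        fragments += 1
        if remainder <= 0:
            break
    return fragments
```

For example, under these assumptions a 150-byte record whose first block has only 60 bytes free ends up with 2 fragments, within the `min_fragments(150) + 1 = 3` bound described above.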
