Forum Post: Re: Dump_d.p + Bulkload Vs Binary Dump And Load

  • Thread starter: George Potemkin
Status
Not open for further replies.

George Potemkin (Guest)
> you can compress indexes online, which pretty much does the same as an index rebuild

This is not entirely correct. Idxbuild rebuilds the free chain while idxcompact does not. In Type II storage areas (SAT2), blocks that were once owned by an index can be added to a free chain (for a unique index: first to the index delete chain and then to a free chain), but afterwards these free blocks can be re-used only by the same index.

The order of blocks on the free chain matters for read speed, and it is easy to test. Run idxfix option 2 (index scan) /without/ record validation and compare its running time with the time of the "area block analysis" phase of ixanalys. Or make a copy of your production database and use the Data Dictionary to delete the indexes in a large index area. The clusters owned by those indexes will be added to the area's free cluster chain. Then check the time that dbanalys spends scanning the free cluster chain. Dbanalys reads only two blocks from each cluster: the first and the last. Most likely the chain scan will be a few tens of times slower than the sequential area scan performed during the "area block analysis" phase.

These tests are meaningful only for large indexes; otherwise the filesystem cache will hide the difference. It is also important to use a copy of a real production database. IIRC, idxcompact was introduced in V9.0A.
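To see why walking a free chain is so much slower than a sequential area scan, here is a toy model (not Progress internals — the cluster size, area size, and "head travel" metric are all illustrative assumptions). It compares the total block-to-block distance of a sequential scan against a chain walk that visits clusters in effectively random order and reads only the first and last block of each cluster, as dbanalys does:

```python
import random

BLOCKS_PER_CLUSTER = 64      # hypothetical Type II cluster size
NUM_CLUSTERS = 1000          # hypothetical area size

def travel(positions):
    """Total head movement (in blocks) to visit positions in order."""
    return sum(abs(b - a) for a, b in zip(positions, positions[1:]))

# Sequential area scan (like the "area block analysis" phase):
# every block in physical order.
sequential = list(range(NUM_CLUSTERS * BLOCKS_PER_CLUSTER))

# Free-chain walk: clusters come up in chain order -- effectively
# random after create/delete churn -- and only the first and last
# block of each cluster are read.
rng = random.Random(42)      # fixed seed for reproducibility
chain_order = list(range(NUM_CLUSTERS))
rng.shuffle(chain_order)
chain_walk = []
for c in chain_order:
    first = c * BLOCKS_PER_CLUSTER
    chain_walk += [first, first + BLOCKS_PER_CLUSTER - 1]

ratio = travel(chain_walk) / travel(sequential)
print(f"chain walk moves the head {ratio:.0f}x farther than a sequential scan")
```

On rotating disks that extra head travel translates directly into seek time, which is why a large, churned free cluster chain can take tens of times longer to scan than the same area read sequentially.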
