gus (Guest)
> On May 4, 2016, at 1:26 PM, ChUIMonster wrote:
>
> If you are going to take the "one size fits all" approach then you should probably choose 128 or 256.

Some people argue that 256 is pointless because nobody has records that small. I happen to know of some tables whose records are indeed that small, so I go with 256 in those cases. They have to be /very/ small. Here's the scoop on 256 rows per block:

Block size                        8192 bytes
Data block header                   64 bytes
Record block extra header           12 bytes
Row directory for 256 records      512 bytes
Per-row overhead (17 x 256)       4352 bytes
Create limit                       300 bytes
Total overhead                    5240 bytes
Space left for records            2952 bytes

2952 / 256 = 11.5 bytes for each record, if they are all the same size.

HOWEVER: with 256 rows per block, if you put just one record in the block (say 7,000 bytes), it would look like this:

Block size                        8192 bytes
Data block header                   64 bytes
Record block extra header           12 bytes
Row directory for 1 record           2 bytes
Per-row overhead (17 x 1)           17 bytes
Create limit                       300 bytes
Total overhead                     395 bytes
Space left for records            7797 bytes
One 7,000-byte record             7000 bytes
Available space                    797 bytes

My point here is this: you do NOT have to sort all your tables into buckets of 1, 2, 4, 8, 16, 32, 64, 128, and 256 rows per block. You simply have to make sure that the maximum rows per block is large enough. 7,000-byte records can be stored just fine in areas that are NOT 1 row per block or 2 rows per block. You lose nothing by putting 7,000-byte records into a block that can handle 64 or 128 rows. In Type 2 areas (which you should use exclusively), rows from different tables are not mixed.

Stop doing all the extra pointless analysis you have been browbeaten into doing. The times when it is worthwhile are rare.
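If you want to replay this arithmetic for other rows-per-block settings, here is a minimal Python sketch using the constants quoted above (8192-byte block, 64-byte block header, 12-byte record block header, 2-byte row directory entries, 17 bytes of per-row overhead, 300-byte create limit). The constants and the `space_for_records` helper are illustrative assumptions taken from the post, not values read from an actual database.

```python
# Replays the overhead arithmetic from the post, assuming an 8 KB block
# and the per-row/create-limit figures quoted above.

BLOCK_SIZE = 8192          # bytes per database block
BLOCK_HEADER = 64          # data block header
RECORD_BLOCK_HEADER = 12   # record block extra header
ROW_DIR_ENTRY = 2          # row directory entry per occupied slot
PER_ROW_OVERHEAD = 17      # per-row overhead
CREATE_LIMIT = 300         # space reserved for record growth

def space_for_records(rows_used: int) -> int:
    """Bytes left for record data when `rows_used` slots are occupied."""
    overhead = (BLOCK_HEADER
                + RECORD_BLOCK_HEADER
                + ROW_DIR_ENTRY * rows_used
                + PER_ROW_OVERHEAD * rows_used
                + CREATE_LIMIT)
    return BLOCK_SIZE - overhead

# 256 tiny records: 2952 bytes of data, about 11.5 bytes per record
print(space_for_records(256), space_for_records(256) / 256)

# One record in a 256-rows-per-block area: 7797 bytes available,
# so a 7,000-byte record fits with 797 bytes left over
print(space_for_records(1), space_for_records(1) - 7000)
```

Running it reproduces the 2952- and 7797-byte figures in the tables above, which is the whole point: the overhead scales with the number of slots actually used, not with the maximum rows-per-block setting.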