tbergman
Guest
I've run into an interesting performance problem with a temp-table (TT1). The two fields of interest are decimal fields, StartNum and EndNum. Both participate in the only index on the table. The values in these fields vary between 8 and 10 digits in length and represent whole numbers. EndNum is always GE StartNum. It's only a decimal because I needed to store values GT 2^31, and a lot of this was written before int64. There are about 55k records in TT1.

The program logic goes through another temp-table (TT2) with a similar decimal field (foo) whose values also vary between 8 and 10 digits in length. It has about 7 million records. I executed a query like this:

FOR EACH TT2:
    FIND FIRST TT1 WHERE TT1.StartNum LE TT2.foo AND TT1.EndNum GE TT2.foo NO-ERROR.
END.

Early tests seemed fine. Going through a thousand or so records in TT2 performed very well, and I was able to extrapolate the likely time for the entire set. It's when I began running this for more records that things began to go downhill.

After tearing my hair out for a while, I put an ETIME around the FIND and output the measurement to a text file. I found that the FIND would execute in < 1ms when the value of foo was 8 or 9 digits long; as soon as it was 10 digits, it would take around 20ms. Since the loop over TT2 was in foo order, the problem started to show up only after about 20k records, as that's when the value of foo grew to 10 digits.

I switched to INT64 and had exactly the same problem. Changing everything to character performed well, with all finds taking < 1ms.

Is this expected? What causes this?

Thanks,
Tom
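For reference, the timing harness I used looked roughly like this (simplified; field and index names match my description above, and the OUTPUT TO setup for the text file is omitted):

DEFINE TEMP-TABLE TT1 NO-UNDO
    FIELD StartNum AS DECIMAL
    FIELD EndNum   AS DECIMAL
    INDEX idxRange IS PRIMARY StartNum EndNum.

DEFINE VARIABLE tMs AS INT64 NO-UNDO.

FOR EACH TT2:
    /* Reset ETIME, then measure just the FIND. */
    ETIME(TRUE).
    FIND FIRST TT1 WHERE TT1.StartNum LE TT2.foo
                     AND TT1.EndNum   GE TT2.foo NO-ERROR.
    tMs = ETIME.
    /* Log the slow lookups; in practice everything over 1ms
       turned out to be a 10-digit value of foo. */
    IF tMs > 1 THEN
        PUT UNFORMATTED TT2.foo " " tMs SKIP.
END.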