What do the ATM test results mean?

Dear Santa(s),
I have been rather bad this year. So YouTube recommended that I watch Gus Bjorklund's presentation - "Time And How To Get Rid Of It -- Revisited".

The presentation raised some questions in my poor head but I can't find the answers on my own. For Christmas, can I get the answers to my questions, so that my mind can be as happily empty as it was before?

In addition, I have the detailed statistics for a couple of ATM tests. The tests were supposed to be a way to get the answers. Unfortunately, the test scenario wasn't mine and the results only raised more questions.

Gus' presentation:
“we're interested in how long the transactions take”
ATM transaction duration
slow 82 ms (-spin 5,000)
fast 31 ms (-spin 50,000)


IMHO, we also need to know the transaction duration in a single-user test where there is no resource competition. It can be used as a baseline. I bet the result will be surprising. The number of sessions run in a test significantly affects its result - maybe more significantly than the spin value. If you would like to get the shortest transaction duration /and/ the maximum number of commits per second, then the best thing to do is to find the optimal number of sessions to run.

Is the optimal number of sessions determined by the number of CPU threads? I bet NO. My guess - the optimal number is much smaller. Non testatum. For the record - in Gus’ tests the number of concurrent users was 150 and the number of CPU threads was 16 (4 quad-core 2.4 GHz Intel processors).

At the end of the presentation Gus said:
The other thing that happens as you're using these CPU cycles is that the operating system eventually is going to say: okay, you've had the CPU long enough, it's time to give somebody else a chance. So you get stopped and somebody else gets scheduled. Well, if that other person who's scheduled wants the same lock that you have, and you are stopped so you can't release it, then you could end up in a situation where 100% of the CPU goes to all the people who are trying to get the lock and the lock holder gets starved.

We can estimate how likely such a situation (an inactive process owns a latch) is. My estimation – the chances are low.

But there is another aspect of the same thing. A quantum or CPU time slice (the period of time a scheduler allows a particular process to run on a CPU) is about 10-100 ms. The transaction durations in the ATM tests are of the same order as a quantum. If a transaction is not committed within a quantum, then an out-of-service period will be added to the transaction duration. We will get significant jumps in transaction durations.

Gus wrote:
What do we learn from all this?
3) -spin should be higher than we thought


The optimal spin value in Gus’ tests is 50,000.
How about the best practice: “-spin should be set as a starting point to pi * birth year of DBA”?

Why does the optimal spin value differ between the ATM and readprobe tests? As far as I understand, the optimal spin value is when a process spins for about the same time as the latch is held by another process. So if we are forced to use a higher spin value, then one of the latches is probably held longer than we thought. The question is: which latch? Is there any direct evidence of long latch holds? Is there any theoretical explanation for this? Gus seems to say the opposite:
Latches are typically held for very short times.
maybe as little as 100 nanoseconds on modern computers
btw: with 2.4 GHz processor, 200 nsec is 479 clocks


Pathologists know: the MTX latch can be owned by a process while it is writing to disk. Disk IO is a slow operation. But during the ATM tests most (if not all) MTX locks happen outside disk operations. So even the MTX locks should stay short.

The total time during which a process holds any latches is a very tiny part of the transaction duration. Other phases of transaction processing are much longer. The durations of these “latch free” phases may vary randomly due to different factors, and these variations, IMHO, can be longer than the phase when a process is using the latches.

The optimal spin value on a more /powerful/ server than the one used by Gus is 500,000 rather than 50,000! “More powerful” = more CPUs.
-spin: “On multi-processor machines, the default value is 6000 * # of CPUs”
The common opinion seemed to be “the default value is wrong”.
Is this opinion wrong?

Gus’ server: 4 quad-core 2.4 GHz Intel processors
I have some results for this server: Intel Xeon Processor E7 v4 Family, 2.80 GHz (max turbo frequency: 3.50 GHz), CPU threads (according to cpuinfo): 80

Unfortunately, I have the detailed statistics only for the preliminary tests on this server, which were run with:
Concurrent users: 150 and 4
-spin 10000
-nap 10
-napmax 250
Code:
Activity: Resource Queues
                150 sessions     4 sessions
Resource Queue  Requests Waits   Requests Waits (all per sec)
TXE Commit Lock     2164   209       8181     6
The team of 4 users is 4 times more efficient than 150 sessions! One user from the magnificent four can beat the whole band of 150 sessions! But 8 sessions create almost the same number of transactions as 4 sessions. The power of the small teams quickly weakens. As a reminder: the number of CPU threads is 80 and -spin is 10,000.

Code:
         150 sessions    4 sessions
Tx duration     69 ms       0.44 ms

Summary.log:
 Cl Time    Trans      Tps  Conc Avg R Min R 50% R 90% R 95% R Max R
--- ---- -------- -------- ----- ----- ----- ----- ----- ----- -----
150  600  1435891   2393.2 149.9   0.1   0.0   0.0   0.2   0.2   3.4
  4  600  4895286   8158.8   3.8   0.0   0.0   0.0   0.0   0.0  12.8
Of course, 4 users create the shortest transactions.

What was the bottleneck for 150 sessions? Of course, the naps and waits. But I would like to know the name of the specific resource.

The tests prove – the spin value does matter (as well as the number of users):

Code:
Clients  spin  bibufs    B       TPS  Tx Durations (ms)
    4   10,000  100  4,194,304   8,181   0.49 (have the detailed stats)
  100  480,000   64  4,194,304  13,984   7.15
  100  480,000  100  4,194,304  13,491   7.41
  100  480,000   64  4,194,304  13,405   7.46
   20  400,000  120  8,388,608  13,160   1.52
  100  480,000  100  8,388,608  12,737   7.85
  100  400,000  120  8,388,608  12,560   7.96
   80  400,000  120  8,388,608  12,395   6.45
  100  100,000   64  4,194,304  12,155   8.23
  100  300,000  100  8,388,608  11,638   8.59
  100  120,000   64  4,194,304  11,022   9.07
  100  150,000   64  4,194,304  10,166   9.84
  100  200,000   64  4,194,304   9,735  10.27
  100   20,000   64  4,194,304   5,406  18.50
  100   10,000   64  4,194,304   3,006  33.27
  150   10,000  100  4,194,304   2,164  67.01 (have the detailed stats)
Does the spin value optimize access specifically to the MTX latch? Of course, the MTX latch got the most naps. But it would be better to check how busy the latches are. I did it for the 150-user test with -spin 10,000: the MTX latch was busy only 14% of the time. Every transaction in the ATM tests locks the latch exactly 8 times and naps on the latch 0.30 times on average – 0.04 naps per latch lock. Is that enough to blame the MTX latch for the bad results in the test? I’m not sure. And who is to blame if it’s not MTX?

How busy are the latches when the spin value is optimal? Unfortunately, I don’t have data from the tests with the large spin values.

Moreover, the transactions spend 85% (150 users) and 77% (4 users) of their duration in the “begin” status; in other words, most of the time they are not even trying to update the database! Correspondingly, the “active” phase lasts only 10 ms and 0.1 ms in absolute numbers. Bad application code can keep transactions in the “begin” status as long as you want, but that is not the case for the code used in the ATM test. Why is the “begin” phase of these simple transactions their longest part?

The “Phase 2” phase, a rudiment of the forgotten Two-Phase Commit feature, is 3 times shorter than the Active phase (150-user test). Nevertheless, it’s almost 10 times longer than a whole transaction in the 4-user test. What database resources are used during “phase 2”? Why is “phase 2” not instant when 2PC is not enabled?

The “None” status is very short – about 0.3% of the transaction duration – but it is still visible. Progress seems to do “nothing” during the “none” phase, but it still takes some time.

Some data from the tests with 150 and 4 users, as well as additional questions, are in the posts below.
 
Gus' presentation:
ATM transaction duration
slow 82 ms (-spin 5,000)
fast 31 ms (-spin 50,000)

My formula for estimating the duration of short transactions:
Average transaction duration = Average number of simultaneous transactions / Commits per second

Our results:

Code:
         150 sessions    4 sessions
Tx duration     69 ms       0.44 ms

Summary.log:
 Cl Time    Trans      Tps  Conc Avg R Min R 50% R 90% R 95% R Max R
--- ---- -------- -------- ----- ----- ----- ----- ----- ----- -----
150  600  1435891   2393.2 149.9   0.1   0.0   0.0   0.2   0.2   3.4
  4  600  4895286   8158.8   3.8   0.0   0.0   0.0   0.0   0.0  12.8
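For example, plugging in the average number of open transactions from the Status: Active Transactions data below (149.31 and 3.67) and the TXE Commit Lock rates from the first post (2,164 and 8,181 per second) roughly reproduces these durations (small differences are due to rounding of the inputs):

Code:
150 sessions: 149.31 / 2,164 per sec ~= 0.069   sec = 69 ms
  4 sessions:   3.67 / 8,181 per sec ~= 0.00045 sec ~ 0.45 ms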
Users in the ATM tests create identical transactions. People usually report the statistics per second. IMHO, it’s better to use the statistics per transaction. Most of the database resource requests will not change when you vary the database startup parameters. So the only thing that we would need to compare is the resource waits per transaction.

Code:
Activity: Latch Counts
       ------ 150 sessions -----  ------- 4 sessions -------
         Per Tx        Per Sec       Per Tx        Per Sec
Latch  Locks Naps    Locks  Naps   Locks Naps    Locks  Naps
MTX     8.07 0.30   17,462   643    8.07 0.001  65,994     7
BIB     7.12 0      15,416     0    7.11   0    58,200     0
TXT     4.00 0       8,655     0    4.00   0    32,723     0
LKT    22.00 0.00   47,608     4   22.00   0   179,988     0
TXQ    12.02 0.02   26,021    48   12.02   0    98,373     0
LKF    14.00 0.05   30,298   108   14.00   0   114,541     0
BHT    32.24 0.69   69,760 1,487   25.52   0   208,799     0
CPQ     2.17 0       4,704     0    3.19   0    26,061     0
LRU     2.37 0.03    5,136    67    1.92   0    15,722     0 (–lruskips 100)
BUF    59.68 0.54  129,139 1,177   51.16   0   418,505     0

Activity: BI Log
                    150 sessions         4 sessions
                  Per Tx    Per Sec   Per Tx    Per Sec
Total BI writes     0.04      85.90     0.04     320.32
Records written     7.10  15,298.43     7.07  57,812.98
So the formulas:
BIB Locks = number of the recovery notes
MTX Locks = BIB Locks + 1

Most of the MTX latch locks are not used during disk IO: 8 MTX latch locks and only 0.04 BI writes per transaction.

People often estimate how busy the latches are by the naps. But not all naps are equal (unless –napmax equals –nap). A large spin may reduce or even eliminate the naps completely; nevertheless, the latches will stay busy. There is an alternative estimation: read the latch table X times. If you catch a latch owner Y times, then the latch is busy for Y / X of the monitoring interval. Of course, this method will not work for the latch families. It works only for 5 of the 10 latches used in the ATM tests. Reading the latch table is very inexpensive – you can read the _Latch VST many times per second. My script read the latch table every second during the whole test.
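A minimal sketch of such a sampling loop (not the dbmon script itself, just an illustration; it assumes _Latch-Id 2 is MTX and that a free latch is reported with owner -1 or the unknown value - adjust the check to whatever your version shows for a free latch):

Code:
DEFINE VARIABLE vOwner   LIKE DICTDB._Latch._Latch-Owner NO-UNDO.
DEFINE VARIABLE vChecked AS INTEGER NO-UNDO.
DEFINE VARIABLE vOwned   AS INTEGER NO-UNDO.
DEFINE VARIABLE vBusyPct AS DECIMAL NO-UNDO.
DEFINE VARIABLE i        AS INTEGER NO-UNDO.

DO i = 1 TO 120: /* 120 one-second samples */
  FOR FIRST DICTDB._Latch NO-LOCK WHERE DICTDB._Latch._Latch-Id EQ 2: /* 2 = MTX */
    ASSIGN vOwner   = DICTDB._Latch._Latch-Owner
           vChecked = vChecked + 1.
    IF vOwner NE ? AND STRING(vOwner) NE "-1":U THEN
    ASSIGN vOwned = vOwned + 1. /* caught an owner => the latch was busy */
  END.
  PAUSE 1 NO-MESSAGE.
END.

ASSIGN vBusyPct = 100.0 * vOwned / MAXIMUM(vChecked, 1).
DISPLAY vChecked LABEL "Checked"
        vOwned   LABEL "Owned"
        vBusyPct FORMAT ">>9.99" LABEL "Busy %".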

Code:
Activity: Latch Counts -> Owner (checked 120 times)
     150 sessions  4 sessions
Latch Owned/Busy   Owned/Busy
MTX      23 19.1%     17 14.2%
BIB       3  2.5%      7  5.8%
LRU       3  2.5%      0
TXQ       1            1
LKF       1            0
Note: the same processes lock two latches simultaneously - MTX and BIB. The 8th lock of the MTX latch is really the first one - it is used to allocate a transaction. But as we can see, the MTX latch is about 2.5 times busier than the BIB latch. Explanation: the MTX latch is held about 2.5 times longer than the BIB latch.

Code:
Activity: Record
       150 sessions       4 sessions
      Per Tx  Per Sec   Per Tx  Per Sec
Read    3.01 6,491.77   3.00  24,542.72
Update  2.98 6,414.61   3.00  24,538.04
Create  0.98 2,103.06   1.00   8,179.94

Activity: Resource Queues
                 150 sessions       4 sessions
                     Per Tx           Per Tx
Resource Queue  Requests Waits   Requests Waits
Record Lock         4.98  0.00       5.00  0
DB Buf I Lock       1.01  0.00       1.01  0.00
Record Get          2.99  0          3.00  0
DB Buf Read         1.97  0          1.76  0
DB Buf Write        0.21  0          1.14  0
DB Buf S Lock      17.09  0.49      15.98  0.00
DB Buf X Lock       6.69  2.57       5.03  0.03
BI Buf Write        0.04  0.00       0.04  0.00
TXE Share Lock      5.00  0          5.00  0
TXE Update Lock     0.03  0.01       0.02  0.00
TXE Commit Lock     1.00  0.10       1.00  0.00
Who is the main troublemaker in the 150-user test? The MTX latch or the exclusive buffer locks (DB Buf X Lock)?

The phases of the transactions.
We can read the transaction table repeatedly to count the total number of open transactions as well as the counts per status. My script did this once per 10 seconds, but it can be done much more frequently. All users create identical transactions, so the transaction statistics across sessions are equivalent to the statistics of the status durations inside a transaction.
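A minimal sketch of one such snapshot (the real script repeats this every N seconds and accumulates the counts; it assumes the _Trans-State field of the _Trans VST and skips the unused transaction slots):

Code:
DEFINE VARIABLE vTotal AS INTEGER NO-UNDO.
DEFINE VARIABLE vPct   AS DECIMAL NO-UNDO.

DEFINE TEMP-TABLE ttStatus NO-UNDO
  FIELD TranState AS CHARACTER
  FIELD TranCount AS INTEGER
  INDEX TranState IS UNIQUE PRIMARY TranState.

/* one snapshot of the transaction table */
FOR EACH DICTDB._Trans NO-LOCK WHERE DICTDB._Trans._Trans-State NE ?:
  ASSIGN vTotal = vTotal + 1.
  FIND ttStatus WHERE ttStatus.TranState EQ DICTDB._Trans._Trans-State NO-ERROR.
  IF NOT AVAILABLE ttStatus THEN
  DO:
    CREATE ttStatus.
    ASSIGN ttStatus.TranState = DICTDB._Trans._Trans-State.
  END.
  ASSIGN ttStatus.TranCount = ttStatus.TranCount + 1.
END.

FOR EACH ttStatus:
  ASSIGN vPct = 100.0 * ttStatus.TranCount / MAXIMUM(vTotal, 1).
  DISPLAY ttStatus.TranState LABEL "Status"
          ttStatus.TranCount LABEL "Count"
          vPct FORMAT ">>9.99" LABEL "Percent".
END.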

Code:
Status: Active Transactions
         150 sessions    4 sessions
Status   Count Percent   Count Percent
Total   149.31  99.54%    3.67  91.7%
Begin   127.23  84.82%    3.08  77.0%
Active   14.62   9.75%    0.58  14.5%
Phase 2   7.00   4.66%    0      0
None      0.46   0.31%    0      0
The changes of the transaction status during the whole test with 150 users:

[Attachment: TransactionStates.jpg]
The picture does not look like white noise. We can see hills and ravines. Their width is about 30 seconds, or a thousand times longer than the transaction duration. They have nothing in common with the checkpoints. Why does the “begin” phase sometimes shrink over such long time intervals? Who is the “conductor”? The system scheduler?
 
> What was the bottleneck for 150 sessions? Of course, the naps and waits. But I would like to know the name of the specific resource.

Status: Blocked Clients answers:
69.3% (104 of 150 on average per snapshot) of the clients were blocked;
94.7% of the waits were in area 15: "History1 Table":15,128;512;
87.3% of these waits in the area were BKEX (exclusive buffer locks), the remaining 12.7% were BKSH;
2.8% of the waits - TXEC (TXE Commit Lock);
0.4% of the waits - TXEU (TXE Update Lock).

Status: Buffer Locks adds:
Average Usect (how many processes are currently sharing this buffer) in area 15 is 1.059.

So the bottleneck is the creation of the "history" records.

Why does the spin value matter for the results when the main bottleneck is the buffer waits?

It would be interesting to run the ATM tests with RPB 1 for the "History1 Table" but I doubt it will help. It's a problem of the space management algorithms used by Progress: all processes are trying to insert a new record into the same first block on the RM chain - even if the block no longer has room for their records. Most processes will be "disappointed" when they get access to the block and will be forced to try their luck again.

If someone has the opportunity to run the ATM tests and is ready to collect the statistics using my dbmon script, I would be glad to participate in the analysis of their results.
 
> Why does the spin value matter for the results when the main bottleneck is the buffer waits?

A possible explanation is based on the fact that a small number of processes gives the best results in the ATM tests. While the processes are spinning, they are not doing anything useful, including acquiring buffer locks. A few lucky processes that avoided the spinning will run their transactions without waiting for the buffers. In other words, the large spin value just eliminates the extra processes from the game.

How to prove the theory? Set -spin to a very large value (-spin 2147483647 is the maximum). I bet the results of the ATM tests will not degrade compared to the best spin value. In both cases the waits on semaphores will be low compared to the ones for a reasonable spin value like 5000 or so. And I expect a significant performance degradation in the readprobe tests with the huge -spin value.
 
It would be interesting to figure out how uniform the wait times are for all of the processes. From a "real world" perspective you probably don't want a situation where one or two ATM machines out of several hundred are fast but the remainder are ridiculously slow.
 
And the Blocked Clients chart shows wave-like changes - up and down – where the period of the changes is about 30 seconds. There is no database startup parameter with a dimension of seconds that would match the period of these cycles. How about the phases of the database operations?

It’s not the checkpoints.

I see only one mechanism that can lead to cyclical slowdowns in the database. Let’s assume that at some moment 100 processes are waiting for the first block on the RM chain to insert a new record into the history table. Let’s assume the block has only one free slot. Eventually all processes will get access to the block, but 99 of them will be unable to use it and will be forced to join a new queue for the next block in the RM chain. In other words, their buffer waits will be doubled. The next block had 128 free slots at the beginning, but a bit later the situation will repeat itself. Each transaction creates one record in the history table. The test created 2,000-3,000 transactions per second. A block in the “History1 Table” area will be filled in about 64 ms or faster. That time looks suspiciously like the duration of the “fast” transactions in the tests with a large number of users. But it does not look like the periodicity on the chart above.

The time to fill a whole data cluster is 512 times longer - about 33 seconds. Bingo? The processes will raise the area HWM and they should update the area’s object block. This is where we get the splashes of DB Buf Share Locks (the difference between the two lines on the chart – between the total count of the blocked clients and the number of DB Buf X Locks).
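The arithmetic behind these estimates:

Code:
128 records per block / 2,000-3,000 new history records per sec ~= 43-64 ms to fill one block
64 ms per block * 512 blocks per cluster                        ~= 33 seconds to fill one cluster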

AI scans can be used to get statistics similar to the ones in the _ActSpace VST but separately for each RM chain (rather than per the whole database as in _ActSpace). The scans will also show the exact times when the area’s HWM was changed. So it’s possible to compare these times with the cycles on the chart above. A match may confirm the suggested theory. Or we could re-run the tests with a smaller data cluster size in the "History1 Table" area.

BTW, AI scans can be used to analyze transaction performance as well. It's not a perfect tool but it can see what promon/VSTs can't. Though that’s a different story.
 
Another question about the internals of Progress transactions. First, a fact – the recovery notes are generated by a client’s session. Even by a remote client, which sends each recovery note to its server in a separate network message. The recovery notes can be generated, of course, only after the space management algorithms have found a block for a new or updated record. The question: does a client generate the recovery notes WHILE the database block chosen for a new record is exclusively locked by the client? Is it the same for the index blocks?

One more fact - a session can simultaneously lock a few database blocks, but only the index blocks on the path from a root block to a leaf block. Record blocks are locked by each session one at a time.

In summary, it looks like:
1. The process waits for an exclusive buffer lock on the first block of the RM chain;
2. If the block can be used to store a record or its fragment, then the process waits for the MTX latch;
3. Then the process generates the RL_RMCR recovery note (is this why the latch is named the "microtransaction" latch?);
4. Then it locks the BIB latch (waits for the BIB latch are possible but very unlikely) to add the recovery note to the BI buffer;
5. Finally, the process releases both latches and only then releases the buffer lock.

And here is the explanation of the long "begin" phase of the transactions in the ATM tests reported in the posts above. A process first locks the MTX latch to create an entry in the transaction table. That is the beginning of the "begin" phase. Then the process spends a long time in the queue to get the exclusive lock on a record block. Only after that does it lock the first pair of MTX and BIB latches. At this moment the transaction status changes to "active". Then the process does the same with the other database blocks (record and index blocks). Promon's "Status: Blocked Clients" says there are no long buffer lock queues other than in the "History1 Table" area. I bet the first record created in the ATM transactions is the history record.
 
> Promon's "Status: Blocked Clients" says there are no long buffer lock queues other than in the "History1 Table" area. I bet the first record created in the ATM transactions is the history record.

Indeed, the notes for the history1 table are the first ones in the transactions.
4GL code from atm1.p:
Code:
    do for account, branch, teller, history1 transaction:

    /* retrieve and update account */

    find account where account.id = theAccount exclusive-lock.
        assign account.balance = account.balance + delta.

    /* create history record. note we need the new account balance for that */

    create history1.
    assign
        history1.trx_id    = txId
        history1.trx_date  = txDate
        history1.trx_time  = txTime
        history1.account   = theAccount
        history1.teller    = theTeller
        history1.branch    = theBranch
        history1.delta     = delta
        history1.balance   = account.balance
        .

    /* retrieve and update teller */

    find teller where teller.id = theTeller exclusive-lock.
        assign teller.balance = teller.balance + delta.

    /* retrieve and update branch */

    find branch where branch.id = theBranch exclusive-lock.
        assign branch.balance = branch.balance + delta.

    end. /* of transaction block */

But all tables except history1 are updated at the end of the transaction - the changed fields are not indexed.
AI scan confirms:
Code:
# Code     Area
- -------- -------------------------
1 RL_TBGN
2 RL_RMCR  "History1 Table":15
3 RL_CXINS "History1 Index":16
4 RL_RMCHG "Teller Table":25
5 RL_RMCHG "Branch Table":9
6 RL_RMCHG "Account Table":7
7 RL_TEND

Activity per transaction:
Code:
BI notes        7
Table Reads     3 (teller branch account)
Table Updates   3 (teller branch account)
Table Creates   1 (history1)
                 150 sessions     4 sessions
Resource Queue  Requests Waits   Requests Waits
DB Buf S Lock  17.09  0.49      11.98   0.001
DB Buf X Lock   6.69  2.57       5.03   0.03
4 users create the expected minimal number of buffer locks (X = 5 and S = 12).
150 users create extra buffer locks of both types in each transaction, and each transaction waits for a DB Buf X Lock more than once.

150 sessions:
The average transaction duration is 69 ms.
The “begin” phase of a transaction lasts about 85% of the transaction duration.
During the “begin” phase a process is waiting for buffer locks (most likely for more than one lock).
Hence the waits for buffer locks can be roughly estimated at tens of milliseconds.
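One way to put rough numbers on it, using the 2.57 DB Buf X Lock waits per transaction from the table above:

Code:
69 ms * 85%                      ~= 59 ms spent in the "begin" phase
59 ms / 2.57 DB Buf X Lock waits ~= 23 ms per buffer wait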

More sessions => a longer queue to the first block on the RM chain => longer waits in the queue.

BTW, according to "Status: Blocked Clients", the DB Buf S Lock waits are waits for the object blocks - mostly for the object block of the history table, less for the object block of its index, and not for the object blocks of the areas. Hence the cluster size of the areas does not seem to matter.

> And the Blocked Clients chart shows wave-like changes - up and down – where the period of the changes is about 30 seconds. There is no database startup parameter with a dimension of seconds that would match the period of these cycles. How about the phases of the database operations?

My new bet - the period is about equal to the average transaction duration multiplied by the number of sessions. The transaction duration will grow with the number of sessions - maybe faster than linearly. Tests playing with the number of sessions would show how the number of blocked clients changes during the test.

Happy New Year to all of you!
 
If I were running tests, I would do it like this:
1. Change the number of users in the range from 1 to 100.
What number of users will provide the maximum number of transactions per second?
Expectation: the highest result will be with 4-6 users provided the number of CPUs is larger than this number.
What is the ratio of the best result to the transactions per second in a single-user test? Less than the number of users?
The average transaction duration will vary by a factor of about 100 depending on the number of users - from less than a millisecond to 60 ms or so. The ETIME or DATETIME function will not provide sufficient accuracy for measuring the duration of short transactions, but we can estimate the average duration using another method (see the formula above). Will the average transaction duration increase linearly with the number of users? Faster? Slower?
How does the number of BKEX/BKSH locks and waits per transaction change depending on the number of users?

2. Repeat the first point using the -spin 5000, 50000 and 500000. Maybe -spin 2147483647 as well.
The start of the graphs will, of course, be the same, but as the number of users increases, we will see a difference (about 2 times) at least between -spin 5,000 and 50,000. What about the difference between -spin 50,000 and 500,000?

3. Run a 100-user test with RPB 32 for the history area instead of the default 128. Maybe with RPB 1 as well. Maybe using -spin 5,000 and 50,000.
The real number of records per block for the history table is about 55 (for a 4K db blocksize). We will get more db writes, but the blocks will leave the RM chain faster and the table’s object block will also be updated more frequently. How will this affect the length of the BKEX queue to the first block in the RM chain and the length of the BKSH queue to the object block? promon -> Status: Blocked Clients or the _Connect-Wait* VST fields.

4. Run a 100-user test using atm4.p (4 history tables instead of one). Maybe using -spin 5,000 and 50,000.
Gus Björklund and Mike Furgal told me this did not improve the results. The question is "warum?" (why?). Using 4 history tables should, of course, affect the BKEX/BKSH queues and waits. Will we see the expected changes? Will we see changes in the “begin” phase durations?
 
atm4.p
Code:
    nhist = random (1, 4).

    /* now do one transaction */

    txStartMs = etime.
    do for account, branch, teller, history1 transaction:

    /* retrieve and update account */

    find account where account.id = theAccount exclusive-lock.
        assign account.balance = account.balance + delta.

    /* create history record */

        case nhist:
            when 4 then do:
            create history4.
[snip]
            when 3 then do:
            create history3.
[snip]
            when 2 then do:
            create history2.
[snip]
        when 1 then do:
            create history1.
[snip]
        end case.
The scope of the history1 table is the "do for" block. The scope of the other history tables is wider. Can this affect the results?
 
If I were brave enough to fix Gus's code, I would collect the statistics of all transaction durations in milliseconds (count and msec fields in a temp-table). Expected distribution for the tests with a large number of users: 3% (or less) of all transactions will be committed in 1 ms or faster. The durations of the other transactions will be divided into a few groups around the values n * quantum where n = 1, 2 - maybe more if the -spin value is not raised to the heavens (e.g. just -spin 5,000) - and the “quantum” constant equals tens of milliseconds. In other words, the graph of the distribution will have a few humps, and the distance between the humps will depend on the OS (e.g. the average wait time on semaphores and/or the CPU time slice).
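A sketch of how the count and msec pair could be collected around each transaction in atm1.p (ETIME gives only millisecond granularity, but milliseconds are exactly what we want to collect here):

Code:
/* per-duration counters */
DEFINE TEMP-TABLE ttTxStat NO-UNDO
  FIELD msec AS INTEGER /* transaction duration rounded to a millisecond */
  FIELD cnt  AS INTEGER /* number of transactions with this duration     */
  INDEX msec IS UNIQUE PRIMARY msec.

DEFINE VARIABLE txStartMs AS INTEGER NO-UNDO.
DEFINE VARIABLE txMs      AS INTEGER NO-UNDO.

/* around each ATM transaction: */
ASSIGN txStartMs = ETIME.

/* ... the "do for account, branch, teller, history1 transaction" block ... */

ASSIGN txMs = ETIME - txStartMs.
FIND ttTxStat WHERE ttTxStat.msec EQ txMs NO-ERROR.
IF NOT AVAILABLE ttTxStat THEN
DO:
  CREATE ttTxStat.
  ASSIGN ttTxStat.msec = txMs.
END.
ASSIGN ttTxStat.cnt = ttTxStat.cnt + 1.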

driver.p says:
Code:
    /* Calculate and display concurrency and residence time.

       Concurrency is an approximate measure of the number of transactions
       which were executing concurrently. This will be less than
       the number of generators because each generator executes
       code which is not part of the transaction being measured.

       "residence" time is approximately the time from transaction
       begin to transaction end.
     
       We are only interested in the 50, 90, and 95 percentiles.
       Perhaps we should do 20 increments of 5%. */
Result:
Code:
 Cl Time    Trans      Tps  Conc Avg R Min R 50% R 90% R 95% R Max R
--- ---- -------- -------- ----- ----- ----- ----- ----- ----- -----
150  600  1435891   2393.2 149.9   0.1   0.0   0.0   0.2   0.2   3.4
  4  600  4895286   8158.8   3.8   0.0   0.0   0.0   0.0   0.0  12.8
 
> I would collect the statistics of all transaction durations in milliseconds (count and msec fields in a temp-table).

Let’s call them the stat records.
Other fields of these records:

Checkpoint counter – check the number of checkpoints before and after the transaction. If the number changed, then increase the counter. The stat records with non-zero checkpoint counters most likely have long durations. We can exclude them from the statistics if we want to.

The time intervals between occurrences. Save the time of the current transaction with the given duration. Calculate the time interval since the previous one. Calculate min/max/average for the interval. Are the transactions of a given duration distributed evenly across the test, or are there “rush hours”?

The increments of TRIDs between occurrences. Save the current TRID with the given duration. Calculate the increment since the previous TRID. Calculate min/max/average for the TRID increments. If a database was started with -n as a power of 2, then the TRIDs will not have gaps. The “rush hours” are the ones where the time intervals between occurrences are large but the increments of TRIDs are small.

DbAccess, BIRecWrites - the increments of _UserIO-DbAccess and _UserIO-BIRecWrites before and after the transaction. We know the expected numbers for the ATM transactions, but some transactions will be forced to use extra buffer locks and/or to create extra recovery notes (e.g. the changes of the RM chain). Calculate min/max/average values. These values can help to explain why some transaction durations are long.

For all fields for which the average value is calculated, the squared deviations from the mean (SDM) should also be calculated. The usefulness of SDM seems to be underestimated.
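A possible shape of such an extended stat record (all field names are illustrative):

Code:
DEFINE TEMP-TABLE ttTxStat NO-UNDO
  FIELD msec           AS INTEGER  /* transaction duration, ms                   */
  FIELD cnt            AS INTEGER  /* transactions with this duration            */
  FIELD ckpCount       AS INTEGER  /* transactions that crossed a checkpoint     */
  FIELD prevTime       AS DATETIME /* when this duration was last seen           */
  FIELD minInterval    AS INTEGER  /* min/max/sum of the intervals between them  */
  FIELD maxInterval    AS INTEGER
  FIELD sumInterval    AS DECIMAL
  FIELD sumSqInterval  AS DECIMAL  /* for the squared deviations from the mean   */
  FIELD prevTrid       AS INT64    /* the last TRID seen with this duration      */
  FIELD minTridInc     AS INT64    /* min/max/sum of the TRID increments         */
  FIELD maxTridInc     AS INT64
  FIELD sumTridInc     AS DECIMAL
  FIELD sumDbAccess    AS DECIMAL  /* increments of _UserIO-DbAccess             */
  FIELD sumBIRecWrites AS DECIMAL  /* increments of _UserIO-BIRecWrites          */
  INDEX msec IS UNIQUE PRIMARY msec.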

Per-transaction statistics could make the analysis of the test results much easier.
 
How to estimate how long the db buffers are locked in the ATM tests? It’s easy.

When a process needs to change the RM chain, it should update the table’s object block. While the block is exclusively locked by this process, all other processes that would like to read the block with a BKSH lock will wait. Each transaction in the ATM test creates only one record in the history table, so each transaction requests only one BKSH lock on the object block of the history table. The requests per second equal the TXE Commit Locks per second.

Promon -> Status: Blocked Clients will show the queues to the block, but the snapshots catch the queues at a length somewhere between 1 and the maximum. So the average caught queue length is about half of the maximum length.

The duration of the BKEX lock can be estimated as the estimated maximum length of the BKSH wait queue divided by the average TXE Commit Locks per second.

150-user test:
21 snapshots of Status: Blocked Clients caught 736 processes waiting for BKSH 65536:15 (the "History1 Table" area).
On average - 35.05 processes per snapshot. The maximum caught queue length is 113.
The average TXE Commit Lock rate was 2,327 per second.

So the estimated BKEX lock duration is 2 * 35.05 / 2,327 = 30 ms.
Is that shorter or longer than a CPU time slice (CPU quantum)?

Gus' presentation:
ATM transaction duration
slow 82 ms (-spin 5,000)
fast 31 ms (-spin 50,000)

So these transactions had one or two buffer waits per transaction.
The duration of a transaction without buffer waits and without latch timeouts is less than one millisecond.
 
Let’s assume a latch is held for 100 ms. What -spin value do we need for the spinning to take the same 100 ms?

Idea of the test:
Let’s lock the MTX latch by enabling the quiet point.
Start a transaction that does not update the database but just creates and deletes an entry in the transaction table.
The ABL session will try to get the MTX latch by spinning and napping in a loop.
Disable the quiet point and check how many naps we got while the quiet point was enabled.

We must take into account the peculiarities of the quiet point:
When the quiet point is enabled, the broker not only locks the MTX, BIW and AIW latches but also resets -spin to 1 and -napmax to 7000. Obviously, we can’t use the VSTs to change any db startup parameters during the quiet point, but we can do this using promon (Adjust Latch Options).
Another peculiarity - it seems to be the database broker that is responsible for updating the statistics counters. We will not see the naps while the quiet point is enabled - neither in promon nor through the VSTs - but the counters will be updated right after the quiet point is disabled.

Test can be run on any database (including the empty db):
1. proquiet db -C enable
2. Reset the -spin, -nap and -napmax using promon/Adjust Latch Options.
In the test I always set -napmax equal to -nap. Recent Progress versions allow setting -nap to 1, but the minimum value of -napmax is still restricted to 10 ms. That is why I used -nap values only between 10 and 500.
3. Run LockMTX.p;
4. Wait a few seconds (or more);
5. proquiet db -C disable
At this moment the LockMTX.p will write the results to the waits.txt file.

The results:
Code:
   spin nap   Time  Time - nap
      1  10  15.55   5.55
      1 100 108.75   8.75
      1 500 510.57  10.57
   1000  10  15.57   5.57
   1000 100 108.79   8.79
   1000 500 510.73  10.73
  10000  10  15.51   5.51
  10000 100 108.96   8.96
  10000 500 511.27  11.27
 100000  10  15.55   5.55
 100000 100 108.89   8.89
 100000 500 510.70  10.70
9999999  10  30.73  20.73
9999999 100 115.81  15.81
9999999 500 514.63  14.63

Time (in milliseconds) is the transaction duration divided by the number of MTX waits (naps).
The last column is the same time minus the -nap value. It’s expected to be the spinning time, but it rather looks like an inaccuracy of the nap setting.

The -spin value does not seem to have any effect. What did I miss?

LockMTX.p
Code:
DEFINE VARIABLE vOutputFile AS CHARACTER NO-UNDO INITIAL "waits.txt".

DEFINE VARIABLE vLocks LIKE _Latch._Latch-Lock  NO-UNDO.
DEFINE VARIABLE vWaits LIKE _Latch._Latch-Wait  NO-UNDO.
DEFINE VARIABLE vOwner LIKE _Latch._Latch-Owner NO-UNDO.
DEFINE VARIABLE vBgnTime  AS DATETIME  NO-UNDO.
DEFINE VARIABLE vEndTime  AS DATETIME  NO-UNDO.
DEFINE VARIABLE vInterval AS INTEGER   NO-UNDO.
DEFINE VARIABLE vSpin     AS CHARACTER NO-UNDO.
DEFINE VARIABLE vNap      AS CHARACTER NO-UNDO.
DEFINE VARIABLE vNapMax   AS CHARACTER NO-UNDO.

FOR EACH DICTDB._DbParams NO-LOCK:
  CASE DICTDB._DbParams._DbParams-Name:
    WHEN "-spin":U   THEN ASSIGN vSpin   = DICTDB._DbParams._DbParams-Value.
    WHEN "-nap":U    THEN ASSIGN vNap    = DICTDB._DbParams._DbParams-Value.
    WHEN "-napmax":U THEN ASSIGN vNapMax = DICTDB._DbParams._DbParams-Value.
  END CASE.
END.

FOR FIRST DICTDB._Latch NO-LOCK WHERE DICTDB._Latch._Latch-Id EQ 2:
  ASSIGN vOwner = DICTDB._Latch._Latch-Owner.
END. /* DICTDB._Latch */

MESSAGE "Check db parameters:" SKIP
        "-spin"     vSpin   SKIP
        "-nap"      vNap    SKIP
        "-napmax"   vNapMax SKIP
        "MTX Owner" vOwner
VIEW-AS ALERT-BOX INFORMATION BUTTONS OK.

FOR FIRST DICTDB._Latch NO-LOCK WHERE DICTDB._Latch._Latch-Id EQ 2:
  ASSIGN vLocks = DICTDB._Latch._Latch-Lock
         vWaits = DICTDB._Latch._Latch-Wait
  . /* ASSIGN */
END. /* DICTDB._Latch */

OUTPUT TO VALUE(vOutputFile) APPEND.
PUT UNFORMATTED SKIP
  SUBSTRING(ISO-DATE(NOW), 12, 12) "  Transaction is going to begin."
  " -spin "     vSpin
  " -nap "      vNap
  " -napmax "   vNapMax
  " MTX Owner " vOwner
SKIP.
OUTPUT CLOSE.

ASSIGN vBgnTime = NOW.
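/* This transaction only allocates (and at the end releases) an entry in   */
/* the transaction table; it has to wait for the MTX latch that is held    */
/* while the quiet point is enabled.                                       */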
DO TRANSACTION:
  FIND FIRST DICTDB._User NO-LOCK NO-ERROR.
END. /* TRANSACTION */
ASSIGN vEndTime = NOW.

FOR FIRST DICTDB._Latch NO-LOCK WHERE DICTDB._Latch._Latch-Id EQ 2:
  ASSIGN vLocks = DICTDB._Latch._Latch-Lock - vLocks
         vWaits = DICTDB._Latch._Latch-Wait - vWaits
  . /* ASSIGN */
END. /* DICTDB._Latch */

ASSIGN vInterval = INTERVAL(vEndTime, vBgnTime, "milliseconds":U).
OUTPUT TO VALUE(vOutputFile) APPEND.
PUT UNFORMATTED
  SUBSTRING(ISO-DATE(vEndTime), 12, 12) "  Transaction is completed."
  " Locks " vLocks
  " Waits " vWaits
  " Interval " vInterval
  SPACE TRUNCATE(vInterval / vWaits, 2) " ms"
SKIP.
OUTPUT CLOSE.

MESSAGE "Check" vOutputFile VIEW-AS ALERT-BOX INFORMATION BUTTONS OK.
 
> The -spin value does not seem to have any effect. What did I miss?

Stupid me! The contribution of the spinning phase was expected to be about 0.1 ms or less. Only a bonehead would set the -nap to hundreds of ms!

I added the results for -spin 9999999 - the max value allowed to be set by promon.
The spinning phase with such a spin takes about 5-10 ms, i.e. roughly one nanosecond per spin iteration.
So a spinning time of 0.1 ms corresponds to a -spin of about 100,000.
 
How often were the latch conflicts resolved during the spinning phase - without getting the naps?

The quotes from above:
"Every transaction in ATM tests locks the (MTX) latch exactly 8 times and forces to nap on the latch 0.30 times in average."
"MTX latch was busy only 14% of the time."

So at least one MTX latch request per transaction (= 14% of 8) happened when the latch was locked by another process, but only about 30% of these conflicts ended with a nap.
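In numbers:

Code:
MTX requests per transaction:        8
MTX latch busy:                      14% of the time
Expected conflicts per transaction:  8 * 0.14 ~= 1.1
Naps per transaction:                0.30
Conflicts resolved during spinning:  1 - (0.30 / 1.1) ~= 70%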

If we set the spin lower than the latch hold duration, we are guaranteed to get at least one nap per conflict.

From Gus' presentation:
Code:
-napmax 10; vary -spin
  -spin Tx(ms) Extra naps
  1,000  174   13.90
  5,000   75    4.00
 10,000   48    1.30
 25,000   36    0.10
 50,000   35    0
100,000   35    0

The last column is the estimated number of extra naps added due to the low spin value - I subtracted 35 ms (= the minimum value) from the transaction durations and divided the result by 10 ms (= the -nap value).

Latch statistics per transaction from 150 user test:
Code:
Activity: Latch Counts
Latch Locks Naps
MTX    8.07 0.30
BIB    7.12 0
TXT    4.00 0
LKT   22.00 0.00
TXQ   12.02 0.02
LKF   14.00 0.05
BHT   32.24 0.69
CPQ    2.17 0
LRU    2.37 0.03
BUF   59.68 0.54
     ------ ----
     163.67 1.63

It would be interesting to compare how the naps for each latch change as the -spin varies.

If we get a nap while requesting the MTX latch to create a history record, then our transaction becomes 10 ms longer. But the transaction will also keep its BKEX buffer lock longer, so the other 100+ users who are waiting in the queue for the BKEX lock on the same block will have to wait longer too. In other words, some naps can be much more painful for performance than others, even if they are naps on the same latch.
 