[Progress Communities] [Progress OpenEdge ABL] Forum Post: The changes of "On lru chain"


George Potemkin

Guest
OpenEdge Release 11.7.3 on Linux 64-bit.

The tables' areas are mapped to the primary buffer pool and the indexes' areas to the alternate buffer pool. The sizes of the pools are proportional to the total size of the corresponding areas. Number of LRU force skips (-lruskips): 100. Number of LRU2 force skips (-lru2skips): 100. The statistics below are for a 5-minute interval.

Problem: during specific operations running at night we get a high number of naps per second on the LRU latch, and the LRU latch is almost 100% busy:

- LRU latch: up to 9,800 locks/sec and up to 3,103 naps/sec
- LRU2 latch: just 830 locks/sec and 0 naps/sec
- Record/index operations: ~62,200 per sec
- Primary buffer pool logical reads: ~63,050 per sec
- Alternate buffer pool logical reads: ~71,220 per sec

Expectation: LRU locks ~ logical reads / (-lruskips). The formula works for the alternate buffer pool but not for the primary one.

Private buffers take blocks from, and return them to, the primary buffer pool only. Is it possible that the unexpectedly high LRU lock rate on the primary buffer pool is caused by client sessions that often update _MyConn-NumSeqBuffers? Indeed, the number of blocks on the LRU chain changes up and down. But we see such changes for both chain types (LRU and LRU2). Moreover, the changes of the LRU and LRU2 chains are asymmetrical! Why?
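
For reference, with -lruskips 100 the expectation works out to 63,050 / 100 ≈ 630 LRU locks/sec for the primary pool (vs ~9,800 observed) and 71,220 / 100 ≈ 712 LRU2 locks/sec for the alternate pool (vs ~830 observed). A minimal sketch of that check in ABL, with illustrative variable names:

/* Expected lock rates under -lruskips/-lru2skips 100, using the
   logical-read rates quoted above. */
DEFINE VARIABLE lruSkips     AS INTEGER NO-UNDO INITIAL 100.
DEFINE VARIABLE readsPrimary AS INTEGER NO-UNDO INITIAL 63050. /* per sec */
DEFINE VARIABLE readsAltern  AS INTEGER NO-UNDO INITIAL 71220. /* per sec */

DISPLAY readsPrimary / lruSkips LABEL "Expected LRU locks/sec"  /* ~630 */
        readsAltern  / lruSkips LABEL "Expected LRU2 locks/sec" /* ~712 */
    WITH SIDE-LABELS.

So the alternate pool is close to the expectation, while the primary pool is off by more than an order of magnitude.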
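
For anyone wanting to reproduce the per-interval figures, a sketch of sampling the cumulative latch counters from the _Latch VST; the field names (_Latch-Name, _Latch-Lock, _Latch-Wait) are assumed from the standard VST schema. Take two samples a known interval apart and difference them to get per-second rates:

/* Dump the cumulative lock and nap counters for the LRU latches. */
FOR EACH _Latch NO-LOCK WHERE _Latch._Latch-Name MATCHES "*LRU*":
    DISPLAY _Latch._Latch-Name FORMAT "x(12)"
            _Latch._Latch-Lock LABEL "Locks (cumulative)"
            _Latch._Latch-Wait LABEL "Naps (cumulative)".
END.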
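
And a minimal sketch of the suspected session behaviour, assuming sessions toggle the updatable VST field _MyConnection._MyConn-NumSeqBuffers (the runtime equivalent of the -Bp client startup parameter); the buffer count here is illustrative:

/* Toggling private read-only buffers: each assignment takes
   buffers off, or puts them back on, the primary pool's LRU chain. */
FIND FIRST _MyConnection.
ASSIGN _MyConnection._MyConn-NumSeqBuffers = 64. /* claim private buffers */
/* ... a read-intensive query runs here ... */
ASSIGN _MyConnection._MyConn-NumSeqBuffers = 0.  /* hand them back */

If many sessions toggle this frequently, the resulting LRU-chain maintenance could generate LRU latch locks that -lruskips does not throttle, which is the hypothesis above.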
