[Progress Communities] [Progress OpenEdge ABL] Forum Post: RE: Pro2 ReplQueue Record Locks

  • Thread starter: Valeriy Bashkatov
Status: Not open for further replies.

Valeriy Bashkatov

Guest
>Is "trigger compression" done by default in ver 5 ( in ver 6 )? There are several templates for different tasks, one of them is tplt_repltrig_with_compression.p. In the fourth version it seems there should be this template. You only need to replace the standard template in DEL_TRIG_TEMPLATE and WRI_TRIG_TEMPLATE properties. And regenerate the Processor Library. >However we are on Ver 4 where max number of threads is 5. The maximum number of threads, including sub-threads, in v5 is 100. Of course, more than standart 5 threads require additional licenses for the DataServer. My big client has almost all 100 threads involved. We have more performance benefits from this. Next graph shows an example of the replication queue of a single process. This line forms the standard replication trigger without compression and without multithreading. Note that we are constantly in the red zone. This means that the size of the replication queue is almost always more than ten thousand changes notes, and on the graph we see several million notes in the queue. We've seen hours of gaps between Oracle and OpenEdge. And next an example graph clearly shows how the performance of Pro2 has increased after the implementation of Compression triggers with the Split Replication Threads with up to ten sub-threads. We are moved to the green zone (no more that 500 change notes in the queues). Now all the changes notes are evenly distributed between these sub-threads. The size of replication queues for each thread has decreased significantly. As result, the time taken to process replication notes was significantly reduced and the gap between Oracle and OpenEdge has almost been eliminated. I will talk in detail about this on Friday at FinPUG 2019, as well as at the EMEA PUG Challenge if my session is approved there.
