Question: Copy replication trigger assignments.

ron

Member
(Sorry - I've removed this until I do some more investigating.)
 

TomBascom

Curmudgeon
Having seen the emailed version of the deleted post a couple things spring to mind that might be things to check:

1) are the connections shared memory or client/server?
2) where are the triggers in PROPATH? long propaths with the runnable code at the end can be trouble
3) where is the code stored? Is it on something like a congested NAS?
4) is the trigger code compiled? Or are you compiling .p code on the fly?
5) has the bi file been properly tuned?
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
It seems like you're taking a good methodical approach to finding the bottleneck. That is quite a surprising jump from no triggers to triggers that only instantiate a variable.

Measuring overall system performance requires looking at the system holistically. Like Tom, I have questions about client-side and server-side configuration. You are comparing performance against the (presumably good) baseline of about 29 minutes, but maybe your hardware is capable of much better. We can't know without knowing all the parts of the system, how they connect to each other, and how they are configured. Just one unoptimized setting might make a big difference.

Examples: client connection type, session compile vs. .r, using -q or not, BI block/cluster size, helper processes in use, quality/type/location of underlying storage, DB broker parameters, client startup parameters, location of session temp files, etc.

Profiling the client during the run to find out where the most time is being spent could give clues on how to improve. Monitoring client temp-file I/O, if possible, could show whether it is a bottleneck.

Running a test with AI and then analyzing the output of ai scan verbose might reveal something interesting.

Have you tested your actual trigger code to incorporate @Patrice Perrot's suggestion to remove the create before the buffer-copy?
 

ron

Member
Tom and Rob -- thank you, your help is always very valuable!

I deleted the post because I realised that it was my mistake -- I saw the compiler messages warning that some code would never be executed (of course!) -- but forgot that the compile script would deliberately NOT replace the existing .r code if there were any errors or warnings! So -- the executed triggers still ran the full code. :eek:

I got around that as soon as I realised what had happened -- and continued with the testing, adding or suppressing one thing at a time to find which code was responsible for how much time. I haven't finished the tests -- but the results so far have been enlightening. I will share the details when it's all done.
 

ron

Member
As I was explaining .... the replication system is working quite well -- except that it is slow. I checked the trigger code several times and could see no obvious reason for the problem. Each trigger deals with a different table, of course, but they all follow exactly the same pattern. They:

1. Create and populate a record which is a copy of the updated source record.
2. Create a small "pointer" record.
3. Execute the Linux command "whoami" to get the user ID of whoever caused the update.
4. Create a small "tally" record used to account for records processed.
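For illustration, the four steps above might look something like this in a REPLICATION-WRITE trigger. This is a sketch only -- the table and field names (source-tbl, repl-copy, repl-ptr, repl-tally) are assumptions, not the actual schema:

Code:
   /* Sketch: one trigger per replicated table, all following the
      same four-step pattern. Names here are illustrative only. */
   TRIGGER PROCEDURE FOR REPLICATION-WRITE OF source-tbl.

   DEFINE VARIABLE lc-usr AS CHARACTER NO-UNDO.

   /* 1. Create and populate a copy of the updated source record */
   CREATE repl-copy.
   BUFFER-COPY source-tbl TO repl-copy.

   /* 2. Create a small "pointer" record */
   CREATE repl-ptr.
   ASSIGN repl-ptr.tbl-name = "source-tbl"
          repl-ptr.upd-time = NOW.

   /* 3. Get the OS user ID of whoever caused the update */
   INPUT THROUGH VALUE("whoami") NO-ECHO.
   IMPORT lc-usr.
   INPUT CLOSE.

   /* 4. Create a small "tally" record for accounting */
   CREATE repl-tally.
   ASSIGN repl-tally.usr = lc-usr.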

I carried out a series of tests running a very large batch job -- each test starting with a database restore to ensure the test ran against exactly the same data.

The "culprit" (to my considerable surprise) was the "whoami" command. When it was allowed to execute it doubled the execution time of the job!

I ran the test a few times to make sure -- and without a doubt, when that command was allowed to execute it doubled the execution time.

I then made a small ABL test program that did nothing other than execute "whoami" in a loop. The loop executed 100,000 times and each execution took 6.4 msec -- which is an "eternity" in code-execution terms.

The actual instructions used are:

Code:
   INPUT THROUGH VALUE("whoami") NO-ECHO.
   IMPORT lc-usr.
   INPUT CLOSE.
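
A minimal version of that timing test, wrapping the same three statements in a loop, might look like the sketch below (ETIME measures elapsed milliseconds; the per-call figure will of course vary by system):

Code:
   /* Sketch of the micro-benchmark: time 100,000 shell-outs */
   DEFINE VARIABLE i      AS INTEGER   NO-UNDO.
   DEFINE VARIABLE lc-usr AS CHARACTER NO-UNDO.

   ETIME(TRUE).                        /* reset the timer */
   DO i = 1 TO 100000:
       INPUT THROUGH VALUE("whoami") NO-ECHO.
       IMPORT lc-usr.
       INPUT CLOSE.
   END.
   DISPLAY ETIME / 100000 LABEL "msec per call".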

For the time being I will suppress this section of code. When I get a chance I will see if I can find a better way to do this.

Ron.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Can you use the user's database userid rather than their OS userid so you don't have to shell out?

Alternatively, assuming a given user's context doesn't change, could you run whoami once at the beginning of the session and then cache it in a variable or property?
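A sketch combining both ideas, assuming the trigger can see a session-scoped shared variable (gc-usr and the fallback order are assumptions, not Ron's actual design): USERID returns the database userid with no OS call at all, and OS-GETENV reads the environment without spawning a process.

Code:
   /* Sketch: resolve the user once per session and cache it,
      so triggers never need to shell out to "whoami". */
   DEFINE NEW GLOBAL SHARED VARIABLE gc-usr AS CHARACTER NO-UNDO.

   IF gc-usr = "" OR gc-usr = ? THEN DO:
       /* Option 1: the database userid -- no OS call */
       gc-usr = USERID("DICTDB").

       /* Option 2: the OS userid from the environment -- no shell-out */
       IF gc-usr = "" OR gc-usr = ? THEN
           gc-usr = OS-GETENV("USER").
   END.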
 

ron

Member
Yes, Rob, I think there are a number of ways this can be addressed -- but the big 'relief' is that the cause of the problem has been found .. it was really bugging me. (I'm sure you can imagine.)
 