Resolved Permanent Endless-Job and (5408) WARNING: -l exceeded. Automatically increasing from 1580 to 1590

Jochen0911

New Member
Hello,

Progress 11.7. Windows Server.

Perhaps the concept is wrong, I don't know. I have a queue and a permanent job that runs on the queue.

In my job procedure I have a never-ending REPEAT block.
Code:
  Verarbeitung:
  repeat on error  undo, throw
         on endkey undo, leave
         on stop   undo, leave:
Later I have some calls:

run procedure 1 ... 2 .. 3

In the procedures there is something like:
Code:
  Main:
  do on error  undo Main, leave Main
     on endkey undo Main, leave Main:

    Verbuchen:   /* this label is what the UNDO ... RETRY and NEXT below refer to */
    for each table no-lock
        on error  undo Verbuchen, retry Verbuchen
        on endkey undo, leave
        on stop   undo, leave:

      if retry then do:

        ylvr_rowid = rowid(table).

        run yip_schreibe_retry_fehler (c_Fehlertext, ylvr_rowid).
        c_Fehlertext = '':U.
        next Verbuchen.

      end. /* retry */

      TRANS1:
      do transaction
         on error  undo Verbuchen, retry Verbuchen
         on endkey undo, leave
         on stop   undo, leave:

  {adm/template/incl/dt_log00.if &LogMsg = "substitute('{&LINE-NUMBER} - Transaktion: &1':U, transaction)"}
Later I search for some of the tables with EXCLUSIVE-LOCK. In a log I write the transaction status, and it looks OK: at the beginning of the FOR EACH it always logs 'no' at that point.
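For illustration, inside TRANS1 something like the following happens (the table "auftrag" and its field are placeholders for the real names):
Code:
  /* fetch a record with an exclusive lock inside TRANS1 and log the
     TRANSACTION function; "auftrag" is a placeholder table */
  find first auftrag
       where auftrag.verbucht = false
       exclusive-lock no-error.
  if available auftrag then
      auftrag.verbucht = true.

  /* TRANSACTION is TRUE here, inside the DO TRANSACTION block */
  message 'Transaktion:' transaction.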

After a while I get database warnings like the one in the headline. After one day it is extremely big, and the Windows Progress process uses about 7 GB of RAM.

Is there a way to commit all open transactions, release the (shared) locks, and start the repeat again?

I hope this rudimentary information is enough for now.

Thank you,
Jochen
 
So your transaction scope is bleeding out until it is scoped to the whole endless loop. It's hard to establish why from your very basic code, but something is causing the scope to widen.
 
I would suggest 2 things:
1. Check whether this part of the code is not running inside a nested transaction.
2. Trace through the code and check the _Lock table contents (see the sketch below).
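For example, a minimal sketch of reading the _Lock virtual system table (field names are from the standard OpenEdge VST schema; stop at the first free slot, because _Lock is as large as -L):
Code:
  /* list the active lock-table entries; leave at the first unused
     slot (_Lock-Usr = ?) instead of scanning the whole table */
  for each _Lock no-lock:
      if _Lock._Lock-Usr = ? then leave.
      message 'Usr'   _Lock._Lock-Usr
              'Table' _Lock._Lock-Table
              'Type'  _Lock._Lock-Type
              'Flags' _Lock._Lock-Flags.
  end.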
 
Not sure, based on your example, but my experience is that if you run into strange locking problems, you can do yourself an enormous favour by using explicit buffers for each table you reference. No exceptions; always use a buffer. I've seen strange and seemingly unsolvable problems suddenly disappear, multiple times, once explicit buffers were used.
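A minimal sketch, with a made-up table name "kunde":
Code:
  /* an explicit buffer keeps record (and lock) scope local to this
     block instead of widening the default buffer's scope */
  define buffer b_kunde for kunde.

  for each b_kunde
      where b_kunde.offen = true
      exclusive-lock:
      b_kunde.offen = false.
  end.
  /* b_kunde's scope ends here; the default kunde buffer stays untouched */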

A clear sign that you are doing things wrong is when you use the RELEASE statement. At least, in 99% of the cases.
 
It is, of course, important to properly scope transactions but...

dash ell, lower case, is not about LOCKS. That would be -L (upper case).

Lower case -l is for the "local buffer". This is memory used by variables, workfiles (not temp-tables), and local copies of records.


As the documentation says, it is a soft limit. So when it grows beyond the initial startup size you see those messages about it being increased for you.

It means that something is growing. Maybe you are appending a lot of data to a variable. Or maybe you're growing a workfile. Or perhaps you are recursively calling some code that piles up a lot of "stuff" somewhere.
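As an illustration only (an assumption about what your job might be doing, not a diagnosis), a loop like this reproduces the warning, because the growing variable pushes the local buffer past its startup size:
Code:
  /* a character variable that keeps growing inflates the -l local
     buffer, and the AVM raises the soft limit with message (5408) */
  define variable cLog as character no-undo.

  repeat while length(cLog) < 5000000:
      cLog = cLog + fill('x', 32000).   /* about 32 KB more per pass */
  end.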
 
Well spotted that man.
 
Great help, thank you for all the tips and hints.
I increased the -l startup parameter and now monitor the lock status with some log statements of the TRANSACTION function. It works fine now, without thousands of database warnings.
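For the record, the parameter just goes on the client startup line. Everything here except -l is a placeholder for our real batch start:
Code:
  %DLC%\bin\_progres.exe -b -db mydb -p job_queue.p -l 4000 > job.log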
 