Confused on Record Bleeding

tamhas

ProgressTalk.com Sponsor
I would like to reinforce Tom's last statement. The key source of the problem here is allowing the transaction to be scoped to the procedure. One can get in a whole bunch of different kinds of trouble that way, not the least of which is inadvertently propagating a transaction to a subprocedure. Always, always, scope to a block and life becomes much more predictable.
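As a concrete illustration (a minimal sketch, not code from the original posts, using the sports2000 Customer table that the later examples use): wrapping the update in an explicit DO TRANSACTION block ends the transaction at that block's END instead of letting it expand to the procedure block or follow a RUN into a subprocedure.

Code:
do transaction:
    find customer where customer.custnum = 10 exclusive-lock no-wait no-error.
    if available customer then
        customer.name = "Block-scoped transaction".
end.    /* the transaction commits here */

/* anything that runs after this point is outside the transaction */
message "transaction active?" transaction view-as alert-box.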
 

RealHeavyDude

Well-Known Member
+1.

Sometimes I feel kind of like a missionary when I recommend carefully designing scopes, be they transaction or buffer scopes. That's why I recommend using defined buffers for database updates and strong-scoping them to match the transaction scope. This is one of the easiest things you can do to keep yourself out of trouble. If you don't care about scoping, it most likely won't hurt you in a development environment - but in a production environment with lots of records, many concurrent users and batch processes competing with each other, it will.
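A minimal sketch of that pattern, assuming the sports2000 Customer table (the buffer name bUpdCust is made up for the illustration): the named buffer is strong-scoped to the DO FOR block, so the buffer scope and the transaction scope end on the same END.

Code:
define buffer bUpdCust for customer.

do for bUpdCust transaction:
    find bUpdCust where bUpdCust.custnum = 10 exclusive-lock no-wait no-error.
    if available bUpdCust then
        bUpdCust.name = "Updated inside matching scopes".
end.    /* buffer scope and transaction scope both end here */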

Heavy Regards, RealHeavyDude.
 
I was wrong. I forgot that the whole thing is inside a transaction scoped to the procedure block.

So, in this case, you aren't seeing a SHARE lock because of the transaction scoped to the procedure: it encloses the strong-scoped sub-transaction, and thus there is no downgrade because a TRX is still active.

This stuff is tricky. Transactions scoped to procedures are bad, bad, bad.
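To restate the rule being described with an annotated sketch (this is the same program as the listings further down; the comments are added here for illustration):

Code:
def buffer bcust for customer.

find customer where customer.custnum = 10 no-lock.

do for bcust transaction:
    find bcust where bcust.custnum = 10 exclusive-lock no-wait no-error.
    bcust.name = "Strong Scope".
end.
/* Without the UPDATE below there is no outer transaction, so the
   EXCLUSIVE-LOCK taken in the strong-scoped block is released when the
   block (and its transaction) ends.  With the UPDATE, a transaction is
   scoped to the procedure block, the DO block above becomes a
   sub-transaction, and the lock survives its END because the transaction
   is still active. */

update customer.name.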
Hi Tom,
Below is the LISTING output, and it doesn't say a TRANS is active at the procedure level. It's strictly at the DO block:

Code:
c:\progress\trans1.p  06/28/2014 00:32:16  PROGRESS(R) Page 1

{} Line Blk
-- ---- ---
   1    def buffer bcust for customer.
   2    find customer where customer.custnum = 10 no-lock.
   3  1 do for bcust transaction:
   4  1 find bcust where bcust.custnum = 10 exclusive-lock.
   5  1 bcust.name = "Strong Scope".
   6  1 message 1 view-as alert-box.
   7    end.
   8    message 2 view-as alert-box.

c:\progress\trans1.p  06/28/2014 00:32:16  PROGRESS(R) Page 2

      File Name      Line Blk. Type   Tran            Blk. Label
-------------------- ---- ----------- ---- --------------------------------
c:\progress\trans1.p    0 Procedure   No
    Buffers: sports2000.Customer

c:\progress\trans1.p    3 Do          Yes
    Buffers: sports2000.bcust
 
Your example above is missing the UPDATE that causes a transaction to be scoped to the procedure.
Yes, I did check whether any _LOCK entry is there at message 2, and there is nothing. If we use the UPDATE, the entry is there with EXCL; if we don't use any UPDATE outside the DO block, there is no entry in _LOCK. If there is no entry in _LOCK at message 2, why is the record even available with a SHARE-LOCK that allows it to be updated and upgraded to EXCL?
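For anyone who wants to repeat that check, a rough sketch of a _Lock scan (the VST field names _Lock-Usr, _Lock-Table, _Lock-RecID and _Lock-Flags are assumed here - verify them against your version's VST reference - and note that walking _Lock can be slow on a busy database):

Code:
find _File no-lock where _File._File-Name = "Customer".

for each _Lock no-lock:
    if _Lock._Lock-Usr = ? then leave.            /* past the active entries */
    if _Lock._Lock-Table = _File._File-Number then
        message "usr"   _Lock._Lock-Usr
                "recid" _Lock._Lock-RecID
                "flags" _Lock._Lock-Flags          /* e.g. X = exclusive, S = share */
            view-as alert-box.
end.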
 

TomBascom

Curmudgeon
Your listing is of the program that lacks the UPDATE. Which does not exhibit the problem that this thread is all about....

Code:
./trans.p  06/29/2014 10:00:56  PROGRESS(R) Page 1

{} Line Blk
-- ---- ---
  1  def buffer bcust for customer.
  2  find customer where customer.custnum =  10 no-lock.
  3  1 do for bCust transaction:
  4  1 find bcust where bcust.custnum = 10 exclusive-lock no-wait no-error.
  5  1  bcust.name = "Strong Scope".
  6  end.
  7  update customer.name.
./trans.p  06/29/2014 10:00:56  PROGRESS(R) Page 2

  File Name  Line Blk. Type  Tran  Blk. Label
-------------------- ---- ----------- ---- --------------------------------
./trans.p  0 Procedure  Yes
  Buffers: s2k.Customer
  Frames:  Unnamed

./trans.p  3 Do  Yes
  Buffers: s2k.bcust
 

andre42

Member
After thinking about it, I came to the conclusion that it is dangerous to have more than one buffer holding exactly the same record. Whenever you change the record in one buffer and write it back to the database, the other buffers won't get refreshed and will still contain the record as it was when it was read into them. Therefore you will most likely end up with different versions of the same record in different buffers.

One might think that this could be useful for some kind of before-imaging - but I would strongly recommend using temp-tables for such a mechanism instead, thereby allowing optimistic locking and transaction scopes that are as small as possible to ensure concurrency.
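A minimal sketch of that suggestion (the names ttCustBefore, bCust and lUnchanged are made up for the illustration): the before-image is a BUFFER-COPY into a temp-table, and BUFFER-COMPARE provides the optimistic check inside a short, strong-scoped transaction.

Code:
define temp-table ttCustBefore no-undo like customer.
define buffer bCust for customer.
define variable lUnchanged as logical no-undo.

/* read without locking and keep a private before-image */
find customer where customer.custnum = 10 no-lock.
create ttCustBefore.
buffer-copy customer to ttCustBefore.

/* ... the user edits a copy of the data somewhere in between ... */

/* optimistic check inside a short, strong-scoped transaction */
do for bCust transaction:
    find bCust where bCust.custnum = 10 exclusive-lock no-wait no-error.
    if available bCust then do:
        buffer-compare bCust to ttCustBefore save result in lUnchanged.
        if lUnchanged then                  /* nobody changed the record since we read it */
            bCust.name = "New name".
        /* else: the record changed in the meantime - handle the conflict */
    end.
end.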

I'm pretty sure that the client only keeps one copy of each record, so multiple buffers that point to the same record will stay in sync.
A small example to show this (using a temp-table, but it should work the same with a real table):

Code:
DEFINE TEMP-TABLE ttx NO-UNDO
  FIELD xy AS CHARACTER.

DEFINE BUFFER b1 FOR TEMP-TABLE ttx.
DEFINE BUFFER b2 FOR TEMP-TABLE ttx.

CREATE b1.
B1.xy = 'ABC'.
VALIDATE b1.

FIND FIRST b2.

b1.xy = 'DEF'.

MESSAGE b2.xy
  VIEW-AS ALERT-BOX INFO BUTTONS OK. /* prints DEF */
IMHO different versions of the same record can be seen only in the OLD BUFFER in trigger programs.

See also the documentation for -rereadnolock.
 

RealHeavyDude

Well-Known Member
You need to be aware that temp-tables behave differently when it comes to transactions, especially the default buffer that comes with them. Furthermore, there are no locks on temp-tables.

You are correct that the -rereadnolock parameter influences the behavior when database records are re-fetched by their ROWID with NO-LOCK. The parameter was introduced in V9 to ensure backward compatibility for applications that re-fetch records using find ... rowid instead of find current.
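A small sketch of the two re-fetch styles being contrasted (rCust is a made-up variable): find current re-reads the record the buffer already holds, while the find ... where rowid(...) form is the older pattern whose NO-LOCK behaviour -rereadnolock influences.

Code:
define variable rCust as rowid no-undo.

find customer where customer.custnum = 10 no-lock.
rCust = rowid(customer).

/* re-fetch style 1: re-read the record already in the buffer */
find current customer no-lock.

/* re-fetch style 2: the older pattern - re-find the record by its ROWID */
find customer where rowid(customer) = rCust no-lock.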

Please don't get me wrong - IMHO, of course - I would not advise fetching the same database record into different buffers. I would always copy them into differently named temp-tables. My point is not whether the AVM will keep them in sync or not, because when thinking about before-imaging you definitely would not want that. My point is that I don't want code that relies on such "default" behavior in a way that is not very transparent and that might change completely depending on a startup parameter - to say the least. Too many times I have had to fix sloppy ABL code that did just that. "Pretty sure" might not be enough for production code that handles millions of transactions per day and transfers billions of dollars from one end of the world to the other in the blink of an eye.

Heavy Regards, RealHeavyDude.
 

PratyayN

New Member
Does record bleeding occur in version 11.3?
If not, I don't understand what's wrong with the code below. It allows me to update the NO-LOCK customer buffer.

Code:
def buffer bcust for customer.
find customer where customer.custnum =  10 no-lock.
do for bCust transaction:
  find bcust where bcust.custnum = 10 exclusive-lock no-wait no-error.
  bcust.name = "Strong Scope".
end.
update customer.name.
I have been horribly surprised to see this, which coincidentally I have been trying out since yesterday. But my problem is not what is expressed by many here. The only thing I know, or knew, to be true is that a strong scope (DO FOR) should not allow any kind of reference to the record/buffer outside the block. But from some version of OE onward, I can see strong scopes aren't the way they used to be: strong and weak scope are allowed in the same program, and the strong scope is letting the record be available outside for read/update.
Taking this as either a bug or an OE change (which is definitely bad), the resolution I find is to treat a strong record scope as 'not strong' anymore. One should release the buffer within the transaction, i.e. before the transaction ends: use RELEASE bCust and the lock is removed (i.e. NO-LOCK) outside the transaction. If your first customer record was read with a SHARE-LOCK, then after the transaction ends and you have released the bcust buffer, the customer buffer will be available with a SHARE-LOCK again. In other words, after the release of bcust, the record goes back to the lock it was previously holding.
P.S. The handbook explains that the lock will stay on a record until the release of the record buffer or the transaction end, whichever is 'later'. And if a record is not released (having been updated under an EXCLUSIVE-LOCK) before the transaction ends, it is always downgraded to a SHARE-LOCK after the transaction commits. So it still adheres to the rules; I am only confused because a RELEASE should not be required when the transaction binds a record in a strong scope.
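A sketch of the RELEASE variant described above (same table and record as the thread's examples): the strong-scoped buffer is released before its block ends, so by the rule just quoted nothing is carried past END.

Code:
def buffer bcust for customer.

find customer where customer.custnum = 10 no-lock.

do for bcust transaction:
    find bcust where bcust.custnum = 10 exclusive-lock no-wait no-error.
    if available bcust then
        bcust.name = "Strong Scope".
    release bcust.    /* buffer released before the block and its transaction end */
end.
/* the lock is held until the later of the buffer release and the transaction
   end, so once END commits the transaction there is nothing left behind */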
*Sorry for the extremely long and boring reply. My OE version is 12.3, and I cross-checked the same in 11.4.
 

tamhas

ProgressTalk.com Sponsor
My first question is, why would you ever write code like that? If, for example, you consistently used bcust throughout, I am sure the compiler would complain.
 

TomBascom

Curmudgeon
...my problem is not what is expressed by many here.

Then it would be very sensible to open a new thread that is dedicated to your actual problem rather than digging up a 7-year-old thread that is not about your problem.
 