Confused on Record Bleeding

Does Record bleeding occur in version 11.3?
If not, I don't understand what's wrong with the code below. It allows me to update the customer buffer that was read NO-LOCK.

Code:
def buffer bcust for customer.
find customer where customer.custnum = 10 no-lock.
do for bcust transaction:
    find bcust where bcust.custnum = 10 exclusive-lock no-wait no-error.
    bcust.name = "Strong Scope".
end.
update customer.name.
 

GregTomkins

Active Member
Hmmm, I expected multiple educated replies on this already.

I tried this and was surprised to get the same answer you did. I thought this was only supposed to happen if you used -brl, but we don't.

In real life, I have never had problems with this (at least, not that I am aware of), probably because, as you likely know, the issue only happens if the exact same record is used in both buffers (e.g. if the 'find bcust' were for custnum = 11, it would behave differently and arguably correctly). So I confess the BRL problem is not one I'm very familiar with.
 

Cringer

ProgressTalk.com Moderator
Staff member
On 11.2.1 I get the "Transaction Keyword within transaction scope" warning when I run it.
 
On 11.2.1 I get the "Transaction Keyword within transaction scope" warning when I run it.
Yes Cringer, I used the UPDATE statement deliberately for this example. With that statement the TRANSACTION is scoped to the procedure level, so explicitly starting a TRANSACTION within a TRANSACTION produces the warning message you are getting.
But my question is why the NO-LOCK is being upgraded to SHARE-LOCK.
 

GregTomkins

Active Member
I hope TomB is all right ... as soon as he reads this you can expect a complete explanation! Maybe he's too busy getting ready for PUG.

There's a KB entry which, the way I read it, specifically states that -brl should cause this behaviour, implying that without -brl it shouldn't happen. I forgot to copy the link, but if you search for 'bleeding' it should pop up; their more involved example might better illuminate the issue.
 

RealHeavyDude

Well-Known Member
After thinking about it I came to the conclusion that it is dangerous to have more than one buffer holding exactly the same record. Whenever you change the record in one buffer and write it back to the database, the other buffers don't get refreshed; they still contain the version of the record as of the time it was read. Therefore you will most likely end up with different versions of the same record in different buffers.

One might think this could be useful for some kind of before-imaging - but I would strongly recommend using temp-tables for such a mechanism instead, thereby utilizing optimistic locking and transaction scopes that are as small as possible to ensure concurrency.

In over 20 years of programming experience with the Progress ABL ( or 4GL as it was called back then ) beginning with V6 I never came across a use case where such a scenario would make sense. Even back then I used work files because temp tables were not available in the language yet.

Don't get me wrong: that does not mean there is no valid use case - I just can't think of one.
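For what it's worth, a minimal sketch of that temp-table approach might look like this (illustrative only; the sports2000 schema, custnum 10, and the single-field optimistic check are my assumptions):

Code:
/* Sketch: a before-image kept in a temp-table instead of a second
   database buffer, with an optimistic re-check at write time. */
define temp-table ttCustBefore like customer.

find customer where customer.custnum = 10 no-lock no-error.
if available customer then do:
    create ttCustBefore.
    buffer-copy customer to ttCustBefore.   /* snapshot, no db lock held */
end.

/* ... edits happen against the temp-table or the screen ... */

do for customer transaction:
    find customer where customer.custnum = 10 exclusive-lock no-error.
    /* optimistic check: has someone changed the record since the snapshot? */
    if available customer and customer.name = ttCustBefore.name then
        customer.name = "New Name".
end.

The exclusive lock is held only for the short re-find-and-assign, which is the small-transaction-scope point above.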

Heavy Regards, RealHeavyDude.
 

anandknr

Member
I came across something similar recently.

define buffer x-customer for customer.
find customer where cust-num = 53000021 no-lock.
find x-customer where recid(x-customer) = recid(customer) exclusive-lock.
customer.cust-sh-na = "test".


customer.cust-sh-na was actually a typo for x-customer.cust-sh-na, but it didn't give any issues at run time.
Such typos can happen to anyone at any time, particularly when a buffer is defined with a name very similar to the table name.

PS: I am on version 11.0
 

TomBascom

Curmudgeon
Every now and then I have to do that "work" thing...

FWIW this behavior is also found in 10.2B. And probably all the way back to the dawn of time.

I don't have a particularly good explanation for it and I don't like it. But for what little it is worth I think the issue is that the underlying "real" record inherits the lock status from whatever statement last touched it -- a record only has one lock status, the 4gl doesn't track lock status by buffer and re-lock (or unlock) buffers as they are referenced at runtime. At runtime locks are managed by the db engine -- not the language. The db doesn't know anything about "buffers" it just sees the various manipulations of the record in the sequence that the language sends them along.

To fix it I think the 4GL would need to keep track of lock state at runtime and inject some extra lock status change messages to the db engine. That might have a performance cost (although it sounds minor to me) and it would also break code that is inadvertently taking advantage of this behavior (personally, I think breaking such code would be a good thing).

Currently the only way that I know of to try to defend against it is to treat "transaction already active" warnings and listings that show a trx scoped to the procedure block seriously. Very seriously.
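For the listing part, a minimal sketch (the file names are examples):

Code:
/* Sketch: produce a compile listing and look for a transaction
   scoped to the procedure block.  File names are examples. */
compile bleed-test.p listing "bleed-test.lst".

The listing's block summary shows, for each block, whether it starts a transaction; a transaction sitting on the procedure block itself is the warning sign described above.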
 

Cringer

ProgressTalk.com Moderator
Staff member
Sounds like a good explanation. The worry I have is that if you change the update to an assign then the transaction is still scoped to the procedure, but you don't get the warning.
As you say, that's where a listing comes into play, but it's an extra step many probably don't take.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
The worry I have is that if you change the update to an assign then the transaction is still scoped to the procedure, but you don't get the warning.
As you say, that's where a listing comes into play, but it's an extra step many probably don't take.

True. But that isn't the compiler's fault. I've seen a lot of completely preventable scoping problems that never would have happened if the developers had used the tools available to them.
 
Thanks to all.
My understanding now is that the NO-LOCK being upgraded to SHARE-LOCK is explained by the TRANSACTION being scoped at the procedure level.

I shared this to check what the Progress Handbook says: that record bleeding no longer exists after some version of 7.x.

Yes, the DB engine maintains the lock per RECID, not per buffer.
Thinking about it again, though, I still feel the DB engine should handle this: once the transaction is over (here it is even strong-scoped), it should downgrade the locked RECID back to NO-LOCK.

I'll find the examples from my project and share them.
 
After thinking about it I came to the conclusion that it is dangerous to have more than one buffer holding exactly the same record. Whenever you change the record in one buffer and write it back to the database, the other buffers don't get refreshed; they still contain the version of the record as of the time it was read. Therefore you will most likely end up with different versions of the same record in different buffers.

One might think this could be useful for some kind of before-imaging - but I would strongly recommend using temp-tables for such a mechanism instead, thereby utilizing optimistic locking and transaction scopes that are as small as possible to ensure concurrency.

In over 20 years of programming experience with the Progress ABL ( or 4GL as it was called back then ) beginning with V6 I never came across a use case where such a scenario would make sense. Even back then I used work files because temp tables were not available in the language yet.

Don't get me wrong: that does not mean there is no valid use case - I just can't think of one.

Heavy Regards, RealHeavyDude.
Yes, that's correct, but I am surprised by the behaviour. My learning comes from the Progress docs, practice, and reading threads like this one.
 

TomBascom

Curmudgeon
After a transaction commits the lock state goes from EXCLUSIVE to SHARE-LOCK. Not NO-LOCK. In this thread the underlying record is going from NO-LOCK in the first FIND to EXCLUSIVE-LOCK in the bcust FIND and then downgrades to SHARE-LOCK at the end of the strong-scoped transaction block. The UPDATE then upgrades the SHARE lock to EXCLUSIVE, commits the TRX and downgrades again to SHARE.

Insert a few PAUSE statements and experiment with PROMON or VSTs in a test environment and you should see all of that.
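As a sketch of the VST side of that experiment (the _Lock field names and meanings should be verified against your OpenEdge version's documentation, and scanning _Lock can be costly on a busy system):

Code:
/* Sketch: inspect the _Lock VST while the other session is paused. */
for each _Lock no-lock:
    if _Lock._Lock-Usr = ? then leave.   /* unused slots follow */
    display _Lock._Lock-Usr              /* user number      */
            _Lock._Lock-Table            /* table number     */
            _Lock._Lock-RecID            /* which record     */
            _Lock._Lock-Flags.           /* lock mode/state  */
end.

Run it from a second session between the PAUSEs and you can watch the record's lock status change as the first session moves through the FINDs and the transaction end.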
 
After a transaction commits the lock state goes from EXCLUSIVE to SHARE-LOCK. Not NO-LOCK. In this thread the underlying record is going from NO-LOCK in the first FIND to EXCLUSIVE-LOCK in the bcust FIND and then downgrades to SHARE-LOCK at the end of the strong-scoped transaction block. The UPDATE then upgrades the SHARE lock to EXCLUSIVE, commits the TRX and downgrades again to SHARE.

Insert a few PAUSE statements and experiment with PROMON or VSTs in a test environment and you should see all of that.
Yes, I agree, but my concern is that at the end of the transaction the RECID should be downgraded back to NO-LOCK, not to SHARE-LOCK. Anyway, that's what happens, so I have to accept it :(
 
One more strange result when looking at _Lock or PROMON: in the code below, at alert message 1 _Lock shows the record in EXCL lock, while at message 2 there is no entry in the _Lock table. That suggests it was downgraded to NO-LOCK, since from another session I am able to get the same record in EXCL lock at message 2.
But if we try to update the customer buffer after the transaction block, it is again upgraded to EXCL lock.

And I am not able to see the customer record in SHARE-LOCK at any stage :(

def buffer bcust for customer.
find customer where customer.custnum = 10 no-lock.
do for bcust transaction:
    find bcust where bcust.custnum = 10 exclusive-lock.
    bcust.name = "Strong Scope".
    message 1 view-as alert-box.
end.
message 2 view-as alert-box.
 

TomBascom

Curmudgeon
It is a small thing but I believe that your concern is somewhat misplaced.

This is not just a feature of the situation in this thread -- it is always this way. If the record is still in scope it goes from EXCLUSIVE to SHARE. It doesn't go "back to" whatever it was before (if it was anything).

At the end of a transaction records are never downgraded to no-lock status. They either go out of scope and are not AVAILABLE (and thus do not have a lock status at all), or they are downgraded to SHARE-LOCK.

If a record is available and has SHARE or EXCLUSIVE lock status you can explicitly make it NO-LOCK with FIND CURRENT ... NO-LOCK. But not just by running off the end of a transaction block.
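A minimal sketch of that explicit downgrade (custnum 10 is an assumed example value):

Code:
/* Sketch: explicitly release the lock on an available record,
   rather than expecting the end of a transaction block to do it. */
find customer where customer.custnum = 10 exclusive-lock no-wait no-error.
if available customer then
    customer.name = "Changed".
/* Running off the end of the transaction leaves SHARE-LOCK;
   this requests NO-LOCK explicitly: */
find current customer no-lock.

Note that if a transaction scoped to an enclosing block is still active, the engine holds the lock until that transaction commits, as the following post points out.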
 

TomBascom

Curmudgeon
I was wrong. I forgot that the whole thing is inside a transaction scoped to the procedure block.

So, in this case, you aren't seeing a SHARE because of the transaction scoped to the procedure -- it encloses the strong scoped sub-transaction and thus there is no downgrade because a TRX is still active.

This stuff is tricky. Transactions scoped to procedures are bad, bad, bad.
 