Values getting updated (released) to the DB — need help understanding the given scenarios

Scenario 1 -
Code:
create table1.
assign
  table1.field1 = 1
  table1.field2 = 2.
assign
  table1.field3 = 3
  table1.field4 = 4.
pause.

Now if we try to access this record in another session with NO-LOCK, it is available, but only with the values assigned in the first ASSIGN statement. The values assigned in the second or subsequent statements only become available either on completion of the transaction or by explicitly releasing the record.

I was expecting that if the values from the first assignment are available, then the values from the second should be too, but they weren't.

Scenario 2 -
Code:
find first table1 exclusive-lock no-wait no-error.
if available table1 then
do:
  assign
    table1.field1 = 11
    table1.field2 = 22.
  assign
    table1.field3 = 33
    table1.field4 = 44.
end.
pause.

Now if we try to access this record in another session with NO-LOCK, the values assigned in all the ASSIGN statements only become available either on completion of the transaction or by explicitly releasing the record.
 
There are several things that factor in:

Only once the transaction is committed successfully is it ensured that all values are written to the database. As long as the transaction is not committed, changes may be written to the database partially. If the transaction gets rolled back for whatever reason, these partial writes will be undone.

Another factor is buffer scope: at the end of the buffer scope, the contents of a modified buffer are written back to the database. The same can happen in some scenarios when you use (against everybody's advice) the RELEASE statement. But these changes can still be undone when the transaction scope is larger and the transaction is rolled back.

A read with NO-LOCK is always a dirty read, meaning you will always get the record as-is.
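To make the dirty read concrete, the second session might look like this (a sketch only; the table and field names follow the OP's example):
Code:
/* Session 2: a NO-LOCK read returns whatever is currently in the
   database blocks, committed or not - a dirty read. */
find first table1 no-lock no-error.
if available table1 then
  display table1.field1 table1.field2 table1.field3 table1.field4.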

Therefore, in order to understand the behavior you need to understand buffer and transaction scope.

In your first scenario (just taking the revealed code into account), most likely the transaction is still uncommitted when the PAUSE statement halts execution, because the transaction scope is most likely the whole procedure. Therefore you may see the partially changed record. I take it that the same is true for your second sample.

Bottom line: Your transaction scope does not meet your expectations.

Heavy Regards, RealHeavyDude.
 
Thanks Dude,

My only concern is why the two ASSIGN statements are behaving differently. In scenario 1, the first assignment's changes are available as soon as the statement executes, regardless of transaction and buffer scope, while the second's are not. The buffer scope and transaction scope are the same for both statements.
 
Have you tried it with a strongly scoped transaction?
Code:
do for table1 transaction:
  create table1.
  assign
    table1.field1 = 1
    table1.field2 = 2.
  assign
    table1.field3 = 3
    table1.field4 = 4.
end.
pause.
 
Tamhas is correct.

Assignments to indexed fields are done immediately. They become visible to "dirty reads" at that moment. (This is done optimistically in order to prevent collisions in indexed fields...)

This is yet another of the reasons why you want your transactions to be very tight blocks of code -- a single ASSIGN right after the CREATE and without any additional logic is the ideal. Aside from the logical conundrums, multiple ASSIGN statements also mean multiple I/O ops, and they increase the chances of record fragmentation. Multiple ASSIGNs, each of which involves indexed fields, or which re-update the *same* indexed fields, are even worse.
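As a sketch of the "ideal" shape described above (table and field names assumed from the OP's example):
Code:
/* Tight, strongly scoped transaction: one CREATE, one ASSIGN,
   no additional logic in between. */
do for table1 transaction:
  create table1.
  assign
    table1.field1 = 1
    table1.field2 = 2
    table1.field3 = 3
    table1.field4 = 4.
end.
/* The transaction is committed here; all four values are now
   visible to other sessions at once. */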
 
Thanks Tom for the explanation...!!
My transaction scope is larger; it's just one table/buffer being created, and the assignments are scattered across multiple ASSIGN statements. I was sure that everything after the CREATE statement must be available, but it wasn't: only the first ASSIGN statement's values were available, while the rest appeared only after completion of the transaction.
 
I can only emphasize what Tom said: keep your transactions tight and strongly scoped, and don't split assignments into multiple statements whenever possible. That is one reason why you should use the ASSIGN statement, although you might find lots of code that omits it.
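To illustrate that last point about the ASSIGN statement (a sketch; field names assumed from the OP's example):
Code:
/* Avoid: each line is a separate assignment statement,
   potentially a separate write. */
table1.field1 = 1.
table1.field2 = 2.

/* Prefer: one ASSIGN statement groups the assignments
   into a single operation. */
assign
  table1.field1 = 1
  table1.field2 = 2.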

Heavy Regards, RealHeavyDude.
 
I'm not totally sure but I think this will make the changes visible for other sessions without the necessity to complete the transaction or to release the record. Of course I don't know if this will help the OP.
(I assume it is not necessary to explain what validate is supposed to do.)
 
This is one of those "tricks" which people learn and shouldn't. In many cases, like RELEASE, it isn't really doing what you think it is and whether or not it actually works in any one particular context is largely chance.

The key to this whole issue is small, tightly scoped transactions. Accumulate the values for the new record without touching the table and then in a block scoped to the buffer, create the record, assign the values, and get out. Do that and all of these questions disappear.
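A minimal sketch of that pattern, with hypothetical local variables standing in for "accumulate the values first":
Code:
define variable v-f1 as integer no-undo.
define variable v-f2 as integer no-undo.

/* Accumulate the values first, without touching the table. */
assign
  v-f1 = 1
  v-f2 = 2.

/* Then, in a block scoped to the buffer, create the record,
   assign the values, and get out. */
do for table1 transaction:
  create table1.
  assign
    table1.field1 = v-f1
    table1.field2 = v-f2.
end.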
 
Officially, VALIDATE "Verifies that a record complies with mandatory field and unique index definitions." The "Notes" section also mentions: "If a field or table has been modified, the VALIDATE statement causes WRITE events and all related WRITE triggers to execute." That is probably why people use it to try to "force" a record to be flushed back to the db.
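For reference, the misuse being described looks roughly like this (a sketch; not a recommendation):
Code:
find first table1 exclusive-lock no-error.
if available table1 then do:
  table1.field1 = 99.
  /* VALIDATE checks mandatory fields and unique indexes and, for a
     modified record, fires the WRITE event - which is why it is
     (mis)used to try to force a flush to the database. */
  validate table1.
end.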

If you have code that works "better" because you use VALIDATE then you also have major issues with your transaction architecture. Similar to RELEASE, its use strongly suggests that the code is not well understood and is a maintenance nightmare.
 
+1.

Some ABL statements like VALIDATE and RELEASE are often misused to cure a symptom where the root of the issue is not understanding what is going on. Transaction scope, buffer scope, and block behavior have been the basic building blocks of the ABL for as long as I have known it (some 20+ years). In order to produce stable, scalable, and performant applications, you need to understand them. There are so many questions that will never come to your mind, and issues that will never arise, when you are able to use them to your advantage. I know this is not fancy stuff, but it is the foundation.

Heavy Regards, RealHeavyDude.
 
Thank you all.

I just posted this to understand the cause, why it is happening like this.
Yes, with the proper requirements, transaction scope, and record scope, we can control it.
But I was surprised it happens like this.
 
Point being that if you code in such a way that you have to understand how the engine works in order for the code to behave correctly, then you are coding the wrong way.
 
Has the behaviour been adequately explained? Or are you still confused?
 
I tried with an indexed field, and found that wherever an indexed field is assigned (it doesn't matter whether that is the 2nd, 3rd, or 4th statement), all values up to and including that ASSIGN statement are available in the buffer; values after it are not, until the record is released explicitly or the transaction is over.
Strong scoping does not change anything.


do for testTT transaction:
  create testTT.
  assign
    F1 = "KG1"
    F3 = "KG3".
  assign
    F2 = "KG2PUA". /* indexed field */
  assign
    F4 = TRUE.
  pause.
end.


It was a real-life scenario in one of my programs: a table was being created in two different statements. First, in an include file, the indexed field and some date, time, and user fields were assigned after the CREATE statement; second, some specific values were assigned as per the requirement, without any conditions.
So when I was working on a defect, I wasn't sure whether exactly this code was being executed or something else, because only the include file's values were being displayed in the buffer, while the values from the second statement, right after the include file, were not. The only reason I was able to catch this situation was that the transaction was large and taking some time.
 