Progress Transactions and Indexes in n-tier

enoon

Recently I ran into a problem that goes like this:

A database server for the sports demo database, two open sessions, Progress 10.1C

procedure 1.p

Code:
DO TRANSACTION:
    CREATE state. 
    ASSIGN state.state = 'ww'.
    MESSAGE 1
        VIEW-AS ALERT-BOX INFO BUTTONS OK.
    UNDO, LEAVE.
END.
and procedure 2.p

Code:
FOR EACH state NO-LOCK
    WHERE state.state > 'w':
    DISPLAY state.
END.
The discussion started with a transaction and a multi-user architecture.
The problem is: we are inside a transaction that has not yet ended, and other users can already see the record (which is supposed to be nonexistent until the transaction ends).
What I found is that whenever an indexed field is assigned, the record can already be seen by other sessions.
I added an index on state.statename (an active index is enough) and did the CREATE and the assignment on state.statename. With the FOR EACH searching for state.statename > 'w', the record is available for reading; but if the FOR EACH uses state.state > 'w', the record cannot be seen.
Progress updates the index after an indexed field is created and assigned, but why is the record visible while we are still inside the transaction, and why only when searching via the particular index that was updated?
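For reference, a sketch of the variation described above (it assumes an active index exists on state.statename; untested):

Code:
/* session 1 -- statename is the indexed field being assigned */
DO TRANSACTION:
    CREATE state.
    ASSIGN state.statename = 'ww'.
    MESSAGE 1
        VIEW-AS ALERT-BOX INFO BUTTONS OK.
    UNDO, LEAVE.
END.

/* session 2 -- resolved via the statename index, this query
   can see the uncommitted record while the alert box is up */
FOR EACH state NO-LOCK
    WHERE state.statename > 'w':
    DISPLAY state.
END.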

Any thought on this? Any explanation?
 
Why is this "dirty read" behaviour not better defined? I mean, what if a user relies on this dirty-read data? Could you please explain why this is expected behaviour for a transaction? Anyway, I think the problem lies with the transaction, not with the reading of data: why can you read data that does not (yet) exist?
 
It is well defined and even relatively well known. You've described it fairly accurately too.

Think of it as data that "tentatively exists". After all 99.44% of the time that record will be created ;)

IOW, what exactly is the harm if Customer 1234 (tentatively) exists? Some inquiry that quickly counts the total number of customers is possibly off by 1? Is that really a problem?

If you have some process that really and truly absolutely cannot tolerate the possibility that a dirty read might impact your results (IMHO such processes are rare and exist mostly in the imagination of the programmer. But I digress...) then you probably shouldn't be using a NO-LOCK.
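If you really do need read consistency, a sketch of the locking alternative: with SHARE-LOCK the reader waits until the writer's transaction commits or is undone, so it never sees a tentative record.

Code:
FOR EACH state SHARE-LOCK
    WHERE state.state > 'w':
    /* blocks here until any in-flight transaction holding an
       EXCLUSIVE-LOCK on this record resolves, so a tentatively
       created record is never returned */
    DISPLAY state.
END.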

You might also want to take a good hard look at the fields that you are using in the key -- chances are you've got a design issue if the mere act of tentatively creating a record can present data that actually matters to your application.

That design issue might also be the scope of the transaction containing the record creation. Chances are that it is much broader than it should be.
 
Also note that you are seeing this behavior because you have introduced user interaction within the transaction scope, which is a no-no. Without that MESSAGE statement, the whole transaction, whether completed or undone, would last only a small fraction of a second, so the chance of another process catching it between the time the index entries were created and the rest of the record was written would be extremely small.

Moreover, if you don't start the transaction until all the data has been collected, you can do the assignments to all of the fields in a single ASSIGN statement, or even a BUFFER-COPY.
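A sketch of that pattern, assuming the values have already been gathered into variables before the transaction starts (cState and cStateName are hypothetical names):

Code:
DEFINE VARIABLE cState     AS CHARACTER NO-UNDO.
DEFINE VARIABLE cStateName AS CHARACTER NO-UNDO.

/* ... collect cState and cStateName from the user here,
   with no transaction open ... */

DO TRANSACTION:
    CREATE state.
    ASSIGN
        state.state     = cState
        state.statename = cStateName.  /* one statement, one tiny window */
END.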

Moreover, for most of the reasons that a transaction would be undone, e.g., the user pressing cancel, the transaction never starts in the first place! The only time the transaction would actually be undone is when some problem occurs during the update itself, like a duplicate key.

I.e., if you program correctly and separate user input from data persistence, wrapping the transaction around a very small block of code that does the update in one or two lines, the likelihood of ever noticing a partially updated record is infinitesimally small, and 99.99999% of the time, even when it happens, you will be seeing a valid record that will still exist at the end of the transaction.
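That separation sketched with a temp-table and BUFFER-COPY (ttState is a hypothetical temp-table defined LIKE state):

Code:
DEFINE TEMP-TABLE ttState NO-UNDO LIKE state.

/* user interaction: build the record outside any database transaction */
CREATE ttState.
ASSIGN ttState.state = 'ww'.

/* persistence: one short, atomic transaction */
DO TRANSACTION:
    CREATE state.
    BUFFER-COPY ttState TO state.
END.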
 
Thanks for your answers!

TomBascom: It was not a real-life situation, but more of a simulation. The only thing I don't agree with is how the transaction works: the existence of this "tentative creation" together with reading a record without blocking.

tamhas: by the MESSAGE statement (interpreted as user interaction), I meant some processing that takes a long time to execute: complicated queries on many records, maybe a report or something.
 
Like it or not it is the way that it works.

25 years of Progress applications and zillions of lines of Progress code have successfully worked with it.
 
tamhas: by the MESSAGE statement (interpreted as user interaction), I meant some processing that takes a long time to execute: complicated queries on many records, maybe a report or something.

Yes, but you have no business having a transaction open for a long time there either. 99.999% of the time that would be bad design. Transactions should be short and atomic, not long-lasting and all-inclusive. Note in particular that if you keep a transaction open for a long time, do a lot of work, and then undo the transaction, the undo will take a *long* time as well.
 