user lock tables

This is a very general question, the type best answered by the documentation or knowledge base articles. I'll give you a general answer and then point you to the documentation where you can learn the basics.

The lock table is an internal structure in database shared memory, exposed to ABL as the virtual system table _Lock. It is where the database tracks locks of various types that are obtained, or queued to be obtained, by or on behalf of database clients. In modern releases of OpenEdge the lock table defaults to 8192 entries, and it can be given a different size with the -L database broker startup parameter.
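For example (just a sketch: "mydb" is a placeholder database name and 16384 is an arbitrary illustrative value, so size it for your own application), the parameter goes on the broker startup command line, or in the database's parameter (.pf) file:

proserve mydb -L 16384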

Read the ABL Essentials manual, particularly the chapter "Handling Data and Locking Records", for more information. Then read the Database Essentials manual.
 
Additionally... _LOCK is a VST. It is a *virtual* table, not a real table. It is a representation of an internal data structure, mapped onto a table-like mechanism to make it easier for a 4GL programmer to work with. The actual internal data structure is not documented, but it is probably some sort of hash table.

_LOCK is troublesome to work with from the 4GL. Releases prior to 11.4 are especially pernicious. There are many posts on this forum and others explaining why. You are very strongly discouraged from writing code that queries _LOCK.
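For completeness, this is roughly what an ad-hoc diagnostic query against it looks like (a sketch only; the field names are the usual _Lock VST fields, but verify them against the VST reference for your release, and keep the warning above in mind before pointing this at a busy production database):

/* ad-hoc look at current record locks - slow with a large -L, especially pre-11.4 */
FOR EACH _Lock NO-LOCK WHERE _Lock._Lock-Usr <> ?:
    DISPLAY
        _Lock._Lock-Usr   LABEL "Usr"
        _Lock._Lock-Name  LABEL "Name"
        _Lock._Lock-Table LABEL "Table"
        _Lock._Lock-RecId LABEL "RecId"
        _Lock._Lock-Type  LABEL "Type"
        _Lock._Lock-Flags LABEL "Flags"
        WITH WIDTH 132.
END.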

As far as the internal structure known as the "lock table" goes, it has no direct impact on performance per se.

In this sense "the lock table" is just a way of keeping track of who has locked records. It is a central repository for that information which allows Progress to make sure that users do not interfere with each other and that they receive a consistent view of the data.

Obviously, if your code locks records and people have to wait on those locks, that can be seen as a performance issue in some sense. But that is really more of an application behavior thing than a "performance of the engine" thing.
 
Tom, are there possible scalability concerns as you go to very large lock table sizes? E.g. does a remote server have to scan the lock table to free locks when a user suddenly disconnects?

I have heard advice (about sizing the lock table for a given application) to just set -L to a very large value, say 1 or 2 million, then watch the high water mark (HWM) over time, and later set it to a lower value based on that HWM. I don't do that, not only because of performance concerns, but also because lock table overflows keep developers more honest.
 
Sometimes, if you are stuck with a pig of an application (and without source), you have to set -L pretty high. I know from previous conversations that Tom and I differ in our opinions on how high is too high :)

My default is to set -L very low on dev and QA databases to try to catch as much as possible... and to set it much higher on production to avoid any "outages". But I still monitor the HWM to find any new culprits and get them corrected.
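For what it's worth, the HWM can be checked from the promon R&D status screens or, as a rough sketch in ABL, from the status and startup VSTs. The field names below (_DbStatus-MostLocks for the lock table high water mark, _Startup-LockTable for the current -L value) are my assumption from memory, so verify them against the VST reference for your release:

/* compare the configured lock table size with its high water mark */
FIND FIRST _Startup  NO-LOCK.
FIND FIRST _DbStatus NO-LOCK.
DISPLAY
    _Startup._Startup-LockTable   LABEL "-L"
    _DbStatus._DbStatus-MostLocks LABEL "Lock HWM".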

The only performance-related issue would be when code scans the _Lock table. Higher -L values severely slow down reading _Lock: not just double the time for double the entries, but worse. Unless, of course, you set -L so high that you run out of memory.
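If you want to see that effect for yourself, here is a tiny, hypothetical timing sketch using ETIME; try it against databases started with different -L values:

/* rough timing of a full _Lock scan; expect it to get much slower as -L grows */
DEFINE VARIABLE numEntries AS INT64   NO-UNDO.
DEFINE VARIABLE timerReset AS INTEGER NO-UNDO.

timerReset = ETIME(YES).   /* reset the millisecond timer */
FOR EACH _Lock NO-LOCK:
    numEntries = numEntries + 1.
END.
MESSAGE numEntries "_Lock entries scanned in" ETIME "ms"
    VIEW-AS ALERT-BOX INFORMATION.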
 
We may not be as far apart as you imagine ;)

Although I don't generally lose much sleep over -L 100000.

Having said that, I am no fan of a 7-digit -L. Or worse. I do know of an end user that sets it to 10 digits and thinks that they "have to" because they have a "transaction" that they feel must be all or nothing from a db perspective. I think that there are (much) better ways to fry that particular fish.

I prefer to think about how many locks any given user of the application might reasonably need and then think about how many users might reasonably need those locks at the same time. If you can get that level of input from the right people I think that is a much better approach. Sadly I am usually reduced to trial and error.

Rob's scenario is interesting. I think that would be a great question to ask Rich the next time someone sees him.
 
Wow... I could never get away with a -L of 512 in most of the dev environments... must be nice :)

There are some true all-or-nothing events that need to happen, but they are usually in the 20-40K record range for the worst case. Managers like the comfort of the DB handling things when millions of dollars are literally on the line. Especially when the programmers are ever-changing and, in most cases, not so experienced.

For most things you are correct... the transaction just doesn't need to be that big. I guess we answered the performance question of -L, though... 10 digits and it isn't crashing and burning. At least not because of -L :)
 
Burning is analog... there can always be more or less. The fact that it isn't burning to the ground doesn't mean it couldn't benefit from being less combustible. :)
 
True :)

But I know that values in the millions (while not optimal) aren't causing any production issues. These are sites with thousands of concurrent users, but they seldom see high water marks of over 200K before the root issues are caught.

Making one transaction that locks 50 million records, on the other hand...
 