Lock Table.

RealHeavyDude

Each database row that gets fetched with an EXCLUSIVE-LOCK ( most likely to be updated or deleted immediately thereafter ) requires an entry in the lock table so that the database can keep track of what is locked and what isn't.

The multi-user default for the lock table is 8192 entries. This setting defines how many database rows can be locked simultaneously by all processes that are connected to the database. If the application exceeds that limit you will receive exactly the error message you describe, and the transaction will be rolled back.

Obviously you can avoid blowing the lock table by raising the limit: specify the -L startup parameter when starting the database broker.
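For illustration, this is roughly what the broker startup looks like with -L raised ( the database name and value below are just placeholders, not from the original post ):

```
# Start the database broker with room for 16384 lock table entries
# instead of the 8192 default ( "sports2000" is a placeholder db name ).
proserve sports2000 -L 16384
```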

But there is more: the need for a big lock table usually indicates bad application and/or user behavior, namely large transactions caused by pessimistic locking and/or bad transaction scoping. For this you would usually blame - you guess who - the application developers. Rarely have I seen a business case that proved the necessity of an extremely large lock table. If bad application behavior is your issue then there is almost no solution other than to change the bad code or to limit the number of users that are allowed to work simultaneously with the application ( most likely that won't be an option ) ...
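To make the scoping point concrete, here is a minimal ABL sketch of the difference ( table and field names are taken from the sports2000 demo database, purely as an example ):

```
/* Bad scoping: one transaction spans the whole FOR EACH, so every
   locked row stays in the lock table until the very end. */
DO TRANSACTION:
    FOR EACH Customer EXCLUSIVE-LOCK:
        Customer.CreditLimit = Customer.CreditLimit * 1.1.
    END.
END.

/* Better scoping: the transaction is scoped to a single iteration,
   so only one row is locked at any given time. */
FOR EACH Customer EXCLUSIVE-LOCK TRANSACTION:
    Customer.CreditLimit = Customer.CreditLimit * 1.1.
END.
```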

Heavy Regards, RealHeavyDude.