Lock Table overflow - a question.

I have a report database that I load data into for viewing by managers. The database is a subset of the order information that drives the business (sales orders, purchase orders, work orders, etc.). Loading the report database is a nightly update. However, when I have to perform a "reload" of the database, I run into lock table overflow: I am deleting all the records and adding them back, and one of the tables has over a million records in it. I could raise my -L to 2000000, but then in a couple of years I would be in trouble again. I could also perform my build in smaller chunks, but I would really prefer not to. Is there a way to do this without having to keep jacking up my -L?

for each rpt.po_rcpt:
    delete rpt.po_rcpt.
end.

define variable i           as integer   no-undo.
define variable dom-domain  as character no-undo.
define variable domain_list as character no-undo.
define variable v-s-date    as date      no-undo.
define variable v-e-date    as date      no-undo.

do i = 1 to num-entries(domain_list):
    dom-domain = entry(i, domain_list).
    run extract-po_rcpt-data(input dom-domain, input v-s-date,
                             input v-e-date).
end.

procedure extract-po_rcpt-data:
    define input parameter dom-domain as character no-undo.
    define input parameter v-s-date   as date      no-undo.
    define input parameter v-e-date   as date      no-undo.

    for each prod.po_rcpt no-lock
        where prod.po_rcpt.po_dom = dom-domain
          and prod.po_rcpt.rcp_date >= v-s-date
          and prod.po_rcpt.rcp_date <= v-e-date:
        create rpt.po_rcpt.
        assign rpt.po_rcpt.po_dom = prod.po_rcpt.po_dom
               /* ... additional field assignments ... */
               rpt.po_rcpt.po_rcp_date = prod.po_rcpt.po_rcp_date.
    end.
end procedure.

This is not the whole procedure. I had to condense it down, but the gist of it is here.

Any ideas?
 
Are you saying that this is deliberately a single large transaction? And that you want to keep it that way?

If so, then no, you're stuck with an ever-growing lock table.

If you are willing to abandon that, then you could also probably improve performance by running multiple parallel streams of a process that does this in smaller chunks. The main "trick" would be to find a way to track your progress so that the overall process is easily restartable.
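
For instance, a hedged sketch of the restart tracking, assuming a hypothetical control table rpt.load_status (one record per domain, fields dom-domain and loaded) seeded from domain_list, and assuming the original extract-po_rcpt-data is in scope; the names and date range are illustrative:

define buffer b-status for rpt.load_status.
define variable v-s-date as date no-undo.
define variable v-e-date as date no-undo.

assign
    v-s-date = date(1, 1, 2006)   /* illustrative date range */
    v-e-date = today.

/* a rerun simply skips domains already marked loaded = yes */
for each rpt.load_status no-lock
    where rpt.load_status.loaded = no:
    run extract-po_rcpt-data(input rpt.load_status.dom-domain,
                             input v-s-date, input v-e-date).
    do transaction:   /* mark the domain done in its own small transaction */
        find b-status where rowid(b-status) = rowid(rpt.load_status)
            exclusive-lock.
        b-status.loaded = yes.
    end.
end.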
 
Note that there are actually two separate suggestions in Tom's post. One is to do it in chunks, i.e., create a transaction block of, say, 100 deletes or adds, with a loop around it to keep going until the rest is done. If it fails, you have to start over building a fresh database, but that should be very unlikely for a simple load.
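
A minimal sketch of that chunking, using a chunk of 100 deletes on the table from the original post; because the FOR EACH restarts after every committed chunk, the lock table never holds more than about 100 record locks:

define variable ix as integer no-undo.

repeat:
    ix = 0.
    do transaction:                      /* one chunk = one transaction */
        for each rpt.po_rcpt exclusive-lock:
            delete rpt.po_rcpt.
            ix = ix + 1.
            if ix >= 100 then leave.     /* stop here; END commits the chunk */
        end.
    end.
    if ix = 0 then leave.                /* nothing left to delete */
end.

The same pattern works on the load side: count the CREATEs and end the transaction every 100 records.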

The other suggestion is to consider doing the load in parallel processes. E.g., one per table or even a part of a table.
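
As a hedged sketch of that parallelism, reusing the per-domain split already in the code: each OS-level batch session could be handed one domain via -param. The program name extract_one_domain.p and the date values are illustrative, and extract-po_rcpt-data is assumed to be available to the session:

/* extract_one_domain.p - hypothetical entry point for one stream.
   Launch one batch session per domain, e.g.:
   _progres rptdb -b -p extract_one_domain.p -param "DOM1" */
define variable dom-domain as character no-undo.

assign dom-domain = session:parameter.   /* the -param value */
run extract-po_rcpt-data(input dom-domain,
                         input date(1, 1, 2006),   /* illustrative start */
                         input today).             /* illustrative end   */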

Also, if these tables are in a separate database, I would keep an empty copy around and use that instead of paying the cost of the deletes. Deletes are surprisingly expensive. If they aren't in a separate database, consider putting them in their own storage area. Tom can tell you how to do a rapid drop and add.
 
Thank you both so much for your time and suggestions. I do agree that the delete is overhead, but the end user is not going to be me, and I was trying to eliminate the need for maintenance on this database. I have basically created this as a side database so that the exposure to my ERP is limited. I am not even the person writing the code, which makes it even more important that I keep this at arm's length. I will simply tell the fellow that he must load in chunks (by year or something) and be done with it.

Again, thank you both for your time!

David Langschied
 
A separate database makes it easy. Just delete the database and copy over a new, empty one. Very fast! And easy to automate.
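
The nightly swap might look something like this (database and template names are illustrative, and procopy requires that the target not already exist):

proshut rptdb -by             # stop the report database (batch shutdown)
prodel rptdb                  # remove it (answer the confirmation prompt)
procopy rptdb_empty rptdb     # fresh copy of a prebuilt empty template
proserve rptdb                # restart it, then run the nightly load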
 
You might consider another option:
Change the code so it is not one single transaction (move the delete part, the top three lines, into a sub-procedure). That way the number of records will not be an issue.

If you need to undo the upload on error, just call the delete sub-procedure when you encounter an error, or output the errors to a log file (or both).
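
A minimal sketch of that refactoring: with the delete loop in its own procedure and no enclosing DO TRANSACTION, each FOR EACH iteration is its own transaction, so only a handful of locks are ever held at once:

procedure purge-po_rcpt:
    for each rpt.po_rcpt exclusive-lock:
        delete rpt.po_rcpt.   /* commits on every iteration */
    end.
end procedure.

run purge-po_rcpt.   /* before the load; call it again to undo a failed load */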
 
I am tying up all loose threads. This issue was developer-generated. As it turned out, there was a part of the code where he was pulling in a whole table, which was large enough to cause this problem. The issue is now resolved.
 