Lock table overflow during binary load

RealHeavyDude

Well-Known Member
OS is Sun Solaris 10 64Bit.

This is the first time I have experienced this. The binary load is the only thing running against the database (as you can see, the database is started without -S to prevent any remote client from logging in):

Database (blocksize 8K) is started as follows:
$DLC/bin/proserve $dirname/$phname -B 50000 -spin 5000 -bibufs 25 -i >> $logfile
$DLC/bin/proapw $dirname/$phname >> $logfile
$DLC/bin/proapw $dirname/$phname >> $logfile
$DLC/bin/probiw $dirname/$phname >> $logfile
This is the script:
if [ -f $dumpdir/$1.bd ] ; then
    cur_date=`date "+%d.%m.%Y %H:%M:%S"`
    echo "$cur_date Start Loading $1 ..."
    echo "$cur_date Start Loading $1 ..." >> $logfile
    if [ -f ${dumpdir}/$1.lst ] ; then
        $DLC/bin/proutil $dirname/$phname -C load \
            ${dumpdir}/$1.bd -dumplist ${dumpdir}/$1.lst \
            -TB 31 -TM 32 -B 500 -i -T ${dumpdir} >> $logfile
    else
        $DLC/bin/proutil $dirname/$phname -C load \
            ${dumpdir}/$1.bd \
            -TB 31 -TM 32 -B 500 -i -T ${dumpdir} >> $logfile
    fi
    retcode=$?
    if [ $retcode -gt 0 ] ; then
        echo " ERROR: Exit-Code $retcode "
        echo " ERROR: Exit-Code $retcode " >> $logfile
    fi
    cur_date=`date "+%d.%m.%Y %H:%M:%S"`
    echo "$cur_date Ende Loading $1 ..."
    echo "$cur_date Ende Loading $1 ..." >> $logfile
    echo ""
    echo "" >> $logfile
fi
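For clarity, the script takes the table name as $1 and I call it once per table, roughly like this (the script name and the environment setup here are just placeholders for what my wrapper does, not shown above):

# placeholder names -- the real wrapper exports these before calling the script
export dirname=/db/data01/mydb      # directory of the target database
export phname=mydb                  # physical database name
export dumpdir=/db/dump             # directory containing the .bd dump files
export logfile=/db/log/load.log
sh loadtable.sh ats_tsurl           # $1 = table name, expects $dumpdir/ats_tsurl.bd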
This is what I got in the log:

10.06.2010 15:56:32 Start Loading ats_tsurl ...
OpenEdge Release 10.1C01 as of Fri Jun 6 22:08:07 EDT 2008

Binary Dump created on Wed Jun 9 14:18:32 2010
from database /db/data02/GUG/onedbch/t_onedbch. (6203)
Loading table ats_tsurl, Table number 1386
starting with record 1, section 1. (6204)
Lock table overflow, increase -L on server (915)
Error creating record 8328, error -1218.
Binary Load failed. (6255)

ERROR: Exit-Code 255
10.06.2010 15:57:29 Ende Loading ats_tsurl ...
I was not aware that one could get a lock table overflow during a binary load. I should mention that this table contains a CLOB, but from my point of view that should have nothing to do with the lock table.

Does anybody have an idea what could have caused this? In the interim I'm trying to re-run the load with an exceptionally high -L parameter.
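By "re-run with a high -L" I mean bouncing the database with a much larger lock table and then starting the load again, roughly like this (the 500000 is just an arbitrarily large value I picked for the test, not a recommendation):

$DLC/bin/proshut $dirname/$phname -by >> $logfile
$DLC/bin/proserve $dirname/$phname -B 50000 -spin 5000 -bibufs 25 -i -L 500000 >> $logfile
$DLC/bin/proapw $dirname/$phname >> $logfile
$DLC/bin/proapw $dirname/$phname >> $logfile
$DLC/bin/probiw $dirname/$phname >> $logfile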


Thanks in advance and best regards,
RealHeavyDude.
 
That's a new one on me. The multi-user default for -L is 8192. I suppose you might run into a problem if you had enough parallel processes running, but it seems like you'd have to really work at it.
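If you want to verify what the broker actually ended up with, the startup parameters are echoed into the database .lg file, so something like this should show it (message wording from memory, it may differ slightly between releases):

grep -i "(-L)" $dirname/$phname.lg | tail -1
# expect a line along the lines of:  Lock table entries (-L): 8192.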
 
Why are you bothering with a server process at all if you are only running the load and are not allowing any client logins?
In the past I have always performed loads of any type in single-user mode when at all possible.
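If you do go single-user, the rough sequence is to take the broker down first and then run proutil straight against the database, e.g.:

# broker down first -- proutil then opens the database itself, single-user style
$DLC/bin/proshut $dirname/$phname -by
$DLC/bin/proutil $dirname/$phname -C load ${dumpdir}/ats_tsurl.bd \
    -TB 31 -TM 32 -B 500 -i -T ${dumpdir} >> $logfile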
 
Running the database in multi-user mode while performing binary loads allows multiple loads to run at the same time, which is much faster. That is the reason I do it, anyway.
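Roughly, the parallel version just backgrounds one proutil load per table against the running broker and waits for them all to finish; the table names other than ats_tsurl below are just placeholders:

# kick off several binary loads in parallel against the multi-user database
for tbl in ats_tsurl customer order orderline ; do
    $DLC/bin/proutil $dirname/$phname -C load ${dumpdir}/${tbl}.bd \
        -TB 31 -TM 32 -B 500 -i -T ${dumpdir} >> ${logfile}.${tbl} 2>&1 &
done
wait    # continue only after every background load has finished

Writing each load to its own log file keeps the per-table messages from interleaving.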
 