Quickly Empty A Table??

Cool_Dude_eh

New Member
Here is the scenario.
I have a table with a large number of records in it (45 million).
There are several indexes on this table.
I want to delete, say, 44 million of the records.
A FOR EACH ... WHERE ... DELETE loop takes a looonnnnggggg time.

I was wondering if I could take an approach like this:
Dump the 1 million records I want with FOR EACH ... WHERE ...
Then blank the entire table somehow (this is the part I am looking for a solution to).
Then pull the 1 million records I saved earlier back in.

Anyone have any ideas??

Progress 8.3b
 
If this is a one-time-only process and doesn't need to be automated, then I'd dump the one million records, remove the table from the DB, re-add the table to the DB (blank), then import the one million records. If there are a large number of indexes on the table, it may make sense to deactivate the indexes before reloading the 1 million records, then perform an index rebuild.
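A minimal 4GL sketch of the dump and reload steps (the table "order-hist", its "keep-flag" field, and the file name are made-up placeholders; adjust to the real schema):

```
/* Dump the ~1 million rows to keep. */
OUTPUT TO "keep.d".
FOR EACH order-hist NO-LOCK WHERE order-hist.keep-flag = TRUE:
    EXPORT order-hist.
END.
OUTPUT CLOSE.

/* ... remove and re-add the table, deactivate its indexes ... */

/* Reload the saved rows.  IMPORT raises ENDKEY at end of file,
   which undoes the last (empty) CREATE and ends the REPEAT. */
INPUT FROM "keep.d".
REPEAT TRANSACTION:
    CREATE order-hist.
    IMPORT order-hist.
END.
INPUT CLOSE.
```

With the indexes deactivated during the load, an offline index rebuild afterwards (proutil's idxbuild function) is normally much faster than maintaining the indexes row by row.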

Later,
Gordon
 
[Cross post]

If you can take the database offline, drop (delete) the table, then reload the df.

Otherwise you're stuck with FOR EACH...

Also, see:

KB P36834
Title: "What is the fastest way to remove all records from a database table?"

http://tinyurl.com/77u8a

HTH

Lee
 
If this was version 9 or greater you could put the table in a separate storage area and use "truncate area".
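For reference, the v9+ approach is a single offline proutil command, roughly like this (database and area names are made up; check the proutil documentation for your release):

```
proutil mydb -C truncate area "Order Area"
```
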

But since you have version 8 you can't do that.

With old releases you can put such a table into a stand-alone database and just delete the whole db. Then recreate it and load the records that you want to keep.

If this is too risky you could also work on optimizing the FOR EACH ... DELETE approach -- use a very large BI cluster size, "chunk" the transaction into large groups of records (rather than one record at a time), make sure to specify EXCLUSIVE-LOCK, use -i if you can, and so forth.
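A sketch of the chunked delete, assuming a hypothetical table "order-hist" with a "keep-flag" field (the WHERE clause and the 100-record chunk size are placeholders to tune):

```
/* Delete in chunks of 100 so each transaction stays small
   and the BI file doesn't have to hold one giant transaction. */
DEFINE VARIABLE i AS INTEGER NO-UNDO.

outer-loop:
REPEAT:
    i = 0.
    DO TRANSACTION:
        FOR EACH order-hist EXCLUSIVE-LOCK
                WHERE order-hist.keep-flag = FALSE:
            DELETE order-hist.
            i = i + 1.
            IF i >= 100 THEN LEAVE.   /* commit this chunk */
        END.
    END.
    IF i < 100 THEN LEAVE outer-loop. /* table exhausted */
END.
```

Each trip through the REPEAT commits one chunk, so a crash only loses the in-flight chunk rather than undoing the whole delete.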
 
Thanks for the responses, everyone.
This is kind of what I had been thinking, but I wanted to get some second opinions.

Does anyone know if it will take a lot of time when I actually delete the table,
or should it be pretty much instant?
(My concern: the space in the DB still has to be marked as free, indexes deleted, etc. The 44 million records occupy about 8 GB.)
 
Tried a test

OK, I tried a test using the plan of dumping the data I want to keep, then deleting the table, then re-creating the table and pulling the data back in.
I had to use the DROP TABLE method to delete the table because it would not delete through the editor.
It went away for about an hour and then came back with the following error.

SYSTEM ERROR: rlbinext: past end of cluster -2147483648 (3829)
** Save file named core for analysis by Progress Software Corporation(439)
Press space bar to continue.

I basically hit the Progress v8 file-size limit on my last variable BI extent.

My question is, can I do this delete table without anything being written to the BI file?
Something about a no integrity option is coming to mind, but any advice would be appreciated.
 
In version 8 dropping the table is going to generate a really big transaction and be essentially the same as a FOR EACH ... DELETE loop.

If you have to stick with v8 you really only have a few options. The distinct database option is probably the fastest (depending on how much data you're reloading). But you might find that grouping records into a transaction block of 100 or so records is a lot better than the simple-minded one-record-at-a-time delete loop.
 
No-integrity will reduce, but not eliminate, BI notes. It will run faster, but it probably won't cure your problem.
 