Deleting records from a table

Pavan Yadav

Member
Hi All,

I am trying to delete around a million records from a table in a single shot, but the sizes of the BI and AI files keep increasing. I used the RELEASE statement and now the BI size is not increasing, but the AI is still growing.

Is there any way to delete the records without using disk space, or at least using as little disk space as possible?

Thanks & Regards,
Krishan Gopal
 

rzr

Member
Do you want all the deletes to be part of a single transaction? Why don't you break it up into smaller chunks?
 

Pavan Yadav

Member
Well, transaction scope is not an issue for me... but will it really help?
Deleting a million records means rewriting the code in pieces, and won't the size of the AI file increase whether I break it into chunks or not?
Thanks a lot rzr for replying.
 

Cringer

ProgressTalk.com Moderator
Staff member
Code:
DO TRANSACTION:
  FOR EACH table:
    DELETE table.
  END.
END.

That will cause all sorts of problems when deleting a million records - lock-table overflows being one. Much better to do the below:
Code:
FOR EACH table TRANSACTION:
  DELETE table.
END.

This should solve your problems with AI too IIRC.
 

rzr

Member
yes, transaction scope is VERY important. It defines how much work to undo, how much to commit, how long to hold record locks, etc., etc.

you only have to delete records in chunks, not break your code into chunks... you may just need to add a couple of extra lines to your current logic to loop for a specific count and commit the deletes in batches.
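
A minimal sketch of that batching idea, assuming a hypothetical table named order-line (substitute your own table name) and an arbitrary batch size of 1000:
Code:
/* Delete in batches so each transaction stays small.          */
/* "order-line" is a hypothetical table name - substitute      */
/* your own. The batch size of 1000 is an arbitrary choice.    */
DEFINE VARIABLE i    AS INTEGER NO-UNDO.
DEFINE VARIABLE done AS LOGICAL NO-UNDO.

DO WHILE NOT done:
  done = TRUE.
  i = 0.
  DO TRANSACTION:
    FOR EACH order-line EXCLUSIVE-LOCK:
      DELETE order-line.
      i = i + 1.
      IF i >= 1000 THEN DO:
        /* leave the FOR EACH; the transaction commits, and the
           outer DO WHILE starts a fresh batch */
        done = FALSE.
        LEAVE.
      END.
    END.
  END.
END.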
 

Pavan Yadav

Member
Currently I am using the below code -

FOR EACH table:
  DELETE table.
  RELEASE table.
END.

Please reply how the below piece of code differs from the one above in terms of TRANSACTION scope -

FOR EACH table TRANSACTION:
  DELETE table.
  RELEASE table.
END.
 

rzr

Member
there isn't...

but as noted by Cringer above in his example... a DO TRANSACTION: on top of the FOR EACH ... END will make all deletes inside the FOR EACH part of one single large transaction.

btw... RELEASE in both cases will not release the record until the transaction ends - so it has no effect here.
 

RealHeavyDude

Well-Known Member
Never, never, never use the RELEASE statement unless you are 100% positive what it does. The RELEASE statement does not affect the transaction in any way. It actually writes the changes on the buffer back to the database "in a dirty way" - dirty because the transaction is not committed yet, and others who fetch the same record with NO-LOCK will see the change although it could still be undone when the transaction is rolled back. Usage of the RELEASE statement is - IMHO of course - in almost all cases bad practice. Instead one should deliberately work with buffer and transaction scope.

Why is there no difference between the two FOR EACH? The simple answer is transaction scope. In both cases the transaction scope is the same. In the first case the transaction scope is extended to the next outer block with transaction scope capabilities, which is the FOR EACH block. In the second case you supply the TRANSACTION key word - which is meaningless - as the transaction is already scoped to the FOR EACH block anyway. In any of these two cases, as it is an iterating block, the transaction scope is a single iteration. If you want all iterations to be part of one large transaction, you must deliberately extend the transaction scope in surrounding the FOR EACH block with another block which has the TRANSACTION keyword.
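
To make that concrete, here is a sketch (with a hypothetical table name) of the two scopes side by side:
Code:
/* Transaction scoped to each iteration of the FOR EACH:    */
/* each DELETE commits on its own. Adding the TRANSACTION   */
/* keyword here changes nothing.                            */
FOR EACH order-line EXCLUSIVE-LOCK:
  DELETE order-line.
END.

/* Transaction deliberately scoped to the surrounding DO    */
/* block: all DELETEs commit - or are undone - as one unit. */
DO TRANSACTION:
  FOR EACH order-line EXCLUSIVE-LOCK:
    DELETE order-line.
  END.
END.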

Concerning the AI (after image): the AI is a log of all transactions committed since the last backup. That means transaction scope itself only indirectly affects the size of the AI log. Fewer, larger transactions cause fewer transaction header markers, whereas many small transactions cause more transaction header markers in the AI. Usually the changes to the data use much more space in the AI log than the markers do, but YMMV ...

Heavy Regards, RealHeavyDude.
 

TomBascom

Curmudgeon
The AI space used is not going to change significantly no matter what you do. The whole point of after-imaging is to be able to REDO *all* transactions.

The BI size can vary because it only has to UNDO whatever was active at a given moment. Change the size and scope of transactions and that will change the amount of BI space needed.

For large create/update/delete transactions code similar to this is usually best:
Code:
define variable i as integer no-undo.

outer: do for customer transaction while true:
  inner: do while true:
    i = i + 1.
    find next customer exclusive-lock where ... no-error.
    if not available customer then leave outer.
    delete customer.
    if i modulo 100 = 0 then next outer.
  end.
end.
 

Pavan Yadav

Member
Thanks a lot to all.


I used the RELEASE statement just to decrease the size of the BI file, or at least to keep it from growing.

Data integrity and consistency are not important in my case, as this is a one-time activity.
Again, my problem is that as I delete the records, the deletes are written to the AI file, and I don't have much space available, so the DB goes down while deleting.

Is there any way to keep the AI from growing in size?

Will TomBascom's code solve this AI size problem?
I don't even want to maintain any kind of log for DB restore etc.

again sincere thanks to all.
 

RealHeavyDude

Well-Known Member
Unless you disable the AI there is no way you can prevent it from growing when you change the database. Full stop.

But keep in mind, as soon as you disable the AI you lose the possibility to restore the database up to the point in time when a disaster happens - you will fall back to your last good backup. Therefore I would only disable the AI in rare maintenance scenarios where nobody else but me is changing the database.

Heavy Regards, RealHeavyDude.
 

TomBascom

Curmudgeon
My code will only impact the BI file, not the AI file.

I've not tested it but it does not seem likely that RELEASE had any impact on the size of the BI file. RELEASE doesn't do what people think it does. It definitely will not impact the AI files.

The amount of data written to AI files is related to the total amount of data that you change. Modifying the "style" of those changes will not have any impact on the amount of data being logged to the AI files -- you're still deleting the same number of records with the same amount of data in them no matter how you go about it. The BI file size is related to the size of a transaction that might need to be backed out at any particular moment -- this is sensitive to "style" as well as being sensitive to everything else that is going on on your system at that moment.

You do, however, have a couple of options if AI space is an issue:

1) Add disk space. Or move your AI extents to a filesystem that already has more disk space. Possibly temporarily.

2) Switch and archive AI extents more rapidly. You may find it useful to add a PAUSE in your process to allow this to occur.

3) Disable after-imaging. You should not do this unless you have a good backup and you are prepared to use it. Obviously you would re-enable AI after this process.
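
For option 2, the switch/archive cycle is typically driven with rfutil; a rough sketch, where the database name "mydb" and the extent path are placeholders and exact qualifiers may vary by OpenEdge version:
Code:
# Switch to the next AI extent (db name is a placeholder)
rfutil mydb -C aimage new

# Check which extents are full
rfutil mydb -C aimage extent list

# Archive the full extent somewhere with space, then mark it empty
cp /db/mydb.a1 /archive/
rfutil mydb -C aimage extent empty /db/mydb.a1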
 

tamhas

ProgressTalk.com Sponsor
If you don't want AI of the delete, turn it off while you do the delete ... but you won't have a record of it.
 

cj_brandt

Active Member
It is often easier to copy the records you want to keep to a temp table and then use the SQL command "drop table". Poof - table is empty and you can copy the records back that were saved in the temp table.
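
The copy-out/copy-back part might look like this in ABL (hypothetical table and field names; the DROP TABLE itself and recreating the table would be done through a SQL client and the Data Dictionary):
Code:
/* Copy the rows to keep into a temp-table before dropping.     */
/* "order-line" and "keep-flag" are hypothetical names.         */
DEFINE TEMP-TABLE tt-keep LIKE order-line.

FOR EACH order-line NO-LOCK WHERE order-line.keep-flag = TRUE:
  CREATE tt-keep.
  BUFFER-COPY order-line TO tt-keep.
END.

/* ... DROP TABLE via SQL, recreate the table, then copy back: */
FOR EACH tt-keep:
  CREATE order-line.
  BUFFER-COPY tt-keep TO order-line.
END.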
 