Question: SQLEXP - Not Able to Delete Tables

Progress Version: 9.1E

I am not able to delete tables using SQLEXP; I get the error below. If SQLEXP can't be used in this case, can I rely on an incremental DF to drop the 2 tables (close to 100 GB each)? Loading the incremental DF takes around 7 minutes to delete these 2 tables.

=== SQL Exception 1 ===
SQLState=HY000
ErrorCode=-211013
[JDBC Progress Driver]:SQL92 cannot alter or drop a table, or index, created by 4GL or SQL89. (7877)

I couldn't find much help in the Progress articles or ProKB either - I may have missed something, though.
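For what it's worth, the .df route can be scripted so it runs unattended during the downtime window. A minimal sketch, assuming hypothetical names (loaddf.p, drop_tables.df, a database called mydb):
Code:
$ cat loaddf.p
/* load the incremental .df containing the DROP TABLE definitions */
RUN prodict/load_df.p ("drop_tables.df").
$ mpro mydb -b -p loaddf.p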
 

RealHeavyDude

Well-Known Member
As the error message indicates, it is not possible to perform these actions via the SQL92 engine.

The problem with deleting tables that contain lots of records is that the ABL engine deletes each record first - and this can be a lengthy process depending on your I/O capability and how well the database is performance-tuned.

Such big tables, IMHO, should reside exclusively in their own storage areas. If that is the case, you can easily truncate the area and thus drop the table contents in the blink of an eye. This is the fastest option you have.
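For example, something along these lines, assuming the big table lives alone in a hypothetical area named Big_Table_Area. Truncate area empties the whole area, so this is only safe when the table is the area's sole occupant, and the database must be offline:
Code:
# shut the database down first
$ proshut mydb -by
# empties every object in the named area almost instantly
$ proutil mydb -C truncate area "Big_Table_Area"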

Heavy Regards, RealHeavyDude.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Is this a v9 SQL92 restriction? I can do this in v11:
Code:
$ prodb demo demo
$ proserve demo -S 444
$ sqlexp demo -S 444

SQLExplorer> drop table PUB.customer;
SQLExplorer> commit work;

$ mpro demo

for each customer:
display customer.

Unknown or ambiguous table customer. (725)
**  Could not understand line 1. (196)
 
Such big tables, IMHO, should reside exclusively in their own storage areas.
I agree, but it's an old application and the table/area categorization is messed up. We have a list of tables that are not in use and need to be deleted (not all of them are in separate areas); for the ones that are in their own areas, we will use TRUNCATE for sure, as you said.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
We have a list of tables that are not in use and need to be deleted (not all of them are in separate areas);
Depending on the sizes of the tables that you want to keep in those areas, it might be faster to move or D&L those tables to different areas, leaving the large tables that you can then truncate away.
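For reference, the syntax for the move option looks like this - database, table, and area names here are placeholders, and tablemove can be very slow and BI-heavy on large tables, so test the timing first:
Code:
# move table T3 (and its indexes) out of the shared area
$ proutil mydb -C tablemove T3 "New_Data_Area" "New_Index_Area"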
 
Yes Rob, that's the plan for the small/medium ones. But we have 2 big tables (close to 100 GB each) in different areas that are bothering us; each is currently clubbed together with other big tables (around 50 GB) in the same area.

Let's say:
Area 1 - T1 (50 GB), T2 (100 GB), T3 (25 GB), T4 (60 GB) [from which we want to delete T2]
Area 2 - T5 (100 GB), T6 (65 GB), T7 (20 GB), T8 (5 GB), T9 (40 GB) [from which we want to delete T5]

As part of the migration we are categorizing all these tables properly, as per my other thread on the forum. But I am worried about this cleanup activity; we can get downtime for it, but it will be very short (maybe an hour).

If I have to load the DF to DROP these 2 tables on 9.1E, what will the impact be on AI/BI? How about AI growth? I think it will write only minimal notes, but I want to be sure the AI doesn't grow too large during these 2 table drops.
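One way to get a concrete answer is to try the drop on a test restore and compare the AI extent status before and after - a sketch, assuming a database named mydb with AI extents configured:
Code:
# note which AI extents are busy/full before the drop
$ rfutil mydb -C aimage extent list
# ... load the .df with the DROP TABLE statements ...
# then compare how many extents filled up
$ rfutil mydb -C aimage extent list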
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Is this application being upgraded to 11.6 or is it staying on 9.1E? There is potentially another option for moving these large tables, online, but it requires 11.4+.
 
This application will be migrated to 11.6. Right now the database is 2 TB, but this cleanup operation can eventually bring it down to 1.2 TB, which is what we are aiming for on 9.1E. Doing so will make our life easier during the migration process, as we will have only 48 hours of downtime for the migration activity.
 
If we end up migrating this table, then as per the current strategy it will be loaded into a separate area and can be truncated. But the plan is to perform the cleanup on 9.1E, give it 2 or 3 weeks in production to make sure we don't see any impact in terms of functionality, and then start the migration process.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
If your plan is to migrate to 11.6 via D&L, as opposed to proutil conv910/conv1011, couldn't you just get rid of the two large tables by not dumping them? You're not going to D&L in-place, are you?
 
@Rob Fitzpatrick - We haven't made a call on that yet. I am testing both approaches on our test machine right now. As you pointed out, if it's just a D&L we can skip the dump and load for these 2 tables (we just have to make sure we don't run into dependency issues with downstream applications), but if we have to take the other option, proutil conv, we will have to do the complete cleanup first, let the application run for 3 weeks in production, and then start the migration process.

We are working on multiple things in parallel right now and have to pull all these bits and pieces together. :rolleyes: It gets confusing and messy sometimes, with too many suggestions from too many people in our team meetings. So we are testing all the best possible options to come up with a proper approach.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Sounds good, good luck. You will want to D&L in the short term anyway, as you're obviously on Type I storage and have some reorganization you want to do. But if you can avoid dumping some data, so much the better.
 
You will want to D&L in the short term anyway, as you're obviously on Type I storage
Yes Rob, my proposal is to categorize the 950+ tables into different areas based on size, where big and medium-sized tables will merit their own areas. If they agree to that approach, I'm sure we will have to do a D&L and can't go with the PROUTIL CONV utility. Is my understanding right?

But after all this testing and effort, I hope they don't say to just migrate to the higher version and leave the tables as they are in their areas. Then I will be like o_O

Fingers Crossed :)
 
By the way, the technique I was referring to was online dump & load via Table Partitioning.
Good option to have; I had a glance at the PPT and would love to try it when I have some time. Sadly we're on 9.1E and doing all this circus to move to 11.6 (easy to deal with technology, but not with people at meetings ;) :oops:)
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
If they agree to that approach, I'm sure we will have to do a D&L and can't go with the PROUTIL CONV utility. Is my understanding right?
In order to use 11.6 binaries with your database it needs to have its meta-schema at the appropriate version.

If you D&L, you will be using 11.6 to create the target database (via prorest or prostrct) and load the data, so it already has up-to-date 11.6 meta-schema. In that case, proutil conv* isn't required.
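That creation step, sketched with 11.6 binaries (structure file and database names are placeholders):
Code:
# build the target from your .st file with an 8K block size
$ prostrct create newdb newdb.st -blocksize 8192
# seed it with the empty 11.6 meta-schema
$ procopy $DLC/empty8 newdb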

If you don't D&L, you still have a database created with 9.1E (or earlier), so you need to run the conversion routines to get it to look like a database that the 11.6 _mprosrv wants to open.
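For reference, that conversion is a two-hop sequence - roughly as follows, run against a verified backup (database name assumed):
Code:
# with 10.x binaries: convert the 9.1E database to version 10
$ proutil mydb -C conv910
# then with 11.x binaries: convert version 10 to version 11
$ proutil mydb -C conv1011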
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Good option to have; I had a glance at the PPT and would love to try it when I have some time.
Me too. One caveat: Table Partitioning is a database add-on product, so to use it you have to license it separately unless you have bought the all-in-one Advanced Enterprise RDBMS license.
 

cj_brandt

Active Member
You can use the proutil conv utility to convert the database to 11, then use OE 11 to start moving tables to new Type II storage areas. Version 11 can use multiple threads for binary dumps, which might be helpful.
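A sketch of what that threaded dump looks like, as I understand the syntax - table, directory, and thread count are placeholders, and the dump writes one .bd file per thread, each loaded separately:
Code:
# dump T1 with 4 threads; produces T1.bd, T1.bd2, T1.bd3, ...
$ proutil mydb -C dump T1 /dumps -thread 1 -threadnum 4
# load each dump file into the new Type II target
$ proutil newdb -C load /dumps/T1.bd
$ proutil newdb -C load /dumps/T1.bd2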
 