What is the fastest way to move a very large table to a new area?

Cecil

19+ years of Progress programming and still learning.
As soon as I saw the question, the first thing that popped into my head was to use the -i parameter. But I can't remember whether that would work with tablemove. I might have to do some reading myself.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
As soon as I saw the question, the first thing that popped into my head was to use the -i parameter. But I can't remember whether that would work with tablemove. I might have to do some reading myself.
No.

Using -i (no-integrity) or -r ("unreliable", i.e. asynchronous, BI I/O) can help speed up an offline bulk load, but those parameters should never be used while a production system is online and in use. With -r you lose the guarantee of transaction durability. With -i you lose much more, for little extra gain: transaction durability, crash recovery, and the ability to take an online backup. They belong only in controlled offline maintenance activities, in the hands of people who understand the consequences. Getting it wrong can be a career-changing mistake.
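
For contrast, a minimal sketch of the kind of controlled offline use Rob describes; the database name, .fd file, and backup path are illustrative only, not from this thread:

    proshut mydb -by                     # make sure nothing is connected
    proutil mydb -C bulkload mydb.fd -i  # no-integrity load: fast, but a crash here means restore
    proutil mydb -C idxbuild all         # bulkload leaves indexes to be rebuilt
    probkup mydb /backup/mydb-post.bak   # full backup before anyone touches the db again
    proserve mydb                        # back online with normal, recoverable I/O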

The point of proutil tablemove is that it allows a table to be moved online, without taking a maintenance window. That's fundamentally incompatible with using -i or -r, not from a technical perspective but from a business one.
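
For reference, the documented shape of the command looks like this; the database, table, and area names below are placeholders, not anything from this thread:

    proutil mydb -C tablemove customer "Data_II" "Index_II"
    # syntax: proutil db-name -C tablemove [owner-name.]table-name table-area [index-area]
    # Omit the index area and the indexes stay where they are; either way the
    # whole move runs as one very large transaction, hence the BI growth
    # discussed further down this thread.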
 

TomBascom

Curmudgeon
It will “work”, but... the main reason to want to use tablemove is that you want an online(ish) operation on a live database. To implement -i you will need to shut down, which defeats the “stay online” objective.

Also, if any tiny little thing goes wrong while -i is active, you get to throw away that database and restore from backup. Which probably isn’t compatible with “live databases”.

The -r parameter, OTOH, gives you almost all of the performance benefit of -i while being less likely to leave you unrecoverable. You still can’t add it online, and it is not anything you would ever use on a live production db, but it is better than -i.
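
To make "can't add it online" concrete, a sketch with placeholder names and paths: -i and -r are startup parameters, so using either one means a bounce and a safety net:

    proshut mydb -by                         # the "stay online" objective ends here
    probkup mydb /backup/pre.bak             # fallback point: a crash under -r or -i can mean restore, not crash recovery
    _progres mydb -1 -b -p maintenance.p -r  # single-user batch session with buffered (-r) BI I/O
    proserve mydb                            # normal, fully recoverable service again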
 

TomBascom

Curmudgeon
There is a myth that -i eliminates BI writes. It does not. The -i parameter does make things go faster, but it does that by using less reliable I/O options: so much less reliable that database integrity is compromised. Do not use -i unless you are prepared to throw away your database.
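
One way to see this for yourself (a sketch, with a placeholder database name): watch the BI activity screen in promon while a batch job runs under -i:

    promon mydb
    # then navigate: R&D -> Activity Displays -> BI Log
    # The BI write counters keep climbing even under -i; what -i changes is
    # how (un)safely those writes reach disk, not whether they happen.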
 

TomBascom

Curmudgeon
If you are paid by the hour that’s a great idea. You could quite possibly get a whole day of pay out of a 1 GB tablemove.

You can get some bonus pay if you happen to sell disk space for the needed BI explosion.
 
Rajat Sharma

Correct, Tom. :)

But if we have reached 80% of the Type I area limit and we don't have a downtime window for the next 2-3 weeks, then I think we'd better do the TABLEMOVE online for all tables between 0 and 600 MB in size.

Regards,
 

TomBascom

Curmudgeon
If you are trying to find circumstances where I will endorse using tablemove for anything other than empty tables, it is not going to happen. There are no circumstances, other than literally empty tables, where I will endorse the use of tablemove.

You have been warned.

FWIW, if you are going to insist on pursuing such an idea, start with the smallest tables and work your way up.

Write your script in such a way that you can easily abort it between tables or manually perform the table moves.
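
A minimal sketch of such a script, not anything endorsed in this thread: it assumes a database "mydb", a target Type II area named "Data_II", and a file tables.txt listing table names smallest-first (sizes can come from proutil mydb -C dbanalys):

    # Stop cleanly between tables if a flag file appears.
    while read -r tbl; do
      if [ -f /tmp/stop-tablemove ]; then
        echo "aborting before $tbl"
        exit 1
      fi
      echo "moving $tbl ..."
      proutil mydb -C tablemove "$tbl" "Data_II" || exit 1
    done < tables.txt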

Moving a few trivially sized tables isn't going to do much about the fact that a Type I area is running out of space.

Still having Type I storage areas in 2018 is evidence of dereliction of duty as a DBA. (And, if it applies, still running v9 should be a criminal offense.)
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
But if we have reached 80% of the Type I area limit and we don't have a downtime window for the next 2-3 weeks, then I think we'd better do the TABLEMOVE online for all tables between 0 and 600 MB in size.
This is getting off topic. If you have a genuine problem to be solved then I think you should open your own thread and describe it in real detail. This thread is about the fastest way to move a table, and that definitely isn't proutil tablemove, which you seem to have a fondness for.

The size limit of a Type I area depends on two things: the maximum number of blocks in the area, which is a function of area RPB, and the database block size. The product of number of blocks and block size is the theoretical limit of the area size. At a maximum, this number could be several terabytes. Even with a relatively high RPB (and thus relatively low maximum number of blocks), the maximum area size is hundreds of gigabytes.
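
As a rough worked example, assuming the commonly cited figure of about 2^31 addressable recids per area (the numbers are illustrative):

    max blocks ~= 2^31 / RPB, and max area size = max blocks x block size
    8 KB blocks, RPB 64: (2^31 / 64) x 8 KB ~= 256 GB
    8 KB blocks, RPB 8:  (2^31 / 8)  x 8 KB ~=   2 TB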

So, as Tom says, moving a few hundred megabytes of data, likely only a few percent of the total, out of that area via tablemove accomplishes basically nothing apart from huge BI growth and its attendant problems. The area still contains very large and likely fast-growing tables. You are still faced with a dump and load in the short term.

I don't know what the significance of the next two to three weeks is for you. Are you projecting that you will reach maximum capacity in that time? That sounds like an explosive rate of growth if you are at 80% of maximum now. If so, you should have been doing capacity planning and projecting your growth toward the limit long before now, in plenty of time to have access to multiple downtime windows. You should also be practicing your dump and load on a non-production copy of the database so you have well-tested procedures and scripts.

@Rajat Sharma
Please start a new thread if you want to continue this conversation.
 