Moving Tables to New Storage Area

IvanM

New Member
Hello Experts!
I'm facing a problem which I don't really know how to solve. We are using Progress 9.1D SP6 on an HP-UX 11 server, and the current DB size is 175 GB. Initially the database was created with one storage area and one index area, and now the last table extent in the table area is file number 123; as far as I know, a Progress DB area has a limit of 128 extents. Last weekend I created a new storage area and a new index area and started moving tables there with "proutil tablemove". The result was very disappointing: my plan was to move around 6 GB over the weekend, but on Saturday alone (an 11-hour working day) I moved only 9 tables, around 900 MB. What is more, a table of 237.2 MB (2,216,486 records) took around 3 hours to move. The DB contains tables over 25 GB in size, and the worst part is that I cannot stop the system for more than a weekend. So if anybody knows of tools or any other solution to move tables faster, I will be very thankful.
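For reference, the command I run per table looks roughly like this (the database path, table name and area names here are only examples):

proutil /db/prod/mydb -C tablemove customer "New Data Area" "New Index Area"

When the index area is given as well, the table's indexes are moved along with the table in the same operation.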

IvanM
 
Hi,

I thought the maximum # extents per area was 256....
But anyway, what happens with your BI if you use tablemove with such big tables?

I would say use dump and load to reorganize your DB.

There are many more reasons to use additional areas besides the schema area, apart from the maximum number of extents!

Look at this article to get an idea:
http://www.greenfieldtech.com/articles/storage_areas.shtml

Regards,

Casper.
 
Hi Casper,
Thanks for your answer. I will check the maximum # of extents again; I really hope that I'm wrong. Before starting to move tables I had read this article, but actually I find that approach too risky. About the BI file: it was indeed growing very fast, but each time before I started to move the next table I truncated the BI file, and only then did the actual move. The main problem with the dump/load approach is that on the one hand I don't have that much disk space (the disks are big enough, but they are in a mirror configuration, so I cannot use their full capacity), and on the other hand I find it really hard to predict how much time will be needed to dump/load a 175 GB DB. That's why I decided to use "tablemove", hoping that at least 6 GB could be moved over a weekend when the system was down; unfortunately it is very slow, so maybe I will be moving tables until the end of my life :)
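For completeness, the BI truncation between tables is just something like this (the database name is an example):

proutil mydb -C truncate bi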

Thanks again, I will think about dump and load.

Ivan
 
Hi Ivan,
I found more about the extents:
see KB P96088.
Further limits are a maximum of 1000 areas (994 data areas),
and a maximum of 2 billion records per area.

If your biggest tables are > 25 GB then it seems to me that they should each be in a separate area.
The rule of thumb I use is to put big tables (more than 1 million records) in separate areas, with their corresponding indexes also in separate areas, and to put smaller tables together in areas with an optimum RPB (records per block).
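As a purely illustrative sketch of what that can look like in a structure (.st) file (the area names, numbers, paths and sizes below are examples only):

d "Big Table Area":10,64 /disk1/db f 1024000
d "Big Table Area":10,64 /disk1/db
d "Big Table Index Area":11,8 /disk2/db f 512000
d "Big Table Index Area":11,8 /disk2/db
d "Small Tables":12,32 /disk1/db f 512000
d "Small Tables":12,32 /disk1/db

Each area gets one fixed extent plus a variable overflow extent, and the records-per-block value (the number after the comma) would be chosen from the mean record size reported by dbanalys. Such a file can be applied to an existing database with prostrct add.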

When was the last time a dump and load was performed on this database?

A lot of performance can be gained from a dump and load of a database, due to the record scattering which (unfortunately) builds up over time.

The only way to find out how long a dump and load will take is to test it. With a database that big it would be wise to have a test environment before doing the actual dump and load.
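If you go the binary dump and load route, the basic steps look roughly like this (the database names, table name and paths are just examples; the first two commands are repeated per table):

proutil mydb -C dump customer /dumpdir
proutil newdb -C load /dumpdir/customer.bd
proutil newdb -C idxbuild all

The binary dump writes one .bd file per table, the load reads it into the new, restructured database, and the indexes are rebuilt afterwards with idxbuild.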

The problem with tablemove is that the process deletes each record in one area and creates it in the new area; that's the reason why the BI grows so fast during tablemove. Look at KB P18295 and P52224.

Regards,

Casper
 
Hi Casper,
Sorry for the late answer, I was out of the office for some time. A dump and load has never been performed on our database, and that's why it is so big currently. My strategy is also based on your rule, but unfortunately when the database was created the previous DBAs did not separate the tables into different areas, so now I have to do it. My estimate for the tables larger than 25 GB is more than 4-5 days, which cannot be done in our company. I have two servers, and the second one is a test server, so I can perform a dump and load there; I just have to check the disk space on that server.

You are saying that 994 data extents is the limit; am I understanding right that 128 is not the end of the table area which is currently in use?

Thanks again for your support.

Regards,
Ivan
 
Hi Ivan,

FYI, 994 is the maximum number of (data) storage areas, not the maximum number of extents per area!
The maximum number of extents per area is 256 (see KB 18168):

The current limit to the number of storage areas you can have in a database is 1000. You can have 256 extents in each storage area. Each extent has a maximum size of 2 gigabytes.

The dump takes a lot more time than the load in a fragmented database.
Did you check dbanalys for the scatter factor of the big tables?
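Running it is as simple as something like this (the database name and output file are examples):

proutil mydb -C dbanalys > mydb_dbanalys.out

The report lists, per table, the record count, mean record size, fragmentation and scatter factor.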

regards,

Casper.
 
Hi Casper,
Sorry, for a moment I understood the information wrongly; yes, I saw in the database documentation that the number is 256, so the current situation is not so critical. The scatter factor for the 25.1 GB table is 4.0, and for the 16 GB table it is 3.0.

Ivan
 
1) Storage areas are not "risky". They're a very mature product feature that is an essential tool.

2) Table move is known to be extremely slow with any non-trivial table size. The feature has been called "useless" by well-known database developers.

3) Dumping and loading 175GB should be possible in a weekend with moderately reasonable hardware and some preparation. The more testing you do ahead of time the better your results can be. More hardware helps too ;)
 